NOTE: libkvm, w, ps, 'top', and any other utility which depends on struct
proc or any VM system structure will have to be rebuilt!!!

Much needed overhaul of the VM system. Included in this first round of
changes:

1) Improved pager interfaces: init, alloc, dealloc, getpages, putpages,
   haspage, and sync operations are supported. The haspage interface now
   provides information about clusterability. All pager routines now take
   struct vm_object's instead of "pagers".
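
   A rough sketch of the shape of the new operations vector (the member
   names here are illustrative; the argument lists follow the new device
   pager routines elsewhere in this commit):

      /*
       * Per-type pager operations.  getpages/putpages work on arrays of
       * pages, and haspage reports how many contiguous pages exist before
       * and after the given offset -- the new clusterability information.
       */
      struct pagerops {
              void        (*pgo_init) __P((void));
              vm_object_t (*pgo_alloc) __P((void *, vm_size_t, vm_prot_t,
                              vm_offset_t));
              void        (*pgo_dealloc) __P((vm_object_t));
              int         (*pgo_getpages) __P((vm_object_t, vm_page_t *,
                              int, int));
              int         (*pgo_putpages) __P((vm_object_t, vm_page_t *,
                              int, boolean_t, int *));
              boolean_t   (*pgo_haspage) __P((vm_object_t, vm_offset_t,
                              int *, int *));
              void        (*pgo_sync) __P((void));
      };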

2) Improved data structures. In the previous paradigm, there was constant
   confusion caused by pagers being both a data structure ("allocate a
   pager") and a collection of routines. The idea of a pager structure has
   essentially been eliminated. Objects now have types, and this type is
   used to index the appropriate pager. In most cases, items in the pager
   structure were duplicated in the object data structure and thus were
   unnecessary. In the few cases that remained, an un_pager structure union
   was created in the object to contain these items.
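
   Sketched out (members not visible in the diff, such as the objtype_t
   enumeration itself and the dispatch table name, are assumptions for
   illustration):

      struct vm_object {
              /* ... size, handle, backing_object, and so on ... */
              objtype_t type;         /* OBJT_DEFAULT, OBJT_SWAP, OBJT_VNODE,
                                         OBJT_DEVICE, ... selects the pager */
              union {
                      struct {
                              /* device pager: its list of fake pages */
                              TAILQ_HEAD(, vm_page) devp_pglist;
                      } devp;
                      /* ... private data for the other pager types ... */
              } un_pager;
      };

      /*
       * Dispatch indexes a per-type table, along the lines of:
       *      (*pagertab[object->type]->pgo_getpages)(object, m, count, req);
       */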

3) Because of the cleanup of #1 & #2, a lot of unnecessary layering can now
   be removed. For instance, vm_object_enter(), vm_object_lookup(),
   vm_object_remove(), and the associated object hash list were some of the
   things that were removed.

4) simple_locks removed. Discussion with several people reveals that the
   SMP locking primitives used in the VM system aren't likely the mechanism
   that we'll be adopting. Even if it were, the locking that was in the code
   was very inadequate and would have to be mostly re-done anyway. The
   locking in a uni-processor kernel was a no-op but went a long way toward
   making the code difficult to read and debug.

5) Places that attempted to kludge-up the fact that we don't have kernel
   thread support have been fixed to reflect the reality that we are really
   dealing with processes, not threads. The VM system didn't have complete
   thread support, so the comments and mis-named routines were just wrong.
   We now use tsleep and wakeup directly in the lock routines, for instance.
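
   Condensed sketch of the resulting pattern (heavily abridged from the
   new kern_lock.c; recursion, upgrades and the spin-wait path are left
   out):

      void
      lock_write(l)
              register lock_t l;
      {
              while (l->want_write || l->want_upgrade || l->read_count != 0) {
                      l->waiting = TRUE;
                      tsleep(l, PVM, "lckwt", 0);     /* block on the lock */
              }
              l->want_write = TRUE;
      }

      void
      lock_done(l)
              register lock_t l;
      {
              /* ... drop the read count or the write/upgrade bits ... */
              if (l->waiting) {
                      l->waiting = FALSE;
                      wakeup(l);                      /* wake the tsleep()ers */
              }
      }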

6) Where appropriate, the pagers have been improved, especially in the
   pager_alloc routines. Most of the pager_allocs have been rewritten and
   are now faster and easier to maintain.

7) The pagedaemon pageout clustering algorithm has been rewritten and
   now tries harder to output an even number of pages before and after
   the requested page. This is sort of the reverse of the ideal pagein
   algorithm and should provide better overall performance.
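
   A self-contained illustration of the balancing idea (this is not the
   vm_pageout.c code; the numbers and names are made up):

      #include <stdio.h>

      /*
       * Choose a pageout cluster of at most 'max' pages balanced around
       * the requested page 'req', clipped to the page range [lo, hi].
       */
      static void
      cluster(int req, int max, int lo, int hi, int *start, int *end)
      {
              int half = (max - 1) / 2;
              int before = (req - lo < half) ? req - lo : half;
              int after = (hi - req < half) ? hi - req : half;

              *start = req - before;
              *end = req + after;                     /* inclusive */
      }

      int
      main(void)
      {
              int s, e;

              cluster(12, 9, 0, 63, &s, &e);
              printf("page out %d..%d around page 12\n", s, e);
              return (0);
      }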

8) Unnecessary (incorrect) casts to caddr_t in calls to tsleep & wakeup
   have been removed. Some other unnecessary casts have also been removed.

9) Some almost useless debugging code removed.

10) Terminology of shadow objects vs. backing objects straightened out.
    The fact that the vm_object data structure essentially had this
    backwards really confused things. The use of "shadow" and "backing
    object" throughout the code is now internally consistent and correct
    in the Mach terminology.
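
    In the now-consistent usage, a shadow object sits in front of the
    object it shadows and points to it through backing_object.  An
    illustrative walk down the chain (not the real fault code, which also
    adjusts the offset at every hop):

      vm_page_t
      find_page(object, offset)
              register vm_object_t object;
              vm_offset_t offset;
      {
              vm_page_t m;

              for (; object != NULL; object = object->backing_object) {
                      m = vm_page_lookup(object, offset);
                      if (m != NULL)
                              return (m);     /* found at this shadow level */
                      /* XXX real code adds the backing object's offset here */
              }
              return (NULL);
      }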

11) Several minor bug fixes, including one in the vm daemon that caused
    0 RSS objects to not get purged as intended.

12) A "default pager" has now been created which cleans up the transition
    of objects to the "swap" type. The previous checks throughout the code
    for swp->pg_data != NULL were really ugly. This change also provides
    the rudiments for future backing of "anonymous" memory by something
    other than the swap pager (via the vnode pager, for example), and it
    allows the decision about which of these pagers to use to be made
    dynamically (although it will need some additional decision code to do
    this, of course).
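
    The intended flow, sketched (names beyond those in the diff, such as
    OBJT_DEFAULT, are assumptions):

      /*
       * Anonymous memory starts out under the default pager, which
       * allocates no backing store.  Only when the pageout path first
       * needs to push a page is the object converted -- to the swap
       * pager today, but this is now the one place where a different
       * pager could be chosen instead.
       */
      static void
      default_pager_convert(object)
              vm_object_t object;
      {
              if (object->type != OBJT_DEFAULT)
                      return;
              object->type = OBJT_SWAP;
              (void) swap_pager_swp_alloc(object, M_KERNEL);
      }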

13) (dyson) MAP_COPY has been deprecated and the corresponding "copy
    object" code has been removed. MAP_COPY was undocumented and non-
    standard. It was furthermore broken in several ways which caused its
    behavior to degrade to MAP_PRIVATE. Binaries that use MAP_COPY will
    continue to work correctly, but via the slightly different semantics
    of MAP_PRIVATE.

14) (dyson) Sharing maps have been removed. Their marginal usefulness in a
    threads design can be worked around in other ways. Both #13 and #14
    were done to simplify the code and improve readability and
    maintainability. (As were most all of these changes.)

TODO:

1) Rewrite most of the vnode pager to use VOP_GETPAGES/PUTPAGES. Doing
   this will reduce the vnode pager to a mere fraction of its current size.

2) Rewrite vm_fault and the swap/vnode pagers to use the clustering
   information provided by the new haspage pager interface. This will
   substantially reduce the overhead by eliminating a large number of
   VOP_BMAP() calls. The VOP_BMAP() filesystem interface should be
   improved to provide both a "behind" and "ahead" indication of
   contiguousness.
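
   A sketch of how a fault path could consume that information once this
   is done (the helper and limit names here are hypothetical):

      /*
       * Size one clustered pagein request from the haspage before/after
       * counts instead of probing page-by-page with VOP_BMAP().  Returns
       * the number of pages to read and the starting offset.
       */
      static int
      pagein_cluster(object, offset, maxbehind, maxahead, firstp)
              vm_object_t object;
              vm_offset_t offset;
              int maxbehind, maxahead;
              vm_offset_t *firstp;
      {
              int before, after;

              if (!vm_pager_has_page(object, offset, &before, &after))
                      return (0);
              if (before > maxbehind)
                      before = maxbehind;
              if (after > maxahead)
                      after = maxahead;
              *firstp = offset - before * PAGE_SIZE;
              return (before + 1 + after);
      }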

3) Implement the extended features of pager_haspage in swap_pager_haspage().
   It currently just says 0 pages ahead/behind.

4) Re-implement the swap device (swstrategy) in a more elegant way, perhaps
   via a much more general mechanism that could also be used for disk
   striping of regular filesystems.

5) Do something to improve the architecture of vm_object_collapse(). The
   fact that it makes calls into the swap pager and knows too much about
   how the swap pager operates really bothers me. It also doesn't allow
   for collapsing of non-swap pager objects ("unnamed" objects backed by
   other pagers).
David Greenman, 1995-07-13 08:48:48 +00:00
commit 24a1cce34f (parent 33d23de425)
Notes: svn2git 2020-12-20 02:59:44 +00:00; svn path=/head/; revision=9507
54 changed files with 1138 additions and 3172 deletions


@ -35,7 +35,7 @@
* SUCH DAMAGE.
*
* from: @(#)machdep.c 7.4 (Berkeley) 6/3/91
* $Id: machdep.c,v 1.129 1995/06/26 07:39:52 bde Exp $
* $Id: machdep.c,v 1.130 1995/06/28 04:46:11 davidg Exp $
*/
#include "npx.h"
@ -77,6 +77,7 @@
#include <vm/vm.h>
#include <vm/vm_kern.h>
#include <vm/vm_page.h>
#include <vm/vm_pager.h>
#include <sys/exec.h>
#include <sys/vnode.h>


@ -39,7 +39,7 @@
* SUCH DAMAGE.
*
* from: @(#)pmap.c 7.7 (Berkeley) 5/12/91
* $Id: pmap.c,v 1.57 1995/05/11 19:26:11 rgrimes Exp $
* $Id: pmap.c,v 1.58 1995/05/30 07:59:38 rgrimes Exp $
*/
/*
@ -369,7 +369,6 @@ pmap_bootstrap(firstaddr, loadaddr)
kernel_pmap->pm_pdir = (pd_entry_t *) (KERNBASE + IdlePTD);
simple_lock_init(&kernel_pmap->pm_lock);
kernel_pmap->pm_count = 1;
nkpt = NKPT;
@ -535,7 +534,6 @@ pmap_pinit(pmap)
((int) pmap_kextract((vm_offset_t) pmap->pm_pdir)) | PG_V | PG_KW;
pmap->pm_count = 1;
simple_lock_init(&pmap->pm_lock);
}
/*
@ -605,9 +603,7 @@ pmap_destroy(pmap)
if (pmap == NULL)
return;
simple_lock(&pmap->pm_lock);
count = --pmap->pm_count;
simple_unlock(&pmap->pm_lock);
if (count == 0) {
pmap_release(pmap);
free((caddr_t) pmap, M_VMPMAP);
@ -634,9 +630,7 @@ pmap_reference(pmap)
pmap_t pmap;
{
if (pmap != NULL) {
simple_lock(&pmap->pm_lock);
pmap->pm_count++;
simple_unlock(&pmap->pm_lock);
}
}
@ -1469,8 +1463,6 @@ pmap_object_init_pt(pmap, addr, object, offset, size)
(object->resident_page_count > (MAX_INIT_PT / NBPG)))) {
return;
}
if (!vm_object_lock_try(object))
return;
/*
* if we are processing a major portion of the object, then scan the
@ -1520,7 +1512,6 @@ pmap_object_init_pt(pmap, addr, object, offset, size)
}
}
}
vm_object_unlock(object);
}
#if 0


@ -38,7 +38,7 @@
*
* from: @(#)vm_machdep.c 7.3 (Berkeley) 5/13/91
* Utah $Hdr: vm_machdep.c 1.16.1.1 89/06/23$
* $Id: vm_machdep.c,v 1.38 1995/05/18 09:17:07 davidg Exp $
* $Id: vm_machdep.c,v 1.39 1995/05/30 07:59:46 rgrimes Exp $
*/
#include "npx.h"
@ -56,6 +56,7 @@
#include <vm/vm.h>
#include <vm/vm_kern.h>
#include <vm/vm_page.h>
#include <i386/isa/isa.h>


@ -42,7 +42,7 @@
*
* from: hp300: @(#)pmap.h 7.2 (Berkeley) 12/16/90
* from: @(#)pmap.h 7.4 (Berkeley) 5/12/91
* $Id: pmap.h,v 1.25 1995/03/26 23:42:55 davidg Exp $
* $Id: pmap.h,v 1.26 1995/05/30 08:00:48 rgrimes Exp $
*/
#ifndef _MACHINE_PMAP_H_
@ -148,7 +148,6 @@ struct pmap {
boolean_t pm_pdchanged; /* pdir changed */
short pm_dref; /* page directory ref count */
short pm_count; /* pmap reference count */
simple_lock_data_t pm_lock; /* lock on pmap */
struct pmap_statistics pm_stats; /* pmap statistics */
long pm_ptpages; /* more stats: PT pages */
};


@ -336,6 +336,7 @@ ufs/ufs/ufs_lookup.c standard
ufs/ufs/ufs_quota.c standard
ufs/ufs/ufs_vfsops.c standard
ufs/ufs/ufs_vnops.c standard
vm/default_pager.c standard
vm/device_pager.c standard
vm/kern_lock.c standard
vm/swap_pager.c standard


@ -37,7 +37,7 @@
*
* @(#)procfs_mem.c 8.4 (Berkeley) 1/21/94
*
* $Id: procfs_mem.c,v 1.7 1995/05/30 08:07:09 rgrimes Exp $
* $Id: procfs_mem.c,v 1.8 1995/06/28 04:51:06 davidg Exp $
*/
/*
@ -152,7 +152,7 @@ procfs_rwmem(p, uio)
/*
* Fault the page in...
*/
if (!error && writing && object->shadow) {
if (!error && writing && object->backing_object) {
m = vm_page_lookup(object, off);
if (m == 0 || (m->flags & PG_COPYONWRITE))
error = vm_fault(map, pageno,


@ -36,7 +36,7 @@
* SUCH DAMAGE.
*
* @(#)vfs_subr.c 8.13 (Berkeley) 4/18/94
* $Id: vfs_subr.c,v 1.32 1995/06/28 12:00:55 davidg Exp $
* $Id: vfs_subr.c,v 1.33 1995/07/08 04:10:32 davidg Exp $
*/
/*
@ -512,10 +512,8 @@ vinvalbuf(vp, flags, cred, p, slpflag, slptimeo)
*/
object = vp->v_object;
if (object != NULL) {
vm_object_lock(object);
vm_object_page_remove(object, 0, object->size,
(flags & V_SAVE) ? TRUE : FALSE);
vm_object_unlock(object);
}
if (!(flags & V_SAVEMETA) &&
(vp->v_dirtyblkhd.lh_first || vp->v_cleanblkhd.lh_first))
@ -1533,11 +1531,7 @@ loop:
continue;
if (vp->v_object &&
(((vm_object_t) vp->v_object)->flags & OBJ_WRITEABLE)) {
if (vget(vp, 1))
goto loop;
_vm_object_page_clean(vp->v_object,
0, 0, TRUE);
vput(vp);
vm_object_page_clean(vp->v_object, 0, 0, TRUE, TRUE);
}
}
}


@ -36,7 +36,7 @@
* SUCH DAMAGE.
*
* @(#)vfs_syscalls.c 8.13 (Berkeley) 4/15/94
* $Id: vfs_syscalls.c,v 1.26 1995/06/28 07:06:40 davidg Exp $
* $Id: vfs_syscalls.c,v 1.27 1995/06/28 12:00:57 davidg Exp $
*/
#include <sys/param.h>
@ -1785,7 +1785,7 @@ fsync(p, uap, retval)
vp = (struct vnode *)fp->f_data;
VOP_LOCK(vp);
if (vp->v_object) {
_vm_object_page_clean(vp->v_object, 0, 0 ,0);
vm_object_page_clean(vp->v_object, 0, 0 ,0, FALSE);
}
error = VOP_FSYNC(vp, fp->f_cred, MNT_WAIT, p);
VOP_UNLOCK(vp);


@ -36,7 +36,7 @@
* SUCH DAMAGE.
*
* @(#)vfs_vnops.c 8.2 (Berkeley) 1/21/94
* $Id: vfs_vnops.c,v 1.13 1995/06/28 12:32:47 davidg Exp $
* $Id: vfs_vnops.c,v 1.14 1995/07/09 06:57:53 davidg Exp $
*/
#include <sys/param.h>
@ -151,14 +151,10 @@ vn_open(ndp, fmode, cmode)
error = VOP_OPEN(vp, fmode, cred, p);
if (error)
goto bad;
if (fmode & FWRITE)
vp->v_writecount++;
/*
* this is here for VMIO support
*/
if (vp->v_type == VREG) {
vm_object_t object;
vm_pager_t pager;
retry:
if ((vp->v_flag & VVMIO) == 0) {
error = VOP_GETATTR(vp, vap, cred, p);
@ -168,6 +164,7 @@ retry:
panic("vn_open: failed to allocate object");
vp->v_flag |= VVMIO;
} else {
vm_object_t object;
if ((object = vp->v_object) &&
(object->flags & OBJ_DEAD)) {
VOP_UNLOCK(vp);
@ -177,12 +174,11 @@ retry:
}
if (!object)
panic("vn_open: VMIO object missing");
pager = object->pager;
if (!pager)
panic("vn_open: VMIO pager missing");
(void) vm_object_lookup(pager);
vm_object_reference(object);
}
}
if (fmode & FWRITE)
vp->v_writecount++;
return (0);
bad:
vput(vp);


@ -34,7 +34,7 @@
* SUCH DAMAGE.
*
* @(#)nfs_subs.c 8.3 (Berkeley) 1/4/94
* $Id: nfs_subs.c,v 1.18 1995/06/28 12:01:05 davidg Exp $
* $Id: nfs_subs.c,v 1.19 1995/07/09 06:57:59 davidg Exp $
*/
/*
@ -59,6 +59,7 @@
#endif
#include <vm/vm.h>
#include <vm/vnode_pager.h>
#include <nfs/rpcv2.h>
#include <nfs/nfsproto.h>
@ -72,8 +73,6 @@
#include <miscfs/specfs/specdev.h>
#include <vm/vnode_pager.h>
#include <netinet/in.h>
#ifdef ISO
#include <netiso/iso.h>
@ -1898,7 +1897,6 @@ nfsrv_errmap(nd, err)
int
nfsrv_vmio(struct vnode *vp) {
vm_object_t object;
vm_pager_t pager;
if ((vp == NULL) || (vp->v_type != VREG))
return 1;
@ -1923,10 +1921,7 @@ retry:
}
if (!object)
panic("nfsrv_vmio: VMIO object missing");
pager = object->pager;
if (!pager)
panic("nfsrv_vmio: VMIO pager missing");
(void) vm_object_lookup(pager);
vm_object_reference(object);
}
return 0;
}


@ -34,7 +34,7 @@
* SUCH DAMAGE.
*
* @(#)nfs_vnops.c 8.5 (Berkeley) 2/13/94
* $Id: nfs_vnops.c,v 1.17 1995/06/28 07:06:52 davidg Exp $
* $Id: nfs_vnops.c,v 1.18 1995/06/28 17:33:39 dfr Exp $
*/
/*
@ -59,6 +59,7 @@
#include <ufs/ufs/dir.h>
#include <vm/vm.h>
#include <vm/vnode_pager.h>
#include <miscfs/specfs/specdev.h>
#include <miscfs/fifofs/fifo.h>


@ -31,7 +31,7 @@
* SUCH DAMAGE.
*
* @(#)mman.h 8.1 (Berkeley) 6/2/93
* $Id: mman.h,v 1.6 1995/03/25 17:28:30 davidg Exp $
* $Id: mman.h,v 1.7 1995/05/14 19:19:07 nate Exp $
*/
#ifndef _SYS_MMAN_H_
@ -48,9 +48,9 @@
* Flags contain sharing type and options.
* Sharing types; choose one.
*/
#define MAP_SHARED 0x0001 /* share changes */
#define MAP_PRIVATE 0x0002 /* changes are private */
#define MAP_COPY 0x0004 /* "copy" region at mmap time */
#define MAP_SHARED 0x0001 /* share changes */
#define MAP_PRIVATE 0x0002 /* changes are private */
#define MAP_COPY MAP_PRIVATE /* Obsolete */
/*
* Other flags


@ -36,7 +36,7 @@
* SUCH DAMAGE.
*
* @(#)proc.h 8.8 (Berkeley) 1/21/94
* $Id: proc.h,v 1.16 1995/02/21 00:37:31 davidg Exp $
* $Id: proc.h,v 1.17 1995/03/16 18:16:22 bde Exp $
*/
#ifndef _SYS_PROC_H_
@ -136,9 +136,8 @@ struct proc {
struct vnode *p_textvp; /* Vnode of executable. */
char p_lock; /* Process lock count. */
char p_lock; /* Process lock (prevent swap) count. */
char p_pad2[3]; /* alignment */
long p_spare[2]; /* Pad to 256, avoid shifting eproc. XXX */
/* End area that is zeroed on creation. */
#define p_endzero p_startcopy
@ -161,8 +160,7 @@ struct proc {
struct rtprio p_rtprio; /* Realtime priority. */
/* End area that is copied on creation. */
#define p_endcopy p_thread
int p_thread; /* Id for this "thread"; Mach glue. XXX */
#define p_endcopy p_addr
struct user *p_addr; /* Kernel virtual addr of u-area (PROC ONLY). */
struct mdproc p_md; /* Any machine-dependent fields. */
@ -198,9 +196,7 @@ struct proc {
#define P_WEXIT 0x02000 /* Working on exiting. */
#define P_EXEC 0x04000 /* Process called exec. */
#define P_SWAPPING 0x40000 /* Process is being swapped. */
/* Should probably be changed into a hold count (They have. -DG). */
#define P_NOSWAP 0x08000 /* Another flag to prevent swap out. */
#define P_NOSWAP 0x08000 /* Flag to prevent swap out. */
#define P_PHYSIO 0x10000 /* Doing physical I/O. */
/* Should be moved to machine-dependent areas. */


@ -31,7 +31,7 @@
* SUCH DAMAGE.
*
* @(#)ffs_vfsops.c 8.8 (Berkeley) 4/18/94
* $Id: ffs_vfsops.c,v 1.21 1995/05/30 08:15:03 rgrimes Exp $
* $Id: ffs_vfsops.c,v 1.22 1995/06/28 12:01:08 davidg Exp $
*/
#include <sys/param.h>
@ -700,11 +700,7 @@ loop:
ip = VTOI(vp);
if (vp->v_object &&
(((vm_object_t) vp->v_object)->flags & OBJ_WRITEABLE)) {
if (vget(vp, 1))
goto loop;
_vm_object_page_clean(vp->v_object,
0, 0, 0);
vput(vp);
vm_object_page_clean(vp->v_object, 0, 0, 0, TRUE);
}
if ((((ip->i_flag &


@ -31,7 +31,7 @@
* SUCH DAMAGE.
*
* @(#)lfs_vnops.c 8.5 (Berkeley) 12/30/93
* $Id: lfs_vnops.c,v 1.10 1995/06/28 07:06:53 davidg Exp $
* $Id: lfs_vnops.c,v 1.11 1995/06/28 12:01:10 davidg Exp $
*/
#include <sys/param.h>
@ -239,7 +239,7 @@ lfs_fsync(ap)
* into the buffer cache.
*/
if (ap->a_vp->v_object)
_vm_object_page_clean(ap->a_vp->v_object, 0, 0, 0);
vm_object_page_clean(ap->a_vp->v_object, 0, 0, 0, TRUE);
error = (VOP_UPDATE(ap->a_vp, &tv, &tv,
ap->a_waitfor == MNT_WAIT ? LFS_SYNC : 0));


@ -36,11 +36,7 @@
* SUCH DAMAGE.
*
* @(#)device_pager.c 8.1 (Berkeley) 6/11/93
* $Id: device_pager.c,v 1.10 1995/05/18 02:59:18 davidg Exp $
*/
/*
* Page to/from special files.
* $Id: device_pager.c,v 1.11 1995/05/30 08:15:46 rgrimes Exp $
*/
#include <sys/param.h>
@ -53,52 +49,35 @@
#include <vm/vm.h>
#include <vm/vm_kern.h>
#include <vm/vm_page.h>
#include <vm/vm_pager.h>
#include <vm/device_pager.h>
struct pagerlst dev_pager_list; /* list of managed devices */
struct pglist dev_pager_fakelist; /* list of available vm_page_t's */
struct pagerlst dev_pager_object_list; /* list of device pager objects */
TAILQ_HEAD(, vm_page) dev_pager_fakelist; /* list of available vm_page_t's */
#ifdef DEBUG
int dpagerdebug;
#define DDB_FOLLOW 0x01
#define DDB_INIT 0x02
#define DDB_ALLOC 0x04
#define DDB_FAIL 0x08
#endif
static vm_pager_t dev_pager_alloc __P((void *, vm_size_t, vm_prot_t, vm_offset_t));
static void dev_pager_dealloc __P((vm_pager_t));
static int dev_pager_getpage __P((vm_pager_t, vm_page_t, boolean_t));
static boolean_t dev_pager_haspage __P((vm_pager_t, vm_offset_t));
static void dev_pager_init __P((void));
static int dev_pager_putpage __P((vm_pager_t, vm_page_t, boolean_t));
static vm_page_t dev_pager_getfake __P((vm_offset_t));
static void dev_pager_putfake __P((vm_page_t));
static int dev_pager_alloc_lock, dev_pager_alloc_lock_want;
struct pagerops devicepagerops = {
dev_pager_init,
dev_pager_alloc,
dev_pager_dealloc,
dev_pager_getpage,
0,
dev_pager_putpage,
0,
dev_pager_haspage
dev_pager_getpages,
dev_pager_putpages,
dev_pager_haspage,
NULL
};
static void
void
dev_pager_init()
{
#ifdef DEBUG
if (dpagerdebug & DDB_FOLLOW)
printf("dev_pager_init()\n");
#endif
TAILQ_INIT(&dev_pager_list);
TAILQ_INIT(&dev_pager_object_list);
TAILQ_INIT(&dev_pager_fakelist);
}
static vm_pager_t
vm_object_t
dev_pager_alloc(handle, size, prot, foff)
void *handle;
vm_size_t size;
@ -106,25 +85,10 @@ dev_pager_alloc(handle, size, prot, foff)
vm_offset_t foff;
{
dev_t dev;
vm_pager_t pager;
int (*mapfunc) ();
vm_object_t object;
dev_pager_t devp;
unsigned int npages, off;
#ifdef DEBUG
if (dpagerdebug & DDB_FOLLOW)
printf("dev_pager_alloc(%x, %x, %x, %x)\n",
handle, size, prot, foff);
#endif
#ifdef DIAGNOSTIC
/*
* Pageout to device, should never happen.
*/
if (handle == NULL)
panic("dev_pager_alloc called");
#endif
/*
* Make sure this device can be mapped.
*/
@ -150,125 +114,75 @@ dev_pager_alloc(handle, size, prot, foff)
if ((*mapfunc) (dev, off, (int) prot) == -1)
return (NULL);
/*
* Lock to prevent object creation race contion.
*/
while (dev_pager_alloc_lock) {
dev_pager_alloc_lock_want++;
tsleep(&dev_pager_alloc_lock, PVM, "dvpall", 0);
dev_pager_alloc_lock_want--;
}
dev_pager_alloc_lock = 1;
/*
* Look up pager, creating as necessary.
*/
top:
pager = vm_pager_lookup(&dev_pager_list, handle);
if (pager == NULL) {
/*
* Allocate and initialize pager structs
*/
pager = (vm_pager_t) malloc(sizeof *pager, M_VMPAGER, M_WAITOK);
if (pager == NULL)
return (NULL);
devp = (dev_pager_t) malloc(sizeof *devp, M_VMPGDATA, M_WAITOK);
if (devp == NULL) {
free((caddr_t) pager, M_VMPAGER);
return (NULL);
}
pager->pg_handle = handle;
pager->pg_ops = &devicepagerops;
pager->pg_type = PG_DEVICE;
pager->pg_data = (caddr_t) devp;
TAILQ_INIT(&devp->devp_pglist);
object = vm_pager_object_lookup(&dev_pager_object_list, handle);
if (object == NULL) {
/*
* Allocate object and associate it with the pager.
*/
object = devp->devp_object = vm_object_allocate(foff + size);
object->flags &= ~OBJ_INTERNAL;
vm_object_enter(object, pager);
object->pager = pager;
/*
* Finally, put it on the managed list so other can find it.
* First we re-lookup in case someone else beat us to this
* point (due to blocking in the various mallocs). If so, we
* free everything and start over.
*/
if (vm_pager_lookup(&dev_pager_list, handle)) {
free((caddr_t) devp, M_VMPGDATA);
free((caddr_t) pager, M_VMPAGER);
goto top;
}
TAILQ_INSERT_TAIL(&dev_pager_list, pager, pg_list);
#ifdef DEBUG
if (dpagerdebug & DDB_ALLOC) {
printf("dev_pager_alloc: pager %x devp %x object %x\n",
pager, devp, object);
vm_object_print(object, FALSE);
}
#endif
object = vm_object_allocate(OBJT_DEVICE, foff + size);
object->handle = handle;
TAILQ_INIT(&object->un_pager.devp.devp_pglist);
TAILQ_INSERT_TAIL(&dev_pager_object_list, object, pager_object_list);
} else {
/*
* Gain a reference to the object.
*/
object = vm_object_lookup(pager);
vm_object_reference(object);
if (foff + size > object->size)
object->size = foff + size;
#ifdef DIAGNOSTIC
devp = (dev_pager_t) pager->pg_data;
if (object != devp->devp_object)
panic("dev_pager_setup: bad object");
#endif
}
return (pager);
dev_pager_alloc_lock = 0;
if (dev_pager_alloc_lock_want)
wakeup(&dev_pager_alloc_lock);
return (object);
}
static void
dev_pager_dealloc(pager)
vm_pager_t pager;
{
dev_pager_t devp;
void
dev_pager_dealloc(object)
vm_object_t object;
{
vm_page_t m;
#ifdef DEBUG
if (dpagerdebug & DDB_FOLLOW)
printf("dev_pager_dealloc(%x)\n", pager);
#endif
TAILQ_REMOVE(&dev_pager_list, pager, pg_list);
/*
* Get the object. Note: cannot use vm_object_lookup since object has
* already been removed from the hash chain.
*/
devp = (dev_pager_t) pager->pg_data;
object = devp->devp_object;
#ifdef DEBUG
if (dpagerdebug & DDB_ALLOC)
printf("dev_pager_dealloc: devp %x object %x\n", devp, object);
#endif
TAILQ_REMOVE(&dev_pager_object_list, object, pager_object_list);
/*
* Free up our fake pages.
*/
while ((m = devp->devp_pglist.tqh_first) != 0) {
TAILQ_REMOVE(&devp->devp_pglist, m, pageq);
while ((m = object->un_pager.devp.devp_pglist.tqh_first) != 0) {
TAILQ_REMOVE(&object->un_pager.devp.devp_pglist, m, pageq);
dev_pager_putfake(m);
}
free((caddr_t) devp, M_VMPGDATA);
free((caddr_t) pager, M_VMPAGER);
}
static int
dev_pager_getpage(pager, m, sync)
vm_pager_t pager;
vm_page_t m;
boolean_t sync;
int
dev_pager_getpages(object, m, count, reqpage)
vm_object_t object;
vm_page_t *m;
int count;
int reqpage;
{
register vm_object_t object;
vm_offset_t offset, paddr;
vm_page_t page;
dev_t dev;
int s;
int i, s;
int (*mapfunc) (), prot;
#ifdef DEBUG
if (dpagerdebug & DDB_FOLLOW)
printf("dev_pager_getpage(%x, %x)\n", pager, m);
#endif
object = m->object;
dev = (dev_t) (u_long) pager->pg_handle;
offset = m->offset + object->paging_offset;
dev = (dev_t) (u_long) object->handle;
offset = m[reqpage]->offset + object->paging_offset;
prot = PROT_READ; /* XXX should pass in? */
mapfunc = cdevsw[major(dev)].d_mmap;
@ -281,49 +195,44 @@ dev_pager_getpage(pager, m, sync)
panic("dev_pager_getpage: map function returns error");
#endif
/*
* Replace the passed in page with our own fake page and free up the
* original.
* Replace the passed in reqpage page with our own fake page and free up the
* all of the original pages.
*/
page = dev_pager_getfake(paddr);
TAILQ_INSERT_TAIL(&((dev_pager_t) pager->pg_data)->devp_pglist,
page, pageq);
vm_object_lock(object);
vm_page_lock_queues();
PAGE_WAKEUP(m);
vm_page_free(m);
vm_page_unlock_queues();
TAILQ_INSERT_TAIL(&object->un_pager.devp.devp_pglist, page, pageq);
for (i = 0; i < count; i++) {
PAGE_WAKEUP(m[i]);
vm_page_free(m[i]);
}
s = splhigh();
vm_page_insert(page, object, offset);
splx(s);
vm_object_unlock(object);
return (VM_PAGER_OK);
}
static int
dev_pager_putpage(pager, m, sync)
vm_pager_t pager;
vm_page_t m;
int
dev_pager_putpages(object, m, count, sync, rtvals)
vm_object_t object;
vm_page_t *m;
int count;
boolean_t sync;
int *rtvals;
{
#ifdef DEBUG
if (dpagerdebug & DDB_FOLLOW)
printf("dev_pager_putpage(%x, %x)\n", pager, m);
#endif
if (pager == NULL)
return 0;
panic("dev_pager_putpage called");
}
static boolean_t
dev_pager_haspage(pager, offset)
vm_pager_t pager;
boolean_t
dev_pager_haspage(object, offset, before, after)
vm_object_t object;
vm_offset_t offset;
int *before;
int *after;
{
#ifdef DEBUG
if (dpagerdebug & DDB_FOLLOW)
printf("dev_pager_haspage(%x, %x)\n", pager, offset);
#endif
if (before != NULL)
*before = 0;
if (after != NULL)
*after = 0;
return (TRUE);
}
@ -345,8 +254,8 @@ dev_pager_getfake(paddr)
TAILQ_REMOVE(&dev_pager_fakelist, m, pageq);
m->flags = PG_BUSY | PG_FICTITIOUS;
m->dirty = 0;
m->valid = VM_PAGE_BITS_ALL;
m->dirty = 0;
m->busy = 0;
m->bmapped = 0;
@ -360,9 +269,7 @@ static void
dev_pager_putfake(m)
vm_page_t m;
{
#ifdef DIAGNOSTIC
if (!(m->flags & PG_FICTITIOUS))
panic("dev_pager_putfake: bad page");
#endif
TAILQ_INSERT_TAIL(&dev_pager_fakelist, m, pageq);
}


@ -36,19 +36,17 @@
* SUCH DAMAGE.
*
* @(#)device_pager.h 8.3 (Berkeley) 12/13/93
* $Id: device_pager.h,v 1.2 1994/08/02 07:55:07 davidg Exp $
* $Id: device_pager.h,v 1.3 1995/01/09 16:05:30 davidg Exp $
*/
#ifndef _DEVICE_PAGER_
#define _DEVICE_PAGER_ 1
/*
* Device pager private data.
*/
struct devpager {
struct pglist devp_pglist; /* list of pages allocated */
vm_object_t devp_object; /* object representing this device */
};
typedef struct devpager *dev_pager_t;
void dev_pager_init __P((void));
vm_object_t dev_pager_alloc __P((void *, vm_size_t, vm_prot_t, vm_offset_t));
void dev_pager_dealloc __P((vm_object_t));
int dev_pager_getpages __P((vm_object_t, vm_page_t *, int, int));
int dev_pager_putpages __P((vm_object_t, vm_page_t *, int, boolean_t, int *));
boolean_t dev_pager_haspage __P((vm_object_t, vm_offset_t, int *, int *));
#endif /* _DEVICE_PAGER_ */


@ -61,7 +61,7 @@
* any improvements or extensions that they make and grant Carnegie the
* rights to redistribute these changes.
*
* $Id: kern_lock.c,v 1.5 1995/04/16 12:56:12 davidg Exp $
* $Id: kern_lock.c,v 1.6 1995/05/30 08:15:49 rgrimes Exp $
*/
/*
@ -76,89 +76,6 @@
#include <vm/vm.h>
typedef int *thread_t;
#define current_thread() ((thread_t)&curproc->p_thread)
/* XXX */
#if NCPUS > 1
/*
* Module: lock
* Function:
* Provide reader/writer sychronization.
* Implementation:
* Simple interlock on a bit. Readers first interlock
* increment the reader count, then let go. Writers hold
* the interlock (thus preventing further readers), and
* wait for already-accepted readers to go away.
*/
/*
* The simple-lock routines are the primitives out of which
* the lock package is built. The implementation is left
* to the machine-dependent code.
*/
#ifdef notdef
/*
* A sample implementation of simple locks.
* assumes:
* boolean_t test_and_set(boolean_t *)
* indivisibly sets the boolean to TRUE
* and returns its old value
* and that setting a boolean to FALSE is indivisible.
*/
/*
* simple_lock_init initializes a simple lock. A simple lock
* may only be used for exclusive locks.
*/
void
simple_lock_init(l)
simple_lock_t l;
{
*(boolean_t *) l = FALSE;
}
void
simple_lock(l)
simple_lock_t l;
{
while (test_and_set((boolean_t *) l))
continue;
}
void
simple_unlock(l)
simple_lock_t l;
{
*(boolean_t *) l = FALSE;
}
boolean_t
simple_lock_try(l)
simple_lock_t l;
{
return (!test_and_set((boolean_t *) l));
}
#endif /* notdef */
#endif /* NCPUS > 1 */
#if NCPUS > 1
int lock_wait_time = 100;
#else /* NCPUS > 1 */
/*
* It is silly to spin on a uni-processor as if we thought something magical
* would happen to the want_write bit while we are executing.
*/
int lock_wait_time;
#endif /* NCPUS > 1 */
/*
* Routine: lock_init
* Function:
@ -172,14 +89,8 @@ lock_init(l, can_sleep)
lock_t l;
boolean_t can_sleep;
{
bzero(l, sizeof(lock_data_t));
simple_lock_init(&l->interlock);
l->want_write = FALSE;
l->want_upgrade = FALSE;
l->read_count = 0;
bzero(l, sizeof(*l));
l->can_sleep = can_sleep;
l->thread = (char *) -1; /* XXX */
l->recursion_depth = 0;
}
void
@ -187,9 +98,7 @@ lock_sleepable(l, can_sleep)
lock_t l;
boolean_t can_sleep;
{
simple_lock(&l->interlock);
l->can_sleep = can_sleep;
simple_unlock(&l->interlock);
}
@ -203,32 +112,20 @@ void
lock_write(l)
register lock_t l;
{
register int i;
simple_lock(&l->interlock);
if (((thread_t) l->thread) == current_thread()) {
if (l->proc == curproc) {
/*
* Recursive lock.
*/
l->recursion_depth++;
simple_unlock(&l->interlock);
return;
}
/*
* Try to acquire the want_write bit.
*/
while (l->want_write) {
if ((i = lock_wait_time) > 0) {
simple_unlock(&l->interlock);
while (--i > 0 && l->want_write)
continue;
simple_lock(&l->interlock);
}
if (l->can_sleep && l->want_write) {
l->waiting = TRUE;
thread_sleep((int) l, &l->interlock, FALSE);
simple_lock(&l->interlock);
tsleep(l, PVM, "lckwt1", 0);
}
}
l->want_write = TRUE;
@ -236,28 +133,17 @@ lock_write(l)
/* Wait for readers (and upgrades) to finish */
while ((l->read_count != 0) || l->want_upgrade) {
if ((i = lock_wait_time) > 0) {
simple_unlock(&l->interlock);
while (--i > 0 && (l->read_count != 0 ||
l->want_upgrade))
continue;
simple_lock(&l->interlock);
}
if (l->can_sleep && (l->read_count != 0 || l->want_upgrade)) {
l->waiting = TRUE;
thread_sleep((int) l, &l->interlock, FALSE);
simple_lock(&l->interlock);
tsleep(l, PVM, "lckwt2", 0);
}
}
simple_unlock(&l->interlock);
}
void
lock_done(l)
register lock_t l;
{
simple_lock(&l->interlock);
if (l->read_count != 0)
l->read_count--;
else if (l->recursion_depth != 0)
@ -269,43 +155,29 @@ lock_done(l)
if (l->waiting) {
l->waiting = FALSE;
thread_wakeup((int) l);
wakeup(l);
}
simple_unlock(&l->interlock);
}
void
lock_read(l)
register lock_t l;
{
register int i;
simple_lock(&l->interlock);
if (((thread_t) l->thread) == current_thread()) {
if (l->proc == curproc) {
/*
* Recursive lock.
*/
l->read_count++;
simple_unlock(&l->interlock);
return;
}
while (l->want_write || l->want_upgrade) {
if ((i = lock_wait_time) > 0) {
simple_unlock(&l->interlock);
while (--i > 0 && (l->want_write || l->want_upgrade))
continue;
simple_lock(&l->interlock);
}
if (l->can_sleep && (l->want_write || l->want_upgrade)) {
l->waiting = TRUE;
thread_sleep((int) l, &l->interlock, FALSE);
simple_lock(&l->interlock);
tsleep(l, PVM, "lockrd", 0);
}
}
l->read_count++;
simple_unlock(&l->interlock);
}
/*
@ -324,16 +196,13 @@ lock_read_to_write(l)
{
register int i;
simple_lock(&l->interlock);
l->read_count--;
if (((thread_t) l->thread) == current_thread()) {
if (l->proc == curproc) {
/*
* Recursive lock.
*/
l->recursion_depth++;
simple_unlock(&l->interlock);
return (FALSE);
}
if (l->want_upgrade) {
@ -343,28 +212,19 @@ lock_read_to_write(l)
*/
if (l->waiting) {
l->waiting = FALSE;
thread_wakeup((int) l);
wakeup(l);
}
simple_unlock(&l->interlock);
return (TRUE);
}
l->want_upgrade = TRUE;
while (l->read_count != 0) {
if ((i = lock_wait_time) > 0) {
simple_unlock(&l->interlock);
while (--i > 0 && l->read_count != 0)
continue;
simple_lock(&l->interlock);
}
if (l->can_sleep && l->read_count != 0) {
l->waiting = TRUE;
thread_sleep((int) l, &l->interlock, FALSE);
simple_lock(&l->interlock);
tsleep(l, PVM, "lckrw", 0);
}
}
simple_unlock(&l->interlock);
return (FALSE);
}
@ -372,8 +232,6 @@ void
lock_write_to_read(l)
register lock_t l;
{
simple_lock(&l->interlock);
l->read_count++;
if (l->recursion_depth != 0)
l->recursion_depth--;
@ -384,9 +242,8 @@ lock_write_to_read(l)
if (l->waiting) {
l->waiting = FALSE;
thread_wakeup((int) l);
wakeup(l);
}
simple_unlock(&l->interlock);
}
@ -402,22 +259,17 @@ boolean_t
lock_try_write(l)
register lock_t l;
{
simple_lock(&l->interlock);
if (((thread_t) l->thread) == current_thread()) {
if (l->proc == curproc) {
/*
* Recursive lock
*/
l->recursion_depth++;
simple_unlock(&l->interlock);
return (TRUE);
}
if (l->want_write || l->want_upgrade || l->read_count) {
/*
* Can't get lock.
*/
simple_unlock(&l->interlock);
return (FALSE);
}
/*
@ -425,7 +277,6 @@ lock_try_write(l)
*/
l->want_write = TRUE;
simple_unlock(&l->interlock);
return (TRUE);
}
@ -441,22 +292,17 @@ boolean_t
lock_try_read(l)
register lock_t l;
{
simple_lock(&l->interlock);
if (((thread_t) l->thread) == current_thread()) {
if (l->proc == curproc) {
/*
* Recursive lock
*/
l->read_count++;
simple_unlock(&l->interlock);
return (TRUE);
}
if (l->want_write || l->want_upgrade) {
simple_unlock(&l->interlock);
return (FALSE);
}
l->read_count++;
simple_unlock(&l->interlock);
return (TRUE);
}
@ -474,20 +320,15 @@ boolean_t
lock_try_read_to_write(l)
register lock_t l;
{
simple_lock(&l->interlock);
if (((thread_t) l->thread) == current_thread()) {
if (l->proc == curproc) {
/*
* Recursive lock
*/
l->read_count--;
l->recursion_depth++;
simple_unlock(&l->interlock);
return (TRUE);
}
if (l->want_upgrade) {
simple_unlock(&l->interlock);
return (FALSE);
}
l->want_upgrade = TRUE;
@ -495,11 +336,9 @@ lock_try_read_to_write(l)
while (l->read_count != 0) {
l->waiting = TRUE;
thread_sleep((int) l, &l->interlock, FALSE);
simple_lock(&l->interlock);
tsleep(l, PVM, "lcktrw", 0);
}
simple_unlock(&l->interlock);
return (TRUE);
}
@ -511,12 +350,10 @@ void
lock_set_recursive(l)
lock_t l;
{
simple_lock(&l->interlock);
if (!l->want_write) {
panic("lock_set_recursive: don't have write lock");
}
l->thread = (char *) current_thread();
simple_unlock(&l->interlock);
l->proc = curproc;
}
/*
@ -526,11 +363,9 @@ void
lock_clear_recursive(l)
lock_t l;
{
simple_lock(&l->interlock);
if (((thread_t) l->thread) != current_thread()) {
panic("lock_clear_recursive: wrong thread");
if (l->proc != curproc) {
panic("lock_clear_recursive: wrong proc");
}
if (l->recursion_depth == 0)
l->thread = (char *) -1; /* XXX */
simple_unlock(&l->interlock);
l->proc = NULL;
}


@ -61,7 +61,7 @@
* any improvements or extensions that they make and grant Carnegie the
* rights to redistribute these changes.
*
* $Id: lock.h,v 1.2 1994/08/02 07:55:11 davidg Exp $
* $Id: lock.h,v 1.3 1995/01/09 16:05:31 davidg Exp $
*/
/*
@ -71,83 +71,29 @@
#ifndef _LOCK_H_
#define _LOCK_H_
#define NCPUS 1 /* XXX */
/*
* A simple spin lock.
*/
struct slock {
int lock_data; /* in general 1 bit is sufficient */
};
typedef struct slock simple_lock_data_t;
typedef struct slock *simple_lock_t;
/*
* The general lock structure. Provides for multiple readers,
* upgrading from read to write, and sleeping until the lock
* can be gained.
*/
struct lock {
#ifdef vax
/*
* Efficient VAX implementation -- see field description below.
*/
unsigned int read_count:16, want_upgrade:1, want_write:1, waiting:1, can_sleep:1,:0;
simple_lock_data_t interlock;
#else /* vax */
#ifdef ns32000
/*
* Efficient ns32000 implementation -- see field description below.
*/
simple_lock_data_t interlock;
unsigned int read_count:16, want_upgrade:1, want_write:1, waiting:1, can_sleep:1,:0;
#else /* ns32000 */
/*
* Only the "interlock" field is used for hardware exclusion; other
* fields are modified with normal instructions after acquiring the
* interlock bit.
*/
simple_lock_data_t
interlock; /* Interlock for remaining fields */
boolean_t want_write; /* Writer is waiting, or locked for write */
boolean_t want_upgrade; /* Read-to-write upgrade waiting */
boolean_t waiting; /* Someone is sleeping on lock */
boolean_t can_sleep; /* Can attempts to lock go to sleep */
int read_count; /* Number of accepted readers */
#endif /* ns32000 */
#endif /* vax */
char *thread; /* Thread that has lock, if recursive locking
* allowed */
/*
* (should be thread_t, but but we then have mutually recursive
* definitions)
*/
struct proc *proc; /* If recursive locking, process that has lock */
int recursion_depth; /* Depth of recursion */
};
typedef struct lock lock_data_t;
typedef struct lock *lock_t;
#if NCPUS > 1
__BEGIN_DECLS
void simple_lock __P((simple_lock_t));
void simple_lock_init __P((simple_lock_t));
boolean_t simple_lock_try __P((simple_lock_t));
void simple_unlock __P((simple_lock_t));
__END_DECLS
#else /* No multiprocessor locking is necessary. */
#define simple_lock(l)
#define simple_lock_init(l)
#define simple_lock_try(l) (1) /* Always succeeds. */
#define simple_unlock(l)
#endif
/* Sleep locks must work even if no multiprocessing. */
#define lock_read_done(l) lock_done(l)


@ -39,7 +39,7 @@
* from: Utah $Hdr: swap_pager.c 1.4 91/04/30$
*
* @(#)swap_pager.c 8.9 (Berkeley) 3/21/94
* $Id: swap_pager.c,v 1.40 1995/05/18 02:59:20 davidg Exp $
* $Id: swap_pager.c,v 1.41 1995/05/30 08:15:55 rgrimes Exp $
*/
/*
@ -71,9 +71,6 @@
#define NPENDINGIO 10
#endif
int swap_pager_input __P((sw_pager_t, vm_page_t *, int, int));
int swap_pager_output __P((sw_pager_t, vm_page_t *, int, int, int *));
int nswiodone;
int swap_pager_full;
extern int vm_swap_size;
@ -106,35 +103,35 @@ struct swpagerclean {
struct swpclean swap_pager_done; /* list of completed page cleans */
struct swpclean swap_pager_inuse; /* list of pending page cleans */
struct swpclean swap_pager_free; /* list of free pager clean structs */
struct pagerlst swap_pager_list; /* list of "named" anon regions */
struct pagerlst swap_pager_un_list; /* list of "unnamed" anon pagers */
struct pagerlst swap_pager_object_list; /* list of "named" anon region objects */
struct pagerlst swap_pager_un_object_list; /* list of "unnamed" anon region objects */
#define SWAP_FREE_NEEDED 0x1 /* need a swap block */
#define SWAP_FREE_NEEDED_BY_PAGEOUT 0x2
int swap_pager_needflags;
struct pagerlst *swp_qs[] = {
&swap_pager_list, &swap_pager_un_list, (struct pagerlst *) 0
&swap_pager_object_list, &swap_pager_un_object_list, (struct pagerlst *) 0
};
int swap_pager_putmulti();
/*
* pagerops for OBJT_SWAP - "swap pager".
*/
struct pagerops swappagerops = {
swap_pager_init,
swap_pager_alloc,
swap_pager_dealloc,
swap_pager_getpage,
swap_pager_getmulti,
swap_pager_putpage,
swap_pager_putmulti,
swap_pager_haspage
swap_pager_getpages,
swap_pager_putpages,
swap_pager_haspage,
swap_pager_sync
};
int npendingio = NPENDINGIO;
int require_swap_init;
void swap_pager_finish();
int dmmin, dmmax;
static inline void
swapsizecheck()
{
@ -149,10 +146,8 @@ swapsizecheck()
void
swap_pager_init()
{
dfltpagerops = &swappagerops;
TAILQ_INIT(&swap_pager_list);
TAILQ_INIT(&swap_pager_un_list);
TAILQ_INIT(&swap_pager_object_list);
TAILQ_INIT(&swap_pager_un_object_list);
/*
* Initialize clean lists
@ -161,8 +156,6 @@ swap_pager_init()
TAILQ_INIT(&swap_pager_done);
TAILQ_INIT(&swap_pager_free);
require_swap_init = 1;
/*
* Calculate the swap allocation constants.
*/
@ -172,88 +165,56 @@ swap_pager_init()
}
/*
* Allocate a pager structure and associated resources.
* Note that if we are called from the pageout daemon (handle == NULL)
* we should not wait for memory as it could resulting in deadlock.
*/
vm_pager_t
swap_pager_alloc(handle, size, prot, offset)
void *handle;
register vm_size_t size;
vm_prot_t prot;
vm_offset_t offset;
void
swap_pager_swap_init()
{
swp_clean_t spc;
struct buf *bp;
int i;
/*
* kva's are allocated here so that we dont need to keep doing
* kmem_alloc pageables at runtime
*/
for (i = 0, spc = swcleanlist; i < npendingio; i++, spc++) {
spc->spc_kva = kmem_alloc_pageable(pager_map, PAGE_SIZE * MAX_PAGEOUT_CLUSTER);
if (!spc->spc_kva) {
break;
}
spc->spc_bp = malloc(sizeof(*bp), M_TEMP, M_KERNEL);
if (!spc->spc_bp) {
kmem_free_wakeup(pager_map, spc->spc_kva, PAGE_SIZE);
break;
}
spc->spc_flags = 0;
TAILQ_INSERT_TAIL(&swap_pager_free, spc, spc_list);
}
}
int
swap_pager_swp_alloc(object, wait)
vm_object_t object;
int wait;
{
register vm_pager_t pager;
register sw_pager_t swp;
int waitok;
int i, j;
if (require_swap_init) {
swp_clean_t spc;
struct buf *bp;
if (object->pg_data != NULL)
panic("swap_pager_swp_alloc: swp already allocated");
/*
* kva's are allocated here so that we dont need to keep doing
* kmem_alloc pageables at runtime
*/
for (i = 0, spc = swcleanlist; i < npendingio; i++, spc++) {
spc->spc_kva = kmem_alloc_pageable(pager_map, PAGE_SIZE * MAX_PAGEOUT_CLUSTER);
if (!spc->spc_kva) {
break;
}
spc->spc_bp = malloc(sizeof(*bp), M_TEMP, M_KERNEL);
if (!spc->spc_bp) {
kmem_free_wakeup(pager_map, spc->spc_kva, PAGE_SIZE);
break;
}
spc->spc_flags = 0;
TAILQ_INSERT_TAIL(&swap_pager_free, spc, spc_list);
}
require_swap_init = 0;
if (size == 0)
return (NULL);
}
/*
* If this is a "named" anonymous region, look it up and return the
* appropriate pager if it exists.
*/
if (handle) {
pager = vm_pager_lookup(&swap_pager_list, handle);
if (pager != NULL) {
/*
* Use vm_object_lookup to gain a reference to the
* object and also to remove from the object cache.
*/
if (vm_object_lookup(pager) == NULL)
panic("swap_pager_alloc: bad object");
return (pager);
}
}
/*
* Pager doesn't exist, allocate swap management resources and
* initialize.
*/
waitok = handle ? M_WAITOK : M_KERNEL;
pager = (vm_pager_t) malloc(sizeof *pager, M_VMPAGER, waitok);
if (pager == NULL)
return (NULL);
swp = (sw_pager_t) malloc(sizeof *swp, M_VMPGDATA, waitok);
swp = (sw_pager_t) malloc(sizeof *swp, M_VMPGDATA, wait);
if (swp == NULL) {
free((caddr_t) pager, M_VMPAGER);
return (NULL);
return 1;
}
size = round_page(size);
swp->sw_osize = size;
swp->sw_nblocks = (btodb(size) + btodb(SWB_NPAGES * PAGE_SIZE) - 1) / btodb(SWB_NPAGES * PAGE_SIZE);
swp->sw_blocks = (sw_blk_t)
malloc(swp->sw_nblocks * sizeof(*swp->sw_blocks),
M_VMPGDATA, waitok);
swp->sw_nblocks = (btodb(object->size) + btodb(SWB_NPAGES * PAGE_SIZE) - 1) / btodb(SWB_NPAGES * PAGE_SIZE);
swp->sw_blocks = (sw_blk_t) malloc(swp->sw_nblocks * sizeof(*swp->sw_blocks), M_VMPGDATA, wait);
if (swp->sw_blocks == NULL) {
free((caddr_t) swp, M_VMPGDATA);
free((caddr_t) pager, M_VMPAGER);
return (NULL);
return 1;
}
for (i = 0; i < swp->sw_nblocks; i++) {
swp->sw_blocks[i].swb_valid = 0;
swp->sw_blocks[i].swb_locked = 0;
@ -263,30 +224,59 @@ swap_pager_alloc(handle, size, prot, offset)
swp->sw_poip = 0;
swp->sw_allocsize = 0;
if (handle) {
vm_object_t object;
swp->sw_flags = SW_NAMED;
TAILQ_INSERT_TAIL(&swap_pager_list, pager, pg_list);
/*
* Consistant with other pagers: return with object
* referenced. Can't do this with handle == NULL since it
* might be the pageout daemon calling.
*/
object = vm_object_allocate(offset + size);
object->flags &= ~OBJ_INTERNAL;
vm_object_enter(object, pager);
object->pager = pager;
object->pg_data = swp;
if (object->handle != NULL) {
TAILQ_INSERT_TAIL(&swap_pager_object_list, object, pager_object_list);
} else {
swp->sw_flags = 0;
TAILQ_INSERT_TAIL(&swap_pager_un_list, pager, pg_list);
TAILQ_INSERT_TAIL(&swap_pager_un_object_list, object, pager_object_list);
}
pager->pg_handle = handle;
pager->pg_ops = &swappagerops;
pager->pg_type = PG_SWAP;
pager->pg_data = (caddr_t) swp;
return (pager);
return 0;
}
/*
* Allocate a pager structure and associated resources.
* Note that if we are called from the pageout daemon (handle == NULL)
* we should not wait for memory as it could resulting in deadlock.
*/
vm_object_t
swap_pager_alloc(handle, size, prot, offset)
void *handle;
register vm_size_t size;
vm_prot_t prot;
vm_offset_t offset;
{
vm_object_t object;
int i;
/*
* If this is a "named" anonymous region, look it up and use the
* object if it exists, otherwise allocate a new one.
*/
if (handle) {
object = vm_pager_object_lookup(&swap_pager_object_list, handle);
if (object != NULL) {
vm_object_reference(object);
} else {
/*
* XXX - there is a race condition here. Two processes
* can request the same named object simultaneuously,
* and if one blocks for memory, the result is a disaster.
* Probably quite rare, but is yet another reason to just
* rip support of "named anonymous regions" out altogether.
*/
object = vm_object_allocate(OBJT_SWAP, offset + size);
object->handle = handle;
(void) swap_pager_swp_alloc(object, M_WAITOK);
}
} else {
object = vm_object_allocate(OBJT_SWAP, offset + size);
(void) swap_pager_swp_alloc(object, M_WAITOK);
}
return (object);
}
/*
@ -296,11 +286,12 @@ swap_pager_alloc(handle, size, prot, offset)
*/
inline static int *
swap_pager_diskaddr(swp, offset, valid)
sw_pager_t swp;
swap_pager_diskaddr(object, offset, valid)
vm_object_t object;
vm_offset_t offset;
int *valid;
{
sw_pager_t swp = object->pg_data;
register sw_blk_t swb;
int ix;
@ -308,7 +299,7 @@ swap_pager_diskaddr(swp, offset, valid)
*valid = 0;
ix = offset / (SWB_NPAGES * PAGE_SIZE);
if ((swp->sw_blocks == NULL) || (ix >= swp->sw_nblocks) ||
(offset >= swp->sw_osize)) {
(offset >= object->size)) {
return (FALSE);
}
swb = &swp->sw_blocks[ix];
@ -378,18 +369,19 @@ swap_pager_freeswapspace(sw_pager_t swp, unsigned from, unsigned to)
* this routine frees swap blocks from a specified pager
*/
void
_swap_pager_freespace(swp, start, size)
sw_pager_t swp;
swap_pager_freespace(object, start, size)
vm_object_t object;
vm_offset_t start;
vm_offset_t size;
{
sw_pager_t swp = object->pg_data;
vm_offset_t i;
int s;
s = splbio();
for (i = start; i < round_page(start + size); i += PAGE_SIZE) {
int valid;
int *addr = swap_pager_diskaddr(swp, i, &valid);
int *addr = swap_pager_diskaddr(object, i, &valid);
if (addr && *addr != SWB_EMPTY) {
swap_pager_freeswapspace(swp, *addr, *addr + btodb(PAGE_SIZE) - 1);
@ -402,15 +394,6 @@ _swap_pager_freespace(swp, start, size)
splx(s);
}
void
swap_pager_freespace(pager, start, size)
vm_pager_t pager;
vm_offset_t start;
vm_offset_t size;
{
_swap_pager_freespace((sw_pager_t) pager->pg_data, start, size);
}
static void
swap_pager_free_swap(swp)
sw_pager_t swp;
@ -477,7 +460,7 @@ swap_pager_free_swap(swp)
void
swap_pager_reclaim()
{
vm_pager_t p;
vm_object_t object;
sw_pager_t swp;
int i, j, k;
int s;
@ -493,7 +476,7 @@ swap_pager_reclaim()
*/
s = splbio();
if (in_reclaim) {
tsleep((caddr_t) &in_reclaim, PSWP, "swrclm", 0);
tsleep(&in_reclaim, PSWP, "swrclm", 0);
splx(s);
return;
}
@ -503,14 +486,14 @@ swap_pager_reclaim()
/* for each pager queue */
for (k = 0; swp_qs[k]; k++) {
p = swp_qs[k]->tqh_first;
while (p && (reclaimcount < MAXRECLAIM)) {
object = swp_qs[k]->tqh_first;
while (object && (reclaimcount < MAXRECLAIM)) {
/*
* see if any blocks associated with a pager has been
* allocated but not used (written)
*/
swp = (sw_pager_t) p->pg_data;
swp = (sw_pager_t) object->pg_data;
for (i = 0; i < swp->sw_nblocks; i++) {
sw_blk_t swb = &swp->sw_blocks[i];
@ -527,7 +510,7 @@ swap_pager_reclaim()
}
}
}
p = p->pg_list.tqe_next;
object = object->pager_object_list.tqe_next;
}
}
@ -541,7 +524,7 @@ rfinished:
}
splx(s);
in_reclaim = 0;
wakeup((caddr_t) &in_reclaim);
wakeup(&in_reclaim);
}
@ -551,10 +534,10 @@ rfinished:
*/
void
swap_pager_copy(srcpager, srcoffset, dstpager, dstoffset, offset)
vm_pager_t srcpager;
swap_pager_copy(srcobject, srcoffset, dstobject, dstoffset, offset)
vm_object_t srcobject;
vm_offset_t srcoffset;
vm_pager_t dstpager;
vm_object_t dstobject;
vm_offset_t dstoffset;
vm_offset_t offset;
{
@ -566,41 +549,37 @@ swap_pager_copy(srcpager, srcoffset, dstpager, dstoffset, offset)
if (vm_swap_size)
no_swap_space = 0;
if (no_swap_space)
return;
srcswp = (sw_pager_t) srcpager->pg_data;
srcswp = (sw_pager_t) srcobject->pg_data;
origsize = srcswp->sw_allocsize;
dstswp = (sw_pager_t) dstpager->pg_data;
dstswp = (sw_pager_t) dstobject->pg_data;
/*
* remove the source pager from the swap_pager internal queue
* remove the source object from the swap_pager internal queue
*/
s = splbio();
if (srcswp->sw_flags & SW_NAMED) {
TAILQ_REMOVE(&swap_pager_list, srcpager, pg_list);
srcswp->sw_flags &= ~SW_NAMED;
if (srcobject->handle == NULL) {
TAILQ_REMOVE(&swap_pager_un_object_list, srcobject, pager_object_list);
} else {
TAILQ_REMOVE(&swap_pager_un_list, srcpager, pg_list);
TAILQ_REMOVE(&swap_pager_object_list, srcobject, pager_object_list);
}
s = splbio();
while (srcswp->sw_poip) {
tsleep((caddr_t) srcswp, PVM, "spgout", 0);
tsleep(srcswp, PVM, "spgout", 0);
}
splx(s);
/*
* clean all of the pages that are currently active and finished
*/
(void) swap_pager_clean();
swap_pager_sync();
s = splbio();
/*
* transfer source to destination
*/
for (i = 0; i < dstswp->sw_osize; i += PAGE_SIZE) {
for (i = 0; i < dstobject->size; i += PAGE_SIZE) {
int srcvalid, dstvalid;
int *srcaddrp = swap_pager_diskaddr(srcswp, i + offset + srcoffset,
int *srcaddrp = swap_pager_diskaddr(srcobject, i + offset + srcoffset,
&srcvalid);
int *dstaddrp;
@ -614,7 +593,7 @@ swap_pager_copy(srcpager, srcoffset, dstpager, dstoffset, offset)
* dest.
*/
if (srcvalid) {
dstaddrp = swap_pager_diskaddr(dstswp, i + dstoffset,
dstaddrp = swap_pager_diskaddr(dstobject, i + dstoffset,
&dstvalid);
/*
* if the dest already has a valid block,
@ -657,43 +636,47 @@ swap_pager_copy(srcpager, srcoffset, dstpager, dstoffset, offset)
free((caddr_t) srcswp->sw_blocks, M_VMPGDATA);
srcswp->sw_blocks = 0;
free((caddr_t) srcswp, M_VMPGDATA);
srcpager->pg_data = 0;
free((caddr_t) srcpager, M_VMPAGER);
srcobject->pg_data = NULL;
return;
}
void
swap_pager_dealloc(pager)
vm_pager_t pager;
swap_pager_dealloc(object)
vm_object_t object;
{
register sw_pager_t swp;
int s;
swp = (sw_pager_t) object->pg_data;
/* "Can't" happen. */
if (swp == NULL)
panic("swap_pager_dealloc: no swp data");
/*
* Remove from list right away so lookups will fail if we block for
* pageout completion.
*/
s = splbio();
swp = (sw_pager_t) pager->pg_data;
if (swp->sw_flags & SW_NAMED) {
TAILQ_REMOVE(&swap_pager_list, pager, pg_list);
swp->sw_flags &= ~SW_NAMED;
if (object->handle == NULL) {
TAILQ_REMOVE(&swap_pager_un_object_list, object, pager_object_list);
} else {
TAILQ_REMOVE(&swap_pager_un_list, pager, pg_list);
TAILQ_REMOVE(&swap_pager_object_list, object, pager_object_list);
}
/*
* Wait for all pageouts to finish and remove all entries from
* cleaning list.
*/
s = splbio();
while (swp->sw_poip) {
tsleep((caddr_t) swp, PVM, "swpout", 0);
tsleep(swp, PVM, "swpout", 0);
}
splx(s);
(void) swap_pager_clean();
swap_pager_sync();
/*
* Free left over swap blocks
@ -708,88 +691,7 @@ swap_pager_dealloc(pager)
free((caddr_t) swp->sw_blocks, M_VMPGDATA);
swp->sw_blocks = 0;
free((caddr_t) swp, M_VMPGDATA);
pager->pg_data = 0;
free((caddr_t) pager, M_VMPAGER);
}
/*
* swap_pager_getmulti can get multiple pages.
*/
int
swap_pager_getmulti(pager, m, count, reqpage, sync)
vm_pager_t pager;
vm_page_t *m;
int count;
int reqpage;
boolean_t sync;
{
if (reqpage >= count)
panic("swap_pager_getmulti: reqpage >= count");
return swap_pager_input((sw_pager_t) pager->pg_data, m, count, reqpage);
}
/*
* swap_pager_getpage gets individual pages
*/
int
swap_pager_getpage(pager, m, sync)
vm_pager_t pager;
vm_page_t m;
boolean_t sync;
{
vm_page_t marray[1];
marray[0] = m;
return swap_pager_input((sw_pager_t) pager->pg_data, marray, 1, 0);
}
int
swap_pager_putmulti(pager, m, c, sync, rtvals)
vm_pager_t pager;
vm_page_t *m;
int c;
boolean_t sync;
int *rtvals;
{
int flags;
if (pager == NULL) {
(void) swap_pager_clean();
return VM_PAGER_OK;
}
flags = B_WRITE;
if (!sync)
flags |= B_ASYNC;
return swap_pager_output((sw_pager_t) pager->pg_data, m, c, flags, rtvals);
}
/*
* swap_pager_putpage writes individual pages
*/
int
swap_pager_putpage(pager, m, sync)
vm_pager_t pager;
vm_page_t m;
boolean_t sync;
{
int flags;
vm_page_t marray[1];
int rtvals[1];
if (pager == NULL) {
(void) swap_pager_clean();
return VM_PAGER_OK;
}
marray[0] = m;
flags = B_WRITE;
if (!sync)
flags |= B_ASYNC;
swap_pager_output((sw_pager_t) pager->pg_data, marray, 1, flags, rtvals);
return rtvals[0];
object->pg_data = 0;
}
static inline int
@ -811,17 +713,24 @@ swap_pager_block_offset(swp, offset)
}
/*
* _swap_pager_haspage returns TRUE if the pager has data that has
* swap_pager_haspage returns TRUE if the pager has data that has
* been written out.
*/
static boolean_t
_swap_pager_haspage(swp, offset)
sw_pager_t swp;
boolean_t
swap_pager_haspage(object, offset, before, after)
vm_object_t object;
vm_offset_t offset;
int *before;
int *after;
{
sw_pager_t swp = object->pg_data;
register sw_blk_t swb;
int ix;
if (before != NULL)
*before = 0;
if (after != NULL)
*after = 0;
ix = offset / (SWB_NPAGES * PAGE_SIZE);
if (swp->sw_blocks == NULL || ix >= swp->sw_nblocks) {
return (FALSE);
@ -835,19 +744,6 @@ _swap_pager_haspage(swp, offset)
return (FALSE);
}
/*
* swap_pager_haspage is the externally accessible version of
* _swap_pager_haspage above. this routine takes a vm_pager_t
* for an argument instead of sw_pager_t.
*/
boolean_t
swap_pager_haspage(pager, offset)
vm_pager_t pager;
vm_offset_t offset;
{
return _swap_pager_haspage((sw_pager_t) pager->pg_data, offset);
}
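
The external haspage entry point now takes the object directly plus two optional out-parameters; callers that do not need them pass NULL (vm_fault.c below does exactly that). A hypothetical caller, sketched here for illustration only -- 'object' and 'offset' stand in for whatever the caller has on hand:

	int before, after;

	/*
	 * Ask the swap pager whether this offset has been written to swap.
	 * The before/after out-parameters are zeroed by this pager; pass
	 * NULL for either if the yes/no answer alone is enough.
	 */
	if (swap_pager_haspage(object, offset, &before, &after)) {
		/* the page can be read back from swap */
	}
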
/*
* swap_pager_freepage is a convenience routine that clears the busy
* bit and deallocates a page.
@ -887,16 +783,17 @@ swap_pager_iodone1(bp)
{
bp->b_flags |= B_DONE;
bp->b_flags &= ~B_ASYNC;
wakeup((caddr_t) bp);
wakeup(bp);
}
int
swap_pager_input(swp, m, count, reqpage)
register sw_pager_t swp;
swap_pager_getpages(object, m, count, reqpage)
vm_object_t object;
vm_page_t *m;
int count, reqpage;
{
register sw_pager_t swp = object->pg_data;
register struct buf *bp;
sw_blk_t swb[count];
register int s;
@ -905,7 +802,6 @@ swap_pager_input(swp, m, count, reqpage)
vm_offset_t kva, off[count];
swp_clean_t spc;
vm_offset_t paging_offset;
vm_object_t object;
int reqaddr[count];
int sequential;
@ -1029,17 +925,17 @@ swap_pager_input(swp, m, count, reqpage)
if (swap_pager_free.tqh_first == NULL) {
s = splbio();
if (curproc == pageproc)
(void) swap_pager_clean();
swap_pager_sync();
else
pagedaemon_wakeup();
while (swap_pager_free.tqh_first == NULL) {
swap_pager_needflags |= SWAP_FREE_NEEDED;
if (curproc == pageproc)
swap_pager_needflags |= SWAP_FREE_NEEDED_BY_PAGEOUT;
tsleep((caddr_t) &swap_pager_free,
tsleep(&swap_pager_free,
PVM, "swpfre", 0);
if (curproc == pageproc)
(void) swap_pager_clean();
swap_pager_sync();
else
pagedaemon_wakeup();
}
@ -1091,7 +987,7 @@ swap_pager_input(swp, m, count, reqpage)
*/
s = splbio();
while ((bp->b_flags & B_DONE) == 0) {
tsleep((caddr_t) bp, PVM, "swread", 0);
tsleep(bp, PVM, "swread", 0);
}
if (bp->b_flags & B_ERROR) {
@ -1104,7 +1000,7 @@ swap_pager_input(swp, m, count, reqpage)
--swp->sw_piip;
if (swp->sw_piip == 0)
wakeup((caddr_t) swp);
wakeup(swp);
/*
@ -1124,7 +1020,7 @@ swap_pager_input(swp, m, count, reqpage)
if (spc) {
m[reqpage]->object->last_read = m[reqpage]->offset;
if (bp->b_flags & B_WANTED)
wakeup((caddr_t) bp);
wakeup(bp);
/*
* if we have used an spc, we need to free it.
*/
@ -1134,7 +1030,7 @@ swap_pager_input(swp, m, count, reqpage)
crfree(bp->b_wcred);
TAILQ_INSERT_TAIL(&swap_pager_free, spc, spc_list);
if (swap_pager_needflags & SWAP_FREE_NEEDED) {
wakeup((caddr_t) &swap_pager_free);
wakeup(&swap_pager_free);
}
if( swap_pager_needflags & SWAP_FREE_NEEDED_BY_PAGEOUT)
pagedaemon_wakeup();
@ -1185,7 +1081,7 @@ swap_pager_input(swp, m, count, reqpage)
for (i = 0; i < count; i++) {
m[i]->dirty = VM_PAGE_BITS_ALL;
}
_swap_pager_freespace(swp, m[0]->offset + paging_offset, count * PAGE_SIZE);
swap_pager_freespace(object, m[0]->offset + paging_offset, count * PAGE_SIZE);
}
} else {
swap_pager_ridpages(m, count, reqpage);
@ -1195,13 +1091,14 @@ swap_pager_input(swp, m, count, reqpage)
}
int
swap_pager_output(swp, m, count, flags, rtvals)
register sw_pager_t swp;
swap_pager_putpages(object, m, count, sync, rtvals)
vm_object_t object;
vm_page_t *m;
int count;
int flags;
boolean_t sync;
int *rtvals;
{
register sw_pager_t swp = object->pg_data;
register struct buf *bp;
sw_blk_t swb[count];
register int s;
@ -1210,7 +1107,6 @@ swap_pager_output(swp, m, count, flags, rtvals)
vm_offset_t kva, off, foff;
swp_clean_t spc;
vm_offset_t paging_offset;
vm_object_t object;
int reqaddr[count];
int failed;
@ -1341,8 +1237,8 @@ swap_pager_output(swp, m, count, flags, rtvals)
/*
* For synchronous writes, we clean up all completed async pageouts.
*/
if ((flags & B_ASYNC) == 0) {
swap_pager_clean();
if (sync == TRUE) {
swap_pager_sync();
}
kva = 0;
@ -1354,7 +1250,7 @@ swap_pager_output(swp, m, count, flags, rtvals)
swap_pager_free.tqh_first->spc_list.tqe_next->spc_list.tqe_next == NULL) {
s = splbio();
if (curproc == pageproc) {
(void) swap_pager_clean();
swap_pager_sync();
#if 0
splx(s);
return VM_PAGER_AGAIN;
@ -1367,14 +1263,13 @@ swap_pager_output(swp, m, count, flags, rtvals)
if (curproc == pageproc) {
swap_pager_needflags |= SWAP_FREE_NEEDED_BY_PAGEOUT;
if((cnt.v_free_count + cnt.v_cache_count) > cnt.v_free_reserved)
wakeup((caddr_t) &cnt.v_free_count);
wakeup(&cnt.v_free_count);
}
swap_pager_needflags |= SWAP_FREE_NEEDED;
tsleep((caddr_t) &swap_pager_free,
PVM, "swpfre", 0);
tsleep(&swap_pager_free, PVM, "swpfre", 0);
if (curproc == pageproc)
(void) swap_pager_clean();
swap_pager_sync();
else
pagedaemon_wakeup();
}
@ -1434,7 +1329,7 @@ swap_pager_output(swp, m, count, flags, rtvals)
* place a "cleaning" entry on the inuse queue.
*/
s = splbio();
if (flags & B_ASYNC) {
if (sync == FALSE) {
spc->spc_flags = 0;
spc->spc_swp = swp;
for (i = 0; i < count; i++)
@ -1461,9 +1356,9 @@ swap_pager_output(swp, m, count, flags, rtvals)
* perform the I/O
*/
VOP_STRATEGY(bp);
if ((flags & (B_READ | B_ASYNC)) == B_ASYNC) {
if (sync == FALSE) {
if ((bp->b_flags & B_DONE) == B_DONE) {
swap_pager_clean();
swap_pager_sync();
}
splx(s);
for (i = 0; i < count; i++) {
@ -1475,7 +1370,7 @@ swap_pager_output(swp, m, count, flags, rtvals)
* wait for the sync I/O to complete
*/
while ((bp->b_flags & B_DONE) == 0) {
tsleep((caddr_t) bp, PVM, "swwrt", 0);
tsleep(bp, PVM, "swwrt", 0);
}
if (bp->b_flags & B_ERROR) {
printf("swap_pager: I/O error - pageout failed; blkno %d, size %d, error %d\n",
@ -1487,12 +1382,12 @@ swap_pager_output(swp, m, count, flags, rtvals)
--swp->sw_poip;
if (swp->sw_poip == 0)
wakeup((caddr_t) swp);
wakeup(swp);
if (bp->b_vp)
pbrelvp(bp);
if (bp->b_flags & B_WANTED)
wakeup((caddr_t) bp);
wakeup(bp);
splx(s);
@ -1532,7 +1427,7 @@ swap_pager_output(swp, m, count, flags, rtvals)
crfree(bp->b_wcred);
TAILQ_INSERT_TAIL(&swap_pager_free, spc, spc_list);
if (swap_pager_needflags & SWAP_FREE_NEEDED) {
wakeup((caddr_t) &swap_pager_free);
wakeup(&swap_pager_free);
}
if( swap_pager_needflags & SWAP_FREE_NEEDED_BY_PAGEOUT)
pagedaemon_wakeup();
@ -1540,15 +1435,15 @@ swap_pager_output(swp, m, count, flags, rtvals)
return (rv);
}
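
swap_pager_putpages() now identifies the pager by the vm_object and takes a plain sync boolean in place of the old B_WRITE/B_ASYNC buf flags. A minimal caller sketch of the new convention (the page 'm' and the error handling are assumptions for illustration, not part of this change):

	vm_page_t marray[1];
	int rtvals[1];

	marray[0] = m;
	/* TRUE requests a synchronous pageout through the page's object. */
	swap_pager_putpages(m->object, marray, 1, TRUE, rtvals);
	if (rtvals[0] != VM_PAGER_OK)
		printf("swap_pager: pageout failed\n");
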
boolean_t
swap_pager_clean()
void
swap_pager_sync()
{
register swp_clean_t spc, tspc;
register int s;
tspc = NULL;
if (swap_pager_done.tqh_first == NULL)
return FALSE;
return;
for (;;) {
s = splbio();
/*
@ -1580,7 +1475,7 @@ doclean:
spc->spc_flags = 0;
TAILQ_INSERT_TAIL(&swap_pager_free, spc, spc_list);
if (swap_pager_needflags & SWAP_FREE_NEEDED) {
wakeup((caddr_t) &swap_pager_free);
wakeup(&swap_pager_free);
}
if( swap_pager_needflags & SWAP_FREE_NEEDED_BY_PAGEOUT)
pagedaemon_wakeup();
@ -1588,7 +1483,7 @@ doclean:
splx(s);
}
return (tspc ? TRUE : FALSE);
return;
}
void
@ -1602,7 +1497,7 @@ swap_pager_finish(spc)
if ((object->paging_in_progress == 0) &&
(object->flags & OBJ_PIPWNT)) {
object->flags &= ~OBJ_PIPWNT;
thread_wakeup((int) object);
wakeup(object);
}
/*
@ -1662,7 +1557,7 @@ swap_pager_iodone(bp)
pbrelvp(bp);
if (bp->b_flags & B_WANTED)
wakeup((caddr_t) bp);
wakeup(bp);
if (bp->b_rcred != NOCRED)
crfree(bp->b_rcred);
@ -1671,12 +1566,12 @@ swap_pager_iodone(bp)
nswiodone += spc->spc_count;
if (--spc->spc_swp->sw_poip == 0) {
wakeup((caddr_t) spc->spc_swp);
wakeup(spc->spc_swp);
}
if ((swap_pager_needflags & SWAP_FREE_NEEDED) ||
swap_pager_inuse.tqh_first == 0) {
swap_pager_needflags &= ~SWAP_FREE_NEEDED;
wakeup((caddr_t) &swap_pager_free);
wakeup(&swap_pager_free);
}
if( swap_pager_needflags & SWAP_FREE_NEEDED_BY_PAGEOUT) {
@ -1685,7 +1580,7 @@ swap_pager_iodone(bp)
}
if (vm_pageout_pages_needed) {
wakeup((caddr_t) &vm_pageout_pages_needed);
wakeup(&vm_pageout_pages_needed);
vm_pageout_pages_needed = 0;
}
if ((swap_pager_inuse.tqh_first == NULL) ||

Index: swap_pager.h

@ -36,7 +36,7 @@
* SUCH DAMAGE.
*
* from: @(#)swap_pager.h 7.1 (Berkeley) 12/5/90
* $Id: swap_pager.h,v 1.5 1995/02/02 09:08:09 davidg Exp $
* $Id: swap_pager.h,v 1.6 1995/05/10 18:56:04 davidg Exp $
*/
/*
@ -67,36 +67,27 @@ typedef struct swblock *sw_blk_t;
* Swap pager private data.
*/
struct swpager {
vm_size_t sw_osize; /* size of object we are backing (bytes) */
int sw_nblocks; /* number of blocks in list (sw_blk_t units) */
int sw_allocsize; /* amount of space actually allocated */
sw_blk_t sw_blocks; /* pointer to list of swap blocks */
short sw_flags; /* flags */
short sw_poip; /* pageouts in progress */
short sw_piip; /* pageins in progress */
};
typedef struct swpager *sw_pager_t;
#define SW_WANTED 0x01
#define SW_NAMED 0x02
#ifdef KERNEL
void swap_pager_init(void);
vm_pager_t swap_pager_alloc(void *, vm_size_t, vm_prot_t, vm_offset_t);
void swap_pager_dealloc(vm_pager_t);
boolean_t swap_pager_getpage(vm_pager_t, vm_page_t, boolean_t);
boolean_t swap_pager_putpage(vm_pager_t, vm_page_t, boolean_t);
boolean_t swap_pager_getmulti(vm_pager_t, vm_page_t *, int, int, boolean_t);
boolean_t swap_pager_haspage(vm_pager_t, vm_offset_t);
int swap_pager_io(sw_pager_t, vm_page_t *, int, int, int);
void swap_pager_iodone(struct buf *);
boolean_t swap_pager_clean();
void swap_pager_copy __P((vm_pager_t, vm_offset_t, vm_pager_t, vm_offset_t, vm_offset_t));
void swap_pager_freespace __P((vm_pager_t, vm_offset_t, vm_offset_t));
extern struct pagerops swappagerops;
void swap_pager_init __P((void));
vm_object_t swap_pager_alloc __P((void *, vm_size_t, vm_prot_t, vm_offset_t));
void swap_pager_dealloc __P((vm_object_t));
int swap_pager_getpages __P((vm_object_t, vm_page_t *, int, int));
int swap_pager_putpages __P((vm_object_t, vm_page_t *, int, boolean_t, int *));
boolean_t swap_pager_haspage __P((vm_object_t, vm_offset_t, int *, int *));
void swap_pager_sync __P((void));
void swap_pager_iodone __P((struct buf *));
int swap_pager_swp_alloc __P((vm_object_t, int));
void swap_pager_copy __P((vm_object_t, vm_offset_t, vm_object_t, vm_offset_t, vm_offset_t));
void swap_pager_freespace __P((vm_object_t, vm_offset_t, vm_offset_t));
void swap_pager_swap_init __P((void));
#endif
#endif /* _SWAP_PAGER_ */
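
With every entry point keyed off the vm_object (and the swappagerops table exported above), the generic vm_pager layer can dispatch on object->type instead of chasing a vm_pager_t. A rough sketch of what such a dispatch could look like; the pagertab layout, the pgo_getpages member name, and the non-swap pagerops names are illustrative assumptions, not taken from this change:

	/* one pagerops per object type, indexed by the object's type */
	static struct pagerops *pagertab[] = {
		&defaultpagerops,	/* OBJT_DEFAULT */
		&swappagerops,		/* OBJT_SWAP */
		&vnodepagerops,		/* OBJT_VNODE */
		&devicepagerops,	/* OBJT_DEVICE */
	};

	int
	vm_pager_get_pages(object, m, count, reqpage)
		vm_object_t object;
		vm_page_t *m;
		int count, reqpage;
	{
		/* the object's type selects the pager; no pg_data indirection */
		return ((*pagertab[object->type]->pgo_getpages) (object, m, count, reqpage));
	}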

Index: vm.h

@ -31,7 +31,7 @@
* SUCH DAMAGE.
*
* @(#)vm.h 8.2 (Berkeley) 12/13/93
* $Id: vm.h,v 1.3 1994/08/02 07:55:16 davidg Exp $
* $Id: vm.h,v 1.4 1995/01/09 16:05:37 davidg Exp $
*/
#ifndef VM_H
@ -54,9 +54,6 @@ typedef struct vm_object *vm_object_t;
struct vm_page;
typedef struct vm_page *vm_page_t;
struct pager_struct;
typedef struct pager_struct *vm_pager_t;
#include <sys/vmmeter.h>
#include <sys/queue.h>
#include <machine/cpufunc.h>

Index: vm_extern.h

@ -31,14 +31,13 @@
* SUCH DAMAGE.
*
* @(#)vm_extern.h 8.2 (Berkeley) 1/12/94
* $Id: vm_extern.h,v 1.15 1995/05/14 03:00:09 davidg Exp $
* $Id: vm_extern.h,v 1.16 1995/07/10 08:53:16 davidg Exp $
*/
#ifndef _VM_EXTERN_H_
#define _VM_EXTERN_H_
struct buf;
struct loadavg;
struct proc;
struct vmspace;
struct vmtotal;
@ -50,17 +49,6 @@ void chgkprot __P((caddr_t, int, int));
#endif
/*
* Try to get semi-meaningful wait messages into thread_sleep...
*/
void thread_sleep_ __P((int, simple_lock_t, char *));
#if __GNUC__ >= 2
#define thread_sleep(a,b,c) thread_sleep_((a), (b), __FUNCTION__)
#else
#define thread_sleep(a,b,c) thread_sleep_((a), (b), "vmslp")
#endif
#ifdef KERNEL
extern int indent;
@ -79,13 +67,10 @@ int swapon __P((struct proc *, void *, int *));
#endif
void assert_wait __P((int, boolean_t));
void faultin __P((struct proc *p));
int grow __P((struct proc *, u_int));
void iprintf __P((const char *,...));
int kernacc __P((caddr_t, int, int));
int kinfo_loadavg __P((int, char *, int *, int, int *));
int kinfo_meter __P((int, caddr_t, int *, int, int *));
vm_offset_t kmem_alloc __P((vm_map_t, vm_size_t));
vm_offset_t kmem_alloc_pageable __P((vm_map_t, vm_size_t));
vm_offset_t kmem_alloc_wait __P((vm_map_t, vm_size_t));
@ -94,17 +79,12 @@ void kmem_free_wakeup __P((vm_map_t, vm_offset_t, vm_size_t));
void kmem_init __P((vm_offset_t, vm_offset_t));
vm_offset_t kmem_malloc __P((vm_map_t, vm_size_t, boolean_t));
vm_map_t kmem_suballoc __P((vm_map_t, vm_offset_t *, vm_offset_t *, vm_size_t, boolean_t));
void loadav __P((struct loadavg *));
void munmapfd __P((struct proc *, int));
int pager_cache __P((vm_object_t, boolean_t));
void sched __P((void));
int swaponvp __P((struct proc *, struct vnode *, dev_t , u_long ));
void swapout __P((struct proc *));
void swapout_procs __P((void));
void swstrategy __P((struct buf *));
void thread_block __P((char *));
void thread_sleep __P((int, simple_lock_t, boolean_t));
void thread_wakeup __P((int));
int useracc __P((caddr_t, int, int));
int vm_fault __P((vm_map_t, vm_offset_t, vm_prot_t, boolean_t));
void vm_fault_copy_entry __P((vm_map_t, vm_map_t, vm_map_entry_t, vm_map_entry_t));
@ -121,10 +101,9 @@ struct vmspace *vmspace_alloc __P((vm_offset_t, vm_offset_t, int));
struct vmspace *vmspace_fork __P((struct vmspace *));
void vmspace_free __P((struct vmspace *));
void vmtotal __P((struct vmtotal *));
vm_pager_t vnode_pager_alloc __P((void *, vm_offset_t, vm_prot_t, vm_offset_t));
void vnode_pager_setsize __P((struct vnode *, u_long));
void vnode_pager_umount __P((struct mount *));
boolean_t vnode_pager_uncache __P((struct vnode *));
void vnode_pager_uncache __P((struct vnode *));
void vslock __P((caddr_t, u_int));
void vsunlock __P((caddr_t, u_int, int));
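
The thread_block/thread_sleep/thread_wakeup prototypes disappear from this header because the fake-thread wrappers themselves are removed (see vm_glue.c below); sleepers and wakers now use the address directly. The conversion pattern, mirroring the kmem_alloc_wait/kmem_free_wakeup changes later in this diff, is roughly:

	/* old: pretend-thread primitives layered over tsleep/wakeup */
	assert_wait((int) map, TRUE);
	vm_map_unlock(map);
	thread_block("kmaw");

	/* new: sleep on the map address itself */
	vm_map_unlock(map);
	tsleep(map, PVM, "kmaw", 0);

	/* and the waker side becomes a plain wakeup() on the same address */
	wakeup(map);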

Index: vm_fault.c

@ -66,7 +66,7 @@
* any improvements or extensions that they make and grant Carnegie the
* rights to redistribute these changes.
*
* $Id: vm_fault.c,v 1.24 1995/05/18 02:59:22 davidg Exp $
* $Id: vm_fault.c,v 1.25 1995/05/30 08:15:59 rgrimes Exp $
*/
/*
@ -76,6 +76,7 @@
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/proc.h>
#include <sys/vnode.h>
#include <sys/resource.h>
#include <sys/signalvar.h>
#include <sys/resourcevar.h>
@ -84,19 +85,16 @@
#include <vm/vm_page.h>
#include <vm/vm_pageout.h>
#include <vm/vm_kern.h>
#include <vm/vm_pager.h>
#include <vm/vnode_pager.h>
int vm_fault_additional_pages __P((vm_object_t, vm_offset_t, vm_page_t, int, int, vm_page_t *, int *));
#define VM_FAULT_READ_AHEAD 4
#define VM_FAULT_READ_AHEAD_MIN 1
#define VM_FAULT_READ_BEHIND 3
#define VM_FAULT_READ (VM_FAULT_READ_AHEAD+VM_FAULT_READ_BEHIND+1)
extern int swap_pager_full;
struct vnode *vnode_pager_lock __P((vm_object_t object));
void vnode_pager_unlock __P((struct vnode *));
/*
* vm_fault:
*
@ -148,16 +146,12 @@ vm_fault(map, vaddr, fault_type, change_wiring)
*/
#define FREE_PAGE(m) { \
PAGE_WAKEUP(m); \
vm_page_lock_queues(); \
vm_page_free(m); \
vm_page_unlock_queues(); \
}
#define RELEASE_PAGE(m) { \
PAGE_WAKEUP(m); \
vm_page_lock_queues(); \
if ((m->flags & PG_ACTIVE) == 0) vm_page_activate(m); \
vm_page_unlock_queues(); \
}
#define UNLOCK_MAP { \
@ -169,15 +163,12 @@ vm_fault(map, vaddr, fault_type, change_wiring)
#define UNLOCK_THINGS { \
vm_object_pip_wakeup(object); \
vm_object_unlock(object); \
if (object != first_object) { \
vm_object_lock(first_object); \
FREE_PAGE(first_m); \
vm_object_pip_wakeup(first_object); \
vm_object_unlock(first_object); \
} \
UNLOCK_MAP; \
if (vp != NULL) vnode_pager_unlock(vp); \
if (vp != NULL) VOP_UNLOCK(vp); \
}
#define UNLOCK_AND_DEALLOCATE { \
@ -198,7 +189,7 @@ RetryFault:;
return (result);
}
vp = (struct vnode *) vnode_pager_lock(first_object);
vp = vnode_pager_lock(first_object);
lookup_still_valid = TRUE;
@ -214,8 +205,6 @@ RetryFault:;
* they will stay around as well.
*/
vm_object_lock(first_object);
first_object->ref_count++;
first_object->paging_in_progress++;
@ -223,7 +212,7 @@ RetryFault:;
* INVARIANTS (through entire routine):
*
* 1) At all times, we must either have the object lock or a busy
* page in some object to prevent some other thread from trying to
* page in some object to prevent some other process from trying to
* bring in the same page.
*
* Note that we cannot hold any locks during the pager access or when
@ -237,7 +226,7 @@ RetryFault:;
* 2) Once we have a busy page, we must remove it from the pageout
* queues, so that the pageout daemon will not grab it away.
*
* 3) To prevent another thread from racing us down the shadow chain
* 3) To prevent another process from racing us down the shadow chain
* and entering a new page in the top object before we do, we must
* keep a busy page in the top object while following the shadow
* chain.
@ -273,7 +262,7 @@ RetryFault:;
if ((m->flags & PG_BUSY) || m->busy) {
m->flags |= PG_WANTED | PG_REFERENCED;
cnt.v_intrans++;
tsleep((caddr_t) m, PSWP, "vmpfw", 0);
tsleep(m, PSWP, "vmpfw", 0);
}
splx(s);
vm_object_deallocate(first_object);
@ -288,7 +277,7 @@ RetryFault:;
}
/*
* Mark page busy for other threads, and the pagedaemon.
* Mark page busy for other processes, and the pagedaemon.
*/
m->flags |= PG_BUSY;
if (m->valid && ((m->valid & VM_PAGE_BITS_ALL) != VM_PAGE_BITS_ALL) &&
@ -297,16 +286,18 @@ RetryFault:;
}
break;
}
if (((object->pager != NULL) && (!change_wiring || wired))
if (((object->type != OBJT_DEFAULT) && (!change_wiring || wired))
|| (object == first_object)) {
if (offset >= object->size) {
UNLOCK_AND_DEALLOCATE;
return (KERN_PROTECTION_FAILURE);
}
if (swap_pager_full && !object->shadow && (!object->pager ||
(object->pager && object->pager->pg_type == PG_SWAP &&
!vm_pager_has_page(object->pager, offset + object->paging_offset)))) {
#if 0 /* XXX is this really necessary? */
if (swap_pager_full && !object->backing_object &&
(object->type == OBJT_DEFAULT ||
(object->type == OBJT_SWAP &&
!vm_pager_has_page(object, offset + object->paging_offset, NULL, NULL)))) {
if (vaddr < VM_MAXUSER_ADDRESS && curproc && curproc->p_pid >= 48) { /* XXX */
printf("Process %lu killed by vm_fault -- out of swap\n", (u_long) curproc->p_pid);
psignal(curproc, SIGKILL);
@ -315,6 +306,7 @@ RetryFault:;
resetpriority(curproc);
}
}
#endif
/*
* Allocate a new page for this object/offset pair.
*/
@ -328,16 +320,11 @@ RetryFault:;
}
}
readrest:
if (object->pager != NULL && (!change_wiring || wired)) {
if (object->type != OBJT_DEFAULT && (!change_wiring || wired)) {
int rv;
int faultcount;
int reqpage;
/*
* Now that we have a busy page, we can release the
* object lock.
*/
vm_object_unlock(object);
/*
* now we find out if any other pages should be paged
* in at this time this routine checks to see if the
@ -362,14 +349,13 @@ readrest:
UNLOCK_MAP;
rv = faultcount ?
vm_pager_get_pages(object->pager,
marray, faultcount, reqpage, TRUE) : VM_PAGER_FAIL;
vm_pager_get_pages(object, marray, faultcount,
reqpage) : VM_PAGER_FAIL;
if (rv == VM_PAGER_OK) {
/*
* Found the page. Leave it busy while we play
* with it.
*/
vm_object_lock(object);
/*
* Relookup in case pager changed page. Pager
@ -392,11 +378,11 @@ readrest:
* object/offset); before doing so, we must get back
* our object lock to preserve our invariant.
*
* Also wake up any other thread that may want to bring
* Also wake up any other process that may want to bring
* in this page.
*
* If this is the top-level object, we must leave the
* busy page to prevent another thread from rushing
* busy page to prevent another process from rushing
* past us, and inserting the page in that object at
* the same time that we are.
*/
@ -404,7 +390,6 @@ readrest:
if (rv == VM_PAGER_ERROR)
printf("vm_fault: pager input (probably hardware) error, PID %d failure\n",
curproc->p_pid);
vm_object_lock(object);
/*
* Data outside the range of the pager or an I/O error
*/
@ -427,7 +412,7 @@ readrest:
}
}
/*
* We get here if the object has no pager (or unwiring) or the
* We get here if the object has default pager (or unwiring) or the
* pager doesn't have the page.
*/
if (object == first_object)
@ -438,8 +423,8 @@ readrest:
* unlocking the current one.
*/
offset += object->shadow_offset;
next_object = object->shadow;
offset += object->backing_object_offset;
next_object = object->backing_object;
if (next_object == NULL) {
/*
* If there's no object left, fill the page in the top
@ -447,12 +432,10 @@ readrest:
*/
if (object != first_object) {
vm_object_pip_wakeup(object);
vm_object_unlock(object);
object = first_object;
offset = first_offset;
m = first_m;
vm_object_lock(object);
}
first_m = NULL;
@ -461,11 +444,9 @@ readrest:
cnt.v_zfod++;
break;
} else {
vm_object_lock(next_object);
if (object != first_object) {
vm_object_pip_wakeup(object);
}
vm_object_unlock(object);
object = next_object;
object->paging_in_progress++;
}
@ -529,19 +510,15 @@ readrest:
* call.
*/
vm_page_lock_queues();
if ((m->flags & PG_ACTIVE) == 0)
vm_page_activate(m);
vm_page_protect(m, VM_PROT_NONE);
vm_page_unlock_queues();
/*
* We no longer need the old page or object.
*/
PAGE_WAKEUP(m);
vm_object_pip_wakeup(object);
vm_object_unlock(object);
/*
* Only use the new page below...
@ -555,9 +532,7 @@ readrest:
/*
* Now that we've gotten the copy out of the way,
* let's try to collapse the top object.
*/
vm_object_lock(object);
/*
*
* But we have to play ugly games with
* paging_in_progress to do that...
*/
@ -570,176 +545,6 @@ readrest:
}
}
/*
* If the page is being written, but hasn't been copied to the
* copy-object, we have to copy it there.
*/
RetryCopy:
if (first_object->copy != NULL) {
vm_object_t copy_object = first_object->copy;
vm_offset_t copy_offset;
vm_page_t copy_m;
/*
* We only need to copy if we want to write it.
*/
if ((fault_type & VM_PROT_WRITE) == 0) {
prot &= ~VM_PROT_WRITE;
m->flags |= PG_COPYONWRITE;
} else {
/*
* Try to get the lock on the copy_object.
*/
if (!vm_object_lock_try(copy_object)) {
vm_object_unlock(object);
/* should spin a bit here... */
vm_object_lock(object);
goto RetryCopy;
}
/*
* Make another reference to the copy-object, to keep
* it from disappearing during the copy.
*/
copy_object->ref_count++;
/*
* Does the page exist in the copy?
*/
copy_offset = first_offset
- copy_object->shadow_offset;
copy_m = vm_page_lookup(copy_object, copy_offset);
page_exists = (copy_m != NULL);
if (page_exists) {
if ((copy_m->flags & PG_BUSY) || copy_m->busy) {
/*
* If the page is being brought in,
* wait for it and then retry.
*/
RELEASE_PAGE(m);
copy_object->ref_count--;
vm_object_unlock(copy_object);
UNLOCK_THINGS;
spl = splhigh();
if ((copy_m->flags & PG_BUSY) || copy_m->busy) {
copy_m->flags |= PG_WANTED | PG_REFERENCED;
tsleep((caddr_t) copy_m, PSWP, "vmpfwc", 0);
}
splx(spl);
vm_object_deallocate(first_object);
goto RetryFault;
}
}
/*
* If the page is not in memory (in the object) and
* the object has a pager, we have to check if the
* pager has the data in secondary storage.
*/
if (!page_exists) {
/*
* If we don't allocate a (blank) page here...
* another thread could try to page it in,
* allocate a page, and then block on the busy
* page in its shadow (first_object). Then
* we'd trip over the busy page after we found
* that the copy_object's pager doesn't have
* the page...
*/
copy_m = vm_page_alloc(copy_object, copy_offset, VM_ALLOC_NORMAL);
if (copy_m == NULL) {
/*
* Wait for a page, then retry.
*/
RELEASE_PAGE(m);
copy_object->ref_count--;
vm_object_unlock(copy_object);
UNLOCK_AND_DEALLOCATE;
VM_WAIT;
goto RetryFault;
}
if (copy_object->pager != NULL) {
vm_object_unlock(object);
vm_object_unlock(copy_object);
UNLOCK_MAP;
page_exists = vm_pager_has_page(
copy_object->pager,
(copy_offset + copy_object->paging_offset));
vm_object_lock(copy_object);
/*
* Since the map is unlocked, someone
* else could have copied this object
* and put a different copy_object
* between the two. Or, the last
* reference to the copy-object (other
* than the one we have) may have
* disappeared - if that has happened,
* we don't need to make the copy.
*/
if (copy_object->shadow != object ||
copy_object->ref_count == 1) {
/*
* Gaah... start over!
*/
FREE_PAGE(copy_m);
vm_object_unlock(copy_object);
vm_object_deallocate(copy_object);
/* may block */
vm_object_lock(object);
goto RetryCopy;
}
vm_object_lock(object);
if (page_exists) {
/*
* We didn't need the page
*/
FREE_PAGE(copy_m);
}
}
}
if (!page_exists) {
/*
* Must copy page into copy-object.
*/
vm_page_copy(m, copy_m);
copy_m->valid = VM_PAGE_BITS_ALL;
/*
* Things to remember: 1. The copied page must
* be marked 'dirty' so it will be paged out
* to the copy object. 2. If the old page was
* in use by any users of the copy-object, it
* must be removed from all pmaps. (We can't
* know which pmaps use it.)
*/
vm_page_lock_queues();
if ((old_m->flags & PG_ACTIVE) == 0)
vm_page_activate(old_m);
vm_page_protect(old_m, VM_PROT_NONE);
copy_m->dirty = VM_PAGE_BITS_ALL;
if ((copy_m->flags & PG_ACTIVE) == 0)
vm_page_activate(copy_m);
vm_page_unlock_queues();
PAGE_WAKEUP(copy_m);
}
/*
* The reference count on copy_object must be at least
* 2: one for our extra reference, and at least one
* from the outside world (we checked that when we
* last locked copy_object).
*/
copy_object->ref_count--;
vm_object_unlock(copy_object);
m->flags &= ~PG_COPYONWRITE;
}
}
/*
* We must verify that the maps have not changed since our last
* lookup.
@ -754,10 +559,9 @@ RetryCopy:
* Since map entries may be pageable, make sure we can take a
* page fault on them.
*/
vm_object_unlock(object);
/*
* To avoid trying to write_lock the map while another thread
* To avoid trying to write_lock the map while another process
* has it read_locked (in vm_map_pageable), we do not try for
* write permission. If the page is still writable, we will
* get write permission. If it is not, or has been marked
@ -767,8 +571,6 @@ RetryCopy:
result = vm_map_lookup(&map, vaddr, fault_type & ~VM_PROT_WRITE,
&entry, &retry_object, &retry_offset, &retry_prot, &wired, &su);
vm_object_lock(object);
/*
* If we don't need the page any longer, put it on the active
* list (the easiest thing to do here). If no one needs it,
@ -815,8 +617,6 @@ RetryCopy:
* once in each map for which it is wired.
*/
vm_object_unlock(object);
/*
* Put this page into the physical map. We had to do the unlock above
* because pmap_enter may cause other faults. We don't put the page
@ -849,8 +649,6 @@ RetryCopy:
* If the page is not wired down, then put it where the pageout daemon
* can find it.
*/
vm_object_lock(object);
vm_page_lock_queues();
if (change_wiring) {
if (wired)
vm_page_wire(m);
@ -868,7 +666,6 @@ RetryCopy:
curproc->p_stats->p_ru.ru_minflt++;
}
}
vm_page_unlock_queues();
/*
* Unlock everything, and return
@ -948,8 +745,6 @@ vm_fault_unwire(map, start, end)
* mappings from the physical map system.
*/
vm_page_lock_queues();
for (va = start; va < end; va += PAGE_SIZE) {
pa = pmap_extract(pmap, va);
if (pa == (vm_offset_t) 0) {
@ -958,7 +753,6 @@ vm_fault_unwire(map, start, end)
pmap_change_wiring(pmap, va, FALSE);
vm_page_unwire(PHYS_TO_VM_PAGE(pa));
}
vm_page_unlock_queues();
/*
* Inform the physical mapping system that the range of addresses may
@ -1008,7 +802,7 @@ vm_fault_copy_entry(dst_map, src_map, dst_entry, src_entry)
* Create the top-level object for the destination entry. (Doesn't
* actually shadow anything - we copy the pages directly.)
*/
dst_object = vm_object_allocate(
dst_object = vm_object_allocate(OBJT_DEFAULT,
(vm_size_t) (dst_entry->end - dst_entry->start));
dst_entry->object.vm_object = dst_object;
@ -1028,13 +822,10 @@ vm_fault_copy_entry(dst_map, src_map, dst_entry, src_entry)
/*
* Allocate a page in the destination object
*/
vm_object_lock(dst_object);
do {
dst_m = vm_page_alloc(dst_object, dst_offset, VM_ALLOC_NORMAL);
if (dst_m == NULL) {
vm_object_unlock(dst_object);
VM_WAIT;
vm_object_lock(dst_object);
}
} while (dst_m == NULL);
@ -1043,7 +834,6 @@ vm_fault_copy_entry(dst_map, src_map, dst_entry, src_entry)
* (Because the source is wired down, the page will be in
* memory.)
*/
vm_object_lock(src_object);
src_m = vm_page_lookup(src_object, dst_offset + src_offset);
if (src_m == NULL)
panic("vm_fault_copy_wired: page missing");
@ -1053,8 +843,6 @@ vm_fault_copy_entry(dst_map, src_map, dst_entry, src_entry)
/*
* Enter it in the pmap...
*/
vm_object_unlock(src_object);
vm_object_unlock(dst_object);
dst_m->flags |= PG_WRITEABLE;
dst_m->flags |= PG_MAPPED;
@ -1064,12 +852,8 @@ vm_fault_copy_entry(dst_map, src_map, dst_entry, src_entry)
/*
* Mark it no longer busy, and put it on the active list.
*/
vm_object_lock(dst_object);
vm_page_lock_queues();
vm_page_activate(dst_m);
vm_page_unlock_queues();
PAGE_WAKEUP(dst_m);
vm_object_unlock(dst_object);
}
}
@ -1093,18 +877,16 @@ vm_fault_page_lookup(object, offset, rtobject, rtoffset, rtm)
*rtoffset = 0;
while (!(m = vm_page_lookup(object, offset))) {
if (object->pager) {
if (vm_pager_has_page(object->pager, object->paging_offset + offset)) {
*rtobject = object;
*rtoffset = offset;
return 1;
}
if (vm_pager_has_page(object, object->paging_offset + offset, NULL, NULL)) {
*rtobject = object;
*rtoffset = offset;
return 1;
}
if (!object->shadow)
if (!object->backing_object)
return 0;
else {
offset += object->shadow_offset;
object = object->shadow;
offset += object->backing_object_offset;
object = object->backing_object;
}
}
*rtobject = object;
@ -1155,7 +937,7 @@ vm_fault_additional_pages(first_object, first_offset, m, rbehind, raheada, marra
* if the requested page is not available, then give up now
*/
if (!vm_pager_has_page(object->pager, object->paging_offset + offset))
if (!vm_pager_has_page(object, object->paging_offset + offset, NULL, NULL))
return 0;
/*

Index: vm_glue.c

@ -59,7 +59,7 @@
* any improvements or extensions that they make and grant Carnegie the
* rights to redistribute these changes.
*
* $Id: vm_glue.c,v 1.21 1995/07/10 08:48:58 davidg Exp $
* $Id: vm_glue.c,v 1.22 1995/07/10 08:53:20 davidg Exp $
*/
#include <sys/param.h>
@ -352,7 +352,7 @@ scheduler()
loop:
while ((cnt.v_free_count + cnt.v_cache_count) < (cnt.v_free_reserved + UPAGES + 2)) {
VM_WAIT;
tsleep((caddr_t) &proc0, PVM, "schedm", 0);
tsleep(&proc0, PVM, "schedm", 0);
}
pp = NULL;
@ -379,7 +379,7 @@ loop:
* Nothing to do, back to sleep
*/
if ((p = pp) == NULL) {
tsleep((caddr_t) &proc0, PVM, "sched", 0);
tsleep(&proc0, PVM, "sched", 0);
goto loop;
}
/*
@ -465,7 +465,7 @@ retry:
* then wakeup the sched process.
*/
if (didswap)
wakeup((caddr_t) &proc0);
wakeup(&proc0);
}
void
@ -505,56 +505,7 @@ swapout(p)
p->p_swtime = 0;
}
/*
* The rest of these routines fake thread handling
*/
#ifndef assert_wait
void
assert_wait(event, ruptible)
int event;
boolean_t ruptible;
{
#ifdef lint
ruptible++;
#endif
curproc->p_thread = event;
}
#endif
void
thread_block(char *msg)
{
if (curproc->p_thread)
tsleep((caddr_t) curproc->p_thread, PVM, msg, 0);
}
void
thread_sleep_(event, lock, wmesg)
int event;
simple_lock_t lock;
char *wmesg;
{
curproc->p_thread = event;
simple_unlock(lock);
if (curproc->p_thread) {
tsleep((caddr_t) event, PVM, wmesg, 0);
}
}
#ifndef thread_wakeup
void
thread_wakeup(event)
int event;
{
wakeup((caddr_t) event);
}
#endif
#ifdef DDB
/*
* DEBUG stuff
*/

Index: vm_inherit.h

@ -61,7 +61,7 @@
* any improvements or extensions that they make and grant Carnegie the
* rights to redistribute these changes.
*
* $Id: vm_inherit.h,v 1.2 1994/08/02 07:55:20 davidg Exp $
* $Id: vm_inherit.h,v 1.3 1995/01/09 16:05:41 davidg Exp $
*/
/*
@ -78,7 +78,6 @@
#define VM_INHERIT_SHARE ((vm_inherit_t) 0) /* share with child */
#define VM_INHERIT_COPY ((vm_inherit_t) 1) /* copy into child */
#define VM_INHERIT_NONE ((vm_inherit_t) 2) /* absent from child */
#define VM_INHERIT_DONATE_COPY ((vm_inherit_t) 3) /* copy and delete */
#define VM_INHERIT_DEFAULT VM_INHERIT_COPY

Index: vm_init.c

@ -61,7 +61,7 @@
* any improvements or extensions that they make and grant Carnegie the
* rights to redistribute these changes.
*
* $Id: vm_init.c,v 1.5 1995/01/09 16:05:42 davidg Exp $
* $Id: vm_init.c,v 1.6 1995/03/16 18:17:11 bde Exp $
*/
/*
@ -74,6 +74,7 @@
#include <vm/vm.h>
#include <vm/vm_page.h>
#include <vm/vm_kern.h>
#include <vm/vm_pager.h>
/*
* vm_init initializes the virtual memory system.

Index: vm_kern.c

@ -61,7 +61,7 @@
* any improvements or extensions that they make and grant Carnegie the
* rights to redistribute these changes.
*
* $Id: vm_kern.c,v 1.12 1995/03/15 07:52:06 davidg Exp $
* $Id: vm_kern.c,v 1.13 1995/05/30 08:16:04 rgrimes Exp $
*/
/*
@ -176,20 +176,16 @@ kmem_alloc(map, size)
* race with page-out. vm_map_pageable will wire the pages.
*/
vm_object_lock(kernel_object);
for (i = 0; i < size; i += PAGE_SIZE) {
vm_page_t mem;
while ((mem = vm_page_alloc(kernel_object, offset + i, VM_ALLOC_NORMAL)) == NULL) {
vm_object_unlock(kernel_object);
VM_WAIT;
vm_object_lock(kernel_object);
}
vm_page_zero_fill(mem);
mem->flags &= ~PG_BUSY;
mem->valid = VM_PAGE_BITS_ALL;
}
vm_object_unlock(kernel_object);
/*
* And finally, mark the data as non-pageable.
@ -332,7 +328,6 @@ kmem_malloc(map, size, waitflag)
* If we cannot wait then we must allocate all memory up front,
* pulling it off the active queue to prevent pageout.
*/
vm_object_lock(kmem_object);
for (i = 0; i < size; i += PAGE_SIZE) {
m = vm_page_alloc(kmem_object, offset + i,
(waitflag == M_NOWAIT) ? VM_ALLOC_INTERRUPT : VM_ALLOC_SYSTEM);
@ -348,7 +343,6 @@ kmem_malloc(map, size, waitflag)
m = vm_page_lookup(kmem_object, offset + i);
vm_page_free(m);
}
vm_object_unlock(kmem_object);
vm_map_delete(map, addr, addr + size);
vm_map_unlock(map);
return (0);
@ -359,7 +353,6 @@ kmem_malloc(map, size, waitflag)
m->flags &= ~PG_BUSY;
m->valid = VM_PAGE_BITS_ALL;
}
vm_object_unlock(kmem_object);
/*
* Mark map entry as non-pageable. Assert: vm_map_insert() will never
@ -379,9 +372,7 @@ kmem_malloc(map, size, waitflag)
* splimp...)
*/
for (i = 0; i < size; i += PAGE_SIZE) {
vm_object_lock(kmem_object);
m = vm_page_lookup(kmem_object, offset + i);
vm_object_unlock(kmem_object);
pmap_kenter(addr + i, VM_PAGE_TO_PHYS(m));
}
vm_map_unlock(map);
@ -419,9 +410,8 @@ kmem_alloc_wait(map, size)
vm_map_unlock(map);
return (0);
}
assert_wait((int) map, TRUE);
vm_map_unlock(map);
thread_block("kmaw");
tsleep(map, PVM, "kmaw", 0);
}
vm_map_insert(map, NULL, (vm_offset_t) 0, addr, addr + size);
vm_map_unlock(map);
@ -431,7 +421,7 @@ kmem_alloc_wait(map, size)
/*
* kmem_free_wakeup
*
* Returns memory to a submap of the kernel, and wakes up any threads
* Returns memory to a submap of the kernel, and wakes up any processes
* waiting for memory in that map.
*/
void
@ -442,7 +432,7 @@ kmem_free_wakeup(map, addr, size)
{
vm_map_lock(map);
(void) vm_map_delete(map, trunc_page(addr), round_page(addr + size));
thread_wakeup((int) map);
wakeup(map);
vm_map_unlock(map);
}

Index: vm_map.c

@ -61,7 +61,7 @@
* any improvements or extensions that they make and grant Carnegie the
* rights to redistribute these changes.
*
* $Id: vm_map.c,v 1.21 1995/04/16 12:56:17 davidg Exp $
* $Id: vm_map.c,v 1.22 1995/05/30 08:16:07 rgrimes Exp $
*/
/*
@ -77,6 +77,7 @@
#include <vm/vm_page.h>
#include <vm/vm_object.h>
#include <vm/vm_kern.h>
#include <vm/vm_pager.h>
/*
* Virtual memory maps provide for the mapping, protection,
@ -290,8 +291,6 @@ vm_map_init(map, min, max, pageable)
map->hint = &map->header;
map->timestamp = 0;
lock_init(&map->lock, TRUE);
simple_lock_init(&map->ref_lock);
simple_lock_init(&map->hint_lock);
}
/*
@ -436,9 +435,7 @@ vm_map_reference(map)
if (map == NULL)
return;
simple_lock(&map->ref_lock);
map->ref_count++;
simple_unlock(&map->ref_lock);
}
/*
@ -457,16 +454,14 @@ vm_map_deallocate(map)
if (map == NULL)
return;
simple_lock(&map->ref_lock);
c = map->ref_count;
simple_unlock(&map->ref_lock);
if (c == 0)
panic("vm_map_deallocate: deallocating already freed map");
if (c != 1) {
--map->ref_count;
wakeup((caddr_t) &map->ref_count);
wakeup(&map->ref_count);
return;
}
/*
@ -609,12 +604,10 @@ vm_map_insert(map, object, offset, start, end)
* SAVE_HINT:
*
* Saves the specified entry as the hint for
* future lookups. Performs necessary interlocks.
* future lookups.
*/
#define SAVE_HINT(map,value) \
simple_lock(&(map)->hint_lock); \
(map)->hint = (value); \
simple_unlock(&(map)->hint_lock);
(map)->hint = (value);
/*
* vm_map_lookup_entry: [ internal use only ]
@ -639,9 +632,7 @@ vm_map_lookup_entry(map, address, entry)
* Start looking either from the head of the list, or from the hint.
*/
simple_lock(&map->hint_lock);
cur = map->hint;
simple_unlock(&map->hint_lock);
if (cur == &map->header)
cur = cur->next;
@ -828,9 +819,7 @@ vm_map_simplify_entry(map, entry)
int count;
my_share_map = entry->object.share_map;
simple_lock(&my_share_map->ref_lock);
count = my_share_map->ref_count;
simple_unlock(&my_share_map->ref_lock);
if (count == 1) {
/*
@ -1291,7 +1280,7 @@ vm_map_pageable(map, start, end, new_pageable)
* 1).
*
* Downgrading to a read lock for vm_fault_wire avoids a possible
* deadlock with another thread that may have faulted on one
* deadlock with another process that may have faulted on one
* of the pages to be wired (it would mark the page busy,
* blocking us, then in turn block on the map lock that we
* hold). Because of problems in the recursive lock package,
@ -1329,7 +1318,7 @@ vm_map_pageable(map, start, end, new_pageable)
entry->needs_copy = FALSE;
} else if (entry->object.vm_object == NULL) {
entry->object.vm_object =
vm_object_allocate((vm_size_t) (entry->end
vm_object_allocate(OBJT_DEFAULT, (vm_size_t) (entry->end
- entry->start));
entry->offset = (vm_offset_t) 0;
}
@ -1367,12 +1356,12 @@ vm_map_pageable(map, start, end, new_pageable)
/*
* HACK HACK HACK HACK
*
* If we are wiring in the kernel map or a submap of it, unlock
* the map to avoid deadlocks. We trust that the kernel
* threads are well-behaved, and therefore will not do
* anything destructive to this region of the map while we
* have it unlocked. We cannot trust user threads to do the
* same.
* If we are wiring in the kernel map or a submap of it,
* unlock the map to avoid deadlocks. We trust that the
* kernel is well-behaved, and therefore will not do
* anything destructive to this region of the map while
* we have it unlocked. We cannot trust user processes
* to do the same.
*
* HACK HACK HACK HACK
*/
@ -1493,9 +1482,7 @@ vm_map_clean(map, start, end, syncio, invalidate)
} else {
object = current->object.vm_object;
}
if (object && (object->pager != NULL) &&
(object->pager->pg_type == PG_VNODE)) {
vm_object_lock(object);
if (object && (object->type == OBJT_VNODE)) {
/*
* Flush pages if writing is allowed. XXX should we continue
* on an error?
@ -1505,10 +1492,9 @@ vm_map_clean(map, start, end, syncio, invalidate)
* idea.
*/
if (current->protection & VM_PROT_WRITE)
vm_object_page_clean(object, offset, offset + size, syncio);
vm_object_page_clean(object, offset, offset + size, syncio, TRUE);
if (invalidate)
vm_object_page_remove(object, offset, offset + size, FALSE);
vm_object_unlock(object);
}
start += size;
}
@ -1746,9 +1732,8 @@ vm_map_copy_entry(src_map, dst_map, src_entry, dst_entry)
if (src_entry->is_sub_map || dst_entry->is_sub_map)
return;
if (dst_entry->object.vm_object != NULL &&
(dst_entry->object.vm_object->flags & OBJ_INTERNAL) == 0)
printf("vm_map_copy_entry: copying over permanent data!\n");
if (dst_entry->object.vm_object != NULL)
printf("vm_map_copy_entry: dst_entry object not NULL!\n");
/*
* If our destination map was wired down, unwire it now.
@ -1788,9 +1773,7 @@ vm_map_copy_entry(src_map, dst_map, src_entry, dst_entry)
* just protect the virtual address range.
*/
if (!(su = src_map->is_main_map)) {
simple_lock(&src_map->ref_lock);
su = (src_map->ref_count == 1);
simple_unlock(&src_map->ref_lock);
}
if (su) {
pmap_protect(src_map->pmap,
@ -1807,7 +1790,6 @@ vm_map_copy_entry(src_map, dst_map, src_entry, dst_entry)
/*
* Make a copy of the object.
*/
temp_object = dst_entry->object.vm_object;
vm_object_copy(src_entry->object.vm_object,
src_entry->offset,
(vm_size_t) (src_entry->end -
@ -1834,10 +1816,6 @@ vm_map_copy_entry(src_map, dst_map, src_entry, dst_entry)
*/
src_entry->copy_on_write = TRUE;
dst_entry->copy_on_write = TRUE;
/*
* Get rid of the old object.
*/
vm_object_deallocate(temp_object);
pmap_copy(dst_map->pmap, src_map->pmap, dst_entry->start,
dst_entry->end - dst_entry->start, src_entry->start);
@ -1851,292 +1829,6 @@ vm_map_copy_entry(src_map, dst_map, src_entry, dst_entry)
}
}
/*
* vm_map_copy:
*
* Perform a virtual memory copy from the source
* address map/range to the destination map/range.
*
* If src_destroy or dst_alloc is requested,
* the source and destination regions should be
* disjoint, not only in the top-level map, but
* in the sharing maps as well. [The best way
* to guarantee this is to use a new intermediate
* map to make copies. This also reduces map
* fragmentation.]
*/
int
vm_map_copy(dst_map, src_map,
dst_addr, len, src_addr,
dst_alloc, src_destroy)
vm_map_t dst_map;
vm_map_t src_map;
vm_offset_t dst_addr;
vm_size_t len;
vm_offset_t src_addr;
boolean_t dst_alloc;
boolean_t src_destroy;
{
register
vm_map_entry_t src_entry;
register
vm_map_entry_t dst_entry;
vm_map_entry_t tmp_entry;
vm_offset_t src_start;
vm_offset_t src_end;
vm_offset_t dst_start;
vm_offset_t dst_end;
vm_offset_t src_clip;
vm_offset_t dst_clip;
int result;
boolean_t old_src_destroy;
/*
* XXX While we figure out why src_destroy screws up, we'll do it by
* explicitly vm_map_delete'ing at the end.
*/
old_src_destroy = src_destroy;
src_destroy = FALSE;
/*
* Compute start and end of region in both maps
*/
src_start = src_addr;
src_end = src_start + len;
dst_start = dst_addr;
dst_end = dst_start + len;
/*
* Check that the region can exist in both source and destination.
*/
if ((dst_end < dst_start) || (src_end < src_start))
return (KERN_NO_SPACE);
/*
* Lock the maps in question -- we avoid deadlock by ordering lock
* acquisition by map value
*/
if (src_map == dst_map) {
vm_map_lock(src_map);
} else if ((int) src_map < (int) dst_map) {
vm_map_lock(src_map);
vm_map_lock(dst_map);
} else {
vm_map_lock(dst_map);
vm_map_lock(src_map);
}
result = KERN_SUCCESS;
/*
* Check protections... source must be completely readable and
* destination must be completely writable. [Note that if we're
* allocating the destination region, we don't have to worry about
* protection, but instead about whether the region exists.]
*/
if (src_map->is_main_map && dst_map->is_main_map) {
if (!vm_map_check_protection(src_map, src_start, src_end,
VM_PROT_READ)) {
result = KERN_PROTECTION_FAILURE;
goto Return;
}
if (dst_alloc) {
/* XXX Consider making this a vm_map_find instead */
if ((result = vm_map_insert(dst_map, NULL,
(vm_offset_t) 0, dst_start, dst_end)) != KERN_SUCCESS)
goto Return;
} else if (!vm_map_check_protection(dst_map, dst_start, dst_end,
VM_PROT_WRITE)) {
result = KERN_PROTECTION_FAILURE;
goto Return;
}
}
/*
* Find the start entries and clip.
*
* Note that checking protection asserts that the lookup cannot fail.
*
* Also note that we wait to do the second lookup until we have done the
* first clip, as the clip may affect which entry we get!
*/
(void) vm_map_lookup_entry(src_map, src_addr, &tmp_entry);
src_entry = tmp_entry;
vm_map_clip_start(src_map, src_entry, src_start);
(void) vm_map_lookup_entry(dst_map, dst_addr, &tmp_entry);
dst_entry = tmp_entry;
vm_map_clip_start(dst_map, dst_entry, dst_start);
/*
* If both source and destination entries are the same, retry the
* first lookup, as it may have changed.
*/
if (src_entry == dst_entry) {
(void) vm_map_lookup_entry(src_map, src_addr, &tmp_entry);
src_entry = tmp_entry;
}
/*
* If source and destination entries are still the same, a null copy
* is being performed.
*/
if (src_entry == dst_entry)
goto Return;
/*
* Go through entries until we get to the end of the region.
*/
while (src_start < src_end) {
/*
* Clip the entries to the endpoint of the entire region.
*/
vm_map_clip_end(src_map, src_entry, src_end);
vm_map_clip_end(dst_map, dst_entry, dst_end);
/*
* Clip each entry to the endpoint of the other entry.
*/
src_clip = src_entry->start + (dst_entry->end - dst_entry->start);
vm_map_clip_end(src_map, src_entry, src_clip);
dst_clip = dst_entry->start + (src_entry->end - src_entry->start);
vm_map_clip_end(dst_map, dst_entry, dst_clip);
/*
* Both entries now match in size and relative endpoints.
*
* If both entries refer to a VM object, we can deal with them
* now.
*/
if (!src_entry->is_a_map && !dst_entry->is_a_map) {
vm_map_copy_entry(src_map, dst_map, src_entry,
dst_entry);
} else {
register vm_map_t new_dst_map;
vm_offset_t new_dst_start;
vm_size_t new_size;
vm_map_t new_src_map;
vm_offset_t new_src_start;
/*
* We have to follow at least one sharing map.
*/
new_size = (dst_entry->end - dst_entry->start);
if (src_entry->is_a_map) {
new_src_map = src_entry->object.share_map;
new_src_start = src_entry->offset;
} else {
new_src_map = src_map;
new_src_start = src_entry->start;
lock_set_recursive(&src_map->lock);
}
if (dst_entry->is_a_map) {
vm_offset_t new_dst_end;
new_dst_map = dst_entry->object.share_map;
new_dst_start = dst_entry->offset;
/*
* Since the destination sharing entries will
* be merely deallocated, we can do that now,
* and replace the region with a null object.
* [This prevents splitting the source map to
* match the form of the destination map.]
* Note that we can only do so if the source
* and destination do not overlap.
*/
new_dst_end = new_dst_start + new_size;
if (new_dst_map != new_src_map) {
vm_map_lock(new_dst_map);
(void) vm_map_delete(new_dst_map,
new_dst_start,
new_dst_end);
(void) vm_map_insert(new_dst_map,
NULL,
(vm_offset_t) 0,
new_dst_start,
new_dst_end);
vm_map_unlock(new_dst_map);
}
} else {
new_dst_map = dst_map;
new_dst_start = dst_entry->start;
lock_set_recursive(&dst_map->lock);
}
/*
* Recursively copy the sharing map.
*/
(void) vm_map_copy(new_dst_map, new_src_map,
new_dst_start, new_size, new_src_start,
FALSE, FALSE);
if (dst_map == new_dst_map)
lock_clear_recursive(&dst_map->lock);
if (src_map == new_src_map)
lock_clear_recursive(&src_map->lock);
}
/*
* Update variables for next pass through the loop.
*/
src_start = src_entry->end;
src_entry = src_entry->next;
dst_start = dst_entry->end;
dst_entry = dst_entry->next;
/*
* If the source is to be destroyed, here is the place to do
* it.
*/
if (src_destroy && src_map->is_main_map &&
dst_map->is_main_map)
vm_map_entry_delete(src_map, src_entry->prev);
}
/*
* Update the physical maps as appropriate
*/
if (src_map->is_main_map && dst_map->is_main_map) {
if (src_destroy)
pmap_remove(src_map->pmap, src_addr, src_addr + len);
}
/*
* Unlock the maps
*/
Return:;
if (old_src_destroy)
vm_map_delete(src_map, src_addr, src_addr + len);
vm_map_unlock(src_map);
if (src_map != dst_map)
vm_map_unlock(dst_map);
return (result);
}
/*
* vmspace_fork:
* Create a new process vmspace structure and vm_map
@ -2177,59 +1869,13 @@ vmspace_fork(vm1)
break;
case VM_INHERIT_SHARE:
/*
* If we don't already have a sharing map:
*/
if (!old_entry->is_a_map) {
vm_map_t new_share_map;
vm_map_entry_t new_share_entry;
/*
* Create a new sharing map
*/
new_share_map = vm_map_create(NULL,
old_entry->start,
old_entry->end,
TRUE);
new_share_map->is_main_map = FALSE;
/*
* Create the only sharing entry from the old
* task map entry.
*/
new_share_entry =
vm_map_entry_create(new_share_map);
*new_share_entry = *old_entry;
new_share_entry->wired_count = 0;
/*
* Insert the entry into the new sharing map
*/
vm_map_entry_link(new_share_map,
new_share_map->header.prev,
new_share_entry);
/*
* Fix up the task map entry to refer to the
* sharing map now.
*/
old_entry->is_a_map = TRUE;
old_entry->object.share_map = new_share_map;
old_entry->offset = old_entry->start;
}
/*
* Clone the entry, referencing the sharing map.
*/
new_entry = vm_map_entry_create(new_map);
*new_entry = *old_entry;
new_entry->wired_count = 0;
vm_map_reference(new_entry->object.share_map);
++new_entry->object.vm_object->ref_count;
/*
* Insert the entry into the new map -- we know we're
@ -2261,22 +1907,7 @@ vmspace_fork(vm1)
new_entry->is_a_map = FALSE;
vm_map_entry_link(new_map, new_map->header.prev,
new_entry);
if (old_entry->is_a_map) {
int check;
check = vm_map_copy(new_map,
old_entry->object.share_map,
new_entry->start,
(vm_size_t) (new_entry->end -
new_entry->start),
old_entry->offset,
FALSE, FALSE);
if (check != KERN_SUCCESS)
printf("vm_map_fork: copy in share_map region failed\n");
} else {
vm_map_copy_entry(old_map, new_map, old_entry,
new_entry);
}
vm_map_copy_entry(old_map, new_map, old_entry, new_entry);
break;
}
old_entry = old_entry->next;
@ -2350,9 +1981,7 @@ RetryLookup:;
* blown lookup routine.
*/
simple_lock(&map->hint_lock);
entry = map->hint;
simple_unlock(&map->hint_lock);
*out_entry = entry;
@ -2483,7 +2112,7 @@ RetryLookup:;
vm_map_unlock_read(map);
goto RetryLookup;
}
entry->object.vm_object = vm_object_allocate(
entry->object.vm_object = vm_object_allocate(OBJT_DEFAULT,
(vm_size_t) (entry->end - entry->start));
entry->offset = 0;
lock_write_to_read(&share_map->lock);
@ -2501,9 +2130,7 @@ RetryLookup:;
*/
if (!su) {
simple_lock(&share_map->ref_lock);
su = (share_map->ref_count == 1);
simple_unlock(&share_map->ref_lock);
}
*out_prot = prot;
*single_use = su;

Index: vm_map.h

@ -61,7 +61,7 @@
* any improvements or extensions that they make and grant Carnegie the
* rights to redistribute these changes.
*
* $Id: vm_map.h,v 1.4 1995/01/09 16:05:46 davidg Exp $
* $Id: vm_map.h,v 1.5 1995/03/16 18:17:17 bde Exp $
*/
/*
@ -130,9 +130,7 @@ struct vm_map {
vm_size_t size; /* virtual size */
boolean_t is_main_map; /* Am I a main map? */
int ref_count; /* Reference count */
simple_lock_data_t ref_lock; /* Lock for ref_count field */
vm_map_entry_t hint; /* hint for quick lookups */
simple_lock_data_t hint_lock; /* lock for hint storage */
vm_map_entry_t first_free; /* First free space hint */
boolean_t entries_pageable; /* map entries pageable?? */
unsigned int timestamp; /* Version number */

Index: vm_meter.c

@ -31,7 +31,7 @@
* SUCH DAMAGE.
*
* @(#)vm_meter.c 8.4 (Berkeley) 1/4/94
* $Id: vm_meter.c,v 1.5 1995/01/09 16:05:47 davidg Exp $
* $Id: vm_meter.c,v 1.6 1995/01/10 07:32:47 davidg Exp $
*/
#include <sys/param.h>
@ -45,16 +45,6 @@ struct loadavg averunnable; /* load average, of runnable procs */
int maxslp = MAXSLP;
void
vmmeter()
{
if (time.tv_sec % 5 == 0)
loadav(&averunnable);
if (proc0.p_slptime > maxslp / 2)
wakeup((caddr_t) &proc0);
}
/*
* Constants for averages over 1, 5, and 15 minutes
* when sampling at 5 second intervals.
@ -69,7 +59,7 @@ fixpt_t cexp[3] = {
* Compute a tenex style load average of a quantity on
* 1, 5 and 15 minute intervals.
*/
void
static void
loadav(avg)
register struct loadavg *avg;
{
@ -92,6 +82,16 @@ loadav(avg)
nrun * FSCALE * (FSCALE - cexp[i])) >> FSHIFT;
}
void
vmmeter()
{
if (time.tv_sec % 5 == 0)
loadav(&averunnable);
if (proc0.p_slptime > maxslp / 2)
wakeup(&proc0);
}
/*
* Attributes associated with virtual memory.
*/
@ -159,12 +159,10 @@ vmtotal(totalp)
/*
* Mark all objects as inactive.
*/
simple_lock(&vm_object_list_lock);
for (object = vm_object_list.tqh_first;
object != NULL;
object = object->object_list.tqe_next)
object->flags &= ~OBJ_ACTIVE;
simple_unlock(&vm_object_list_lock);
/*
* Calculate process statistics.
*/
@ -216,7 +214,6 @@ vmtotal(totalp)
/*
* Calculate object memory usage statistics.
*/
simple_lock(&vm_object_list_lock);
for (object = vm_object_list.tqh_first;
object != NULL;
object = object->object_list.tqe_next) {

Index: vm_mmap.c

@ -38,7 +38,7 @@
* from: Utah $Hdr: vm_mmap.c 1.6 91/10/21$
*
* @(#)vm_mmap.c 8.4 (Berkeley) 1/12/94
* $Id: vm_mmap.c,v 1.24 1995/05/30 08:16:09 rgrimes Exp $
* $Id: vm_mmap.c,v 1.25 1995/07/09 06:58:01 davidg Exp $
*/
/*
@ -62,14 +62,6 @@
#include <vm/vm_pageout.h>
#include <vm/vm_prot.h>
#ifdef DEBUG
int mmapdebug;
#define MDB_FOLLOW 0x01
#define MDB_SYNC 0x02
#define MDB_MAPIT 0x04
#endif
void pmap_object_init_pt();
struct sbrk_args {
@ -149,12 +141,6 @@ mmap(p, uap, retval)
prot = uap->prot & VM_PROT_ALL;
flags = uap->flags;
#ifdef DEBUG
if (mmapdebug & MDB_FOLLOW)
printf("mmap(%d): addr %x len %x pro %x flg %x fd %d pos %x\n",
p->p_pid, uap->addr, uap->len, prot,
flags, uap->fd, (vm_offset_t) uap->pos);
#endif
/*
* Address (if FIXED) must be page aligned. Size is implicitly rounded
* to a page boundary.
@ -318,12 +304,6 @@ msync(p, uap, retval)
vm_map_t map;
int rv;
#ifdef DEBUG
if (mmapdebug & (MDB_FOLLOW | MDB_SYNC))
printf("msync(%d): addr %x len %x\n",
p->p_pid, uap->addr, uap->len);
#endif
map = &p->p_vmspace->vm_map;
addr = (vm_offset_t) uap->addr;
size = (vm_size_t) uap->len;
@ -352,12 +332,6 @@ msync(p, uap, retval)
size = entry->end - entry->start;
}
#ifdef DEBUG
if (mmapdebug & MDB_SYNC)
printf("msync: cleaning/flushing address range [%x-%x)\n",
addr, addr + size);
#endif
/*
* Clean the pages and interpret the return value.
*/
@ -392,12 +366,6 @@ munmap(p, uap, retval)
vm_size_t size;
vm_map_t map;
#ifdef DEBUG
if (mmapdebug & MDB_FOLLOW)
printf("munmap(%d): addr %x len %x\n",
p->p_pid, uap->addr, uap->len);
#endif
addr = (vm_offset_t) uap->addr;
if ((addr & PAGE_MASK) || uap->len < 0)
return (EINVAL);
@ -432,11 +400,6 @@ munmapfd(p, fd)
struct proc *p;
int fd;
{
#ifdef DEBUG
if (mmapdebug & MDB_FOLLOW)
printf("munmapfd(%d): fd %d\n", p->p_pid, fd);
#endif
/*
* XXX should unmap any regions mapped to this file
*/
@ -458,12 +421,6 @@ mprotect(p, uap, retval)
vm_size_t size;
register vm_prot_t prot;
#ifdef DEBUG
if (mmapdebug & MDB_FOLLOW)
printf("mprotect(%d): addr %x len %x prot %d\n",
p->p_pid, uap->addr, uap->len, uap->prot);
#endif
addr = (vm_offset_t) uap->addr;
if ((addr & PAGE_MASK) || uap->len < 0)
return (EINVAL);
@ -530,11 +487,6 @@ mlock(p, uap, retval)
vm_size_t size;
int error;
#ifdef DEBUG
if (mmapdebug & MDB_FOLLOW)
printf("mlock(%d): addr %x len %x\n",
p->p_pid, uap->addr, uap->len);
#endif
addr = (vm_offset_t) uap->addr;
if ((addr & PAGE_MASK) || uap->addr + uap->len < uap->addr)
return (EINVAL);
@ -569,11 +521,6 @@ munlock(p, uap, retval)
vm_size_t size;
int error;
#ifdef DEBUG
if (mmapdebug & MDB_FOLLOW)
printf("munlock(%d): addr %x len %x\n",
p->p_pid, uap->addr, uap->len);
#endif
addr = (vm_offset_t) uap->addr;
if ((addr & PAGE_MASK) || uap->addr + uap->len < uap->addr)
return (EINVAL);
@ -603,11 +550,10 @@ vm_mmap(map, addr, size, prot, maxprot, flags, handle, foff)
caddr_t handle; /* XXX should be vp */
vm_offset_t foff;
{
register vm_pager_t pager;
boolean_t fitit;
vm_object_t object;
struct vnode *vp = NULL;
int type;
objtype_t type;
int rv = KERN_SUCCESS;
vm_size_t objsize;
struct proc *p = curproc;
@ -639,12 +585,10 @@ vm_mmap(map, addr, size, prot, maxprot, flags, handle, foff)
}
/*
* Lookup/allocate pager. All except an unnamed anonymous lookup gain
* a reference to ensure continued existance of the object. (XXX the
* exception is to appease the pageout daemon)
* Lookup/allocate object.
*/
if (flags & MAP_ANON) {
type = PG_DFLT;
type = OBJT_SWAP;
/*
* Unnamed anonymous regions always start at 0.
*/
@ -653,7 +597,7 @@ vm_mmap(map, addr, size, prot, maxprot, flags, handle, foff)
} else {
vp = (struct vnode *) handle;
if (vp->v_type == VCHR) {
type = PG_DEVICE;
type = OBJT_DEVICE;
handle = (caddr_t) vp->v_rdev;
} else {
struct vattr vat;
@ -663,45 +607,23 @@ vm_mmap(map, addr, size, prot, maxprot, flags, handle, foff)
if (error)
return (error);
objsize = vat.va_size;
type = PG_VNODE;
type = OBJT_VNODE;
}
}
pager = vm_pager_allocate(type, handle, objsize, prot, foff);
if (pager == NULL)
return (type == PG_DEVICE ? EINVAL : ENOMEM);
/*
* Guarantee that the pager has an object.
*/
object = vm_object_lookup(pager);
if (object == NULL) {
if (handle != NULL)
panic("vm_mmap: pager didn't allocate an object (and should have)");
/*
* Should only happen for unnamed anonymous regions.
*/
object = vm_object_allocate(size);
object->pager = pager;
} else {
/*
* Lose vm_object_lookup() reference. Retain reference
* gained by vm_pager_allocate().
*/
vm_object_deallocate(object);
}
/*
* At this point, our actions above have gained a total of
* one reference to the object, and we have a pager.
*/
object = vm_pager_allocate(type, handle, objsize, prot, foff);
if (object == NULL)
return (type == OBJT_DEVICE ? EINVAL : ENOMEM);
/*
* Anonymous memory, shared file, or character special file.
*/
if ((flags & (MAP_ANON|MAP_SHARED)) || (type == PG_DEVICE)) {
if ((flags & (MAP_ANON|MAP_SHARED)) || (type == OBJT_DEVICE)) {
rv = vm_map_find(map, object, foff, addr, size, fitit);
if (rv != KERN_SUCCESS) {
/*
* Lose the object reference. This will also destroy
* the pager if there are no other references.
* Lose the object reference. Will destroy the
* object if it's an unnamed anonymous mapping
* or named anonymous without other references.
*/
vm_object_deallocate(object);
goto out;
@ -711,77 +633,32 @@ vm_mmap(map, addr, size, prot, maxprot, flags, handle, foff)
* mmap a COW regular file
*/
else {
vm_map_t tmap;
vm_offset_t off;
vm_map_entry_t entry;
vm_object_t private_object;
if (flags & MAP_COPY) {
/* locate and allocate the target address space */
rv = vm_map_find(map, NULL, 0, addr, size, fitit);
if (rv != KERN_SUCCESS) {
vm_object_deallocate(object);
goto out;
}
/*
* Create a new object and make the original object
* the backing object. NOTE: the object reference gained
* above is now changed into the reference held by
* private_object. Since we don't map 'object', we want
* only this one reference.
*/
private_object = vm_object_allocate(OBJT_DEFAULT, object->size);
private_object->backing_object = object;
TAILQ_INSERT_TAIL(&object->shadow_head,
private_object, shadow_list);
off = VM_MIN_ADDRESS;
tmap = vm_map_create(NULL, off, off + size, TRUE);
rv = vm_map_find(tmap, object, foff, &off, size, FALSE);
if (rv != KERN_SUCCESS) {
/*
* Deallocate and delete the temporary map.
* Note that since the object insertion
* above has failed, the vm_map_deallocate
* doesn't lose the object reference - we
* must do it explicitly.
*/
vm_object_deallocate(object);
vm_map_deallocate(tmap);
goto out;
}
rv = vm_map_copy(map, tmap, *addr, size, off,
FALSE, FALSE);
/*
* Deallocate temporary map. XXX - depending
* on events, this may leave the object with
* no net gain in reference count! ...this
* needs to be looked at!
*/
vm_map_deallocate(tmap);
if (rv != KERN_SUCCESS)
goto out;
} else {
vm_object_t user_object;
/*
* Create a new object and make the original object
* the backing object. NOTE: the object reference gained
* above is now changed into the reference held by
* user_object. Since we don't map 'object', we want
* only this one reference.
*/
user_object = vm_object_allocate(object->size);
user_object->shadow = object;
TAILQ_INSERT_TAIL(&object->reverse_shadow_head,
user_object, reverse_shadow_list);
rv = vm_map_find(map, user_object, foff, addr, size, fitit);
if( rv != KERN_SUCCESS) {
vm_object_deallocate(user_object);
goto out;
}
/*
* this is a consistency check, gets the map entry, and should
* never fail
*/
if (!vm_map_lookup_entry(map, *addr, &entry)) {
panic("vm_mmap: missing map entry!!!");
}
entry->copy_on_write = TRUE;
rv = vm_map_find(map, private_object, foff, addr, size, fitit);
if (rv != KERN_SUCCESS) {
vm_object_deallocate(private_object);
goto out;
}
if (!vm_map_lookup_entry(map, *addr, &entry)) {
panic("vm_mmap: missing map entry!!!");
}
entry->copy_on_write = TRUE;
/*
* set pages COW and protect for read access only
*/
@ -792,7 +669,7 @@ vm_mmap(map, addr, size, prot, maxprot, flags, handle, foff)
/*
* "Pre-fault" resident pages.
*/
if ((type == PG_VNODE) && (map->pmap != NULL)) {
if ((type == OBJT_VNODE) && (map->pmap != NULL)) {
pmap_object_init_pt(map->pmap, *addr, object, foff, size);
}
@ -820,10 +697,6 @@ vm_mmap(map, addr, size, prot, maxprot, flags, handle, foff)
}
}
out:
#ifdef DEBUG
if (mmapdebug & MDB_MAPIT)
printf("vm_mmap: rv %d\n", rv);
#endif
switch (rv) {
case KERN_SUCCESS:
return (0);

File diff suppressed because it is too large


@ -61,7 +61,7 @@
* any improvements or extensions that they make and grant Carnegie the
* rights to redistribute these changes.
*
* $Id: vm_object.h,v 1.17 1995/04/09 06:03:51 davidg Exp $
* $Id: vm_object.h,v 1.18 1995/05/02 05:57:11 davidg Exp $
*/
/*
@ -75,8 +75,8 @@
#include <sys/proc.h> /* XXX for wakeup() */
#endif
#include <vm/vm_page.h>
#include <vm/vm_pager.h>
enum obj_type { OBJT_DEFAULT, OBJT_SWAP, OBJT_VNODE, OBJT_DEVICE };
typedef enum obj_type objtype_t;
/*
* Types defined:
@ -85,44 +85,44 @@
*/
struct vm_object {
struct pglist memq; /* Resident memory */
TAILQ_HEAD(rslist, vm_object) reverse_shadow_head; /* objects that this is a shadow for */
TAILQ_ENTRY(vm_object) object_list; /* list of all objects */
TAILQ_ENTRY(vm_object) reverse_shadow_list; /* chain of objects that are shadowed */
TAILQ_ENTRY(vm_object) cached_list; /* for persistence */
TAILQ_ENTRY(vm_object) cached_list; /* list of cached (persistent) objects */
TAILQ_HEAD(, vm_object) shadow_head; /* objects that this is a shadow for */
TAILQ_ENTRY(vm_object) shadow_list; /* chain of shadow objects */
TAILQ_HEAD(, vm_page) memq; /* list of resident pages */
objtype_t type; /* type of pager */
vm_size_t size; /* Object size */
int ref_count; /* How many refs?? */
u_short flags; /* see below */
u_short paging_in_progress; /* Paging (in or out) so don't collapse or destroy */
int resident_page_count; /* number of resident pages */
vm_pager_t pager; /* Where to get data */
vm_offset_t paging_offset; /* Offset into paging space */
struct vm_object *shadow; /* My shadow */
vm_offset_t shadow_offset; /* Offset in shadow */
struct vm_object *backing_object; /* object that I'm a shadow of */
vm_offset_t backing_object_offset;/* Offset in backing object */
struct vm_object *copy; /* Object that holds copies of my changed pages */
vm_offset_t last_read; /* last read in object -- detect seq behavior */
TAILQ_ENTRY(vm_object) pager_object_list; /* list of all objects of this pager type */
void *handle;
void *pg_data;
union {
struct {
vm_size_t vnp_size; /* Current size of file */
} vnp;
struct {
TAILQ_HEAD(, vm_page) devp_pglist; /* list of pages allocated */
} devp;
} un_pager;
};
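A note on the new union: object->type says which arm of un_pager is meaningful, with vnp carrying the vnode pager's notion of the file size and devp carrying the device pager's list of allocated pages. A minimal illustrative accessor (not in the patch; the name is made up):

static __inline vm_size_t
example_vnode_object_size(vm_object_t object)
{
	/* the vnp arm is only valid for vnode-backed objects */
	if (object->type != OBJT_VNODE)
		return (0);
	return (object->un_pager.vnp.vnp_size);
}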
/*
* Flags
*/
#define OBJ_CANPERSIST 0x0001 /* allow to persist */
#define OBJ_INTERNAL 0x0002 /* internally created object */
#define OBJ_ACTIVE 0x0004 /* used to mark active objects */
#define OBJ_DEAD 0x0008 /* used to mark dead objects during rundown */
#define OBJ_ILOCKED 0x0010 /* lock from modification */
#define OBJ_ILOCKWT 0x0020 /* wait for lock from modification */
#define OBJ_ACTIVE 0x0004 /* active objects */
#define OBJ_DEAD 0x0008 /* dead objects (during rundown) */
#define OBJ_PIPWNT 0x0040 /* paging in progress wanted */
#define OBJ_WRITEABLE 0x0080 /* object has been made writeable */
#define OBJ_WRITEABLE 0x0080 /* object has been made writable */
LIST_HEAD(vm_object_hash_head, vm_object_hash_entry);
struct vm_object_hash_entry {
LIST_ENTRY(vm_object_hash_entry) hash_links; /* hash chain links */
vm_object_t object; /* object represented */
};
typedef struct vm_object_hash_entry *vm_object_hash_entry_t;
#ifdef KERNEL
extern int vm_object_cache_max;
@ -131,28 +131,17 @@ TAILQ_HEAD(object_q, vm_object);
struct object_q vm_object_cached_list; /* list of objects persisting */
int vm_object_cached; /* size of cached list */
simple_lock_data_t vm_cache_lock; /* lock for object cache */
struct object_q vm_object_list; /* list of allocated objects */
long vm_object_count; /* count of all objects */
simple_lock_data_t vm_object_list_lock;
/* lock for object list and count */
vm_object_t kernel_object; /* the single kernel object */
vm_object_t kmem_object;
#define vm_object_cache_lock() simple_lock(&vm_cache_lock)
#define vm_object_cache_unlock() simple_unlock(&vm_cache_lock)
#endif /* KERNEL */
#if 1
#define vm_object_lock_init(object) simple_lock_init(&(object)->Lock)
#define vm_object_lock(object) simple_lock(&(object)->Lock)
#define vm_object_unlock(object) simple_unlock(&(object)->Lock)
#define vm_object_lock_try(object) simple_lock_try(&(object)->Lock)
#endif
#ifdef KERNEL
static __inline void
vm_object_pip_wakeup(vm_object_t object)
@ -164,7 +153,7 @@ vm_object_pip_wakeup(vm_object_t object)
}
}
vm_object_t vm_object_allocate __P((vm_size_t));
vm_object_t vm_object_allocate __P((objtype_t, vm_size_t));
void vm_object_cache_clear __P((void));
void vm_object_cache_trim __P((void));
boolean_t vm_object_coalesce __P((vm_object_t, vm_object_t, vm_offset_t, vm_offset_t, vm_offset_t, vm_size_t));
@ -172,17 +161,13 @@ void vm_object_collapse __P((vm_object_t));
void vm_object_copy __P((vm_object_t, vm_offset_t, vm_size_t, vm_object_t *, vm_offset_t *, boolean_t *));
void vm_object_deactivate_pages __P((vm_object_t));
void vm_object_deallocate __P((vm_object_t));
void vm_object_enter __P((vm_object_t, vm_pager_t));
void vm_object_init __P((vm_size_t));
vm_object_t vm_object_lookup __P((vm_pager_t));
void _vm_object_page_clean __P((vm_object_t, vm_offset_t, vm_offset_t, boolean_t));
void vm_object_page_clean __P((vm_object_t, vm_offset_t, vm_offset_t, boolean_t));
void vm_object_page_clean __P((vm_object_t, vm_offset_t, vm_offset_t, boolean_t, boolean_t));
void vm_object_page_remove __P((vm_object_t, vm_offset_t, vm_offset_t, boolean_t));
void vm_object_pmap_copy __P((vm_object_t, vm_offset_t, vm_offset_t));
void vm_object_pmap_remove __P((vm_object_t, vm_offset_t, vm_offset_t));
void vm_object_print __P((vm_object_t, boolean_t));
void vm_object_reference __P((vm_object_t));
void vm_object_remove __P((vm_pager_t));
void vm_object_shadow __P((vm_object_t *, vm_offset_t *, vm_size_t));
void vm_object_terminate __P((vm_object_t));
#endif /* KERNEL */
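The vm_object_allocate() prototype above now takes the object type as its first argument, matching the objtype_t enum at the top of the header. A hypothetical caller (illustrative only; the size is invented):

	vm_size_t len = 4 * PAGE_SIZE;	/* hypothetical size */
	vm_object_t obj;

	/* anonymous memory is created directly as OBJT_DEFAULT */
	obj = vm_object_allocate(OBJT_DEFAULT, round_page(len));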


@ -34,7 +34,7 @@
* SUCH DAMAGE.
*
* from: @(#)vm_page.c 7.4 (Berkeley) 5/7/91
* $Id: vm_page.c,v 1.31 1995/04/16 12:56:21 davidg Exp $
* $Id: vm_page.c,v 1.32 1995/05/30 08:16:15 rgrimes Exp $
*/
/*
@ -86,14 +86,11 @@
struct pglist *vm_page_buckets; /* Array of buckets */
int vm_page_bucket_count; /* How big is array? */
int vm_page_hash_mask; /* Mask for hash function */
simple_lock_data_t bucket_lock; /* lock for all buckets XXX */
struct pglist vm_page_queue_free;
struct pglist vm_page_queue_active;
struct pglist vm_page_queue_inactive;
struct pglist vm_page_queue_cache;
simple_lock_data_t vm_page_queue_lock;
simple_lock_data_t vm_page_queue_free_lock;
/* has physical page allocation been initialized? */
boolean_t vm_page_startup_initialized;
@ -196,14 +193,6 @@ vm_page_startup(starta, enda, vaddr)
start = phys_avail[biggestone];
/*
* Initialize the locks
*/
simple_lock_init(&vm_page_queue_free_lock);
simple_lock_init(&vm_page_queue_lock);
/*
* Initialize the queue headers for the free queue, the active queue
* and the inactive queue.
@ -250,8 +239,6 @@ vm_page_startup(starta, enda, vaddr)
bucket++;
}
simple_lock_init(&bucket_lock);
/*
* round (or truncate) the addresses to our page size.
*/
@ -290,8 +277,6 @@ vm_page_startup(starta, enda, vaddr)
*/
first_page = phys_avail[0] / PAGE_SIZE;
/* for VM_PAGE_CHECK() */
last_page = phys_avail[(nblocks - 1) * 2 + 1] / PAGE_SIZE;
page_range = last_page - (phys_avail[0] / PAGE_SIZE);
@ -342,12 +327,6 @@ vm_page_startup(starta, enda, vaddr)
}
}
/*
* Initialize vm_pages_needed lock here - don't wait for pageout
* daemon XXX
*/
simple_lock_init(&vm_pages_needed_lock);
return (mapped);
}
@ -383,8 +362,6 @@ vm_page_insert(mem, object, offset)
{
register struct pglist *bucket;
VM_PAGE_CHECK(mem);
if (mem->flags & PG_TABLED)
panic("vm_page_insert: already inserted");
@ -400,9 +377,7 @@ vm_page_insert(mem, object, offset)
*/
bucket = &vm_page_buckets[vm_page_hash(object, offset)];
simple_lock(&bucket_lock);
TAILQ_INSERT_TAIL(bucket, mem, hashq);
simple_unlock(&bucket_lock);
/*
* Now link into the object's list of backed pages.
@ -434,8 +409,6 @@ vm_page_remove(mem)
{
register struct pglist *bucket;
VM_PAGE_CHECK(mem);
if (!(mem->flags & PG_TABLED))
return;
@ -444,9 +417,7 @@ vm_page_remove(mem)
*/
bucket = &vm_page_buckets[vm_page_hash(mem->object, mem->offset)];
simple_lock(&bucket_lock);
TAILQ_REMOVE(bucket, mem, hashq);
simple_unlock(&bucket_lock);
/*
* Now remove from the object's list of backed pages.
@ -488,17 +459,13 @@ vm_page_lookup(object, offset)
bucket = &vm_page_buckets[vm_page_hash(object, offset)];
s = splhigh();
simple_lock(&bucket_lock);
for (mem = bucket->tqh_first; mem != NULL; mem = mem->hashq.tqe_next) {
VM_PAGE_CHECK(mem);
if ((mem->object == object) && (mem->offset == offset)) {
simple_unlock(&bucket_lock);
splx(s);
return (mem);
}
}
simple_unlock(&bucket_lock);
splx(s);
return (NULL);
}
@ -522,12 +489,10 @@ vm_page_rename(mem, new_object, new_offset)
if (mem->object == new_object)
return;
vm_page_lock_queues(); /* keep page from moving out from under pageout daemon */
s = splhigh();
vm_page_remove(mem);
vm_page_insert(mem, new_object, new_offset);
splx(s);
vm_page_unlock_queues();
}
/*
@ -583,12 +548,19 @@ vm_page_alloc(object, offset, page_req)
register vm_page_t mem;
int s;
#ifdef DIAGNOSTIC
if (offset != trunc_page(offset))
panic("vm_page_alloc: offset not page aligned");
mem = vm_page_lookup(object, offset);
if (mem)
panic("vm_page_alloc: page already allocated");
#endif
if ((curproc == pageproc) && (page_req != VM_ALLOC_INTERRUPT)) {
page_req = VM_ALLOC_SYSTEM;
}
simple_lock(&vm_page_queue_free_lock);
s = splhigh();
mem = vm_page_queue_free.tqh_first;
@ -605,7 +577,6 @@ vm_page_alloc(object, offset, page_req)
vm_page_remove(mem);
cnt.v_cache_count--;
} else {
simple_unlock(&vm_page_queue_free_lock);
splx(s);
pagedaemon_wakeup();
return (NULL);
@ -626,7 +597,6 @@ vm_page_alloc(object, offset, page_req)
vm_page_remove(mem);
cnt.v_cache_count--;
} else {
simple_unlock(&vm_page_queue_free_lock);
splx(s);
pagedaemon_wakeup();
return (NULL);
@ -639,7 +609,6 @@ vm_page_alloc(object, offset, page_req)
TAILQ_REMOVE(&vm_page_queue_free, mem, pageq);
cnt.v_free_count--;
} else {
simple_unlock(&vm_page_queue_free_lock);
splx(s);
pagedaemon_wakeup();
return NULL;
@ -650,8 +619,6 @@ vm_page_alloc(object, offset, page_req)
panic("vm_page_alloc: invalid allocation class");
}
simple_unlock(&vm_page_queue_free_lock);
mem->flags = PG_BUSY;
mem->wire_count = 0;
mem->hold_count = 0;
@ -784,10 +751,8 @@ vm_page_free(mem)
}
if ((flags & PG_WANTED) != 0)
wakeup((caddr_t) mem);
wakeup(mem);
if ((flags & PG_FICTITIOUS) == 0) {
simple_lock(&vm_page_queue_free_lock);
if (mem->wire_count) {
if (mem->wire_count > 1) {
printf("vm_page_free: wire count > 1 (%d)", mem->wire_count);
@ -798,15 +763,13 @@ vm_page_free(mem)
}
mem->flags |= PG_FREE;
TAILQ_INSERT_TAIL(&vm_page_queue_free, mem, pageq);
simple_unlock(&vm_page_queue_free_lock);
splx(s);
/*
* if pageout daemon needs pages, then tell it that there are
* some free.
*/
if (vm_pageout_pages_needed) {
wakeup((caddr_t) &vm_pageout_pages_needed);
wakeup(&vm_pageout_pages_needed);
vm_pageout_pages_needed = 0;
}
@ -817,8 +780,8 @@ vm_page_free(mem)
* lots of memory. this process will swapin processes.
*/
if ((cnt.v_free_count + cnt.v_cache_count) == cnt.v_free_min) {
wakeup((caddr_t) &cnt.v_free_count);
wakeup((caddr_t) &proc0);
wakeup(&cnt.v_free_count);
wakeup(&proc0);
}
} else {
splx(s);
@ -841,7 +804,6 @@ vm_page_wire(mem)
register vm_page_t mem;
{
int s;
VM_PAGE_CHECK(mem);
if (mem->wire_count == 0) {
s = splhigh();
@ -867,8 +829,6 @@ vm_page_unwire(mem)
{
int s;
VM_PAGE_CHECK(mem);
s = splhigh();
if (mem->wire_count)
@ -895,8 +855,6 @@ vm_page_activate(m)
{
int s;
VM_PAGE_CHECK(m);
s = splhigh();
if (m->flags & PG_ACTIVE)
panic("vm_page_activate: already active");
@ -933,8 +891,6 @@ vm_page_deactivate(m)
{
int spl;
VM_PAGE_CHECK(m);
/*
* Only move active pages -- ignore locked or already inactive ones.
*
@ -969,7 +925,6 @@ vm_page_cache(m)
{
int s;
VM_PAGE_CHECK(m);
if ((m->flags & (PG_CACHE | PG_BUSY)) || m->busy || m->wire_count ||
m->bmapped)
return;
@ -982,11 +937,11 @@ vm_page_cache(m)
m->flags |= PG_CACHE;
cnt.v_cache_count++;
if ((cnt.v_free_count + cnt.v_cache_count) == cnt.v_free_min) {
wakeup((caddr_t) &cnt.v_free_count);
wakeup((caddr_t) &proc0);
wakeup(&cnt.v_free_count);
wakeup(&proc0);
}
if (vm_pageout_pages_needed) {
wakeup((caddr_t) &vm_pageout_pages_needed);
wakeup(&vm_pageout_pages_needed);
vm_pageout_pages_needed = 0;
}
@ -1004,8 +959,6 @@ boolean_t
vm_page_zero_fill(m)
vm_page_t m;
{
VM_PAGE_CHECK(m);
pmap_zero_page(VM_PAGE_TO_PHYS(m));
m->valid = VM_PAGE_BITS_ALL;
return (TRUE);
@ -1021,9 +974,6 @@ vm_page_copy(src_m, dest_m)
vm_page_t src_m;
vm_page_t dest_m;
{
VM_PAGE_CHECK(src_m);
VM_PAGE_CHECK(dest_m);
pmap_copy_page(VM_PAGE_TO_PHYS(src_m), VM_PAGE_TO_PHYS(dest_m));
dest_m->valid = VM_PAGE_BITS_ALL;
}


@ -61,7 +61,7 @@
* any improvements or extensions that they make and grant Carnegie the
* rights to redistribute these changes.
*
* $Id: vm_page.h,v 1.17 1995/03/26 23:33:14 davidg Exp $
* $Id: vm_page.h,v 1.18 1995/04/23 08:05:49 bde Exp $
*/
/*
@ -136,34 +136,38 @@ struct vm_page {
#define PG_CACHE 0x4000 /* On VMIO cache */
#define PG_FREE 0x8000 /* page is in free list */
#if VM_PAGE_DEBUG
#define VM_PAGE_CHECK(mem) { \
if ((((unsigned int) mem) < ((unsigned int) &vm_page_array[0])) || \
(((unsigned int) mem) > \
((unsigned int) &vm_page_array[last_page-first_page])) || \
((mem->flags & (PG_ACTIVE | PG_INACTIVE)) == \
(PG_ACTIVE | PG_INACTIVE))) \
panic("vm_page_check: not valid!"); \
}
#else /* VM_PAGE_DEBUG */
#define VM_PAGE_CHECK(mem)
#endif /* VM_PAGE_DEBUG */
/*
* Misc constants.
*/
#define ACT_DECLINE 1
#define ACT_ADVANCE 3
#define ACT_MAX 100
#define PFCLUSTER_BEHIND 3
#define PFCLUSTER_AHEAD 3
#ifdef KERNEL
/*
* Each pageable resident page falls into one of three lists:
* Each pageable resident page falls into one of four lists:
*
* free
* Available for allocation now.
*
* The following are all LRU sorted:
*
* cache
* Almost available for allocation. Still in an
* object, but clean and immediately freeable at
* non-interrupt times.
*
* inactive
* Not referenced in any map, but still has an
* object/offset-page mapping, and may be dirty.
* Low activity, candidates for reclamation.
* This is the list of pages that should be
* paged out next.
*
* active
* A list of pages which have been placed in
* at least one physical map. This list is
* ordered, in LRU-like fashion.
* Pages that are "active", i.e. they have been
* recently referenced.
*/
extern struct pglist vm_page_queue_free; /* memory free queue */
@ -190,9 +194,6 @@ extern vm_offset_t last_phys_addr; /* physical address for last_page */
#define PHYS_TO_VM_PAGE(pa) \
(&vm_page_array[atop(pa) - first_page ])
extern simple_lock_data_t vm_page_queue_lock; /* lock on active and inactive page queues */
extern simple_lock_data_t vm_page_queue_free_lock; /* lock on free page queue */
/*
* Functions implemented as macros
*/
@ -210,9 +211,6 @@ extern simple_lock_data_t vm_page_queue_free_lock; /* lock on free page queue */
} \
}
#define vm_page_lock_queues() simple_lock(&vm_page_queue_lock)
#define vm_page_unlock_queues() simple_unlock(&vm_page_queue_lock)
#if PAGE_SIZE == 4096
#define VM_PAGE_BITS_ALL 0xff
#endif
@ -293,9 +291,4 @@ vm_page_protect(vm_page_t mem, int prot)
#endif /* KERNEL */
#define ACT_DECLINE 1
#define ACT_ADVANCE 3
#define ACT_MAX 100
#endif /* !_VM_PAGE_ */


@ -65,7 +65,7 @@
* any improvements or extensions that they make and grant Carnegie the
* rights to redistribute these changes.
*
* $Id: vm_pageout.c,v 1.51 1995/05/30 08:16:18 rgrimes Exp $
* $Id: vm_pageout.c,v 1.52 1995/07/10 08:53:22 davidg Exp $
*/
/*
@ -86,8 +86,8 @@
#include <vm/vm_page.h>
#include <vm/vm_pageout.h>
#include <vm/vm_kern.h>
#include <vm/vm_pager.h>
#include <vm/swap_pager.h>
#include <vm/vnode_pager.h>
int vm_pages_needed; /* Event on which pageout daemon sleeps */
@ -112,181 +112,163 @@ int vm_page_max_wired; /* XXX max # of wired pages system-wide */
/*
* vm_pageout_clean:
* cleans a vm_page
*
* Clean the page and remove it from the laundry.
*
* We set the busy bit to cause potential page faults on this page to
* block.
*
* And we set pageout-in-progress to keep the object from disappearing
* during pageout. This guarantees that the page won't move from the
* inactive queue. (However, any other page on the inactive queue may
* move!)
*/
int
vm_pageout_clean(m, sync)
register vm_page_t m;
vm_page_t m;
int sync;
{
/*
* Clean the page and remove it from the laundry.
*
* We set the busy bit to cause potential page faults on this page to
* block.
*
* And we set pageout-in-progress to keep the object from disappearing
* during pageout. This guarantees that the page won't move from the
* inactive queue. (However, any other page on the inactive queue may
* move!)
*/
register vm_object_t object;
register vm_pager_t pager;
int pageout_status[VM_PAGEOUT_PAGE_COUNT];
vm_page_t ms[VM_PAGEOUT_PAGE_COUNT], mb[VM_PAGEOUT_PAGE_COUNT];
int pageout_count, b_pageout_count;
vm_page_t mc[2*VM_PAGEOUT_PAGE_COUNT];
int pageout_count;
int anyok = 0;
int i;
int i, forward_okay, backward_okay, page_base;
vm_offset_t offset = m->offset;
object = m->object;
if (!object) {
printf("pager: object missing\n");
return 0;
}
if (!object->pager && (object->flags & OBJ_INTERNAL) == 0) {
printf("pager: non internal obj without pager\n");
}
/*
* Try to collapse the object before making a pager for it. We must
* unlock the page queues first. We try to defer the creation of a
* pager until all shadows are not paging. This allows
* vm_object_collapse to work better and helps control swap space
* size. (J. Dyson 11 Nov 93)
* If not OBJT_SWAP, additional memory may be needed to do the pageout.
* Try to avoid the deadlock.
*/
if (!object->pager &&
(cnt.v_free_count + cnt.v_cache_count) < cnt.v_pageout_free_min)
if ((sync != VM_PAGEOUT_FORCE) &&
(object->type != OBJT_SWAP) &&
((cnt.v_free_count + cnt.v_cache_count) < cnt.v_pageout_free_min))
return 0;
/*
* Don't mess with the page if it's busy.
*/
if ((!sync && m->hold_count != 0) ||
((m->busy != 0) || (m->flags & PG_BUSY)))
return 0;
if (!sync && object->shadow) {
/*
* Try collapsing before it's too late.
*/
if (!sync && object->backing_object) {
vm_object_collapse(object);
}
mc[VM_PAGEOUT_PAGE_COUNT] = m;
pageout_count = 1;
ms[0] = m;
pager = object->pager;
if (pager) {
for (i = 1; i < vm_pageout_page_count; i++) {
vm_page_t mt;
ms[i] = mt = vm_page_lookup(object, offset + i * NBPG);
if (mt) {
if (mt->flags & (PG_BUSY|PG_CACHE) || mt->busy)
break;
/*
* we can cluster ONLY if: ->> the page is NOT
* busy, and is NOT clean the page is not
* wired, busy, held, or mapped into a buffer.
* and one of the following: 1) The page is
* inactive, or a seldom used active page. 2)
* or we force the issue.
*/
vm_page_test_dirty(mt);
if ((mt->dirty & mt->valid) != 0
&& ((mt->flags & PG_INACTIVE) ||
(sync == VM_PAGEOUT_FORCE))
&& (mt->wire_count == 0)
&& (mt->hold_count == 0))
pageout_count++;
else
break;
} else
break;
}
if ((pageout_count < vm_pageout_page_count) && (offset != 0)) {
b_pageout_count = 0;
for (i = 0; i < vm_pageout_page_count-pageout_count; i++) {
vm_page_t mt;
mt = vm_page_lookup(object, offset - (i + 1) * NBPG);
if (mt) {
if (mt->flags & (PG_BUSY|PG_CACHE) || mt->busy)
break;
vm_page_test_dirty(mt);
if ((mt->dirty & mt->valid) != 0
&& ((mt->flags & PG_INACTIVE) ||
(sync == VM_PAGEOUT_FORCE))
&& (mt->wire_count == 0)
&& (mt->hold_count == 0)) {
mb[b_pageout_count] = mt;
b_pageout_count++;
if ((offset - (i + 1) * NBPG) == 0)
break;
} else
break;
} else
break;
}
if (b_pageout_count > 0) {
for(i=pageout_count - 1;i>=0;--i) {
ms[i+b_pageout_count] = ms[i];
}
for(i=0;i<b_pageout_count;i++) {
ms[i] = mb[b_pageout_count - (i + 1)];
}
pageout_count += b_pageout_count;
}
}
page_base = VM_PAGEOUT_PAGE_COUNT;
forward_okay = TRUE;
if (offset != 0)
backward_okay = TRUE;
else
backward_okay = FALSE;
/*
* Scan object for clusterable pages.
*
* We can cluster ONLY if: ->> the page is NOT
* clean, wired, busy, held, or mapped into a
* buffer, and one of the following:
* 1) The page is inactive, or a seldom used
* active page.
* -or-
* 2) we force the issue.
*/
for (i = 1; (i < vm_pageout_page_count) && (forward_okay || backward_okay); i++) {
vm_page_t p;
/*
* we allow reads during pageouts...
* See if forward page is clusterable.
*/
for (i = 0; i < pageout_count; i++) {
ms[i]->flags |= PG_BUSY;
vm_page_protect(ms[i], VM_PROT_READ);
if (forward_okay) {
/*
* Stop forward scan at end of object.
*/
if ((offset + i * PAGE_SIZE) > object->size) {
forward_okay = FALSE;
goto do_backward;
}
p = vm_page_lookup(object, offset + i * PAGE_SIZE);
if (p) {
if ((p->flags & (PG_BUSY|PG_CACHE)) || p->busy) {
forward_okay = FALSE;
goto do_backward;
}
vm_page_test_dirty(p);
if ((p->dirty & p->valid) != 0 &&
((p->flags & PG_INACTIVE) ||
(sync == VM_PAGEOUT_FORCE)) &&
(p->wire_count == 0) &&
(p->hold_count == 0)) {
mc[VM_PAGEOUT_PAGE_COUNT + i] = p;
pageout_count++;
if (pageout_count == vm_pageout_page_count)
break;
} else {
forward_okay = FALSE;
}
} else {
forward_okay = FALSE;
}
}
object->paging_in_progress += pageout_count;
} else {
m->flags |= PG_BUSY;
vm_page_protect(m, VM_PROT_READ);
object->paging_in_progress++;
pager = vm_pager_allocate(PG_DFLT, 0,
object->size, VM_PROT_ALL, 0);
if (pager != NULL) {
object->pager = pager;
do_backward:
/*
* See if backward page is clusterable.
*/
if (backward_okay) {
/*
* Stop backward scan at beginning of object.
*/
if ((offset - i * PAGE_SIZE) == 0) {
backward_okay = FALSE;
}
p = vm_page_lookup(object, offset - i * PAGE_SIZE);
if (p) {
if ((p->flags & (PG_BUSY|PG_CACHE)) || p->busy) {
backward_okay = FALSE;
continue;
}
vm_page_test_dirty(p);
if ((p->dirty & p->valid) != 0 &&
((p->flags & PG_INACTIVE) ||
(sync == VM_PAGEOUT_FORCE)) &&
(p->wire_count == 0) &&
(p->hold_count == 0)) {
mc[VM_PAGEOUT_PAGE_COUNT - i] = p;
pageout_count++;
page_base--;
if (pageout_count == vm_pageout_page_count)
break;
} else {
backward_okay = FALSE;
}
} else {
backward_okay = FALSE;
}
}
}
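The effect of the two scans is easiest to see with a worked example. mc[] is 2*VM_PAGEOUT_PAGE_COUNT entries wide with the requested page fixed in the middle slot; backward neighbours fill downward from there (dragging page_base with them) and forward neighbours fill upward, so the cluster finally handed to the pager is one contiguous slice of the array. The numbers below are illustrative only and assume VM_PAGEOUT_PAGE_COUNT is 8, which is not stated in this hunk:

	/*
	 * Example: two dirty, clusterable pages found behind the target
	 * and three ahead of it.
	 *
	 *	mc[6]	page at offset - 2*PAGE_SIZE	(page_base ends up 6)
	 *	mc[7]	page at offset - 1*PAGE_SIZE
	 *	mc[8]	m, the requested page
	 *	mc[9]	page at offset + 1*PAGE_SIZE
	 *	mc[10]	page at offset + 2*PAGE_SIZE
	 *	mc[11]	page at offset + 3*PAGE_SIZE
	 *
	 * pageout_count is 6, so vm_pager_put_pages() below is handed
	 * &mc[page_base] with a count of 6.
	 */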
/*
* If there is no pager for the page, use the default pager. If
* there's no place to put the page at the moment, leave it in the
* laundry and hope that there will be paging space later.
* we allow reads during pageouts...
*/
if ((pager && pager->pg_type == PG_SWAP) ||
(cnt.v_free_count + cnt.v_cache_count) >= cnt.v_pageout_free_min) {
if (pageout_count == 1) {
pageout_status[0] = pager ?
vm_pager_put(pager, m,
((sync || (object == kernel_object)) ? TRUE : FALSE)) :
VM_PAGER_FAIL;
} else {
if (!pager) {
for (i = 0; i < pageout_count; i++)
pageout_status[i] = VM_PAGER_FAIL;
} else {
vm_pager_put_pages(pager, ms, pageout_count,
((sync || (object == kernel_object)) ? TRUE : FALSE),
pageout_status);
}
}
} else {
for (i = 0; i < pageout_count; i++)
pageout_status[i] = VM_PAGER_FAIL;
for (i = page_base; i < (page_base + pageout_count); i++) {
mc[i]->flags |= PG_BUSY;
vm_page_protect(mc[i], VM_PROT_READ);
}
object->paging_in_progress += pageout_count;
vm_pager_put_pages(object, &mc[page_base], pageout_count,
((sync || (object == kernel_object)) ? TRUE : FALSE),
pageout_status);
for (i = 0; i < pageout_count; i++) {
vm_page_t mt = mc[page_base + i];
switch (pageout_status[i]) {
case VM_PAGER_OK:
++anyok;
@ -300,8 +282,8 @@ vm_pageout_clean(m, sync)
* essentially lose the changes by pretending it
* worked.
*/
pmap_clear_modify(VM_PAGE_TO_PHYS(ms[i]));
ms[i]->dirty = 0;
pmap_clear_modify(VM_PAGE_TO_PHYS(mt));
mt->dirty = 0;
break;
case VM_PAGER_ERROR:
case VM_PAGER_FAIL:
@ -310,8 +292,8 @@ vm_pageout_clean(m, sync)
* page so it doesn't clog the inactive list. (We
* will try paging out it again later).
*/
if (ms[i]->flags & PG_INACTIVE)
vm_page_activate(ms[i]);
if (mt->flags & PG_INACTIVE)
vm_page_activate(mt);
break;
case VM_PAGER_AGAIN:
break;
@ -326,14 +308,14 @@ vm_pageout_clean(m, sync)
*/
if (pageout_status[i] != VM_PAGER_PEND) {
vm_object_pip_wakeup(object);
if ((ms[i]->flags & (PG_REFERENCED|PG_WANTED)) ||
pmap_is_referenced(VM_PAGE_TO_PHYS(ms[i]))) {
pmap_clear_reference(VM_PAGE_TO_PHYS(ms[i]));
ms[i]->flags &= ~PG_REFERENCED;
if (ms[i]->flags & PG_INACTIVE)
vm_page_activate(ms[i]);
if ((mt->flags & (PG_REFERENCED|PG_WANTED)) ||
pmap_is_referenced(VM_PAGE_TO_PHYS(mt))) {
pmap_clear_reference(VM_PAGE_TO_PHYS(mt));
mt->flags &= ~PG_REFERENCED;
if (mt->flags & PG_INACTIVE)
vm_page_activate(mt);
}
PAGE_WAKEUP(ms[i]);
PAGE_WAKEUP(mt);
}
}
return anyok;
@ -345,7 +327,7 @@ vm_pageout_clean(m, sync)
* deactivate enough pages to satisfy the inactive target
* requirements or if vm_page_proc_limit is set, then
* deactivate all of the pages in the object and its
* shadows.
* backing_objects.
*
* The object and map must be locked.
*/
@ -364,16 +346,18 @@ vm_pageout_object_deactivate_pages(map, object, count, map_remove_only)
if (count == 0)
count = 1;
if (object->pager && (object->pager->pg_type == PG_DEVICE))
if (object->type == OBJT_DEVICE)
return 0;
if (object->shadow) {
if (object->shadow->ref_count == 1)
dcount += vm_pageout_object_deactivate_pages(map, object->shadow, count / 2 + 1, map_remove_only);
if (object->backing_object) {
if (object->backing_object->ref_count == 1)
dcount += vm_pageout_object_deactivate_pages(map,
object->backing_object, count / 2 + 1, map_remove_only);
else
vm_pageout_object_deactivate_pages(map, object->shadow, count, 1);
vm_pageout_object_deactivate_pages(map,
object->backing_object, count, 1);
}
if (object->paging_in_progress || !vm_object_lock_try(object))
if (object->paging_in_progress)
return dcount;
/*
@ -384,7 +368,6 @@ vm_pageout_object_deactivate_pages(map, object, count, map_remove_only)
while (p && (rcount-- > 0)) {
next = p->listq.tqe_next;
cnt.v_pdpages++;
vm_page_lock_queues();
if (p->wire_count != 0 ||
p->hold_count != 0 ||
p->busy != 0 ||
@ -427,8 +410,6 @@ vm_pageout_object_deactivate_pages(map, object, count, map_remove_only)
++dcount;
if (count <= 0 &&
cnt.v_inactive_count > cnt.v_inactive_target) {
vm_page_unlock_queues();
vm_object_unlock(object);
return dcount;
}
}
@ -447,10 +428,8 @@ vm_pageout_object_deactivate_pages(map, object, count, map_remove_only)
} else if ((p->flags & (PG_INACTIVE | PG_BUSY)) == PG_INACTIVE) {
vm_page_protect(p, VM_PROT_NONE);
}
vm_page_unlock_queues();
p = next;
}
vm_object_unlock(object);
return dcount;
}
@ -505,7 +484,7 @@ vm_req_vmdaemon()
static int lastrun = 0;
if ((ticks > (lastrun + hz / 10)) || (ticks < lastrun)) {
wakeup((caddr_t) &vm_daemon_needed);
wakeup(&vm_daemon_needed);
lastrun = ticks;
}
}
@ -602,15 +581,14 @@ rescan1:
struct vnode *vp = NULL;
object = m->object;
if ((object->flags & OBJ_DEAD) || !vm_object_lock_try(object)) {
if (object->flags & OBJ_DEAD) {
m = next;
continue;
}
if (object->pager && object->pager->pg_type == PG_VNODE) {
vp = ((vn_pager_t) object->pager->pg_data)->vnp_vp;
if (object->type == OBJT_VNODE) {
vp = object->handle;
if (VOP_ISLOCKED(vp) || vget(vp, 1)) {
vm_object_unlock(object);
if (object->flags & OBJ_WRITEABLE)
++vnodes_skipped;
m = next;
@ -629,8 +607,6 @@ rescan1:
if (vp)
vput(vp);
vm_object_unlock(object);
if (!next) {
break;
}
@ -744,7 +720,7 @@ rescan1:
(cnt.v_cache_count + cnt.v_free_count) < cnt.v_free_min) {
if (!vfs_update_wakeup) {
vfs_update_wakeup = 1;
wakeup((caddr_t) &vfs_update_wakeup);
wakeup(&vfs_update_wakeup);
}
}
/*
@ -804,7 +780,7 @@ rescan1:
bigproc->p_estcpu = 0;
bigproc->p_nice = PRIO_MIN;
resetpriority(bigproc);
wakeup((caddr_t) &cnt.v_free_count);
wakeup(&cnt.v_free_count);
}
}
return force_wakeup;
@ -853,7 +829,7 @@ vm_pageout()
vm_page_max_wired = cnt.v_free_count / 3;
(void) swap_pager_alloc(0, 0, 0, 0);
swap_pager_swap_init();
/*
* The pageout daemon is never done, so loop forever.
*/
@ -864,7 +840,7 @@ vm_pageout()
((cnt.v_free_count >= cnt.v_free_reserved) &&
(cnt.v_free_count + cnt.v_cache_count >= cnt.v_free_min))) {
vm_pages_needed = 0;
tsleep((caddr_t) &vm_pages_needed, PVM, "psleep", 0);
tsleep(&vm_pages_needed, PVM, "psleep", 0);
}
vm_pages_needed = 0;
splx(s);
@ -872,8 +848,8 @@ vm_pageout()
vm_pager_sync();
vm_pageout_scan();
vm_pager_sync();
wakeup((caddr_t) &cnt.v_free_count);
wakeup((caddr_t) kmem_map);
wakeup(&cnt.v_free_count);
wakeup(kmem_map);
}
}
@ -884,8 +860,8 @@ vm_daemon()
struct proc *p;
while (TRUE) {
tsleep((caddr_t) &vm_daemon_needed, PUSER, "psleep", 0);
if( vm_pageout_req_swapout) {
tsleep(&vm_daemon_needed, PUSER, "psleep", 0);
if (vm_pageout_req_swapout) {
swapout_procs();
vm_pageout_req_swapout = 0;
}
@ -934,27 +910,22 @@ vm_daemon()
(vm_map_entry_t) 0, &overage, vm_pageout_object_deactivate_pages);
}
}
}
/*
* we remove cached objects that have no RSS...
*/
restart:
vm_object_cache_lock();
object = vm_object_cached_list.tqh_first;
while (object) {
vm_object_cache_unlock();
/*
* if there are no resident pages -- get rid of the object
* we remove cached objects that have no RSS...
*/
if (object->resident_page_count == 0) {
if (object != vm_object_lookup(object->pager))
panic("vm_object_cache_trim: I'm sooo confused.");
pager_cache(object, FALSE);
goto restart;
restart:
object = vm_object_cached_list.tqh_first;
while (object) {
/*
* if there are no resident pages -- get rid of the object
*/
if (object->resident_page_count == 0) {
vm_object_reference(object);
pager_cache(object, FALSE);
goto restart;
}
object = object->cached_list.tqe_next;
}
object = object->cached_list.tqe_next;
vm_object_cache_lock();
}
vm_object_cache_unlock();
}


@ -61,7 +61,7 @@
* any improvements or extensions that they make and grant Carnegie the
* rights to redistribute these changes.
*
* $Id: vm_pageout.h,v 1.11 1995/04/09 06:03:55 davidg Exp $
* $Id: vm_pageout.h,v 1.12 1995/05/30 08:16:20 rgrimes Exp $
*/
#ifndef _VM_VM_PAGEOUT_H_
@ -77,7 +77,6 @@
extern int vm_page_max_wired;
extern int vm_pages_needed; /* should be some "event" structure */
simple_lock_data_t vm_pages_needed_lock;
extern int vm_pageout_pages_needed;
#define VM_PAGEOUT_ASYNC 0
@ -97,7 +96,7 @@ pagedaemon_wakeup()
{
if (!vm_pages_needed && curproc != pageproc) {
vm_pages_needed++;
wakeup((caddr_t) &vm_pages_needed);
wakeup(&vm_pages_needed);
}
}
@ -111,13 +110,13 @@ vm_wait()
s = splhigh();
if (curproc == pageproc) {
vm_pageout_pages_needed = 1;
tsleep((caddr_t) &vm_pageout_pages_needed, PSWP, "vmwait", 0);
tsleep(&vm_pageout_pages_needed, PSWP, "vmwait", 0);
} else {
if (!vm_pages_needed) {
vm_pages_needed++;
wakeup((caddr_t) &vm_pages_needed);
wakeup(&vm_pages_needed);
}
tsleep((caddr_t) &cnt.v_free_count, PVM, "vmwait", 0);
tsleep(&cnt.v_free_count, PVM, "vmwait", 0);
}
splx(s);
}


@ -61,7 +61,7 @@
* any improvements or extensions that they make and grant Carnegie the
* rights to redistribute these changes.
*
* $Id: vm_pager.c,v 1.14 1995/04/25 06:22:47 davidg Exp $
* $Id: vm_pager.c,v 1.15 1995/05/10 18:56:07 davidg Exp $
*/
/*
@ -79,20 +79,21 @@
#include <vm/vm.h>
#include <vm/vm_page.h>
#include <vm/vm_kern.h>
#include <vm/vm_pager.h>
extern struct pagerops defaultpagerops;
extern struct pagerops swappagerops;
extern struct pagerops vnodepagerops;
extern struct pagerops devicepagerops;
struct pagerops *pagertab[] = {
&swappagerops, /* PG_SWAP */
&vnodepagerops, /* PG_VNODE */
&devicepagerops, /* PG_DEV */
&defaultpagerops, /* OBJT_DEFAULT */
&swappagerops, /* OBJT_SWAP */
&vnodepagerops, /* OBJT_VNODE */
&devicepagerops, /* OBJT_DEVICE */
};
int npagers = sizeof(pagertab) / sizeof(pagertab[0]);
struct pagerops *dfltpagerops = NULL; /* default pager */
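With the pager structure gone, the object's type is the entire dispatch mechanism: the objtype_t value indexes pagertab[] above and the call goes through the selected pagerops vector. A minimal sketch of that pattern (the wrapper name is made up; the call itself mirrors the dispatch used by the routines later in this file):

static int
example_getpages(vm_object_t object, vm_page_t *m, int count, int reqpage)
{
	/* object->type is OBJT_DEFAULT, OBJT_SWAP, OBJT_VNODE or OBJT_DEVICE */
	return ((*pagertab[object->type]->pgo_getpages) (object, m, count, reqpage));
}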
/*
* Kernel address space for mapping pages.
* Used by pagers where KVAs are needed for IO.
@ -119,10 +120,8 @@ vm_pager_init()
* Initialize known pagers
*/
for (pgops = pagertab; pgops < &pagertab[npagers]; pgops++)
if (pgops)
if (pgops && ((*pgops)->pgo_init != NULL))
(*(*pgops)->pgo_init) ();
if (dfltpagerops == NULL)
panic("no default pager");
}
void
@ -154,9 +153,9 @@ vm_pager_bufferinit()
* Size, protection and offset parameters are passed in for pagers that
* need to perform page-level validation (e.g. the device pager).
*/
vm_pager_t
vm_object_t
vm_pager_allocate(type, handle, size, prot, off)
int type;
objtype_t type;
void *handle;
vm_size_t size;
vm_prot_t prot;
@ -164,84 +163,49 @@ vm_pager_allocate(type, handle, size, prot, off)
{
struct pagerops *ops;
ops = (type == PG_DFLT) ? dfltpagerops : pagertab[type];
ops = pagertab[type];
if (ops)
return ((*ops->pgo_alloc) (handle, size, prot, off));
return (NULL);
}
void
vm_pager_deallocate(pager)
vm_pager_t pager;
vm_pager_deallocate(object)
vm_object_t object;
{
if (pager == NULL)
panic("vm_pager_deallocate: null pager");
(*pager->pg_ops->pgo_dealloc) (pager);
(*pagertab[object->type]->pgo_dealloc) (object);
}
int
vm_pager_get_pages(pager, m, count, reqpage, sync)
vm_pager_t pager;
vm_pager_get_pages(object, m, count, reqpage)
vm_object_t object;
vm_page_t *m;
int count;
int reqpage;
boolean_t sync;
{
int i;
if (pager == NULL) {
for (i = 0; i < count; i++) {
if (i != reqpage) {
PAGE_WAKEUP(m[i]);
vm_page_free(m[i]);
}
}
vm_page_zero_fill(m[reqpage]);
return VM_PAGER_OK;
}
if (pager->pg_ops->pgo_getpages == 0) {
for (i = 0; i < count; i++) {
if (i != reqpage) {
PAGE_WAKEUP(m[i]);
vm_page_free(m[i]);
}
}
return (VM_PAGER_GET(pager, m[reqpage], sync));
} else {
return (VM_PAGER_GET_MULTI(pager, m, count, reqpage, sync));
}
return ((*pagertab[object->type]->pgo_getpages)(object, m, count, reqpage));
}
int
vm_pager_put_pages(pager, m, count, sync, rtvals)
vm_pager_t pager;
vm_pager_put_pages(object, m, count, sync, rtvals)
vm_object_t object;
vm_page_t *m;
int count;
boolean_t sync;
int *rtvals;
{
int i;
if (pager->pg_ops->pgo_putpages)
return (VM_PAGER_PUT_MULTI(pager, m, count, sync, rtvals));
else {
for (i = 0; i < count; i++) {
rtvals[i] = VM_PAGER_PUT(pager, m[i], sync);
}
return rtvals[0];
}
return ((*pagertab[object->type]->pgo_putpages)(object, m, count, sync, rtvals));
}
boolean_t
vm_pager_has_page(pager, offset)
vm_pager_t pager;
vm_pager_has_page(object, offset, before, after)
vm_object_t object;
vm_offset_t offset;
int *before;
int *after;
{
if (pager == NULL)
panic("vm_pager_has_page: null pager");
return ((*pager->pg_ops->pgo_haspage) (pager, offset));
return ((*pagertab[object->type]->pgo_haspage) (object, offset, before, after));
}
/*
@ -254,24 +218,10 @@ vm_pager_sync()
struct pagerops **pgops;
for (pgops = pagertab; pgops < &pagertab[npagers]; pgops++)
if (pgops)
(*(*pgops)->pgo_putpage) (NULL, NULL, 0);
if (pgops && ((*pgops)->pgo_sync != NULL))
(*(*pgops)->pgo_sync) ();
}
#if 0
void
vm_pager_cluster(pager, offset, loff, hoff)
vm_pager_t pager;
vm_offset_t offset;
vm_offset_t *loff;
vm_offset_t *hoff;
{
if (pager == NULL)
panic("vm_pager_cluster: null pager");
return ((*pager->pg_ops->pgo_cluster) (pager, offset, loff, hoff));
}
#endif
vm_offset_t
vm_pager_map_page(m)
vm_page_t m;
@ -303,16 +253,16 @@ vm_pager_atop(kva)
return (PHYS_TO_VM_PAGE(pa));
}
vm_pager_t
vm_pager_lookup(pglist, handle)
register struct pagerlst *pglist;
caddr_t handle;
vm_object_t
vm_pager_object_lookup(pg_list, handle)
register struct pagerlst *pg_list;
void *handle;
{
register vm_pager_t pager;
register vm_object_t object;
for (pager = pglist->tqh_first; pager; pager = pager->pg_list.tqe_next)
if (pager->pg_handle == handle)
return (pager);
for (object = pg_list->tqh_first; object != NULL; object = object->pager_object_list.tqe_next)
if (object->handle == handle)
return (object);
return (NULL);
}
@ -328,14 +278,10 @@ pager_cache(object, should_cache)
if (object == NULL)
return (KERN_INVALID_ARGUMENT);
vm_object_cache_lock();
vm_object_lock(object);
if (should_cache)
object->flags |= OBJ_CANPERSIST;
else
object->flags &= ~OBJ_CANPERSIST;
vm_object_unlock(object);
vm_object_cache_unlock();
vm_object_deallocate(object);
@ -355,7 +301,7 @@ getpbuf()
/* get a bp from the swap buffer header pool */
while ((bp = bswlist.tqh_first) == NULL) {
bswneeded = 1;
tsleep((caddr_t) &bswneeded, PVM, "wswbuf", 0);
tsleep(&bswneeded, PVM, "wswbuf", 0);
}
TAILQ_REMOVE(&bswlist, bp, b_freelist);
splx(s);
@ -416,13 +362,13 @@ relpbuf(bp)
pbrelvp(bp);
if (bp->b_flags & B_WANTED)
wakeup((caddr_t) bp);
wakeup(bp);
TAILQ_INSERT_HEAD(&bswlist, bp, b_freelist);
if (bswneeded) {
bswneeded = 0;
wakeup((caddr_t) &bswneeded);
wakeup(&bswneeded);
}
splx(s);
}


@ -1,4 +1,3 @@
/*
* Copyright (c) 1990 University of Utah.
* Copyright (c) 1991, 1993
@ -37,56 +36,28 @@
* SUCH DAMAGE.
*
* @(#)vm_pager.h 8.4 (Berkeley) 1/12/94
* $Id: vm_pager.h,v 1.6 1995/03/16 18:17:32 bde Exp $
* $Id: vm_pager.h,v 1.7 1995/05/10 18:56:08 davidg Exp $
*/
/*
* Pager routine interface definition.
* For BSD we use a cleaner version of the internal pager interface.
*/
#ifndef _VM_PAGER_
#define _VM_PAGER_
TAILQ_HEAD(pagerlst, pager_struct);
struct pager_struct {
TAILQ_ENTRY(pager_struct) pg_list; /* links for list management */
void *pg_handle; /* ext. handle (vp, dev, fp) */
int pg_type; /* type of pager */
struct pagerops *pg_ops; /* pager operations */
void *pg_data; /* private pager data */
};
/* pager types */
#define PG_DFLT -1
#define PG_SWAP 0
#define PG_VNODE 1
#define PG_DEVICE 2
/* flags */
#define PG_CLUSTERGET 1
#define PG_CLUSTERPUT 2
TAILQ_HEAD(pagerlst, vm_object);
struct pagerops {
void (*pgo_init) __P((void)); /* Initialize pager. */
vm_pager_t(*pgo_alloc) __P((void *, vm_size_t, vm_prot_t, vm_offset_t)); /* Allocate pager. */
void (*pgo_dealloc) __P((vm_pager_t)); /* Disassociate. */
int (*pgo_getpage) __P((vm_pager_t, vm_page_t, boolean_t));
int (*pgo_getpages) __P((vm_pager_t, vm_page_t *, int, int, boolean_t)); /* Get (read) page. */
int (*pgo_putpage) __P((vm_pager_t, vm_page_t, boolean_t));
int (*pgo_putpages) __P((vm_pager_t, vm_page_t *, int, boolean_t, int *)); /* Put (write) page. */
boolean_t(*pgo_haspage) __P((vm_pager_t, vm_offset_t)); /* Does pager have page? */
vm_object_t (*pgo_alloc) __P((void *, vm_size_t, vm_prot_t, vm_offset_t)); /* Allocate pager. */
void (*pgo_dealloc) __P((vm_object_t)); /* Disassociate. */
int (*pgo_getpages) __P((vm_object_t, vm_page_t *, int, int)); /* Get (read) page. */
int (*pgo_putpages) __P((vm_object_t, vm_page_t *, int, boolean_t, int *)); /* Put (write) page. */
boolean_t (*pgo_haspage) __P((vm_object_t, vm_offset_t, int *, int *)); /* Does pager have page? */
void (*pgo_sync) __P((void));
};
#define VM_PAGER_ALLOC(h, s, p, o) (*(pg)->pg_ops->pgo_alloc)(h, s, p, o)
#define VM_PAGER_DEALLOC(pg) (*(pg)->pg_ops->pgo_dealloc)(pg)
#define VM_PAGER_GET(pg, m, s) (*(pg)->pg_ops->pgo_getpage)(pg, m, s)
#define VM_PAGER_GET_MULTI(pg, m, c, r, s) (*(pg)->pg_ops->pgo_getpages)(pg, m, c, r, s)
#define VM_PAGER_PUT(pg, m, s) (*(pg)->pg_ops->pgo_putpage)(pg, m, s)
#define VM_PAGER_PUT_MULTI(pg, m, c, s, rtval) (*(pg)->pg_ops->pgo_putpages)(pg, m, c, s, rtval)
#define VM_PAGER_HASPAGE(pg, o) (*(pg)->pg_ops->pgo_haspage)(pg, o)
/*
* get/put return values
* OK operation was successful
@ -104,41 +75,20 @@ struct pagerops {
#define VM_PAGER_AGAIN 5
#ifdef KERNEL
extern struct pagerops *dfltpagerops;
vm_pager_t vm_pager_allocate __P((int, void *, vm_size_t, vm_prot_t, vm_offset_t));
vm_object_t vm_pager_allocate __P((objtype_t, void *, vm_size_t, vm_prot_t, vm_offset_t));
vm_page_t vm_pager_atop __P((vm_offset_t));
void vm_pager_bufferinit __P((void));
void vm_pager_deallocate __P((vm_pager_t));
int vm_pager_get_pages __P((vm_pager_t, vm_page_t *, int, int, boolean_t));
boolean_t vm_pager_has_page __P((vm_pager_t, vm_offset_t));
void vm_pager_deallocate __P((vm_object_t));
int vm_pager_get_pages __P((vm_object_t, vm_page_t *, int, int));
boolean_t vm_pager_has_page __P((vm_object_t, vm_offset_t, int *, int *));
void vm_pager_init __P((void));
vm_pager_t vm_pager_lookup __P((struct pagerlst *, caddr_t));
vm_object_t vm_pager_object_lookup __P((struct pagerlst *, void *));
vm_offset_t vm_pager_map_pages __P((vm_page_t *, int, boolean_t));
vm_offset_t vm_pager_map_page __P((vm_page_t));
int vm_pager_put_pages __P((vm_pager_t, vm_page_t *, int, boolean_t, int *));
int vm_pager_put_pages __P((vm_object_t, vm_page_t *, int, boolean_t, int *));
void vm_pager_sync __P((void));
void vm_pager_unmap_pages __P((vm_offset_t, int));
void vm_pager_unmap_page __P((vm_offset_t));
/*
* XXX compat with old interface
*/
#define vm_pager_get(p, m, s) \
({ \
vm_page_t ml[1]; \
ml[0] = (m); \
vm_pager_get_pages(p, ml, 1, 0, s); \
})
#define vm_pager_put(p, m, s) \
({ \
int rtval; \
vm_page_t ml[1]; \
ml[0] = (m); \
vm_pager_put_pages(p, ml, 1, s, &rtval); \
rtval; \
})
#endif
#endif /* _VM_PAGER_ */
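Since the single-page vm_pager_get/vm_pager_put compatibility macros were removed above, a caller that really wants a one-page operation now builds a one-element array and uses the array interface directly. A hedged sketch of such a caller (not part of the patch; the function name is made up):

static int
example_put_one(vm_object_t object, vm_page_t m, boolean_t sync)
{
	vm_page_t ma[1];
	int rtval;

	ma[0] = m;
	vm_pager_put_pages(object, ma, 1, sync, &rtval);
	return (rtval);
}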


@ -31,7 +31,7 @@
* SUCH DAMAGE.
*
* @(#)vm_swap.c 8.5 (Berkeley) 2/17/94
* $Id: vm_swap.c,v 1.20 1995/05/25 03:38:11 davidg Exp $
* $Id: vm_swap.c,v 1.21 1995/05/30 08:16:21 rgrimes Exp $
*/
#include <sys/param.h>
@ -109,7 +109,7 @@ swstrategy(bp)
vp->v_numoutput--;
if ((vp->v_flag & VBWAIT) && vp->v_numoutput <= 0) {
vp->v_flag &= ~VBWAIT;
wakeup((caddr_t) &vp->v_numoutput);
wakeup(&vp->v_numoutput);
}
}
sp->sw_vp->v_numoutput++;


@ -2,7 +2,8 @@
* Copyright (c) 1990 University of Utah.
* Copyright (c) 1991 The Regents of the University of California.
* All rights reserved.
* Copyright (c) 1993,1994 John S. Dyson
* Copyright (c) 1993, 1994 John S. Dyson
* Copyright (c) 1995, David Greenman
*
* This code is derived from software contributed to Berkeley by
* the Systems Programming Group of the University of Utah Computer
@ -37,25 +38,17 @@
* SUCH DAMAGE.
*
* from: @(#)vnode_pager.c 7.5 (Berkeley) 4/20/91
* $Id: vnode_pager.c,v 1.42 1995/07/06 11:48:48 davidg Exp $
* $Id: vnode_pager.c,v 1.43 1995/07/09 06:58:03 davidg Exp $
*/
/*
* Page to/from files (vnodes).
*
* TODO:
* pageouts
* fix credential use (uses current process credentials now)
*/
/*
* 1) Supports multiple - block reads/writes
* 2) Bypasses buffer cache for reads
*
* TODO:
* Implement getpage/putpage interface for filesystems. Should
* Implement VOP_GETPAGES/PUTPAGES interface for filesystems. Will
* greatly re-simplify the vnode_pager.
*
*/
#include <sys/param.h>
@ -66,64 +59,34 @@
#include <sys/vnode.h>
#include <sys/uio.h>
#include <sys/mount.h>
#include <sys/buf.h>
#include <vm/vm.h>
#include <vm/vm_page.h>
#include <vm/vm_pager.h>
#include <vm/vnode_pager.h>
#include <sys/buf.h>
#include <miscfs/specfs/specdev.h>
int vnode_pager_putmulti();
void vnode_pager_init();
void vnode_pager_dealloc();
int vnode_pager_getpage();
int vnode_pager_getmulti();
int vnode_pager_putpage();
boolean_t vnode_pager_haspage();
struct pagerops vnodepagerops = {
vnode_pager_init,
NULL,
vnode_pager_alloc,
vnode_pager_dealloc,
vnode_pager_getpage,
vnode_pager_getmulti,
vnode_pager_putpage,
vnode_pager_putmulti,
vnode_pager_haspage
vnode_pager_getpages,
vnode_pager_putpages,
vnode_pager_haspage,
NULL
};
static int vnode_pager_input(vn_pager_t vnp, vm_page_t * m, int count, int reqpage);
static int vnode_pager_output(vn_pager_t vnp, vm_page_t * m, int count, int *rtvals);
extern vm_map_t pager_map;
struct pagerlst vnode_pager_list; /* list of managed vnodes */
#define MAXBP (PAGE_SIZE/DEV_BSIZE);
void
vnode_pager_init()
{
TAILQ_INIT(&vnode_pager_list);
}
/*
* Allocate (or lookup) pager for a vnode.
* Handle is a vnode pointer.
*/
vm_pager_t
vm_object_t
vnode_pager_alloc(handle, size, prot, offset)
void *handle;
vm_size_t size;
vm_prot_t prot;
vm_offset_t offset;
{
register vm_pager_t pager;
register vn_pager_t vnp;
vm_object_t object;
struct vnode *vp;
@ -149,49 +112,31 @@ vnode_pager_alloc(handle, size, prot, offset)
* If the object is being terminated, wait for it to
* go away.
*/
while (((object = vp->v_object) != NULL) && (object->flags & OBJ_DEAD))
while (((object = vp->v_object) != NULL) && (object->flags & OBJ_DEAD)) {
tsleep(object, PVM, "vadead", 0);
}
pager = NULL;
if (object != NULL)
pager = object->pager;
if (pager == NULL) {
/*
* Allocate pager structures
*/
pager = (vm_pager_t) malloc(sizeof *pager, M_VMPAGER, M_WAITOK);
vnp = (vn_pager_t) malloc(sizeof *vnp, M_VMPGDATA, M_WAITOK);
if (object == NULL) {
/*
* And an object of the appropriate size
*/
object = vm_object_allocate(round_page(size));
object = vm_object_allocate(OBJT_VNODE, round_page(size));
object->flags = OBJ_CANPERSIST;
vm_object_enter(object, pager);
object->pager = pager;
/*
* Hold a reference to the vnode and initialize pager data.
* Hold a reference to the vnode and initialize object data.
*/
VREF(vp);
vnp->vnp_flags = 0;
vnp->vnp_vp = vp;
vnp->vnp_size = size;
object->un_pager.vnp.vnp_size = size;
TAILQ_INSERT_TAIL(&vnode_pager_list, pager, pg_list);
pager->pg_handle = handle;
pager->pg_type = PG_VNODE;
pager->pg_ops = &vnodepagerops;
pager->pg_data = (caddr_t) vnp;
vp->v_object = (caddr_t) object;
object->handle = handle;
vp->v_object = object;
} else {
/*
* vm_object_lookup() will remove the object from the cache if
* found and also gain a reference to the object.
* vm_object_reference() will remove the object from the cache if
* found and gain a reference to the object.
*/
(void) vm_object_lookup(pager);
vm_object_reference(object);
}
if (vp->v_type == VREG)
@ -202,134 +147,97 @@ vnode_pager_alloc(handle, size, prot, offset)
vp->v_flag &= ~VOWANT;
wakeup(vp);
}
return (pager);
return (object);
}
void
vnode_pager_dealloc(pager)
vm_pager_t pager;
{
register vn_pager_t vnp = (vn_pager_t) pager->pg_data;
register struct vnode *vp;
vnode_pager_dealloc(object)
vm_object_t object;
{
register struct vnode *vp = object->handle;
vp = vnp->vnp_vp;
if (vp) {
if (vp == NULL)
panic("vnode_pager_dealloc: pager already dealloced");
if (object->paging_in_progress) {
int s = splbio();
object = vp->v_object;
if (object) {
while (object->paging_in_progress) {
object->flags |= OBJ_PIPWNT;
tsleep(object, PVM, "vnpdea", 0);
}
while (object->paging_in_progress) {
object->flags |= OBJ_PIPWNT;
tsleep(object, PVM, "vnpdea", 0);
}
splx(s);
vp->v_object = NULL;
vp->v_flag &= ~(VTEXT | VVMIO);
vp->v_flag |= VAGE;
vrele(vp);
}
TAILQ_REMOVE(&vnode_pager_list, pager, pg_list);
free((caddr_t) vnp, M_VMPGDATA);
free((caddr_t) pager, M_VMPAGER);
}
int
vnode_pager_getmulti(pager, m, count, reqpage, sync)
vm_pager_t pager;
vm_page_t *m;
int count;
int reqpage;
boolean_t sync;
{
object->handle = NULL;
return vnode_pager_input((vn_pager_t) pager->pg_data, m, count, reqpage);
}
int
vnode_pager_getpage(pager, m, sync)
vm_pager_t pager;
vm_page_t m;
boolean_t sync;
{
vm_page_t marray[1];
if (pager == NULL)
return FALSE;
marray[0] = m;
return vnode_pager_input((vn_pager_t) pager->pg_data, marray, 1, 0);
vp->v_object = NULL;
vp->v_flag &= ~(VTEXT | VVMIO);
vp->v_flag |= VAGE;
vrele(vp);
}
boolean_t
vnode_pager_putpage(pager, m, sync)
vm_pager_t pager;
vm_page_t m;
boolean_t sync;
{
vm_page_t marray[1];
int rtvals[1];
if (pager == NULL)
return FALSE;
marray[0] = m;
vnode_pager_output((vn_pager_t) pager->pg_data, marray, 1, rtvals);
return rtvals[0];
}
int
vnode_pager_putmulti(pager, m, c, sync, rtvals)
vm_pager_t pager;
vm_page_t *m;
int c;
boolean_t sync;
int *rtvals;
{
return vnode_pager_output((vn_pager_t) pager->pg_data, m, c, rtvals);
}
boolean_t
vnode_pager_haspage(pager, offset)
vm_pager_t pager;
vnode_pager_haspage(object, offset, before, after)
vm_object_t object;
vm_offset_t offset;
int *before;
int *after;
{
register vn_pager_t vnp = (vn_pager_t) pager->pg_data;
register struct vnode *vp = vnp->vnp_vp;
struct vnode *vp = object->handle;
daddr_t bn;
int err;
daddr_t block;
int err, run;
daddr_t startblock, reqblock;
/*
* If filesystem no longer mounted or offset beyond end of file we do
* not have the page.
*/
if ((vp->v_mount == NULL) || (offset >= vnp->vnp_size))
if ((vp->v_mount == NULL) || (offset >= object->un_pager.vnp.vnp_size))
return FALSE;
block = offset / vp->v_mount->mnt_stat.f_iosize;
if (incore(vp, block))
return TRUE;
startblock = reqblock = offset / vp->v_mount->mnt_stat.f_iosize;
if (startblock > PFCLUSTER_BEHIND)
startblock -= PFCLUSTER_BEHIND;
else
startblock = 0;
/*
* Read the index to find the disk block to read from. If there is no
* block, report that we don't have this data.
*
* Assumes that the vnode has whole page or nothing.
*/
err = VOP_BMAP(vp, block, (struct vnode **) 0, &bn, 0);
if (before != NULL) {
/*
* Loop looking for a contiguous chunk that includes the
* requested page.
*/
while (TRUE) {
err = VOP_BMAP(vp, startblock, (struct vnode **) 0, &bn, &run);
if (err || bn == -1) {
if (startblock < reqblock) {
startblock++;
continue;
}
*before = 0;
if (after != NULL)
*after = 0;
return err ? TRUE : FALSE;
}
if ((startblock + run) < reqblock) {
startblock += run + 1;
continue;
}
*before = reqblock - startblock;
if (after != NULL)
*after = run;
return TRUE;
}
}
err = VOP_BMAP(vp, reqblock, (struct vnode **) 0, &bn, after);
if (err)
return (TRUE);
return TRUE;
return ((long) bn < 0 ? FALSE : TRUE);
}
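The before/after outputs are what make the reworked haspage interface useful for clustering: on success they bound how much contiguously mapped backing store sits on either side of the requested offset. An illustrative caller (not in the patch; the function name is made up):

static int
example_cluster_extent(vm_object_t object, vm_offset_t offset)
{
	int before, after;

	if (!vm_pager_has_page(object, offset, &before, &after))
		return (0);	/* the pager cannot supply the page at 'offset' */

	/*
	 * A clustered pagein could extend up to 'before' units behind
	 * and 'after' units ahead of 'offset' in a single contiguous I/O.
	 */
	return (before + after + 1);
}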
/*
* Lets the VM system know about a change in size for a file.
* If this vnode is mapped into some address space (i.e. we have a pager
* for it) we adjust our own internal size and flush any cached pages in
* We adjust our own internal size and flush any cached pages in
* the associated object that are affected by the size change.
*
* Note: this routine may be invoked as a result of a pager put
@ -340,37 +248,24 @@ vnode_pager_setsize(vp, nsize)
struct vnode *vp;
u_long nsize;
{
register vn_pager_t vnp;
register vm_object_t object;
vm_pager_t pager;
vm_object_t object = vp->v_object;
/*
* Not a mapped vnode
*/
if (vp == NULL || vp->v_type != VREG || vp->v_object == NULL)
if (object == NULL)
return;
/*
* Hasn't changed size
*/
object = vp->v_object;
if (object == NULL)
return;
if ((pager = object->pager) == NULL)
return;
vnp = (vn_pager_t) pager->pg_data;
if (nsize == vnp->vnp_size)
if (nsize == object->un_pager.vnp.vnp_size)
return;
/*
* File has shrunk. Toss any cached pages beyond the new EOF.
*/
if (nsize < vnp->vnp_size) {
if (round_page((vm_offset_t) nsize) < vnp->vnp_size) {
vm_object_lock(object);
if (nsize < object->un_pager.vnp.vnp_size) {
if (round_page((vm_offset_t) nsize) < object->un_pager.vnp.vnp_size) {
vm_object_page_remove(object,
round_page((vm_offset_t) nsize), vnp->vnp_size, FALSE);
vm_object_unlock(object);
round_page((vm_offset_t) nsize), object->un_pager.vnp.vnp_size, FALSE);
}
/*
* this gets rid of garbage at the end of a page that is now
@ -389,7 +284,7 @@ vnode_pager_setsize(vp, nsize)
}
}
}
vnp->vnp_size = (vm_offset_t) nsize;
object->un_pager.vnp.vnp_size = (vm_offset_t) nsize;
object->size = round_page(nsize);
}
@ -397,19 +292,26 @@ void
vnode_pager_umount(mp)
register struct mount *mp;
{
register vm_pager_t pager, npager;
struct vnode *vp;
struct vnode *vp, *nvp;
loop:
for (vp = mp->mnt_vnodelist.lh_first; vp != NULL; vp = nvp) {
/*
* Vnode can be reclaimed by getnewvnode() while we
* traverse the list.
*/
if (vp->v_mount != mp)
goto loop;
for (pager = vnode_pager_list.tqh_first; pager != NULL; pager = npager) {
/*
* Save the next pointer now since uncaching may terminate the
* object and render pager invalid
* object and render vnode invalid
*/
npager = pager->pg_list.tqe_next;
vp = ((vn_pager_t) pager->pg_data)->vnp_vp;
if (mp == (struct mount *) 0 || vp->v_mount == mp) {
nvp = vp->v_mntvnodes.le_next;
if (vp->v_object != NULL) {
VOP_LOCK(vp);
(void) vnode_pager_uncache(vp);
vnode_pager_uncache(vp);
VOP_UNLOCK(vp);
}
}
@ -424,46 +326,24 @@ vnode_pager_umount(mp)
* destruction which may initiate paging activity which may necessitate
* re-locking the vnode.
*/
boolean_t
void
vnode_pager_uncache(vp)
register struct vnode *vp;
struct vnode *vp;
{
register vm_object_t object;
boolean_t uncached;
vm_pager_t pager;
vm_object_t object;
/*
* Not a mapped vnode
*/
object = vp->v_object;
if (object == NULL)
return (TRUE);
return;
pager = object->pager;
if (pager == NULL)
return (TRUE);
#ifdef DEBUG
if (!VOP_ISLOCKED(vp)) {
extern int (**nfsv2_vnodeop_p)();
if (vp->v_op != nfsv2_vnodeop_p)
panic("vnode_pager_uncache: vnode not locked!");
}
#endif
/*
* Must use vm_object_lookup() as it actually removes the object from
* the cache list.
*/
object = vm_object_lookup(pager);
if (object) {
uncached = (object->ref_count <= 1);
VOP_UNLOCK(vp);
pager_cache(object, FALSE);
VOP_LOCK(vp);
} else
uncached = TRUE;
return (uncached);
vm_object_reference(object);
VOP_UNLOCK(vp);
pager_cache(object, FALSE);
VOP_LOCK(vp);
return;
}
@ -523,15 +403,15 @@ vnode_pager_iodone(bp)
struct buf *bp;
{
bp->b_flags |= B_DONE;
wakeup((caddr_t) bp);
wakeup(bp);
}
/*
* small block file system vnode pager input
*/
int
vnode_pager_input_smlfs(vnp, m)
vn_pager_t vnp;
vnode_pager_input_smlfs(object, m)
vm_object_t object;
vm_page_t m;
{
int i;
@ -540,11 +420,10 @@ vnode_pager_input_smlfs(vnp, m)
struct buf *bp;
vm_offset_t kva;
int fileaddr;
int block;
vm_offset_t bsize;
int error = 0;
vp = vnp->vnp_vp;
vp = object->handle;
bsize = vp->v_mount->mnt_stat.f_iosize;
@ -602,7 +481,6 @@ vnode_pager_input_smlfs(vnp, m)
vm_page_set_clean(m, (i * bsize) & (PAGE_SIZE-1), bsize);
bzero((caddr_t) kva + i * bsize, bsize);
}
nextblock:
}
vm_pager_unmap_page(kva);
pmap_clear_modify(VM_PAGE_TO_PHYS(m));
@ -618,8 +496,8 @@ nextblock:
* old style vnode pager output routine
*/
int
vnode_pager_input_old(vnp, m)
vn_pager_t vnp;
vnode_pager_input_old(object, m)
vm_object_t object;
vm_page_t m;
{
struct uio auio;
@ -633,12 +511,12 @@ vnode_pager_input_old(vnp, m)
/*
* Return failure if beyond current EOF
*/
if (m->offset >= vnp->vnp_size) {
if (m->offset >= object->un_pager.vnp.vnp_size) {
return VM_PAGER_BAD;
} else {
size = PAGE_SIZE;
- if (m->offset + size > vnp->vnp_size)
- size = vnp->vnp_size - m->offset;
+ if (m->offset + size > object->un_pager.vnp.vnp_size)
+ size = object->un_pager.vnp.vnp_size - m->offset;
/*
* Allocate a kernel virtual address and initialize so that
@@ -656,7 +534,7 @@ vnode_pager_input_old(vnp, m)
auio.uio_resid = size;
auio.uio_procp = (struct proc *) 0;
- error = VOP_READ(vnp->vnp_vp, &auio, 0, curproc->p_ucred);
+ error = VOP_READ(object->handle, &auio, 0, curproc->p_ucred);
if (!error) {
register int count = size - auio.uio_resid;
@@ -676,34 +554,22 @@ vnode_pager_input_old(vnp, m)
* generic vnode pager input routine
*/
int
- vnode_pager_input(vnp, m, count, reqpage)
- register vn_pager_t vnp;
- vm_page_t *m;
- int count, reqpage;
- {
- int i;
- vm_offset_t kva, foff;
- int size;
+ vnode_pager_getpages(object, m, count, reqpage)
+ vm_object_t object;
+ vm_page_t *m;
+ int count;
+ int reqpage;
+ {
+ vm_offset_t kva, foff;
+ int i, size, bsize, first, firstaddr;
struct vnode *dp, *vp;
- int bsize;
- int first, last;
- int firstaddr;
- int block, offset;
int runpg;
int runend;
struct buf *bp;
int s;
- int failflag;
int error = 0;
- object = m[reqpage]->object; /* all vm_page_t items are in same
- * object */
- vp = vnp->vnp_vp;
+ vp = object->handle;
bsize = vp->v_mount->mnt_stat.f_iosize;
/* get the UNDERLYING device for the file with VOP_BMAP() */
@@ -725,7 +591,7 @@ vnode_pager_input(vnp, m, count, reqpage)
}
cnt.v_vnodein++;
cnt.v_vnodepgsin++;
- return vnode_pager_input_old(vnp, m[reqpage]);
+ return vnode_pager_input_old(object, m[reqpage]);
/*
* if the blocksize is smaller than a page size, then use
@@ -742,7 +608,7 @@ vnode_pager_input(vnp, m, count, reqpage)
}
cnt.v_vnodein++;
cnt.v_vnodepgsin++;
- return vnode_pager_input_smlfs(vnp, m[reqpage]);
+ return vnode_pager_input_smlfs(object, m[reqpage]);
}
/*
* if ANY DEV_BSIZE blocks are valid on a large filesystem block
@@ -768,10 +634,9 @@ vnode_pager_input(vnp, m, count, reqpage)
for(first = 0, i = 0; i < count; i = runend) {
firstaddr = vnode_pager_addr(vp, m[i]->offset, &runpg);
if (firstaddr == -1) {
- if( i == reqpage && foff < vnp->vnp_size) {
- printf("vnode_pager_input: unexpected missing page: firstaddr: %d, foff: %d, vnp_size: %d\n",
- firstaddr, foff, vnp->vnp_size);
- panic("vnode_pager_input:...");
+ if (i == reqpage && foff < object->un_pager.vnp.vnp_size) {
+ panic("vnode_pager_getpages: unexpected missing page: firstaddr: %d, foff: %ld, vnp_size: %d",
+ firstaddr, foff, object->un_pager.vnp.vnp_size);
}
vnode_pager_freepage(m[i]);
runend = i + 1;
@@ -779,14 +644,14 @@ vnode_pager_input(vnp, m, count, reqpage)
continue;
}
runend = i + runpg;
- if( runend <= reqpage) {
+ if (runend <= reqpage) {
int j;
- for(j = i; j < runend; j++) {
+ for (j = i; j < runend; j++) {
vnode_pager_freepage(m[j]);
}
} else {
- if( runpg < (count - first)) {
- for(i=first + runpg; i < count; i++)
+ if (runpg < (count - first)) {
+ for (i = first + runpg; i < count; i++)
vnode_pager_freepage(m[i]);
count = first + runpg;
}
@@ -816,8 +681,8 @@ vnode_pager_input(vnp, m, count, reqpage)
* calculate the size of the transfer
*/
size = count * PAGE_SIZE;
- if ((foff + size) > vnp->vnp_size)
- size = vnp->vnp_size - foff;
+ if ((foff + size) > object->un_pager.vnp.vnp_size)
+ size = object->un_pager.vnp.vnp_size - foff;
/*
* round up physical size for real devices
@@ -875,7 +740,6 @@ vnode_pager_input(vnp, m, count, reqpage)
*/
relpbuf(bp);
- finishup:
for (i = 0; i < count; i++) {
pmap_clear_modify(VM_PAGE_TO_PHYS(m[i]));
m[i]->dirty = 0;
@@ -903,7 +767,7 @@ finishup:
}
}
if (error) {
printf("vnode_pager_input: I/O read error\n");
printf("vnode_pager_getpages: I/O read error\n");
}
return (error ? VM_PAGER_ERROR : VM_PAGER_OK);
}
@@ -912,10 +776,11 @@ finishup:
* generic vnode pager output routine
*/
int
- vnode_pager_output(vnp, m, count, rtvals)
- vn_pager_t vnp;
+ vnode_pager_putpages(object, m, count, sync, rtvals)
+ vm_object_t object;
vm_page_t *m;
int count;
+ boolean_t sync;
int *rtvals;
{
int i;
@@ -926,12 +791,12 @@ vnode_pager_output(vnp, m, count, rtvals)
struct iovec aiov;
int error;
- vp = vnp->vnp_vp;
+ vp = object->handle;
for (i = 0; i < count; i++)
rtvals[i] = VM_PAGER_AGAIN;
if ((int) m[0]->offset < 0) {
printf("vnode_pager_output: attempt to write meta-data!!! -- 0x%x(%x)\n", m[0]->offset, m[0]->dirty);
printf("vnode_pager_putpages: attempt to write meta-data!!! -- 0x%x(%x)\n", m[0]->offset, m[0]->dirty);
rtvals[0] = VM_PAGER_BAD;
return VM_PAGER_BAD;
}
@@ -939,9 +804,9 @@ vnode_pager_output(vnp, m, count, rtvals)
maxsize = count * PAGE_SIZE;
ncount = count;
- if (maxsize + m[0]->offset > vnp->vnp_size) {
- if (vnp->vnp_size > m[0]->offset)
- maxsize = vnp->vnp_size - m[0]->offset;
+ if (maxsize + m[0]->offset > object->un_pager.vnp.vnp_size) {
+ if (object->un_pager.vnp.vnp_size > m[0]->offset)
+ maxsize = object->un_pager.vnp.vnp_size - m[0]->offset;
else
maxsize = 0;
ncount = (maxsize + PAGE_SIZE - 1) / PAGE_SIZE;
@@ -950,8 +815,8 @@ vnode_pager_output(vnp, m, count, rtvals)
rtvals[i] = VM_PAGER_BAD;
}
if (ncount == 0) {
printf("vnode_pager_output: write past end of file: %d, %d\n",
m[0]->offset, vnp->vnp_size);
printf("vnode_pager_putpages: write past end of file: %d, %d\n",
m[0]->offset, object->un_pager.vnp.vnp_size);
return rtvals[0];
}
}
@@ -976,10 +841,10 @@ vnode_pager_output(vnp, m, count, rtvals)
cnt.v_vnodepgsout += ncount;
if (error) {
printf("vnode_pager_output: I/O error %d\n", error);
printf("vnode_pager_putpages: I/O error %d\n", error);
}
if (auio.uio_resid) {
printf("vnode_pager_output: residual I/O %d at %d\n", auio.uio_resid, m[0]->offset);
printf("vnode_pager_putpages: residual I/O %d at %d\n", auio.uio_resid, m[0]->offset);
}
for (i = 0; i < count; i++) {
m[i]->busy--;
@@ -987,28 +852,21 @@ vnode_pager_output(vnp, m, count, rtvals)
rtvals[i] = VM_PAGER_OK;
}
if ((m[i]->busy == 0) && (m[i]->flags & PG_WANTED))
- wakeup((caddr_t) m[i]);
+ wakeup(m[i]);
}
return rtvals[0];
}
struct vnode *
- vnode_pager_lock(vm_object_t object) {
- for(;object;object=object->shadow) {
- vn_pager_t vnp;
- if( !object->pager || (object->pager->pg_type != PG_VNODE))
+ vnode_pager_lock(object)
+ vm_object_t object;
+ {
+ for (; object != NULL; object = object->backing_object) {
+ if (object->type != OBJT_VNODE)
continue;
- vnp = (vn_pager_t) object->pager->pg_data;
- VOP_LOCK(vnp->vnp_vp);
- return vnp->vnp_vp;
+ VOP_LOCK(object->handle);
+ return object->handle;
}
- return (struct vnode *)NULL;
+ return NULL;
}
- void
- vnode_pager_unlock(struct vnode *vp) {
- VOP_UNLOCK(vp);
- }
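A minimal sketch of the new calling convention (the wrapper read_one_page() and its EIO mapping are hypothetical, not part of this change; vnode_pager_getpages() and the VM_PAGER_* return codes are the interfaces shown in the diff above). The vm_object itself is handed to the pager; the vnode is reached as object->handle and the file size as object->un_pager.vnp.vnp_size, so no vn_pager_t private structure is involved:

    static int
    read_one_page(vm_object_t object, vm_page_t m)
    {
            vm_page_t marray[1];

            marray[0] = m;
            /* formerly: vnode_pager_input((vn_pager_t) pager->pg_data, marray, 1, 0) */
            if (vnode_pager_getpages(object, marray, 1, 0) != VM_PAGER_OK)
                    return (EIO);
            return (0);
    }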

vnode_pager.h

@@ -36,22 +36,19 @@
* SUCH DAMAGE.
*
* @(#)vnode_pager.h 8.1 (Berkeley) 6/11/93
- * $Id: vnode_pager.h,v 1.3 1994/08/02 07:55:43 davidg Exp $
+ * $Id: vnode_pager.h,v 1.4 1995/01/09 16:06:02 davidg Exp $
*/
#ifndef _VNODE_PAGER_
#define _VNODE_PAGER_ 1
- /*
- * VNODE pager private data.
- */
- struct vnpager {
- int vnp_flags; /* flags */
- struct vnode *vnp_vp; /* vnode */
- vm_size_t vnp_size; /* vnode current size */
- };
- typedef struct vnpager *vn_pager_t;
- #define VN_PAGER_NULL ((vn_pager_t)0)
#ifdef KERNEL
+ vm_object_t vnode_pager_alloc __P((void *, vm_size_t, vm_prot_t, vm_offset_t));
+ void vnode_pager_dealloc __P((vm_object_t));
+ int vnode_pager_getpages __P((vm_object_t, vm_page_t *, int, int));
+ int vnode_pager_putpages __P((vm_object_t, vm_page_t *, int, boolean_t, int *));
+ boolean_t vnode_pager_haspage __P((vm_object_t, vm_offset_t, int *, int *));
+ struct vnode *vnode_pager_lock __P((vm_object_t));
#endif
#endif /* _VNODE_PAGER_ */
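A usage sketch for the prototypes above (the helper with_locked_vnode() is hypothetical; vnode_pager_lock(), the one-argument VOP_UNLOCK(), and the backing_object/OBJT_VNODE walk are taken from the diffs). The header declares no unlock routine, so the vnode returned by vnode_pager_lock() is released with VOP_UNLOCK() directly:

    static void
    with_locked_vnode(vm_object_t object)
    {
            struct vnode *vp;

            /*
             * vnode_pager_lock() walks object->backing_object until it finds
             * an OBJT_VNODE object, VOP_LOCKs its vnode, and returns it
             * (or NULL if the chain has no vnode-backed object).
             */
            vp = vnode_pager_lock(object);
            if (vp == NULL)
                    return;
            /* ... paging activity against vp goes here ... */
            VOP_UNLOCK(vp);
    }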