This chapter explains how to manage the volumes stored on file server machines. Volumes are the designated unit of administration in AFS, so managing them is a large part of the administrator's duties.
This chapter explains how to perform the following tasks by using the indicated commands:
Create read/write volume: vos create
Create read-only volume: vos addsite and vos release
Create backup volume: vos backup
Create many backup volumes at once: vos backupsys
Examine VLDB entry: vos listvldb
Examine volume header: vos listvol
Examine both VLDB entry and volume header: vos examine
Display volume's name: fs listquota or fs examine
Display volume's ID number: fs examine or vos examine or vos listvol
Display partition's size and space available: vos partinfo
Display volume's location: fs whereis or vos examine
Create mount point: fs mkmount
Remove mount point: fs rmmount
Display mount point: fs lsmount
Move read/write volume: vos move
Synchronize VLDB with volume headers: vos syncvldb and vos syncserv
Set volume quota: fs setvol or fs setquota
Display volume quota: fs quota or fs listquota or fs examine
Display volume's current size: fs listquota or fs examine
Display list of volumes on a machine/partition: vos listvol
Remove read/write volume: vos remove and fs rmmount
Remove read-only volume: vos remove
Remove backup volume: vos remove and fs rmmount
Remove volume; no VLDB change: vos zap
Remove read-only site definition: vos remsite
Remove VLDB entry; no volume change: vos delentry
Dump volume: vos dump
Restore dumped volume: vos restore
Rename volume: vos rename, fs rmmount and fs mkmount
Unlock volume: vos unlock
Unlock multiple volumes: vos unlockvldb
Lock volume: vos lock
An AFS volume is a logical unit of disk space that functions like a container for the files in an AFS directory, keeping them all together on one partition of a file server machine. To make a volume's contents visible in the cell's file tree and accessible to users, you mount the volume at a directory location in the AFS filespace. The association between the volume and its location in the filespace is called a mount point, and because of AFS's internal workings it looks and acts just like a standard directory element. Users can access and manipulate a volume's contents in the same way they access and manipulate the contents of a standard UNIX directory. For more on the relationship between volumes and directories, see About Mounting Volumes.
Many of an administrator's daily activities involve manipulating volumes, since they are the basic storage and administrative unit of AFS. For a discussion of some of the ways volumes can make your job easier, see How Volumes Improve AFS Efficiency.
There are three types of volumes in AFS, as described in the following list:
Note: A backup volume is not the same as the backup of a volume transferred to tape using the AFS Backup System, although making a backup version of a volume is usually a stage in the process of backing up the volume to tape. For information on backing up a volume using the AFS Backup System, see Backing Up Data.
As noted, the three types of volumes are related to one another: read-only and backup volumes are both derived from a read/write volume through a process called cloning. Read-only and backup volumes are exact copies of the read/write source at the time they are created.
Volumes make your cell easier to manage and more efficient in the following three ways:
Backup also refers to using the AFS Backup System to store permanent copies of volume contents on tape or in a special backup data file. See Configuring the AFS Backup System and Backing Up and Restoring AFS Data.
The Volume Location Database (VLDB) includes entries for every volume in a cell. Perhaps the most important information in the entry is the volume's location, which is key to transparent access to AFS data. When a user opens a file, the Cache Manager consults the Volume Location (VL) Server, which maintains the VLDB, for a list of the file server machines that house the volume containing the file. The Cache Manager then requests the file from the File Server running on one of the relevant file server machines. The file location procedure is invisible to the user, who only needs to know the file's pathname.
The VLDB volume entry for a read/write volume also contains the pertinent information about the read-only and backup versions, which do not have their own VLDB entries. (The rare exception is a read-only volume that has its own VLDB entry because its read/write source has been removed.) A volume's VLDB entry records the volume's name, the unique volume ID number for each version (read/write, read-only, backup, and releaseClone), a count of the number of sites that house a read/write or read-only version, and a list of the sites.
To display the VLDB entry for one or more volumes, use the vos listvldb command as described in To display VLDB entries. To display the VLDB entry for a single volume along with its volume header, use the vos examine command as described in To display one volume's VLDB entry and volume header. (See the following section for a description of the volume header.)
Whereas all versions of a volume share one VLDB entry, each volume on an AFS server partition has its own volume header, a data structure that maps the files and directories in the volume to physical memory addresses on the partition that stores them. The volume header binds the volume's contents into a logical unit without requiring that they be stored in contiguous memory blocks. The volume header also records the following information about the volume, some of it redundant with the VLDB entry: name, volume ID number, type, size, status (online, offline, or busy), space quota, timestamps for creation date and date of last modification, and number of accesses during the current day.
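The division of information between the shared VLDB entry and the per-volume header can be sketched as two simple records. This is an illustrative model only, assuming invented field layouts and example values; these are not actual AFS data structures.

```python
# Hypothetical sketch of which fields live in the shared VLDB entry versus
# in each volume header, per the text above. Not AFS's real on-disk format.
from dataclasses import dataclass, field

@dataclass
class VLDBEntry:
    name: str                       # one entry shared by all versions
    volume_ids: dict                # read/write, read-only, backup IDs
    site_count: int                 # read/write plus read-only sites
    sites: list = field(default_factory=list)

@dataclass
class VolumeHeader:
    name: str                       # redundant with the VLDB entry
    volume_id: int
    type: str                       # "read/write", "read-only", or "backup"
    size_kb: int
    status: str                     # "online", "offline", or "busy"
    quota_kb: int
    created: str                    # creation timestamp
    last_modified: str              # last-modification timestamp
    accesses_today: int

# Invented example: one read/write volume and its header.
entry = VLDBEntry("user.smith", {"rw": 536870912, "ro": 536870913,
                                 "bk": 536870914}, 1)
header = VolumeHeader("user.smith", 536870912, "read/write", 1200,
                      "online", 5000, "2024-01-01", "2024-01-02", 17)

assert header.name == entry.name                 # the redundancy noted above
assert header.volume_id == entry.volume_ids["rw"]
```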
To display the volume headers on one or more partitions, use the vos listvol command as described in To display volume headers. To display the VLDB entry for a single volume along with its volume header, use the vos examine command as described in To display one volume's VLDB entry and volume header.
It is vital that the information in the VLDB correspond to the status of the actual volumes on the servers (as recorded in volume headers) as much of the time as possible. If a volume's location information in the VLDB is incorrect, the Cache Manager cannot access its contents. Whenever you issue a vos command that changes a volume's status, the Volume Server and VL Server cooperate to keep the volume header and VLDB synchronized. In rare cases, the header and VLDB can diverge, for instance because a vos operation halts prematurely. For instructions on resynchronizing them, see Synchronizing the VLDB and Volume Headers.
To make a volume's contents visible in the cell's file tree and accessible to users, you mount the volume at a directory location in the AFS filespace. The association between the volume and its location in the filespace is called a mount point. An AFS mount point looks and functions like a regular UNIX file system directory, but structurally it is more like a symbolic link that tells the Cache Manager the name of the volume associated with the directory. A mount point looks and acts like a directory only because the Cache Manager knows how to interpret it.
Consider the common case where the Cache Manager needs to retrieve a file requested by an application program. The Cache Manager traverses the file's complete pathname, starting at the AFS root (by convention mounted at the /afs directory) and continuing to the file. When the Cache Manager encounters (or crosses) a mount point during the traversal, it reads it to learn the name of the volume mounted at that directory location. After obtaining location information about the volume from the Volume Location (VL) Server, the Cache Manager fetches the indicated volume and opens its root directory. The root directory of a volume lists all the files, subdirectories, and mount points that reside in it. The Cache Manager scans the root directory listing for the next element in the pathname. It continues down the path, using this method to interpret any other mount points it encounters, until it reaches the volume that houses the requested file.
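The traversal just described can be sketched as a short walk over a pathname. This is a minimal model, not Cache Manager code; the volume names, directory contents, and pathname are invented examples.

```python
# Illustrative sketch of mount point traversal: each volume's root directory
# maps element names either to a plain file/directory or to a mount point
# naming another volume. The walk crosses mount points as it goes.
volumes = {
    "root.afs":   {"abc.com":   ("mount", "root.cell")},
    "root.cell":  {"usr":       ("mount", "usr")},
    "usr":        {"smith":     ("mount", "user.smith")},
    "user.smith": {"notes.txt": ("file", None)},
}

def resolve(path):
    """Walk path from /afs, crossing mount points; return (volume, element)."""
    volume, element = "root.afs", None     # /afs is root.afs by convention
    for component in path.strip("/").split("/")[1:]:   # skip the "afs" root
        kind, target = volumes[volume][component]
        if kind == "mount":
            volume, element = target, component        # cross into that volume
        else:
            element = component                        # ordinary file/directory
    return volume, element

assert resolve("/afs/abc.com/usr/smith/notes.txt") == ("user.smith", "notes.txt")
```

The real Cache Manager also consults the VL Server for each volume's location before fetching it; that lookup is omitted here for brevity.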
Mount points act as the glue that connects the AFS file space, creating the illusion of a single, seamless file tree even when volumes reside on many different file server machines. A volume's contents are visible and accessible when the volume is mounted at a directory location, and are not accessible at all if the volume is not mounted.
You can mount a volume at more than one location in the file tree, but this is not recommended for two reasons. First, it distorts the hierarchical nature of the filespace. Second, the Cache Manager can become confused about which pathname it followed to reach the file (causing unpredictable output from the pwd command, for example). However, if you mount a volume at more than one directory, the access control list (ACL) associated with the volume's root directory applies to all of the mount points.
There are several types of mount points, each of which the Cache Manager handles in a different way and each of which is appropriate for a different purpose. See Mounting Volumes.
A read/write volume's name can be up to 22 characters in length. The Volume Server automatically adds the .readonly and .backup extensions to read-only and backup volumes respectively. Do not explicitly add the extensions to volume names, even if they are appropriate.
It is conventional for a volume's name to indicate the type of data it houses. For example, it is conventional to name all user volumes user.username where username is the user's login name. Similarly, many cells elect to put system binaries in volumes with names that begin with the system type code. For a list of other naming conventions, see Creating Volumes to Simplify Administration.
A read/write volume is the most basic type of volume, and must exist before you can create read-only or backup versions of it. When you issue the vos create command to create a read/write volume, the VL Server creates a VLDB entry for it which records the name you specify, assigns a read/write volume ID number, and reserves the next two consecutive volume ID numbers for read-only and backup versions that may be created later. At the same time, the Volume Server creates a volume header at the site you designate, allocating space on disk to record the name of the volume's root directory. The name is filled in when you issue the fs mkmount command to mount the volume, and matches the mount point name. The following is also recorded in the volume header:
To change the quota after creation, use the fs setquota command as described in Setting and Displaying Volume Quota and Current Size.
% bos listusers <machine name>
% fs listacl [<dir/file path>]
Members of the system:administrators group always implicitly have the a (administer) and by default also the l (lookup) permission on every ACL and can use the fs setacl command to grant other rights as necessary.
Note: The partition-related statistics in this command's output do not always agree with the corresponding values in the output of the standard UNIX df command. The statistics reported by this command can be up to five minutes old, because the Cache Manager polls the File Server for partition information at that frequency. Also, on some operating systems, the df command's report of partition size includes reserved space not included in this command's calculation, and so is likely to be about 10% larger.
% vos partinfo <machine name> [<partition name>]
where
% vos create <machine name> <partition name> <volume name> \
     [-maxquota <initial quota (KB)>]
where
% fs mkmount <directory> <volume name>
% fs lsmount <directory>
% fs setvol <dir/file path> -offlinemsg <offline message>
where
Specify the read/write path to the mount point, to avoid the failure that results when you attempt to change a read-only volume. By convention, you indicate the read/write path by placing a period before the cell name at the pathname's second level (for example, /afs/.abc.com). For further discussion of the concept of read/write and read-only paths through the filespace, see The Rules of Mount Point Traversal.
To create a backup or read-only volume, the Volume Server begins by cloning the read/write source volume to create a clone. The Volume Server creates the clone automatically when you issue the vos backup or vos backupsys command (for a backup volume) or the vos release command (for a read-only volume). No special action is required on your part.
A clone is not a copy of the data in the read/write source volume, but rather a copy of the read/write volume's vnode index. The vnode index is a table of pointers between the files and directories in the volume and the physical disk blocks on the partition where the data resides. From the clone, backup and read-only volumes are created in the following manner:
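The relationship between a clone's index and the shared data blocks can be sketched as a copy-on-write table. This is an illustrative model under invented names, not the actual AFS implementation: cloning copies only the pointer table, and a later write to the source allocates a fresh block while the clone keeps pointing at the old one.

```python
# Copy-on-write sketch: block numbers map to data; each "vnode index" maps
# file names to block numbers. A clone duplicates only the index.
disk = {1: b"old contents"}           # block number -> data
next_block = 2

source_index = {"report.txt": 1}      # read/write volume's index
clone_index = dict(source_index)      # cloning copies just the pointer table

def write(index, name, data):
    """Copy-on-write update: new data always goes to a fresh block."""
    global next_block
    disk[next_block] = data
    index[name] = next_block
    next_block += 1

# Immediately after cloning, both indexes point at the same block.
assert clone_index["report.txt"] == source_index["report.txt"]

# Changing the source does not disturb the clone's view of the data.
write(source_index, "report.txt", b"new contents")
assert disk[source_index["report.txt"]] == b"new contents"
assert disk[clone_index["report.txt"]] == b"old contents"
```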
Figure 1. File Sharing Between the Read/write Source and a Clone Volume
Replication refers to creating a read-only copy of a read/write volume and distributing the copy to one or more additional file server machines. Replication makes a volume's contents accessible on more than one file server machine, which increases data availability. It can also increase system efficiency by reducing load on the network and File Server. Network load is reduced if a client machine's server preference ranks lead the Cache Manager to access the copy of a volume stored on the closest file server machine. Load on the File Server is reduced because it issues only one callback for all data fetched from a read-only volume, as opposed to a callback for each file fetched from a read/write volume. The single callback is sufficient for an entire read-only volume because the volume does not change except in response to administrator action, whereas each read/write file can change at any time.
Replicating a volume requires issuing two commands. First, use the vos addsite command to add one or more read-only site definitions to the volume's VLDB entry (a site is a particular partition on a file server machine). Then use the vos release command to clone the read/write source volume and distribute the clone to the defined read-only sites. You issue the vos addsite only once for each read-only site, but must reissue the vos release command every time the read/write volume's contents change and you want to update the read-only volumes.
For users to have a consistent view of the file system, the release of updated volume contents to read-only sites must be atomic: either all read-only sites receive the new version of the volume, or all sites keep the version they currently have. The vos release command is designed to ensure that all copies of the volume's read-only version match both the read/write source and each other. In cases where problems such as machine or server process outages prevent successful completion of the release operation, AFS uses two mechanisms to alert you.
First, the command interpreter generates an error message on the standard error stream naming each read-only site that did not receive the new volume version. Second, during the release operation the Volume Location (VL) Server marks site definitions in the VLDB entry with flags (New release and Old release) that indicate whether or not the site has the new volume version. If any flags remain after the operation completes, it was not successful. The Cache Manager refuses to access a read-only site marked with the Old release flag, which potentially imposes a greater load on the sites marked with the New release flag. It is important to investigate and eliminate the cause of the failure and then to issue the vos release command as many times as necessary to complete the release without errors.
The pattern of site flags remaining in the volume's VLDB entry after a failed release operation can help determine the point at which the operation failed. Use the vos examine or vos listvldb command to display the VLDB entry. The VL Server sets the flags in concert with the Volume Server's operations, as follows:
By default, the Volume Server determines automatically whether or not it needs to create a new ReleaseClone:
To override the default behavior, forcing the Volume Server to create and release a new ReleaseClone to the read-only sites, include the -f flag. This is appropriate if, for example, the data at the read/write site has changed since the existing ReleaseClone was created during the previous release operation.
For maximum effectiveness, replicate only volumes that satisfy two criteria:
Explicitly mounting a read-only volume (creating a mount point that names a volume with a .readonly extension) is not generally necessary or appropriate. The Cache Manager has a built-in bias to access the read-only version of a replicated volume whenever possible. As described in more detail in The Rules of Mount Point Traversal, when the Cache Manager encounters a mount point it reads the volume name inside it and contacts the VL Server for a list of the sites that house the volume. In the normal case, if the mount point resides in a read-only volume and names a read/write volume (one that does not have a .readonly or .backup extension), the Cache Manager always attempts to access a read-only copy of the volume. Thus there is normally no reason to force the Cache Manager to access a read-only volume by mounting it explicitly.
It is a good practice to place a read-only volume at the read/write site, for a couple of reasons. First, the read-only volume at the read/write site requires only a small amount of disk space, because it is a clone rather than a copy of all of the data (see About Clones and Cloning). Only if a large number of files are removed or changed in the read/write volume does the read-only copy occupy much disk space. That normally does not happen because the appropriate response to changes in a replicated read/write volume is to reclone it. The other reason to place a read-only volume at the read/write site is that the Cache Manager does not attempt to access the read/write version of a replicated volume if all read-only copies become inaccessible. If the file server machine housing the read/write volume is the only accessible machine, the Cache Manager can access the data only if there is a read-only copy at the read/write site.
The number of read-only sites to define depends on several factors. Perhaps the main trade-off is between the level of demand for the volume's contents and how much disk space you are willing to use for multiple copies of the volume. Of course, each prospective read-only site must have enough available space to accommodate the volume. The limit on the number of read-only copies of a volume is determined by the maximum number of site definitions in a volume's VLDB entry, which is defined in the IBM AFS Release Notes. The site housing the read/write and backup versions of the volume counts as one site, and each read-only site counts as an additional site (even the read-only site defined on the same file server machine and partition as the read/write site counts as a separate site). Note also that the Volume Server permits only one read-only copy of a volume per file server machine.
The instructions in the following section explain how to replicate a volume for which no read-only sites are currently defined. However, you can also use the instructions in other common situations:
% bos listusers <machine name>
% vos examine <volume name or ID>
The final lines of output display the volume's site definitions from the VLDB.
To display the amount of space available on a file server machine's partitions, use the vos partinfo command, which is described fully in Creating Read/write Volumes.
% vos partinfo <machine name> [<partition name>]
% vos addsite <machine name> <partition name> <volume name or ID>
where
% bos status <machine name> fs vlserver
% vos release <volume name or ID> [-f]
where
% vos examine <volume name or ID>
If any flags appear in the output from Step 6, repeat Steps 4 and 5 until the Volume Server does not produce any error messages during the release operation and the flags no longer appear. Do not issue the vos release command when you know that the read/write site or any read-only site is inaccessible due to network, machine or server process outage.
A backup volume is a clone that resides at the same site as its read/write source (to review the concept of cloning, see About Clones and Cloning). Creating a backup version of a volume has two purposes:
The vos backupsys command creates a backup version of many read/write volumes at once. This command is useful when preparing for large-scale backups to tape using the AFS Backup System.
To clone every read/write volume listed in the VLDB, omit all of the command's options. Otherwise, combine the command's options to clone various groups of volumes. The options use one of two basic criteria to select volumes: location (the -server and -partition arguments) or presence in the volume name of one of a set of specified character strings (the -prefix, -exclude, and -xprefix options).
To clone only volumes that reside on one file server machine, include the -server argument. To clone only volumes that reside on one partition, combine the -server and -partition arguments. The -partition argument can also be used alone to clone volumes that reside on the indicated partition on every file server machine. These arguments can be combined with those that select volumes based on their names.
Combine the -prefix, -exclude, and -xprefix options (with or without the -server and -partition arguments) in the indicated ways to select volumes based on character strings contained in their names:
If the -exclude flag is combined with the -prefix and -xprefix arguments, the command creates a list of all volumes that do not match the -prefix argument and then adds back to the list any volumes that match the -xprefix argument. As when the -exclude flag is not used, the result is effective only if the strings specified by the -xprefix argument designate a subset of the volumes specified by the -prefix argument.
The -prefix and -xprefix arguments both accept multiple values, which can be used to define disjoint groups of volumes. Each value can be one of two types:
-prefix '^.*aix'
To display a list of the volumes to be cloned, without actually cloning them, include the -dryrun flag. To display a statement that summarizes the criteria being used to select volumes, include the -verbose flag.
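The basic name-based selection described above can be sketched as a small filter. This is a rough model, not the vos code: the volume names are invented, only plain string prefixes are modeled (not the regular-expression form of -prefix and -xprefix), and the special -exclude plus -xprefix combination is not modeled.

```python
# Hypothetical sketch of vos backupsys-style name selection.
def select(volumes, prefixes=None, exclude=False, xprefixes=None):
    # -prefix: keep volumes whose names start with any given string.
    if prefixes:
        chosen = [v for v in volumes
                  if any(v.startswith(p) for p in prefixes)]
    else:
        chosen = list(volumes)
    # -exclude: invert the prefix match.
    if exclude:
        chosen = [v for v in volumes if v not in chosen]
    # -xprefix: remove a subset of the chosen volumes by prefix.
    if xprefixes:
        chosen = [v for v in chosen
                  if not any(v.startswith(x) for x in xprefixes)]
    return chosen

vldb = ["user.smith", "user.jones", "sys.aix", "sun4x_56.bin"]
assert select(vldb, prefixes=["user"]) == ["user.smith", "user.jones"]
assert select(vldb, prefixes=["user"], exclude=True) == ["sys.aix",
                                                         "sun4x_56.bin"]
assert select(vldb, prefixes=["user"], xprefixes=["user.j"]) == ["user.smith"]
```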
To back up a single volume, use the vos backup command, which employs a more streamlined technique for finding a single volume.
Most cells find that it is best to make a new backup version of relevant volumes each day. It is best to create the backup versions at a time when usage is low, because the backup operation causes the read/write volume to be unavailable momentarily.
You can either issue the necessary vos backupsys or vos backup commands at the console or create a cron entry in the BosConfig file on a file server machine, which eliminates the need for an administrator to initiate the backup operation.
The following example command creates a cron process called backupusers in the /usr/afs/local/BosConfig file on the machine fs3.abc.com. The process runs every day at 1:00 a.m. to create a backup version of every volume in the cell whose name starts with the string user. The -localauth flag enables the process to invoke the privileged vos backupsys command while unauthenticated. Note that the -cmd argument specifies a complete pathname for the vos binary, because the PATH environment variable for the BOS Server (running as the local superuser root) generally does not include the path to AFS binaries.
% bos create fs3.abc.com backupusers cron \
     -cmd "/usr/afs/bin/vos backupsys -prefix user -localauth" "1:00"
As noted, a backup volume preserves the state of the read/write source at the time the backup is created. Many cells choose to mount backup volumes so that users can access and restore data they have accidentally deleted or changed since the last backup was made, without having to request help from administrators. The most sensible place to mount the backup version of a user volume is at a subdirectory of the user's home directory. Suitable names for this directory include OldFiles and Backup. The subdirectory looks just like the user's own home directory as it was at the time the backup was created, with all files and subdirectories in the same relative positions.
If you do create and mount backup volumes for your users, inform users of their existence. The IBM AFS User Guide does not mention backup volumes because making them available to users is optional. Explain to users how often you make a new backup, so they know what they can recover. Remind them also that the data in their backup volume cannot change; however, they can use the standard UNIX cp command to copy it into their home volume and modify it there. Reassure users that the data in their backup volumes does not count against their read/write volume quota.
% bos listusers <machine name>
% fs listacl [<dir/file path>]
Members of the system:administrators group always implicitly have the a (administer) and by default also the l (lookup) permission on every ACL and can use the fs setacl command to grant other rights as necessary.
% vos backup <volume name or ID>
Created backup volume for volume name or ID
where
% fs mkmount <directory> <volume name>.backup
where
% fs lsmount <directory>
% bos listusers <machine name>
% vos backupsys [-prefix <common prefix on volume(s)>+] \
     [-server <machine name>] [-partition <partition name>] \
     [-exclude] [-xprefix <negative prefix on volume(s)>+] \
     [-dryrun] [-verbose]
where
Mount points make the contents of AFS volumes visible and accessible in the AFS filespace, as described in About Mounting Volumes. This section discusses in more detail how the Cache Manager handles mount points as it traverses the filespace. It describes the three types of mount points, their purposes, and how to distinguish between them, and provides instructions for creating, removing, and examining mount points.
The Cache Manager observes three basic rules as it traverses the AFS filespace and encounters mount points:
When the Cache Manager encounters a mount point that specifies a volume with either a .readonly or a .backup extension, it accesses that type of volume only. If a mount point does not have either a .backup or .readonly extension, the Cache Manager uses Rules 2 and 3.
For example, the Cache Manager never accesses the read/write version of a volume if the mount point names the backup version. If the specified version is inaccessible, the Cache Manager reports an error.
If a mount point resides in a read-only volume and the volume that it references is replicated, the Cache Manager attempts to access a read-only copy of the volume; if the referenced volume is not replicated, the Cache Manager accesses the read/write copy. The Cache Manager is thus said to prefer a read-only path through the filespace, accessing read-only volumes when they are available.
The Cache Manager starts on the read-only path in the first place because it always accesses a read-only copy of the root.afs volume if it exists; the volume is mounted at the root of a cell's AFS filespace (named /afs by convention). That is, if the root.afs volume is replicated, the Cache Manager attempts to access a read-only copy of it rather than the read/write copy. This rule then keeps the Cache Manager on a read-only path as long as each successive volume is replicated. The implication is that both the root.afs and root.cell volumes must be replicated for the Cache Manager to access replicated volumes mounted below them in the AFS filespace. The volumes are conventionally mounted at the /afs and /afs/cellname directories, respectively.
If a mount point resides in a read/write volume and the volume name does not have a .readonly or a .backup extension, the Cache Manager attempts to access only the read/write version of the volume. The access attempt fails with an error if the read/write version is inaccessible, even if a read-only version is accessible. In this situation the Cache Manager is said to be on a read/write path and cannot switch back to the read-only path unless the mount point explicitly names a volume with a .readonly extension. (Cellular mount points are an important exception to this rule, as explained in the following discussion.)
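The three traversal rules above can be condensed into a small decision function. This is a sketch, not Cache Manager code, and it omits the cellular mount point exception just mentioned; the volume names in the assertions are invented.

```python
# Illustrative sketch of the three mount point traversal rules.
def version_to_access(mount_volume, in_read_only_volume, replicated):
    """Which volume version the Cache Manager attempts to access."""
    # Rule 1: an explicit .readonly or .backup extension always wins.
    if mount_volume.endswith(".readonly"):
        return "read-only"
    if mount_volume.endswith(".backup"):
        return "backup"
    # Rule 2: on the read-only path, prefer a read-only copy if one exists.
    if in_read_only_volume:
        return "read-only" if replicated else "read/write"
    # Rule 3: on the read/write path, access only the read/write version.
    return "read/write"

assert version_to_access("user.smith.backup", True, True) == "backup"
assert version_to_access("root.cell", True, True) == "read-only"
assert version_to_access("root.cell", True, False) == "read/write"
assert version_to_access("user.smith", False, True) == "read/write"
```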
AFS uses three types of mount points, each appropriate for a different purpose because of how the Cache Manager handles them.
AFS performs best when the vast majority of mount points in the filespace are regular, because the mount point traversal rules promote the most efficient use of both replicated and nonreplicated volumes. Because there are likely to be multiple read-only copies of a replicated volume, it makes sense for the Cache Manager to access one of them rather than the single read/write version, and the second rule leads it to do so. If a volume is not replicated, the third rule means that the Cache Manager still accesses the read/write volume when that is the only type available. In other words, a regular mount point does not force the Cache Manager always to access read-only volumes (it is explicitly not a "read-only mount point").
To create a regular mount point, use the fs mkmount command as described in To create a regular or read/write mount point.
Note: To enable the Cache Manager to access the read-only version of a replicated volume named by a regular mount point, all volumes that are mounted above it in the pathname must also be replicated. That is the only way the Cache Manager can stay on a read-only path to the target volume.
It is conventional to create only one read/write mount point in a cell's filespace, using it to mount the cell's root.cell volume just below the AFS filespace root (by convention, /afs/.cellname). As indicated, it is conventional to place a period at the start of the read/write mount point's name (for example, /afs/.abc.com). The period distinguishes the read/write mount point from the regular mount point for the root.cell volume at the same level. This is the only case in which it is conventional to create two mount points for the same volume. A desirable side effect of this naming convention for this read/write mount point is that it does not appear in the output of the UNIX ls command unless the -a flag is included, essentially hiding it from regular users who have no use for it.
The existence of a single read/write mount point at this point in the filespace provides access to the read/write version of every volume when necessary, because it puts the Cache Manager on a read/write path right at the top of the filespace. At the same time, the regular mount point for the root.cell volume puts the Cache Manager on a read-only path most of the time.
Using a read/write mount point for a read-only or backup volume is acceptable, but unnecessary. The first rule of mount point traversal already specifies that the Cache Manager accesses them if the volume name in a regular mount point has a .readonly or .backup extension.
To create a read/write mount point, use the -rw flag on the fs mkmount command as described in To create a regular or read/write mount point.
It is inappropriate to circumvent this behavior by creating a read/write cellular mount point, because traversing the read/write path imposes an unfair load on the foreign cell's file server machines. The File Server must issue a callback for each file fetched from the read/write volume, rather than the single callback required for a read-only volume. In any case, only a cell's own administrators generally need to access the read/write versions of replicated volumes.
It is conventional to create cellular mount points only at the second level in a cell's filespace, using them to mount foreign cells' root.cell volumes just below the AFS filespace root (by convention, at /afs/foreign_cellname). The mount point enables local users to access the foreign cell's filespace, assuming they have the necessary permissions on the ACL of the volume's root directory and that there is an entry for the foreign cell in each local client machine's /usr/vice/etc/CellServDB file, as described in Maintaining Knowledge of Database Server Machines.
Creating cellular mount points at other levels in the filespace and mounting foreign volumes other than the root.cell volume is not generally appropriate. It can be confusing to users if the Cache Manager switches between cells at various points in a pathname.
To create a regular cellular mount point, use the -cell argument to specify the cell name, as described in To create a cellular mount point.
To examine a mount point, use the fs lsmount command as described in To display a mount point. The command's output uses distinct notation to identify regular, read/write, and cellular mount points. To remove a mount point, use the fs rmmount command as described in To remove a mount point.
Creating a mount point in a foreign cell's filespace (as opposed to mounting a foreign volume in the local cell) is basically the same as creating a mount point in the local filespace. The differences are that the fs mkmount command's directory argument specifies a pathname in the foreign cell rather than the local cell, and you must have the required permissions on the ACL of the foreign directory where you are creating the mount point. The fs mkmount command's -cell argument always specifies the cell in which the volume resides, not the cell in which to create the mount point.
% fs lsmount <directory>
where
If the specified directory is a mount point, the output is of the following form:
'directory' is a mount point for volume 'volume name'
For a regular mount point, a number sign (#) precedes the volume name string, as in the following example command issued on a client machine in the abc.com cell.
% fs lsmount /afs/abc.com/usr/terry
'/afs/abc.com/usr/terry' is a mount point for volume '#user.terry'
For a read/write mount point, a percent sign (%) precedes the volume name string, as in the following example command issued on a client machine in the abc.com cell. The cell's administrators have followed the convention of preceding the read/write mount point's name with a period.
% fs lsmount /afs/.abc.com
'/afs/.abc.com' is a mount point for volume '%root.cell'
For a cellular mount point, a cell name and colon (:) follow the number or percent sign and precede the volume name string, as in the following example command issued on a client machine in the abc.com cell.
% fs lsmount /afs/ghi.gov
'/afs/ghi.gov' is a mount point for volume '#ghi.gov:root.cell'
For a symbolic link to a mount point, the output is of the form shown in the following example command issued on a client machine in the abc.com cell.
% fs lsmount /afs/abc
'/afs/abc' is a symbolic link, leading to a mount point for volume '#root.cell'
If the directory is not a mount point or is not in AFS, the output reads as follows.
'directory' is not a mount point.
If the output is garbled, it is possible that the mount point has become corrupted in the local cache. Use the fs flushmount command as described in To flush one or more mount points. This forces the Cache Manager to refetch the mount point.
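The distinct notation can also be recognized mechanically. The following sketch (illustrative only, not part of the AFS distribution) classifies the volume string from fs lsmount output by the sign that precedes the volume name:

```shell
#!/bin/sh
# Classify a volume string from fs lsmount output by its prefix notation:
#   '#'  precedes the name at a regular mount point
#   '%'  precedes the name at a read/write mount point
#   a cell name and colon after the sign mark a cellular mount point
classify() {
    case "$1" in
        \#*:*) echo "regular cellular" ;;
        %*:*)  echo "read/write cellular" ;;
        \#*)   echo "regular" ;;
        %*)    echo "read/write" ;;
        *)     echo "not a mount point string" ;;
    esac
}

# Sample volume strings from the fs lsmount examples in this section:
classify '#user.terry'          # regular
classify '%root.cell'           # read/write
classify '#ghi.gov:root.cell'   # regular cellular
```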
% fs listacl [<dir/file path>]
% fs mkmount <directory> <volume name> [-rw]
where
Specify the read/write path to the mount point, to avoid the failure that results when you attempt to create a new mount point in a read-only volume. By convention, you indicate the read/write path by placing a period before the cell name at the pathname's second level (for example, /afs/.abc.com). For further discussion of the concept of read/write and read-only paths through the filespace, see The Rules of Mount Point Traversal.
% fs listacl [<dir/file path>]
Substitute your cell's name for cellname.
% cd /afs/.cellname
% fs mkmount new_cells root.afs
% cd new_cells
% fs mkmount <directory> <volume name> -cell <cell name>
where
Also issue the fs checkvolumes command to force the local Cache Manager to access the new replica of the root.afs volume. If desired, you can also remove the temporary new_cells mount point from the /afs/.cellname directory.
% vos release root.afs
% fs checkvolumes
% cd /afs/.cellname
% fs rmmount new_cells
For your users to access a newly mounted foreign cell, you must also create an entry for it in each client machine's local /usr/vice/etc/CellServDB file and either reboot the machine or use the fs newcell command to insert the entry directly into its kernel memory. See the instructions in Maintaining Knowledge of Database Server Machines.
% fs listacl [<dir/file path>]
Members of the system:administrators group always implicitly have the a (administer) and by default also the l (lookup) permission on every ACL and can use the fs setacl command to grant other rights as necessary.
% fs rmmount <directory>
where
Specify the read/write path to the mount point, to avoid the failure that results when you attempt to delete a mount point from a read-only volume. By convention, you indicate the read/write path by placing a period before the cell name at the pathname's second level (for example, /afs/.abc.com). For further discussion of the concept of read/write and read-only paths through the filespace, see The Rules of Mount Point Traversal.
This section explains how to display information about volumes. If you know a volume's name or volume ID number, there are commands for displaying its VLDB entry, its volume header, or both. Other commands display the name or location of the volume that contains a specified file or directory.
For instructions on displaying a volume's quota, see Setting and Displaying Volume Quota and Current Size.
The vos listvldb command displays the VLDB entry for the volumes indicated by the combination of arguments you provide. The possibilities are listed here from most to least inclusive:
% vos listvldb [-name <volume name or ID>] [-server <machine name>] \ [-partition <partition name>] [-locked]
where
The VLDB entry for each volume includes the following information:
For further discussion of the New release and Old release flags, see Replicating Volumes (Creating Read-only Volumes).
An example of this command and its output for a single volume:
% vos listvldb user.terry
user.terry
    RWrite: 50489902    Backup: 50489904
    number of sites -> 1
       server fs3.abc.com partition /vicepc RW Site
The vos listvol command displays the volume header for every volume on one or all partitions on a file server machine. The vos command interpreter obtains the information from the Volume Server on the specified machine. You can control the amount of information displayed by including one of the -fast, -long, or -extended flags, described following the instructions in To display volume headers.
To display the volume header of one volume only, use the vos examine command as described in Displaying One Volume's VLDB Entry and Volume Header.
% vos listvol <machine name> [<partition name>] [-fast] [-long] [-extended]
where
The output is ordered alphabetically by volume name and by default provides the following information on a single line for each volume:
If the following message appears instead of the previously listed information, it indicates that a volume is not accessible to Cache Managers or the vos command interpreter, for example because a clone is being created.
**** Volume volume_ID is busy ****
If the following message appears instead of the previously listed information, it indicates that the File Server is unable to attach the volume, perhaps because it is seriously corrupted. The FileLog and VolserLog log files in the /usr/afs/logs directory on the file server machine possibly provide additional information; use the bos getlog command to display them.
**** Could not attach volume volume_ID ****
(For instructions on salvaging a corrupted or unattached volume, see Salvaging Volumes.)
The information about individual volumes is bracketed by summary lines. The first line of output specifies the number of volumes in the listing. The last line of output summarizes the number of volumes that are online, offline, and busy, as in the following example:
% vos listvol fs2.abc.com /vicepb
Total number of volumes on server fs2.abc.com partition /vicepb : 66
sys                  1969534847 RW    1582 K On-line
sys.backup           1969535105 BK    1582 K On-line
     .                   .      .      .        .
     .                   .      .      .        .
user.pat             1969534536 RW   17518 K On-line
user.pat.backup      1969534538 BK   17537 K On-line
Total volumes onLine 66 ; Total volumes offLine 0 ; Total busy 0
Output with the -fast Flag
Including the -fast flag displays only the volume ID number of each volume, arranged in increasing numerical order, as in the following example. The final line (which summarizes the number of on-line, off-line, and busy volumes) is omitted.
% vos listvol fs3.abc.com /vicepa -f
Total number of volumes on server fs3.abc.com partition /vicepa: 37
50489902
50489904
    .
    .
35970325
49732810
When you include the -long flag, the output for each volume includes all of the information in the default listing plus the following. Each item in this list corresponds to a separate line of output:
An example of the output when the -long flag is included:
% vos listvol fs2.abc.com b -long
Total number of volumes on server fs2.abc.com partition /vicepb: 66
     .                   .      .      .        .
     .                   .      .      .        .
user.pat             1969534536 RW   17518 K On-line
    fs2.abc.com /vicepb
    RWrite 1969534536 ROnly 0 Backup 1969534538
    MaxQuota      20000 K
    Creation    Mon Jun 12 09:02:25 1989
    Last Update Thu Jan  4 17:39:34 1990
    1573 accesses in the past day (i.e., vnode references)
user.pat.backup      1969534538 BK   17537 K On-line
    fs2.abc.com /vicepb
    RWrite 1969534536 ROnly 0 Backup 1969534538
    MaxQuota      20000 K
    Creation    Fri Jan  5 06:37:59 1990
    Last Update Fri Jan  5 06:37:59 1990
    0 accesses in the past day (i.e., vnode references)
     .                   .      .      .        .
     .                   .      .      .        .
Total volumes onLine 66 ; Total volumes offLine 0 ; Total busy 0
Output with the -extended Flag
When you include the -extended flag, the output for each volume includes all of the information reported with the -long flag, plus two tables of statistics:
An example of the output when the -extended flag is included:
% vos listvol fs3.abc.com a -extended
common.bboards       1969535592 RW   23149 K used 9401 files On-line
    fs3.abc.com /vicepa
    RWrite 1969535592 ROnly 0 Backup 1969535594
    MaxQuota      30000 K
    Creation    Mon Mar  8 14:26:05 1999
    Last Update Mon Apr 26 09:20:43 1999
    11533 accesses in the past day (i.e., vnode references)

                      Raw Read/Write Stats
          |-------------------------------------------|
          |    Same Network     |    Diff Network     |
          |----------|----------|----------|----------|
          |  Total   |   Auth   |  Total   |   Auth   |
          |----------|----------|----------|----------|
Reads     |    151   |    151   |   1092   |   1068   |
Writes    |      3   |      3   |    324   |    324   |
          |-------------------------------------------|

                   Writes Affecting Authorship
          |-------------------------------------------|
          |   File Authorship   | Directory Authorship|
          |----------|----------|----------|----------|
          |   Same   |   Diff   |   Same   |   Diff   |
          |----------|----------|----------|----------|
0-60 sec  |     92   |      0   |    100   |      4   |
1-10 min  |      1   |      0   |     14   |      6   |
10min-1hr |      0   |      0   |     19   |      4   |
1hr-1day  |      1   |      0   |     13   |      0   |
1day-1wk  |      1   |      0   |      1   |      0   |
> 1wk     |      0   |      0   |      0   |      0   |
          |-------------------------------------------|
The vos examine command displays information from both the VLDB and the volume header for a single volume. There is some redundancy in the information from the two sources, which allows you to compare the VLDB and volume header.
Because the volume header for each version of a volume (read/write, read-only, and backup) is different, you can specify which one to display. Include the .readonly or .backup extension on the volume name or ID argument as appropriate. The information from the VLDB is the same for all three versions.
% vos examine <volume name or ID>
where
The top part of the output displays the same information from a volume header as the vos listvol command with the -long flag, as described following the instructions in To display volume headers. If you specify the read-only version of the volume and it exists at more than one site, the output includes all of them. The bottom part of the output lists the same information from the VLDB as the vos listvldb command, as described following the instructions in To display VLDB entries.
Below is an example for a volume whose VLDB entry is currently locked.
% vos examine user.terry
user.terry           536870981  RW    3459 K On-line
    fs3.abc.com /vicepa
    RWrite 536870981 ROnly 0 Backup 536870983
    MaxQuota      40000 K
    Creation    Mon Jun 12 15:22:06 1989
    Last Update Fri Jun 16 09:34:35 1989
    5719 accesses in the past day (i.e., vnode references)
    RWrite: 536870981   Backup: 536870983
    number of sites -> 1
       server fs3.abc.com partition /vicepa RW Site
    Volume is currently LOCKED
This section explains how to learn the name, volume ID number, or location of the volume that contains a file or directory.
You can also use one piece of information about a volume (for example, its name) to obtain other information about it (for example, its location). The following list points you to the relevant instructions:
You can also use the vos examine command to learn a volume's name by providing its ID number.
% fs listquota [<dir/file path>]
where
The following is an example of the output:
% fs listquota /afs/abc.com/usr/terry
Volume Name     Quota    Used    % Used   Partition
user.terry      15000    5071      34%       86%
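Because the fs listquota output is columnar, it is straightforward to postprocess. The following sketch is illustrative only: the here-document stands in for piping the real command's output (fs listquota dir | awk ...), and the sample figures come from the example in this section.

```shell
#!/bin/sh
# Sketch: extract the quota usage percentage from fs listquota output.
# The here-document reproduces the sample output in this section; in a
# real cell you would pipe the command itself into awk instead.
PCT=$(awk 'NR == 2 { sub(/%/, "", $4); print $4 }' <<'EOF'
Volume Name     Quota    Used    % Used   Partition
user.terry      15000    5071      34%       86%
EOF
)
echo "quota used: ${PCT}%"
```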
% fs examine [<dir/file path>]
where
The following example illustrates how the output reports the volume ID number in the vid field.
% fs examine /afs/abc.com/usr/terry
Volume status for vid = 50489902 named user.terry
Current maximum quota is 15000
Current blocks used are 5073
The partition has 46383 blocks available out of 333305
Note: | The partition-related statistics in this command's output do not always agree with the corresponding values in the output of the standard UNIX df command. The statistics reported by this command can be up to five minutes old, because the Cache Manager polls the File Server for partition information at that frequency. Also, on some operating systems, the df command's report of partition size includes reserved space not included in this command's calculation, and so is likely to be about 10% larger. |
% fs whereis [<dir/file path>]
where
The output displays the file server machine that houses the volume containing the file, as in the following example:
% fs whereis /afs/abc.com/usr/terry
File /afs/abc.com/usr/terry is on host fs2.abc.com
% fs listquota [<dir/file path>]
Then issue the vos listvldb command, providing the volume name as the volume name or ID argument. For complete syntax and a description of the output, see To display VLDB entries.
% vos listvldb <volume name or ID>
There are three main reasons to move volumes:
afs: failed to store file (partition full)
You can track available space on AFS server partitions by using the scout or afsmonitor programs described in Monitoring and Auditing AFS Performance.
To move a read/write volume, use the vos move command as described in the following instructions. Before attempting to move the volume, the vos command interpreter verifies that there is enough free space for it on the destination partition. If not, it does not attempt the move operation and prints the following message.
vos: no space on target partition destination_part to move volume volume
To move a read-only volume, you actually remove the volume from the current site by issuing the vos remove command as described in To remove a volume and unmount it. Then define a new site and release the volume to it by issuing the vos addsite and vos release commands as described in To replicate a read/write volume (create a read-only volume).
A backup volume always resides at the same site as its read/write source volume, so you cannot move a backup volume except as part of moving the read/write source. The vos move command automatically deletes the backup version when you move a read/write volume. To create a new backup volume at the new site as soon as the move operation completes, issue the vos backup command as described in To create and mount a backup volume.
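The move-then-backup sequence can be sketched as follows. This is a dry run that only prints the commands; the volume, machine, and partition names follow the chapter's examples rather than a real cell.

```shell
#!/bin/sh
# Dry-run sketch of moving a read/write volume and recreating its backup
# at the new site. All names are illustrative; remove the echo lines'
# indirection (execute the strings) to perform the operations.
VOLUME=user.terry
SRC_MACHINE=fs3.abc.com
SRC_PART=/vicepa
DST_MACHINE=fs2.abc.com
DST_PART=/vicepb

# Move the read/write volume (this also deletes the old backup version):
MOVE_CMD="vos move $VOLUME $SRC_MACHINE $SRC_PART $DST_MACHINE $DST_PART"

# Recreate the backup volume at the new site once the move completes:
BACKUP_CMD="vos backup $VOLUME"

echo "$MOVE_CMD"
echo "$BACKUP_CMD"
```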
% bos listusers <machine name>
% vos move <volume name or ID> \ <machine name on source> <partition name on source > \ <machine name on destination> <partition name on destination>
where
Note: | It is best not to halt a vos move operation before it completes, because parts of the volume can be left on both the source and destination machines. For more information, see the command's reference page in the IBM AFS Administration Reference. |
% vos listvldb <volume name or ID>
% vos backup <volume name or ID>
AFS can provide transparent file access because the Volume Location Database (VLDB) constantly tracks volume locations. When the Cache Manager needs a file, it contacts the Volume Location (VL) Server, which reads the VLDB for the current location of the volume containing the file. Therefore, the VLDB must accurately reflect the state of volumes on the file server machines at all times. The Volume Server and VL Server automatically update a volume's VLDB entry when its status changes during a vos operation, by performing the following series of steps.
If a vos operation fails while the Volume Server is manipulating the volume (corresponding to Step 3), the volume can be left in an intermediate state, which is termed corruption. In this case, the Off-line or Off-line**needs salvage** marker usually appears at the end of the first line of output from the vos examine command. To repair the corruption, run the Salvager before attempting to resynchronize the VLDB and volume headers. For salvaging instructions, see Salvaging Volumes.
More commonly, an interruption while flags are being set or removed (corresponding to Step 1, Step 2, or Step 4) causes a discrepancy between the VLDB and volume headers. To resynchronize the VLDB and volumes, use the vos syncvldb and vos syncserv commands. To achieve complete VLDB consistency, it is best to run the vos syncvldb command on all file server machines in the cell, and then run the vos syncserv command on all file server machines in the cell.
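The two-pass resynchronization can be scripted. The following sketch is a dry run that builds and prints the plan rather than executing it; the SERVERS list is an illustrative assumption, not a real set of machines.

```shell
#!/bin/sh
# Dry-run sketch of complete VLDB resynchronization: vos syncvldb on
# every file server machine first, then vos syncserv on every machine.
# The SERVERS list is illustrative; substitute your cell's file servers.
SERVERS="fs1.abc.com fs2.abc.com fs3.abc.com"

PLAN=""
# First pass: correct the VLDB against each machine's volume headers.
for m in $SERVERS; do
    PLAN="$PLAN
vos syncvldb -server $m"
done
# Second pass: verify each VLDB entry's sites actually hold the volumes.
for m in $SERVERS; do
    PLAN="$PLAN
vos syncserv $m"
done

# Print the plan instead of executing it:
echo "$PLAN"
```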
There are several symptoms that indicate a volume operation failed:
If the only problem with a volume is that its VLDB entry is locked, you probably do not need to synchronize the entire VLDB. Instead use the vos unlock or vos unlockvldb command to unlock the entry, as described in Unlocking and Locking VLDB Entries.
The vos syncvldb command corrects the information in the Volume Location Database (VLDB) either about all volumes housed on a file server machine, about the volumes on just one partition, or about a single volume. When checking the volumes on one or more partitions, the command contacts the Volume Server to obtain a list of the volumes that actually reside on each partition. It then obtains the VLDB entry for each volume from the VL Server, and changes the VLDB entry as necessary to reflect the state of the volume on the partition. For example, it creates or updates a VLDB entry when it finds a volume for which the VLDB entry is missing or incomplete. However, if there is already a VLDB entry that defines a different location for the volume, or there are irreconcilable conflicts with other VLDB entries, it instead writes a message about the conflict to the standard error stream. The command never removes volumes from the file server machine.
When checking a single volume's VLDB entry, the command also automatically performs the operations invoked by the vos syncserv command: it not only verifies that the VLDB entry is correct for the specified volume type (read/write, backup, or read-only), but also checks that any related volume types mentioned in the VLDB entry actually exist at the site listed in the entry.
The vos syncserv command verifies that each volume type (read/write, read-only, and backup) mentioned in a VLDB entry actually exists at the site indicated in the entry. It checks all VLDB entries that mention a site either on any of a file server machine's partitions or on one partition. Note that the command can end up inspecting sites other than the specified machine or partition, if there are read-only versions of the volume at sites other than the read/write site.
The command alters any incorrect information in the VLDB, unless there is an irreconcilable conflict with other VLDB entries. In that case, it writes a message to the standard error stream instead. The command never removes volumes from their sites.
% bos listusers <machine name>
Note: | To synchronize the VLDB completely, issue the command repeatedly, substituting each file server machine in your cell for the -server argument in turn and omitting the -partition and -volume arguments, before proceeding to Step 3. |
% vos syncvldb -server <machine name> [-partition <partition name>] [-volume <volume name or ID>] [-verbose >> file]
where
Note: | To synchronize the VLDB completely, issue the command repeatedly, substituting each file server machine in your cell for the machine name argument in turn and omitting the partition name argument. |
% vos syncserv <machine name> [<partition name>] [-v >> file]
where
An unexpected interruption while the Volume Server or File Server is manipulating the data in a volume can leave the volume in an intermediate state (corrupted), rather than just creating a discrepancy between the information in the VLDB and volume headers. For example, the failure of the operation that saves changes to a file (by overwriting old data with new) can leave the old and new data mixed together on the disk.
If an operation halts because the Volume Server or File Server exits unexpectedly, the BOS Server automatically shuts down all components of the fs process and invokes the Salvager. The Salvager checks for and repairs any inconsistencies it can. Sometimes, however, there are symptoms of the following sort, which indicate corruption serious enough to create problems but not serious enough to cause the File Server component to fail. In these cases you can invoke the Salvager yourself by issuing the bos salvage command.
Possible cause: The Volume Server or File Server exited in the middle of a file-creation operation, after changing the directory structure, but before actually storing data. (Other possible causes are that the ACL on the directory does not grant the permissions you need to access the file, or there is a process, machine, or network outage. Check for these causes before assuming the file is corrupted.)
Salvager's solution: Remove the file's entry from the directory structure.
Possible cause: Two files or versions of a file are sharing the same disk blocks because of an interrupted operation. The File Server and Volume Server normally refuse to attach volumes that exhibit this type of corruption, because it can be very dangerous. If the Volume Server or File Server do attach the volume but are unsure of the status of the affected disk blocks, they sometimes try to write yet more data there. When they cannot perform the write, the data is lost. This effect can cascade, causing loss of all data on a partition.
Salvager's solution: Delete the data from the corrupted disk blocks in preference to losing an entire partition.
Possible cause: There are orphaned files and directories. An orphaned element is completely inaccessible because it is not referenced by any directory that can act as its parent (is higher in the file tree). An orphaned element is not counted in the calculation of a volume's size (or against its quota), even though it occupies space on the server partition.
Salvager's solution: By default, print a message to the /usr/afs/logs/SalvageLog file reporting how many orphans were found and the approximate number of kilobytes they are consuming. You can use the -orphans argument to remove or attach orphaned elements instead. See To salvage volumes.
When you notice symptoms such as these, use the bos salvage command to invoke the Salvager before corruption spreads. (Even though it operates on volumes, the command belongs to the bos suite because the BOS Server must coordinate the shutdown and restart of the Volume Server and File Server with the Salvager. It shuts them down before the Salvager starts, and automatically restarts them when the salvage operation finishes.)
All of the AFS data stored on a file server machine is inaccessible during the salvage of one or more partitions. If you salvage just one volume, it alone is inaccessible.
When processing one or more partitions, the command restores consistency to corrupted read/write volumes where possible. For read-only or backup volumes, it inspects only the volume header:
Combine the bos salvage command's arguments as indicated to salvage different numbers of volumes:
The Salvager always writes a trace to the /usr/afs/logs/SalvageLog file on the file server machine where it runs. To record the trace in another file as well (either in AFS or on the local disk of the machine where you issue the bos salvage command), name the file with the -file argument. Or, to display the trace on the standard output stream as it is written to the /usr/afs/logs/SalvageLog file, include the -showlog flag.
By default, multiple Salvager subprocesses run in parallel: one for each partition up to four, and four subprocesses for four or more partitions. To increase or decrease the number of subprocesses running in parallel, provide a positive integer value for the -parallel argument.
If there is more than one server partition on a physical disk, the Salvager by default salvages them serially to avoid the inefficiency of constantly moving the disk head from one partition to another. However, this strategy is often not ideal if the partitions are configured as logical volumes that span multiple disks. To force the Salvager to salvage logical volumes in parallel, provide the string all as the value for the -parallel argument. Provide a positive integer to specify the number of subprocesses to run in parallel (for example, -parallel all5 for five subprocesses), or omit the integer to run up to four subprocesses, depending on the number of logical volumes being salvaged.
The Salvager creates temporary files as it runs, by default writing them to the partition it is salvaging. The number of files can be quite large, and if the partition is too full to accommodate them, the Salvager terminates without completing the salvage operation (it always removes the temporary files before exiting). Other Salvager subprocesses running at the same time continue until they finish salvaging all other partitions where there is enough disk space for temporary files. To complete the interrupted salvage, reissue the command against the appropriate partitions, adding the -tmpdir argument to redirect the temporary files to a local disk directory that has enough space.
The -orphans argument controls how the Salvager handles orphaned files and directories that it finds on server partitions it is salvaging. An orphaned element is completely inaccessible because it is not referenced by the vnode of any directory that can act as its parent (is higher in the filespace). Orphaned objects occupy space on the server partition, but do not count against the volume's quota.
During the salvage, the output of the bos status command reports the following auxiliary status for the fs process:
Salvaging file system
% bos listusers <machine name>
% bos salvage -server <machine name> [-partition <salvage partition>] \ [-volume <salvage volume number or volume name>] \ [-file salvage log output file] [-all] [-showlog] \ [-parallel <# of max parallel partition salvaging>] \ [-tmpdir <directory to place tmp files>] \ [-orphans <ignore | remove | attach>]
where
The BOS Server never starts more Salvager subprocesses than there are partitions, and always starts only one process to salvage a single volume. If this argument is omitted, up to four Salvager subprocesses run in parallel.
__ORPHANFILE__.index for files
__ORPHANDIR__.index for directories
where index is a two-digit number that uniquely identifies each object. The orphans are charged against the volume's quota and appear in the output of the ls command issued against the volume's root directory.
Every AFS volume has an associated quota which limits the volume's size. The default quota for a newly created volume is 5,000 kilobyte blocks (slightly less than 5 MB). When a volume reaches its quota, the File Server rejects attempts to create new files or directories in it. If an application is writing data into an existing file in a full volume, the File Server allows a defined overage (by default, 1 MB). (You can use the fileserver command's -spare or -pctspare argument to change the default overage; see the command's reference page in the IBM AFS Administration Reference.)
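The limits can be modeled arithmetically. The following sketch is a simplified illustration of the default figures (5,000 KB quota, 1 MB overage for writes to existing files); it is not how the File Server itself is implemented, and the usage numbers are invented for the example.

```shell
#!/bin/sh
# Simplified model: is a write of WRITE_KB into an existing file in a
# nearly full volume within quota plus the default overage?
QUOTA_KB=5000        # default quota for a newly created volume
OVERAGE_KB=1024      # default overage (1 MB) for existing files
USED_KB=4800         # illustrative current usage
WRITE_KB=1000        # illustrative write size

if [ $((USED_KB + WRITE_KB)) -le $((QUOTA_KB + OVERAGE_KB)) ]; then
    RESULT="allowed"
else
    RESULT="rejected"
fi
echo "write of ${WRITE_KB} KB: $RESULT"
```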
To set a quota other than 5000 KB as you create a volume, include the -maxquota argument to the vos create command, as described in Creating Read/write Volumes. To modify an existing volume's quota, issue either the fs setquota or the fs setvol command as described in the following instructions. Do not set an existing volume's quota lower than its current size.
In general, smaller volumes are easier to administer than larger ones. If you need to move volumes, say for load-balancing purposes, it is easier to find enough free space on other partitions for small volumes. Move operations complete more quickly for small volumes, reducing the potential for outages or other errors to interrupt the move. AFS supports a maximum volume size, which can vary for different AFS releases; see the IBM AFS Release Notes for the version you are using. Also, the size of a partition or logical volume places an absolute limit on volume size, because a volume cannot span multiple partitions or logical volumes.
It is generally safe to overpack partitions by putting more volumes on them than can actually fit if all the volumes reach their maximum quota. However, only experience determines to what degree overpacking works in your cell. It depends on what kind of quota you assign to volumes (particularly user volumes, which are more likely than system volumes to grow unpredictably) and how much information people generate and store in comparison to their quota.
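One rough way to gauge overpacking is to compare the sum of the volumes' quotas on a partition against the partition's size. The following sketch uses invented figures in kilobyte blocks; in practice the quotas would come from fs listquota or vos listvol output, and the partition size from vos partinfo.

```shell
#!/bin/sh
# Sketch: estimate overcommitment by summing volume quotas (in KB)
# against a partition's size (in KB). A ratio above 1.00 means the
# partition is overpacked. All figures here are illustrative.
PART_KB=8000000              # partition size in kilobyte blocks

RATIO=$(awk -v part="$PART_KB" '
    { total += $2 }          # second column: each volume quota in KB
    END { printf "%.2f", total / part }
' <<'EOF'
user.terry   5000000
user.pat     5000000
sys          2000000
EOF
)
echo "quota committed / partition size = $RATIO"
```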
There are several commands that display a volume's quota, as described in the following instructions. They differ in how much related information they produce.
% pts membership system:administrators
% fs setquota [<dir/file path>] -max <max quota in kbytes>
where
Specify the read/write path to the file or directory, to avoid the failure that results when you attempt to change a read-only volume. By convention, you indicate the read/write path by placing a period before the cell name at the pathname's second level (for example, /afs/.abc.com). For further discussion of the concept of read/write and read-only paths through the filespace, see The Rules of Mount Point Traversal.
% pts membership system:administrators
% fs setvol [<dir/file path>+] -max <disk space quota in 1K units>
where
% fs quota [<dir/file path>+]
where
The following example illustrates the output produced by this command:
% fs quota /afs/abc.com/usr/terry
34% of quota used.
% fs listquota [<dir/file path>+]
where
As illustrated in the following example, the output reports the volume's name, its quota and current size (both in kilobyte units), the percent quota used, and the percentage of space on the volume's host partition that is used.
% fs listquota /afs/abc.com/usr/terry
Volume Name     Quota    Used    % Used   Partition
user.terry      15000    5071      34%       86%
% fs examine [<dir/file path>+]
where
As illustrated in the following example, the output displays the volume's volume ID number and name, its quota and current size (both in kilobyte units), and the free and total number of kilobyte blocks on the volume's host partition.
% fs examine /afs/abc.com/usr/terry
Volume status for vid = 50489902 named user.terry
Current maximum quota is 15000
Current blocks used are 5073
The partition has 46383 blocks available out of 333305
Note: The partition-related statistics in this command's output do not always agree with the corresponding values in the output of the standard UNIX df command. The statistics reported by this command can be up to five minutes old, because the Cache Manager polls the File Server for partition information at that frequency. Also, on some operating systems, the df command's report of partition size includes reserved space not included in this command's calculation, and so is likely to be about 10% larger.
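Because these commands report quota and usage in a fixed column format, it is straightforward to script checks against their output. The following sketch flags volumes whose usage is at or above 90% of quota; the sample lines reproduce the fs listquota output format shown above and are illustrative only — in practice you would substitute real command output.

```shell
# Sketch: flag volumes near quota, given lines in the fs listquota format
# (Volume Name, Quota, Used, % Used, Partition). Sample data only.
sample="user.terry   15000   5071   34%   86%
user.pat     10000   9200   92%   86%"

# Print the name (first field) of any volume whose Used/Quota ratio
# is at or above 90 percent.
near_quota=$(printf '%s\n' "$sample" | awk '($3 * 100) / $2 >= 90 {print $1}')
echo "$near_quota"
```

A check like this can be run periodically to identify volumes whose quota needs raising before users run out of space.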
To remove a volume from its site and its record from the VLDB, use the vos remove command. Use it to remove any of the three types of volumes; the effect depends on the type.
If there are no read-only copies left, it is best to remove the volume's mount point to prevent attempts to access the volume's contents. Do not remove the mount point if copies of the read-only volume remain.
If there is more than one read-only site, you must include the -server argument (and optionally -partition argument) to specify the site from which to remove the volume. If there is only one read-only site, the volume name is sufficient; if no read/write volume exists in this case, the entire VLDB entry is removed.
It is not generally appropriate to remove the volume's mount point when removing a read-only volume, especially if the read/write version of the volume still exists. If the read/write version no longer exists, remove the mount point as described in Step 5 of To remove a volume and unmount it.
In the standard configuration, there is a separate mount point for the backup version of a user volume. Remember to remove the mount point to prevent attempts to access the nonexistent volume's contents.
The vos remove command is almost always the appropriate way to remove a volume, because it automatically removes a volume's VLDB entry and both the volume header and all data from the partition. If either the VLDB entry or volume header does not exist, it is sometimes necessary to use other commands that remove only the remaining element. Do not use these commands in the normal case when both the VLDB entry and the volume header exist, because by definition they create discrepancies between them. For details on the commands' syntax, see their reference pages in the IBM AFS Administration Reference.
The vos zap command removes a volume from its site by removing the volume header and volume data for which a VLDB entry no longer exists. You can tell a VLDB entry is missing if the vos listvol command displays the volume header but the vos examine or vos listvldb command cannot locate the VLDB entry. You must run this command to correct the discrepancy, because the vos syncvldb and vos syncserv commands never remove volume headers.
The vos remsite command removes a read-only site definition from the VLDB without affecting the volume on the file server machine. Use this command when you have mistakenly issued the vos addsite command to define a read-only site, but have not yet issued the vos release command to release the volume to the site. If you have actually released a volume to the site, use the vos remove command instead.
The vos delentry command removes the entire VLDB entry that mentions the volume you specify. If versions of the volume actually exist on file server machines, they are not affected. This command is useful if you know for certain that a volume removal was not recorded in the VLDB (perhaps you used the vos zap command during an emergency), and do not want to take the time to resynchronize the entire VLDB with the vos syncvldb and vos syncserv commands.
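Deciding between vos zap and vos delentry comes down to which side of a discrepancy exists: a volume header with no VLDB entry calls for vos zap, and a VLDB entry with no volume header calls for vos delentry. The sketch below shows one way to find such discrepancies by comparing two sorted lists of volume ID numbers. The IDs here are sample data; in practice you would extract the lists from vos listvol and vos listvldb output.

```shell
# Sketch: compare sorted volume ID lists to find discrepancies. IDs with
# a header but no VLDB entry are candidates for "vos zap"; IDs with a
# VLDB entry but no header are candidates for "vos delentry". The IDs
# below are sample data; real lists come from "vos listvol" and
# "vos listvldb". Both files must be sorted for comm to work correctly.
printf '%s\n' 536870912 536870915 536870918 > /tmp/header_ids
printf '%s\n' 536870912 536870918 536870921 > /tmp/vldb_ids

zap_candidates=$(comm -23 /tmp/header_ids /tmp/vldb_ids)
delentry_candidates=$(comm -13 /tmp/header_ids /tmp/vldb_ids)
echo "header only: $zap_candidates"
echo "VLDB only:   $delentry_candidates"
```

Verify each candidate with vos examine before acting on it, since a listing taken during a volume operation can show a transient discrepancy.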
% bos listusers <machine name>
% fs listacl [<dir/file path>]
Members of the system:administrators group always implicitly have the a (administer) and by default also the l (lookup) permission on every ACL and can use the fs setacl command to grant other rights as necessary.
Alternatively, use the AFS Backup System to create a tape copy. In this case, it can be convenient to create a temporary volume set that includes only the volume of interest. Temporary volume sets are not recorded in the Backup Database, and so do not clutter the database with records for volume sets that you use only once. For instructions, see To create a dump.
% vos remove [-server <machine name>] [-partition <partition name>] \
       -id <volume name or ID>
where
If you are removing a backup volume that is mounted in the conventional way (at a subdirectory of its read/write volume's root directory), then removing the source volume's mount point in this step is sufficient to remove the backup volume's mount point. If you mounted the backup at a completely separate directory, you need to repeat this step for the backup volume's mount point.
% fs rmmount <directory>
Dumping a volume with the vos dump command converts its contents into ASCII format and writes them to the file you specify. The vos restore command places a dump file's contents into a volume after converting them into the volume format appropriate for the indicated file server machine.
Dumping a volume can be useful in several situations, including the following:
You can use the vos dump command to create a full dump, which contains the complete contents of the volume at the time you issue the command, or an incremental dump, which contains only those files and directories with modification timestamps (as displayed by the ls -l command) that are later than a date and time you specify. See Step 3 of the following instructions.
Dumping a volume does not change its VLDB entry or permanently affect its status on the file server machine, but the volume's contents are inaccessible during the dump operation. To avoid interrupting access to the volume, it is generally best to dump the volume's backup version, just after using the vos backup or vos backupsys command to create a new backup version.
If you do not provide a filename into which to write the dump, the vos dump command directs the output to the standard output stream. You can pipe it directly to the vos restore command if you wish.
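For example, a dump can be piped straight into a restore to copy a volume's contents to another site without creating an intermediate file. The following is an illustrative sketch only; the machine, partition, and volume names are hypothetical:

```
% vos dump -id user.terry.backup | vos restore fs2.abc.com /vicepb user.terry.copy
```

Dumping the backup version, as here, avoids making the read/write volume inaccessible during the dump.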
Because a volume dump file is in ASCII format, you can read its contents using a text editor or a command such as the cat command. However, dump files sometimes contain special characters that do not have alphanumeric correlates, which can cause problems for some display programs.
By default, the vos command interpreter consults the Volume Location Database (VLDB) to learn the volume's location, so the -server and -partition arguments are not required. If the -id argument identifies a read-only volume that resides at multiple sites, then the command dumps the version from just one of them (normally, the one listed first in the volume's VLDB entry as reported by the vos examine or vos listvldb command). To dump the read-only volume from a particular site, use the -server and -partition arguments to specify the site. To bypass the VLDB lookup entirely, provide a volume ID number (rather than a volume name) as the value for the -id argument, along with the -server and -partition arguments. This makes it possible to dump a volume for which there is no VLDB entry.
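As an illustration, the first of the following hypothetical commands dumps the read-only version at a specific site, and the second dumps a volume by ID number with the VLDB lookup bypassed entirely (the server name, partition, volume names, ID number, and filenames are all assumptions):

```
% vos dump -id root.cell.readonly -server fs3.abc.com -partition /vicepc -file /tmp/rocell.dump
% vos dump -id 536870915 -server fs3.abc.com -partition /vicepc -file /tmp/orphan.dump
```

The second form is the one to use for a volume that has no VLDB entry, since there is no entry for the command interpreter to consult.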
% bos listusers <machine name>
% fs listacl [<dir/file path>]
Members of the system:administrators group always implicitly have the a (administer) and by default also the l (lookup) permission on every ACL and can use the fs setacl command to grant other rights as necessary.
% vos dump -id <volume name or ID> [-time <dump from time>] [-file <arg>] [-server <server>] [-partition <partition>]
where
To bypass the normal VLDB lookup of the volume's location, provide the volume ID number and combine this argument with the -server and -partition arguments.
Although you can dump any of the three types of volumes (read/write, read-only, or backup), you can restore a dump file to the file system only as a read/write volume, using the vos restore command. The command automatically translates the dump file's contents from ASCII back into the volume format appropriate for the file server machine that stores the restored version. As with the vos dump command, you can restore a dump file via a named pipe, which facilitates interoperation with third-party backup utilities.
You can restore the contents of a dump file in one of two basic ways. In either case, you must restore a full dump of the volume before restoring any incremental dumps. Any incremental dumps that you then restore must have been created after the full dump. If there is more than one incremental dump, you must restore them in the order they were created.
You can assign a volume ID number as you restore the volume, though it is best to have the Volume Server allocate a volume number automatically. The most common reason for specifying the volume ID is that a volume's VLDB entry has disappeared for some reason, but you know the former read/write volume ID number and want to reuse it.
Provide the -overwrite argument to preconfirm that you wish to overwrite the volume's contents, and to specify whether you are restoring a full or incremental dump. If you omit the -overwrite argument, the Volume Server generates the following prompt to confirm that you want to overwrite the existing volume with either a full (f) or incremental (i) dump:
Do you want to do a full/incremental restore or abort? [fia](a):
If you pipe in the dump file via the standard input stream instead of using the -file argument to name it, you must include the -overwrite argument because there is nowhere for the Volume Server to display the prompt in this case.
You can move the volume to a new site as you overwrite it with a full dump, by using the -server and -partition arguments to specify the new site. You cannot move the volume when restoring an incremental dump.
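To illustrate the required ordering, the following hypothetical sequence overwrites an existing volume with a full dump and then applies two incremental dumps in the order they were created (the machine, partition, volume, and file names are assumptions):

```
% vos restore fs1.abc.com /vicepa user.terry -file /tmp/user.terry.full -overwrite full
% vos restore fs1.abc.com /vicepa user.terry -file /tmp/user.terry.incr1 -overwrite incremental
% vos restore fs1.abc.com /vicepa user.terry -file /tmp/user.terry.incr2 -overwrite incremental
```

Including the -overwrite argument on each command suppresses the interactive confirmation prompt.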
The vos restore command sets the restored volume's creation date in the volume header to the time of the restore operation, as reported in the Creation field in the output from the vos examine and vos listvol commands.
% bos listusers <machine name>
% fs listacl [<dir/file path>]
Members of the system:administrators group always implicitly have the a (administer) and by default also the l (lookup) permission on every ACL and can use the fs setacl command to grant other rights as necessary.
% vos partinfo <machine name> [<partition name>]
% vos restore <machine name> <partition name> \
       <name of volume to be restored> \
       [-file <dump file>] [-id <volume ID>]
where
% fs mkmount <directory> <volume name>
% fs lsmount <directory>
% bos listusers <machine name>
% fs listacl [<dir/file path>]
Members of the system:administrators group always implicitly have the a (administer) and by default also the l (lookup) permission on every ACL and can use the fs setacl command to grant other rights as necessary.
% vos restore <machine name> <partition name> \
       <name of volume to be restored> \
       [-file <dump file>] \
       -overwrite <full | incremental>
where
% vos release <volume name or ID>
% vos backup <volume name or ID>
You can use the vos rename command to rename a volume. For example, it is appropriate to rename a user's home volume if you use the user.username convention for user volume names and you change the username. (For complete instructions for changing usernames, see Changing Usernames.)
The vos rename command accepts only read/write volume names, but automatically changes the names of the associated read-only and backup volumes. As directed in the following instructions, you need to replace the volume's current mount point with a new one that reflects the name change.
% bos listusers <machine name>
% fs listacl [<dir/file path>]
Members of the system:administrators group always implicitly have the a (administer) and by default also the l (lookup) permission on every ACL and can use the fs setacl command to grant other rights as necessary.
% vos rename <old volume name> <new volume name>
where
If there is no Volume Location Database (VLDB) entry for the specified current volume name, the command fails with the following error message:
vos: Could not find entry for volume old_volume_name.
% fs rmmount <directory>
% fs mkmount <directory> <volume name> [-rw]
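As a concrete illustration, renaming a user volume involves all three commands in sequence. The volume and directory names below are hypothetical, and the mount point commands use the read/write path convention (a period before the cell name) described earlier in this chapter:

```
% vos rename user.smith user.smithjones
% fs rmmount /afs/.abc.com/usr/smith
% fs mkmount /afs/.abc.com/usr/smithjones user.smithjones
```

The associated read-only and backup volumes, if any, are renamed automatically, but their mount points must also be replaced if they are mounted separately.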
As detailed in Synchronizing the VLDB and Volume Headers, the Volume Location (VL) Server locks the Volume Location Database (VLDB) entry for a volume before the Volume Server executes any operation on it. No other operation can affect a volume with a locked VLDB entry, so the lock prevents the inconsistency or corruption that can result from multiple simultaneous operations on a volume.
To verify that a VLDB entry is locked, issue the vos listvldb command as described in To display VLDB entries. The command has a -locked flag that displays locked entries only. If the VLDB entry is locked, the string Volume is currently LOCKED appears on the last line of the volume's output.
To lock a VLDB entry yourself, use the vos lock command. This is useful when you suspect something is wrong with a volume and you want to prevent any changes to it while you are investigating the problem.
To unlock a locked VLDB entry, issue the vos unlock command, which unlocks a single VLDB entry, or the vos unlockvldb command, which unlocks potentially many entries. This is useful when a volume operation fails prematurely and leaves a VLDB entry locked, preventing you from acting to correct the problems resulting from the failure.
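The following hypothetical commands illustrate the three operations (the volume, machine, and partition names are assumptions): the first locks a single volume's VLDB entry, the second unlocks it again, and the third unlocks every locked entry for volumes at a given site:

```
% vos lock user.terry
% vos unlock user.terry
% vos unlockvldb fs1.abc.com /vicepa
```

While the entry is locked, vos examine reports Volume is currently LOCKED on the last line of its output.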
% bos listusers <machine name>
% vos lock <volume name or ID>
where
% bos listusers <machine name>
% vos unlock <volume name or ID>
where
% bos listusers <machine name>
% vos unlockvldb [<machine name>] [<partition name>]
where