This chapter describes how to administer an AFS client machine, which is any machine from which users can access the AFS filespace and communicate with AFS server processes. (A client machine can simultaneously function as an AFS server machine if appropriately configured.) An AFS client machine has the following characteristics:
To learn how to install the client functionality on a machine, see the IBM AFS Quick Beginnings.
This chapter explains how to perform the following tasks by
using the indicated commands:
Display cache size set at reboot | cat /usr/vice/etc/cacheinfo
Display current cache size and usage | fs getcacheparms
Change disk cache size without rebooting | fs setcachesize
Initialize Cache Manager | afsd
Display contents of CellServDB file | cat /usr/vice/etc/CellServDB
Display list of database server machines from kernel memory | fs listcells
Change list of database server machines in kernel memory | fs newcell
Check cell's status regarding setuid | fs getcellstatus
Set cell's status regarding setuid | fs setcell
Set server probe interval | fs checkservers -interval
Display machine's cell membership | cat /usr/vice/etc/ThisCell
Change machine's cell membership | Edit /usr/vice/etc/ThisCell
Flush cached file/directory | fs flush
Flush everything cached from a volume | fs flushvolume
Update volume-to-mount-point mappings | fs checkvolumes
Display Cache Manager's server preference ranks | fs getserverprefs
Set Cache Manager's server preference ranks | fs setserverprefs
Display client machine addresses to register | fs getclientaddrs
Set client machine addresses to register | fs setclientaddrs
Control the display of warning and status messages | fs messages
Display and change machine's system type | fs sysname
Enable asynchronous writes | fs storebehind
An AFS client machine's kernel includes a set of modifications, commonly referred to as the Cache Manager, that enable access to AFS files and directories and communications with AFS server processes. It is common to speak of the Cache Manager as a process or program, and in regular usage it appears to function like one. When configuring it, though, it is helpful to keep in mind that this usage is not strictly accurate.
The Cache Manager mainly fetches files on behalf of application programs running on the machine. When an application requests an AFS file, the Cache Manager contacts the Volume Location (VL) Server to obtain a list of the file server machines that house the volume containing the file. The Cache Manager then translates the application program's system call requests into remote procedure calls (RPCs) to the File Server running on the appropriate machine. When the File Server delivers the file, the Cache Manager stores it in a local cache before delivering it to the application program.
The File Server delivers a data structure called a callback along with the file. (To be precise, it delivers a callback for each file fetched from a read/write volume, and a single callback for all data fetched from a read-only volume.) A valid callback indicates that the Cache Manager's cached copy of a file matches the central copy maintained by the File Server. If an application on another AFS client machine changes the central copy, the File Server breaks the callback, and the Cache Manager must retrieve the new version when an application program on its machine next requests data from the file. As long as the callback is unbroken, however, the Cache Manager can continue to provide the cached version of the file to applications on its machine, which eliminates unnecessary network traffic.
The indicated sections of this chapter explain how to configure and customize the following Cache Manager features. All but the first (choosing disk or memory cache) are optional, because AFS sets suitable defaults for them.
You must make all configuration changes on the client machine itself (at the console or over a direct connection such as a telnet connection). You cannot configure the Cache Manager remotely. You must be logged in as the local superuser root to issue some commands, whereas others require no privilege. All files mentioned in this chapter must actually reside on the local disk of each AFS client machine (they cannot, for example, be symbolic links to files in AFS).
AFS's package program can simplify other aspects of client machine configuration, including those normally set in the machine's AFS initialization file. See Configuring Client Machines with the package Program.
This section briefly describes the client configuration files that must reside in the local /usr/vice/etc directory on every client machine. If the machine uses a disk cache, there must be a partition devoted to cache files; by convention, it is mounted at the /usr/vice/cache directory.
Note for Windows users: Some files described in this document possibly do not exist on machines that run a Windows operating system. Also, Windows uses a backslash ( \ ) rather than a forward slash ( / ) to separate the elements in a pathname.
The /usr/vice/etc directory on a client machine's local disk must contain certain configuration files for the Cache Manager to function properly. They control the most basic aspects of Cache Manager configuration.
If it is important that the client machines in your cell perform uniformly, it is most efficient to update these files from a central source. The following descriptions include pointers to sections that discuss how best to maintain the files.
The IBM AFS Quick Beginnings explains how to create this file as you install a client machine. To change the cache size on a machine that uses a memory cache, edit the file and reboot the machine. On a machine that uses a disk cache, you can change the cache size without rebooting by issuing the fs setcachesize command. For instructions, see Determining the Cache Type, Size, and Location.
The Cache Manager must be able to reach a cell's database server machines to fetch files from its filespace. Incorrect or missing information in the CellServDB file can slow or completely block access. It is important to update the file whenever a cell's database server machines change.
As the afsd program initializes the Cache Manager, it loads the contents of the file into kernel memory. The Cache Manager does not read the file between reboots, so to incorporate changes to the file into kernel memory, you must reboot the machine. Alternatively, you can issue the fs newcell command to insert the changes directly into kernel memory without changing the file. It can also be convenient to upgrade the file from a central source. For instructions, see Maintaining Knowledge of Database Server Machines.
(The CellServDB file on client machines is not the same as the one kept in the /usr/afs/etc directory on server machines, which lists only the local cell's database server machines. For instructions on maintaining the server CellServDB file, see Maintaining the Server CellServDB File).
The IBM AFS Quick Beginnings explains how to create this file as you install the AFS client functionality. To learn about changing a client machine's cell membership, see Setting a Client Machine's Cell Membership.
In addition to these files, the /usr/vice/etc directory also sometimes contains the following types of files and subdirectories:
A client machine that uses a disk cache must have a local disk directory devoted to the cache. The conventional mount point is /usr/vice/cache, but you can use another partition that has more available space.
Do not delete or directly modify any of the files in the cache directory. Doing so can cause a kernel panic, from which the only way to recover is to reboot the machine. By default, only the local superuser root can read the files directly, by virtue of owning them.
A client machine that uses a memory cache keeps all of the information stored in these files in machine memory instead.
This section explains how to configure a memory or disk cache, how to display and set the size of either type of cache, and how to set the location of the cache directory for a disk cache.
The Cache Manager uses a disk cache by default, and it is the preferred type of caching. To configure a memory cache, include the -memcache flag on the afsd command, which is normally invoked in the machine's AFS initialization file. If configured to use a memory cache, the Cache Manager does no disk caching, even if the machine has a disk.
Cache size influences the performance of a client machine more directly than perhaps any other cache parameter. The larger the cache, the faster the Cache Manager is likely to deliver files to users. A small cache can impair performance because it increases the frequency at which the Cache Manager must discard cached data to make room for newly requested data. When an application asks for data that has been discarded, the Cache Manager must request it from the File Server, and fetching data across the network is almost always slower than fetching it from the local disk. The Cache Manager never discards data from a file that has been modified locally but not yet stored back to the File Server. If the cache is very small, the Cache Manager possibly cannot find any data to discard. For more information about the algorithm it uses when discarding cached data, see How the Cache Manager Chooses Data to Discard.
The amount of disk or memory you devote to caching depends on several factors. The amount of space available in memory or on the partition housing the disk cache directory imposes an absolute limit. In addition, you cannot allocate more than 95% of the space available on the cache directory's partition to a disk cache. The afsd program exits without starting the Cache Manager and prints an appropriate message to the standard output stream if you violate this restriction. For a memory cache, you must leave enough memory for other processes and applications to run. If you try to allocate more memory than is actually available, the afsd program exits without initializing the Cache Manager and produces the following message on the standard output stream:
afsd: memCache allocation failure at number KB
where number is how many kilobytes were allocated just before the failure.
Within these hard limits, the factors that determine appropriate cache size include the number of users working on the machine, the size of the files with which they usually work, and (for a memory cache) the number of processes that usually run on the machine. The higher the demand from these factors, the larger the cache needs to be to maintain good performance.
Disk caches smaller than 10 MB do not generally perform well. Machines serving multiple users usually perform better with a cache of at least 60 to 70 MB. The point at which enlarging the cache further does not really improve performance depends on the factors mentioned previously, and is difficult to predict.
Memory caches smaller than 1 MB are nonfunctional, and the performance of caches smaller than 5 MB is usually unsatisfactory. Suitable upper limits are similar to those for disk caches but are probably determined more by the demands on memory from other sources on the machine (number of users and processes). Machines running only a few processes possibly can use a smaller memory cache.
AFS imposes an absolute limit on cache size in some versions. See the IBM AFS Release Notes for the version you are using.
The Cache Manager determines how big to make the cache by reading the /usr/vice/etc/cacheinfo file as it initializes. As directed in the IBM AFS Quick Beginnings, you must create the file before running the afsd program. The file also defines the directory on which to mount AFS (by convention, /afs), and the local disk directory to use for a cache directory.
To change any of the values in the file, log in as the local superuser root. You must reboot the machine to have the new value take effect. For instructions, see To edit the cacheinfo file.
To change the cache size at reboot without editing the cacheinfo file, include the -blocks argument to the afsd command; see the command's reference page in the IBM AFS Administration Reference.
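For example, the following afsd invocation (the value is illustrative only, not a recommendation) sets the cache size to 25,000 kilobyte blocks at startup, overriding the third field of the cacheinfo file:
/usr/vice/etc/afsd -blocks 25000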
For a disk cache, you can also use the fs setcachesize command to reset the cache size without rebooting. The value you set persists until the next reboot, at which time the cache size returns to the value specified in the cacheinfo file or by the -blocks argument to the afsd command. For instructions, see To change the disk cache size without rebooting.
To display the current cache size and the amount of space the Cache Manager is using at the moment, use the fs getcacheparms command as detailed in To display the current cache size.
% cat /usr/vice/etc/cacheinfo
% fs getcacheparms
where getca is the shortest acceptable abbreviation of getcacheparms.
The output shows the number of kilobyte blocks the Cache Manager is using as a cache at the moment the command is issued, and the current size of the cache. For example:
AFS using 13709 of the cache's available 15000 1K byte blocks.
% su root
Password: root_password
The following example mounts the AFS filespace at the /afs directory, names /usr/vice/cache as the cache directory, and sets cache size to 50,000 KB:
/afs:/usr/vice/cache:50000
% su root
Password: root_password
Note: | This command does not work for a memory cache. |
# fs setcachesize <size in 1K byte blocks (0 => reset)>
where
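For example, the following command (the value is illustrative only) resets the disk cache to use 30,000 kilobyte blocks until the next reboot:
# fs setcachesize 30000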
% su root
Password: root_password
# fs setcachesize 0
# fs setcachesize -reset
where
When the cache is full and application programs request more data from AFS, the Cache Manager must flush out cache chunks to make room for the data. The Cache Manager considers two factors:
The Cache Manager first checks the least-recently used chunk. If it is not dirty, the Cache Manager discards the data in that chunk. If the chunk is dirty, the Cache Manager moves on to check the next least recently used chunk. It continues in this manner until it has created a sufficient number of empty chunks.
Chunks that contain data fetched from a read-only volume are by definition never dirty, so the Cache Manager can always discard them. Normally, the Cache Manager can also find chunks of data fetched from read/write volumes that are not dirty, but a small cache makes it difficult to find enough eligible data. If the Cache Manager cannot find any data to discard, it must return I/O errors to application programs that request more data from AFS. Application programs usually have a means for notifying the user of such errors, but not for revealing their cause.
There are only three cache configuration parameters you must set: the mount directory for AFS, the location of the disk cache directory, and the cache size. They correspond to the three fields in the /usr/vice/etc/cacheinfo file, as discussed in Determining the Cache Type, Size, and Location. However, if you want to experiment with fine-tuning cache performance, you can use the arguments on the afsd command to control several other parameters. This section discusses a few of these parameters that have the most direct effect on cache performance. To learn more about the afsd command's arguments, see its reference page in the IBM AFS Administration Reference.
In addition, the AFS initialization script included in the AFS distribution for each system type includes several variables that set several afsd arguments in a way that is suitable for client machines of different sizes and usage patterns. For instructions on using the script most effectively, see the section on configuring the Cache Manager in the IBM AFS Quick Beginnings.
The cache configuration parameters with the most direct effect on cache performance include the following:
This parameter does not have as much of an effect on cache performance as total size. However, adjusting it can influence how often the Cache Manager must discard cached data to make room for new data. Suppose, for example, that you set the disk cache size to 50 MB and the number of chunks (Vn files) to 1,000. If each of the ten users on the machine caches 100 AFS files that average 20 KB in size, then all 1,000 chunks are full (a chunk can contain data from only one AFS file) but the cache holds only about 20 MB of data. When a user requests more data from the File Server, the Cache Manager must discard cached data to reclaim some chunks, even though the cache is filled to less than 50% of its capacity. In such a situation, increasing the number of chunks enables the Cache Manager to discard data less often.
The main reason to change chunk size is because of its relation to the amount of data fetched per RPC. If your network links are very fast, it can improve performance to increase chunk size; if the network is especially slow, it can make sense to decrease chunk size.
For a disk cache, dcache entries reside in the /usr/vice/cache/CacheItems file; a small number are duplicated in machine memory to speed access.
For a memory cache, the number of dcache entries equals the number of cache chunks. For a discussion of the implications of this correspondence, see Controlling Memory Cache Configuration.
For a description of how the Cache Manager determines defaults for number of chunks, chunk size, and number of dcache entries in a disk cache, see Configuring a Disk Cache; for a memory cache, see Controlling Memory Cache Configuration. The instructions also explain how to use the afsd command's arguments to override the defaults.
The default number of cache chunks (Vn files) in a disk cache is calculated by the afsd command to be the greatest of the following:
You can override this value by specifying a positive integer with the -files argument. Consider increasing this value if more than 75% of the Vn files are already used soon after the Cache Manager finishes initializing. Consider decreasing it if only a small percentage of the chunks are used at that point. In any case, never specify a value less than 100, because a smaller value can cause performance problems.
The following example sets the number of Vn files to 2,000:
/usr/vice/etc/afsd -files 2000
Note: | It is conventional to place the afsd command in a machine's AFS initialization file, rather than entering it in a command shell. Furthermore, the values specified in this section are examples only, and are not necessarily suitable for a specific machine. |
The default chunk size for a disk cache is 64 KB. In general, the only reason to change it is to adjust to exceptionally slow or fast networks; see Setting Cache Configuration Parameters. You can use the -chunksize argument to override the default. Chunk size must be a power of 2, so provide an integer between 0 (zero) and 30 to be used as an exponent of 2. For example, a value of 10 sets chunk size to 1 KB (2^10 = 1024 bytes); a value of 16 equals the default for disk caches (2^16 bytes = 64 KB). Specifying a value of 0 (zero) or greater than 30 returns chunk size to the default. Values less than 10 (1 KB) are not recommended. The following example sets chunk size to 16 KB (2^14 bytes):
/usr/vice/etc/afsd -chunksize 14
For a disk cache, the default number of dcache entries duplicated in memory is one-half the number of chunks specified with the -files argument, to a maximum of 2,000 entries. You can use the -dcache argument to change the default, even exceeding 2,000 if you wish. Duplicating more than half the dcache entries in memory is not usually necessary, but sometimes improves performance slightly, because access to memory is faster than access to disk. The following example sets the number to 750:
/usr/vice/etc/afsd -dcache 750
When configuring a disk cache, you can combine the afsd command's arguments in any way. The main reason for this flexibility is that the setting you specify for disk cache size (in the cacheinfo file or with the -blocks argument) is an absolute maximum limit. You cannot override it by specifying higher values for the -files or -chunksize arguments, alone or in combination. A related reason is that the Cache Manager does not have to reserve a set amount of disk space in advance. Vn files (the chunks in a disk cache) are initially zero-length, but can expand up to the specified chunk size and shrink again, as needed. If you set the number of Vn files to such a large value that expanding all of them to the full allowable size exceeds the total cache size, they simply never grow to full size.
Configuring a memory cache differs from configuring a disk cache in that not all combinations of the afsd command's arguments are allowed. This limitation results from the greater interaction between the configuration parameters in a memory cache than a disk cache. If all combinations are allowed, it is possible to set the parameters in an inconsistent way. A list of the acceptable and unacceptable combinations follows a discussion of default values.
The default chunk size for a memory cache is 8 KB. In general, the only reason to change it is to adjust to exceptionally slow or fast networks; see Setting Cache Configuration Parameters.
There is no predefined default for number of chunks in a memory cache. The Cache Manager instead calculates the correct number by dividing the total cache size by the chunk size. Recall that for a memory cache, all dcache entries must be in memory. This implies that the number of chunks equals the number of dcache entries in memory, and that there is no default for number of dcache entries (like the number of chunks, it is calculated by dividing the total size by the chunk size).
The following are acceptable combinations of the afsd command's arguments when configuring a memory cache:
/usr/vice/etc/afsd -memcache -blocks 5120
/usr/vice/etc/afsd -memcache -chunksize 12
/usr/vice/etc/afsd -memcache -blocks 6144 -chunksize 12
The following arguments or combinations explicitly set the number of chunks and dcache entries. It is best not to use them, because they set the cache size indirectly, forcing you to perform a hand calculation to determine the size of the cache. Instead, set the -blocks and -chunksize arguments alone or in combination; in those cases, the Cache Manager determines the number of chunks and dcache entries itself. Because the following combinations are not recommended, no examples are included.
Do not use the following arguments for a memory cache:
For the users of an AFS client machine to access a cell's AFS filespace and other services, the Cache Manager and other client-side agents must have an accurate list of the cell's database server machines. The affected functions include the following:
To enable a machine's users to access a cell, you must list the names and IP addresses of its database server machines in the /usr/vice/etc/CellServDB file on the machine's local disk. In addition to the machine's home cell, you can list any foreign cells that you want to enable users to access. (To enable access to a cell's filespace, you must also mount its root.cell volume in the local AFS filespace; the conventional location is just under the AFS root directory, /afs. For instructions, see the IBM AFS Quick Beginnings.)
As the afsd program runs and initializes the Cache Manager, it reads the contents of the CellServDB file into kernel memory. The Cache Manager does not consult the file again until the machine next reboots. In contrast, the command interpreters for the AFS command suites (such as fs and pts) read the CellServDB file each time they need to contact a database server process.
When a cell's list of database server machines changes, you must change both the CellServDB file and the list in kernel memory to preserve consistent client performance; some commands probably fail if the two lists of machines disagree. One possible method for updating both the CellServDB file and kernel memory is to edit the file and reboot the machine. To avoid needing to reboot, you can instead perform both of the following steps:
The consequences of missing or incorrect information in the CellServDB file or kernel memory are as follows:
When editing the /usr/vice/etc/CellServDB file, you must use the correct format for cell and machine entries. Each cell has a separate entry. The first line has the following format:
>cell_name #organization
where cell_name is the cell's complete Internet domain name (for example, abc.com) and organization is an optional field that follows any number of spaces and the number sign (#) and can name the organization to which the cell corresponds (for example, the ABC Corporation). After the first line comes a separate line for each database server machine. Each line has the following format:
IP_address #machine_name
where IP_address is the machine's IP address in dotted decimal format (for example, 192.12.105.3). Following any number of spaces and the number sign (#) is machine_name, the machine's fully-qualified hostname (for example, db1.abc.com). In this case, the number sign does not indicate a comment: machine_name is a required field.
The order in which the cells appear is not important, but it is convenient to put the client machine's home cell first. Do not include any blank lines in the file, not even after the last entry.
The following example shows entries for two cells, each of which has three database server machines:
>abc.com        #ABC Corporation (home cell)
192.12.105.3    #db1.abc.com
192.12.105.4    #db2.abc.com
192.12.105.55   #db3.abc.com
>stateu.edu     #State University cell
138.255.68.93   #serverA.stateu.edu
138.255.68.72   #serverB.stateu.edu
138.255.33.154  #serverC.stateu.edu
Because a correct entry in the CellServDB file is vital for consistent client performance, you must also update the file on each client machine whenever a cell's list of database server machines changes (for instance, when you follow the instructions in the IBM AFS Quick Beginnings to add or remove a database server machine). To facilitate the client updates, you can use the package program, which copies files from a central source in AFS to the local disk of client machines. It is conventional to invoke the package program in a client machine's AFS initialization file so that it runs as the machine reboots, but you can also issue the package command at any time. For instructions, see Running the package program.
If you use the package program, the conventional location for your cell's central source CellServDB file is /afs/cell_name/common/etc/CellServDB, where cell_name is your cell name.
Creating a symbolic or hard link from /usr/vice/etc/CellServDB to a central source file in AFS is not a viable option. The afsd program reads the file into kernel memory before the Cache Manager is completely initialized and able to access AFS.
Because every client machine has its own copy of the CellServDB file, you can in theory make the set of accessible cells differ on various machines. In most cases, however, it is best to maintain consistency between the files on all client machines in the cell: differences between machines are particularly confusing if users commonly use a variety of machines rather than just one.
The AFS Product Support group maintains a central CellServDB file that includes all cells that have agreed to make their database server machines accessible to other AFS cells. It is advisable to check this file periodically for updated information. See Making Your Cell Visible to Others.
An entry in the local CellServDB is one of the two requirements for accessing a cell. The other is that the cell's root.cell volume is mounted in the local filespace, by convention as a subdirectory of the /afs directory. For instructions, see To create a cellular mount point.
Note: | The /usr/vice/etc/CellServDB file on a client machine is not the same as the /usr/afs/etc/CellServDB file on the local disk of a file server machine. The server version lists only the database server machines in the server machine's home cell, because server processes never need to contact foreign cells. It is important to update both types of CellServDB file on all machines in the cell whenever there is a change to your cell's database server machines. For more information about maintaining the server version of the CellServDB file, see Maintaining the Server CellServDB File. |
% cat /usr/vice/etc/CellServDB
% fs listcells [&]
where listc is the shortest acceptable abbreviation of listcells.
To have your shell prompt return immediately, include the ampersand (&), which makes the command run in the background. It can take a while to generate the complete output because the kernel stores database server machines' IP addresses only, and the fs command interpreter has the cell's name resolution service (such as the Domain Name Service or a local host table) translate them into hostnames. You can halt the command at any time by issuing an interrupt signal such as Ctrl-c.
The output includes a single line for each cell, in the following format:
Cell cell_name on hosts list_of_hostnames.
The name service sometimes returns hostnames in uppercase letters, and if it cannot resolve a name at all, it returns its IP address. The following example illustrates all three possibilities:
% fs listcells
   .
   .
Cell abc.com on hosts db1.abc.com db2.abc.com db3.abc.com
Cell stateu.edu on hosts SERVERA.STATEU.EDU SERVERB.STATEU.EDU SERVERC.STATEU.EDU
Cell ghi.org on hosts 191.255.64.111 191.255.64.112
   .
   .
% su root
Password: root_password
# fs listacl [<dir/file path>]
Note: | You cannot use this command to remove a cell's entry completely from kernel memory. In the rare cases when you urgently need to prevent access to a specific cell, you must edit the CellServDB file and reboot the machine. |
# fs newcell <cell name> <primary servers>+  \
      [-linkedcell <linked cell name>]
where
# /etc/package -v -c <name of package file>
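For example, the following fs newcell command (using the illustrative stateu.edu addresses from the sample CellServDB file shown earlier) loads that cell's list of database server machines into kernel memory without a reboot:
# fs newcell stateu.edu 138.255.68.93 138.255.68.72 138.255.33.154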
A setuid program is one whose binary file has the UNIX setuid mode bit turned on. While a setuid program runs, the user who initialized it assumes the local identity (UNIX UID) of the binary file's owner, and so is granted the permissions in the local file system that pertain to the owner. Most commonly, the issuer's assumed identity (often referred to as effective UID) is the local superuser root.
AFS does not recognize effective UID: if a setuid program accesses AFS files and directories, it uses the current AFS identity of the user who initialized the program, not of the program's owner. Nevertheless, it can be useful to store setuid programs in AFS for use on more than one client machine. AFS enables a client machine's administrator to determine whether the local Cache Manager allows setuid programs to run or not.
By default, the Cache Manager allows programs from its home cell to run with setuid permission, but denies setuid permission to programs from foreign cells. A program belongs to the same cell as the file server machine that houses the volume in which the file resides, as specified in the file server machine's /usr/afs/etc/ThisCell file. The Cache Manager determines its own home cell by reading the /usr/vice/etc/ThisCell file at initialization.
To change a cell's setuid status with respect to the local machine, become the local superuser root and issue the fs setcell command. To determine a cell's current setuid status, use the fs getcellstatus command.
When you issue the fs setcell command, you directly alter a cell's setuid status as recorded in kernel memory, so rebooting the machine is not necessary. However, nondefault settings do not persist across reboots of the machine unless you add the appropriate fs setcell command to the machine's AFS initialization file.
Only members of the system:administrators group can turn on the setuid mode bit on an AFS file or directory. When the setuid mode bit is turned on, the UNIX ls -l command displays the third user mode bit as an s instead of an x, but for an AFS file or directory, the s appears only if setuid permission is enabled for the cell in which the file resides.
% fs getcellstatus <cell name>
where
The output reports the setuid status of each cell:
% su root
Password: root_password
# fs setcell <cell name>+ [-suid] [-nosuid]
where
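For example, the following command (the cell name is taken from the illustrative entries used earlier in this chapter) denies setuid permission to programs from the stateu.edu cell:
# fs setcell stateu.edu -nosuid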
The Cache Manager periodically sends a probe to server machines to verify that they are still accessible. Specifically, it probes the database server machines in its cell and those file servers that house data it has cached.
If a server process does not respond to a probe, the client machine assumes that it is inaccessible. By default, the interval between probes is three minutes, so it can take up to three minutes for a client to recognize that a server process is once again accessible after it was inaccessible.
To adjust the probe interval, include the -interval argument to the fs checkservers command while logged in as the local superuser root. The new interval setting persists until you again issue the command or reboot the machine, at which time the setting returns to the default. To preserve a nondefault setting across reboots, include the appropriate fs checkservers command in the machine's AFS initialization file.
% su root
Password: root_password
# fs checkservers -interval <seconds between probes>
where
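For example, the following command (the value is illustrative only) lengthens the probe interval to ten minutes:
# fs checkservers -interval 600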
Each client machine belongs to a particular cell, as named in the /usr/vice/etc/ThisCell file on its local disk. The machine's cell membership determines three defaults important to users of the machine:
% cat /usr/vice/etc/ThisCell
% su root
Password: root_password
# sync
# shutdown
AFS's callback mechanism normally guarantees that the Cache Manager provides the most current version of a file or directory to the application programs running on its machine. However, you can force the Cache Manager to discard (flush) cached data so that the next time an application program requests it, the Cache Manager fetches the latest version available at the File Server.
You can control how many file system elements to flush at a time:
In addition to callbacks, the Cache Manager has a mechanism for tracking other kinds of possible changes, such as changes in a volume's location. If a volume moves and the Cache Manager has not accessed any data in it for a long time, the Cache Manager's volume location record can be wrong. To resynchronize it, use the fs checkvolumes command. When you issue the command, the Cache Manager creates a new table of mappings between volume names, ID numbers, and locations. This forces the Cache Manager to reference newly relocated and renamed volumes before it can provide data from them.
It is also possible for information about mount points to become corrupted in the cache. Symptoms of a corrupted mount point include garbled output from the fs lsmount command, and failed attempts to change directory to or list the contents of a mount point. Use the fs flushmount command to discard a corrupted mount point. The Cache Manager must refetch the mount point the next time it crosses it in a pathname. (The Cache Manager periodically refreshes cached mount points, but the only other way to discard them immediately is to reinitialize the Cache Manager by rebooting the machine.)
% fs flush [<dir/file path>+]
where
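For example, the following command (the pathname is hypothetical) flushes a single cached file:
% fs flush /afs/abc.com/usr/pat/notes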
% fs flushvolume [<dir/file path>+]
where
% fs checkvolumes
where checkv is the shortest acceptable abbreviation of checkvolumes.
The following message confirms that the command completed successfully:
All volumeID/name mappings checked.
% fs flushmount [<dir/file path>+]
where
As mentioned in the introduction to this chapter, AFS uses client-side data caching and callbacks to reduce the amount of network traffic in your cell. The Cache Manager also tries to make its use of the network as efficient as possible by assigning preference ranks to server machines based on their network proximity to the local machine. The ranks bias the Cache Manager to fetch information from the server machines that are on its own subnetwork or network rather than on other networks, if possible. Reducing the network distance that data travels between client and server machine tends to reduce network traffic and speed the Cache Manager's delivery of data to applications.
The Cache Manager stores two separate sets of preference ranks in kernel memory. The first set of ranks applies to machines that run the Volume Location (VL) Server process, hereafter referred to as VL Server machines. The second set of ranks applies to machines that run the File Server process, hereafter referred to as file server machines. This section explains how the Cache Manager sets default ranks, how to use the fs setserverprefs command to change the defaults or set new ranks, and how to use the fs getserverprefs command to display the current set of ranks.
As the afsd program initializes the Cache Manager, it assigns a preference rank of 10,000 to each of the VL Server machines listed in the local /usr/vice/etc/CellServDB file. It then randomizes the ranks by adding an integer randomly chosen from the range 0 (zero) to 126. It avoids assigning the same rank to machines in one cell, but it is possible for machines from different cells to have the same rank. This does not present a problem in use, because the Cache Manager compares the ranks of only one cell's database server machines at a time. Although AFS supports the use of multihomed database server machines, the Cache Manager only uses the single address listed for each database server machine in the local /usr/vice/etc/CellServDB file. Only Ubik can take advantage of a multihomed database server machine's multiple interfaces.
The Cache Manager assigns preference ranks to a file server machine when it obtains the server's VLDB record from the VL Server, the first time that it accesses a volume that resides on the machine. If the machine is multihomed, the Cache Manager assigns a distinct rank to each of its interfaces (up to the number of interfaces that the VLDB can store for each machine, which is specified in the IBM AFS Release Notes). The Cache Manager compares the interface's IP address to the local machine's address and applies the following algorithm:
If the client machine has only one interface, the Cache Manager compares it to the server interface's IP address and sets a rank according to the algorithm. If the client machine is multihomed, the Cache Manager compares each of the local interface addresses to the server interface, and assigns to the server interface the lowest rank that results from comparing it to all of the client interfaces.
After assigning a base rank to a file server machine interface, the Cache Manager adds to it a number randomly chosen from the range 0 (zero) to 15. As an example, a file server machine interface in the same subnetwork as the local machine receives a base rank of 20,000, but the Cache Manager records the actual rank as an integer between 20,000 and 20,015. This process reduces the number of interfaces that have exactly the same rank. As with VL Server machine ranks, it is possible for file server machine interfaces from foreign cells to have the same rank as interfaces in the local cell, but this does not present a problem. Only the relative ranks of the interfaces that house a specific volume are relevant, and AFS supports storage of a volume in only one cell at a time.
Each preference rank pairs an interface's IP address with an integer that can range from 1 to 65,534. A lower rank (lower number) indicates a stronger preference. Once set, a rank persists until the machine reboots, or until you use the fs setserverprefs command to change it.
The Cache Manager uses VL Server machine ranks when it needs to fetch volume location information from a cell. It compares the ranks for the cell's VL Server machines and attempts to contact the VL Server process on the machine with the best (lowest integer) rank. If it cannot reach that VL Server, it tries to contact the VL Server with the next best rank, and so on. If all of a cell's VL Server machines are inaccessible, the Cache Manager cannot fetch data from the cell.
Similarly, when the Cache Manager needs to fetch data from a volume, it compares the ranks for the interfaces of machines that house the volume, and attempts to contact the interface that has the best rank. If it cannot reach the fileserver process via that interface, it tries to contact the interface with the next best integer rank, and so on. If it cannot reach any of the interfaces for machines that house the volume, it cannot fetch data from the volume.
To display the file server machine ranks that the Cache Manager is using, use the fs getserverprefs command. Include the -vlservers flag to display VL Server machine ranks instead. By default, the output appears on the standard output stream (stdout), but you can write it to a file instead by including the -file argument.
The Cache Manager stores IP addresses rather than hostnames in its kernel list of ranks, but by default the output identifies interfaces by hostname after calling a translation routine that refers to either the cell's name service (such as the Domain Name Server) or the local host table. If an IP address appears in this case, it is because the translation attempt failed. To bypass the translation step and display IP addresses rather than hostnames, include the -numeric flag. This can significantly speed up the output.
You can use the fs setserverprefs command to reset an existing preference rank, or to set the initial rank of a file server machine interface or VL Server machine for which the Cache Manager has no rank. The ranks you set persist until the machine reboots or until you issue the fs setserverprefs command again. To make a rank persist across a reboot, place the appropriate fs setserverprefs command in the machine's AFS initialization file.
As with default ranks, the Cache Manager adds a randomly chosen integer to each rank that you assign. For file server machine interfaces, the randomizing number is from the range 0 (zero) to 15; for VL Server machines, it is from the range 0 (zero) to 126. For example, if you assign a rank of 15,000 to a file server machine interface, the Cache Manager stores an integer between 15,000 and 15,015.
To assign VL Server machine ranks, list them after the -vlserver argument to the fs setserverprefs command.
To assign file server machine ranks, use one or more of the three possible methods:
You can combine any of the -servers, -file, and -stdin options on the same command line if you wish. If more than one of them specifies a rank for the same interface, the one assigned with the -servers argument takes precedence. You can also provide the -vlservers argument on the same command line to set VL Server machine ranks at the same time as file server machine ranks.
The fs command interpreter does not verify hostnames or IP addresses, and so willingly stores ranks for hostnames and addresses that do not actually exist. The Cache Manager never uses such ranks unless the VLDB records the same incorrect information for a server machine.
% fs getserverprefs [-file <output to named file>] [-numeric] [-vlservers]
where
The following example displays file server machine ranks. The -numeric flag is not used, so the appearance of an IP address indicates that it is not currently possible to translate it to a hostname.
% fs gp
fs5.abc.com           20000
fs1.abc.com           30014
server1.stateu.edu    40011
fs3.abc.com           20001
fs4.abc.com           30001
192.12.106.120        40002
192.12.106.119        40001
       .                  .
       .                  .
       .                  .
% su root
Password: root_password
# fs setserverprefs [-servers <fileserver names and ranks>+]  \
                    [-vlservers <VL server names and ranks>+]  \
                    [-file <input from named file>] [-stdin]
where
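For example, the following command (the machine names and rank values are illustrative only) assigns base ranks to two file server machines and one VL Server machine in the abc.com cell:
# fs setserverprefs -servers fs3.abc.com 25000 fs4.abc.com 25001  \
     -vlservers db1.abc.com 10000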
The File Server can choose the interface to which to send a message when it initiates communication with the Cache Manager on a multihomed client machine (one with more than one network interface and IP address). If that interface is inaccessible, it automatically switches to an alternate. This improves AFS performance, because it means that the outage of an interface does not interrupt communication between File Server and Cache Manager.
The File Server can choose the client interface when it sends two types of messages:
(The File Server does not choose which client interface to respond to when filling a Cache Manager's request for AFS data. In that case, it always responds to the client interface via which the Cache Manager sent the request.)
The Cache Manager compiles the list of eligible interfaces on its client machine automatically as it initializes, and records them in kernel memory. When the Cache Manager first establishes a connection with the File Server, it sends along the list of interface addresses. The File Server records the addresses, and uses the one at the top of the list when it needs to break a callback or send a ping to the Cache Manager. If that interface is inaccessible, the File Server simultaneously sends a message to all of the other interfaces in the list. Whichever interface replies first is the one to which the File Server sends future messages.
You can control which addresses the Cache Manager registers with File Servers by listing them in two files in the /usr/vice/etc directory on the client machine's local disk: NetInfo and NetRestrict. If the NetInfo file exists when the Cache Manager initializes, the Cache Manager uses its contents as the basis for the list of interfaces. Otherwise, the Cache Manager uses the list of interfaces configured with the operating system. It then removes from the list any addresses that appear in the /usr/vice/etc/NetRestrict file, if it exists. The Cache Manager records the resulting list in kernel memory.
You can also use the fs setclientaddrs command to change the list of addresses stored in the Cache Manager's kernel memory, without rebooting the client machine. The list of addresses you provide on the command line completely replaces the current list in kernel memory. The changes you make persist only until the client machine reboots, however. To preserve the revised list across reboots, list the interfaces in the NetInfo file (and if appropriate, the NetRestrict file) in the local /usr/vice/etc directory. (You can also place the appropriate fs setclientaddrs command in the machine's AFS initialization script, but that is less efficient: by the time the Cache Manager reads the command in the script, it has already compiled a list of interfaces.)
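As an illustration (the addresses are hypothetical, and this sketch assumes the usual format of one IP address per line in dotted decimal notation), a NetInfo file that bases the registered list on two of a machine's interfaces can look like the following:
192.12.105.100
192.12.107.100
A NetRestrict file that excludes a third interface from the list can contain a single line:
10.0.0.1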
To display the list of addresses that the Cache Manager is currently registering with File Servers, use the fs getclientaddrs command.
Keep the following in mind when you change the NetInfo or NetRestrict file, or issue the fs getclientaddrs or fs setclientaddrs commands:
% su root
Password: root_password
% su root
Password: root_password
% fs getclientaddrs
where gc is an acceptable alias for getclientaddrs (getcl is the shortest acceptable abbreviation).
The output lists each IP address on its own line, in dotted decimal format.
% su root
Password: root_password
# fs setclientaddrs [-address <client network interfaces>+]
where
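For example, the following command (the addresses are illustrative only) replaces the list in kernel memory with two interface addresses:
# fs setclientaddrs -address 192.12.105.100 192.12.107.100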
By default, the Cache Manager generates two types of warning and informational messages:
You can use the fs messages command to control whether the Cache Manager displays either type of message, both types, or neither. It is best not to disable messages completely, because they provide useful information.
If you want to monitor Cache Manager status and performance more actively, you can use the afsmonitor program to collect an extensive set of statistics (it also gathers File Server statistics). If you experience performance problems, you can use the fstrace suite of commands to gather a low-level trace of Cache Manager operations, which the AFS Support and Development groups can analyze to help solve your problem. To learn about both utilities, see Monitoring and Auditing AFS Performance.
% su root
Password: root_password
# fs messages -show <user|console|all|none>
where
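For example, the following command directs Cache Manager messages to the console only:
# fs messages -show console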
The Cache Manager stores the system type name of the local client machine in kernel memory. It reads in the default value from a hardcoded definition in the AFS client software.
The Cache Manager uses the system name as a substitute for the @sys variable in AFS pathnames. The variable is useful when creating a symbolic link from the local disk to an AFS directory that houses binaries for the client machine's system type. Because the @sys variable automatically steers the Cache Manager to the appropriate directory, you can create the same symbolic link on client machines of different system types. (You can even automate the creation operation by using the package utility described in Configuring Client Machines with the package Program.) The link also remains valid when you upgrade the machine to a new system type.
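As a brief illustration (the AFS pathname and link name are hypothetical), the following command creates such a link; on a machine whose system name is sun4x_56, the Cache Manager resolves the link target to /afs/abc.com/sun4x_56/usr/afsws:
# ln -s /afs/abc.com/@sys/usr/afsws /usr/afsws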
Configuration is simplest if you use the system type names that AFS assigns. For a list, see the IBM AFS Release Notes.
To display the system name stored in kernel memory, use the sys or fs sysname command. To change the name, add the latter command's -newsys argument.
% fs sysname
% sys
The output of the fs sysname command has the following format:
Current sysname is 'system_name'
The sys command displays the system_name string with no other text.
% su root
Password: root_password
# fs sysname <new sysname>
where
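For example, the following command (the name shown is just one of the AFS-assigned system type names) sets the machine's system name in kernel memory:
# fs sysname sun4x_56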
By default, the Cache Manager writes all data to the File Server immediately and synchronously when an application program closes a file. That is, the close system call does not return until the Cache Manager has actually written all of the cached data from the file back to the File Server. You can enable the Cache Manager to write files asynchronously by specifying the number of kilobytes of a file that can remain to be written to the File Server when the Cache Manager returns control to the application.
Enabling asynchronous writes can be helpful to users who commonly work with very large files, because it usually means that the application appears to perform faster. However, it introduces some complications. It is best not to enable asynchronous writes unless the machine's users are sophisticated enough to understand the potential problems and how to avoid them. The complications include the following:
No space left on device
To avoid losing data because of insufficient quota, before closing a file users must verify that the volume housing the file has enough free space to accommodate it.
When you enable asynchronous writes by issuing the fs storebehind command, you set the number of kilobytes of a file that can still remain to be written to the File Server when the Cache Manager returns control to the application program. You can apply the setting either to all files manipulated by applications running on the machine, or only to certain files:
% su root
Password: root_password
# fs storebehind -allfiles <new default (KB)> [-verbose]
where
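For example, the following command (the value is illustrative only) allows up to 16 KB of any file to remain unwritten when the Cache Manager returns control to the application, and reports the resulting setting:
# fs storebehind -allfiles 16 -verbose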
% fs listacl [<dir/file path>]
Alternatively, become the local superuser root on the client machine, if you are not already, by issuing the su command.
% su root
Password: root_password
# fs storebehind -kbytes <asynchrony for specified names>  \
                 -files <specific pathnames>+  \
                 [-verbose]
where
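For example, the following command (the pathname and value are hypothetical) allows up to 50,000 KB of the named file to remain unwritten when control returns to the application:
# fs storebehind -kbytes 50000 -files /afs/abc.com/usr/pat/bigfile -verbose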
% fs storebehind [-verbose]
where
% fs storebehind -files <specific pathnames>+
where
The output lists each file separately. If a value has previously been set for the specified files, the output reports the following:
Will store up to y kbytes of file asynchronously.
Default store asynchrony is x kbytes.
If the default store asynchrony applies to a file (because you have not set a -kbytes value for it), the output reports the following:
Will store file according to default.
Default store asynchrony is x kbytes.