Administering Server Machines

This chapter describes how to administer an AFS server machine. It describes the following configuration information and administrative tasks:

The binary and configuration files that must reside in the subdirectories of the /usr/afs directory on every server machine's local disk; see Local Disk Files on a Server Machine.

The various roles or functions that an AFS server machine can perform, and how to determine which machines are taking a role; see The Four Roles for File Server Machines.

How to maintain database server machines; see Administering Database Server Machines.

How to maintain the list of database server machines in the /usr/afs/etc/CellServDB file; see Maintaining the Server CellServDB File.

How to control authorization checking on a server machine; see Managing Authentication and Authorization Requirements.

How to install new disks or partitions on a file server machine; see Adding or Removing Disks and Partitions.

How to change a server machine's IP addresses and manage VLDB server entries; see Managing Server IP Addresses and VLDB Server Entries.

How to reboot a file server machine; see Rebooting a Server Machine.

To learn how to install and configure a new server machine, see the OpenAFS Quick Beginnings. To learn how to administer the server processes themselves, see Monitoring and Controlling Server Processes. To learn how to administer volumes, see Managing Volumes.
Summary of Instructions

This chapter explains how to perform the following tasks by using the indicated commands:

Install new binaries: bos install
Examine binary check-and-restart time: bos getrestart
Set binary check-and-restart time: bos setrestart
Examine compilation dates on binary files: bos getdate
Restart a process to use new binaries: bos restart
Revert to old version of binaries: bos uninstall
Remove obsolete .BAK and .OLD versions: bos prune
List partitions on a file server machine: vos listpart
Shut down AFS server processes: bos shutdown
List volumes on a partition: vos listvol
Move read/write volumes: vos move
List a cell's database server machines: bos listhosts
Add a database server machine to the server CellServDB file: bos addhost
Remove a database server machine from the server CellServDB file: bos removehost
Set authorization checking requirements: bos setauth
Prevent authentication for bos, pts, and vos commands: include the -noauth flag
Prevent authentication for kas commands: include the -noauth flag on some commands, or issue the noauthentication command while in interactive mode
Display all VLDB server entries: vos listaddrs
Remove a VLDB server entry: vos changeaddr
Reboot a server machine remotely: bos exec reboot_command

Local Disk Files on a Server Machine

Several types of files must reside in the subdirectories of the /usr/afs directory on an AFS server machine's local disk. They include binaries, configuration files, the administrative database files (on database server machines), log files, and volume header files.

Note for Windows users: Some files described in this document may not exist on machines that run a Windows operating system. Also, Windows uses a backslash (\) rather than a forward slash (/) to separate the elements in a pathname.
Binaries in the /usr/afs/bin Directory

The /usr/afs/bin directory stores the AFS server process and command suite binaries appropriate for the machine's system (CPU and operating system) type. If a process has both a server portion and a client portion (as with the Update Server) or if it has separate components (as with the fs process), each component resides in a separate file.

To ensure predictable system performance, all file server machines must run the same AFS build version of a given process. To maintain consistency easily, use the Update Server process to distribute binaries from a binary distribution machine of each system type, as described further in Binary Distribution Machines.

It is best to keep the binaries for all processes in the /usr/afs/bin directory, even if you do not run the process actively on the machine. It simplifies the process of reconfiguring machines (for example, adding database server functionality to an existing file server machine). Similarly, it is best to keep the command suite binaries in the directory, even if you do not often issue commands while working on the server machine. It enables you to issue commands during recovery from server and machine outages.

The following list describes the binary files in the /usr/afs/bin directory that are directly related to the AFS server processes or command suites. Other binaries (for example, for the klog command) sometimes appear in this directory on a particular file server machine's disk or in an AFS distribution.

backup: The command suite for the AFS Backup System (the binary for the Backup Server is buserver).
bos: The command suite for communicating with the Basic OverSeer (BOS) Server (the binary for the BOS Server is bosserver).

bosserver: The binary for the Basic OverSeer (BOS) Server process.

buserver: The binary for the Backup Server process.

fileserver: The binary for the File Server component of the fs process.

kas: The command suite for communicating with the Authentication Server (the binary for the Authentication Server is kaserver).

kaserver: The binary for the Authentication Server process.

ntpd: The binary for the Network Time Protocol Daemon (NTPD). AFS redistributes this binary and uses the runntp program to configure and initialize the NTPD process.

ntpdc: A debugging utility furnished with the ntpd program.

pts: The command suite for communicating with the Protection Server process (the binary for the Protection Server is ptserver).
ptserver: The binary for the Protection Server process.

runntp: The binary for the program used to configure NTPD most appropriately for use with AFS.

salvager: The binary for the Salvager component of the fs process.

udebug: The binary for a program that reports the status of AFS's distributed database technology, Ubik.

upclient: The binary for the client portion of the Update Server process.

upserver: The binary for the server portion of the Update Server process.

vlserver: The binary for the Volume Location (VL) Server process.

volserver: The binary for the Volume Server component of the fs process.

vos: The command suite for communicating with the Volume Server and VL Server processes (the binaries for the servers are volserver and vlserver, respectively).
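The build-consistency requirement described above is usually maintained with the bos binary-management commands summarized at the start of this chapter. The following sketch shows a typical refresh cycle. The machine name fs1.example.com is a hypothetical placeholder, and the run wrapper only echoes each command so the sequence can be read (and tested) without a live cell; drop the wrapper to execute for real.

```shell
#!/usr/bin/env bash
# Dry-run wrapper: prints each command instead of executing it.
# Remove the wrapper on a real server machine to run the commands.
run() { printf '+ %s\n' "$*"; }

server=fs1.example.com   # hypothetical file server machine

# Install a newly built binary into /usr/afs/bin on the server
run bos install "$server" ./fileserver -dir /usr/afs/bin

# Confirm the compilation date of the installed binary
run bos getdate "$server" fileserver -dir /usr/afs/bin

# Restart the fs process so it runs the new binary
run bos restart "$server" fs

# If the new binary misbehaves, revert to the .BAK version
run bos uninstall "$server" fileserver -dir /usr/afs/bin

# Once satisfied, remove obsolete .BAK and .OLD versions
run bos prune "$server" -bak -old
```

On a binary distribution machine, only the install step is needed; the upclientbin processes on its system-type peers pick up the new binary automatically.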
Common Configuration Files in the /usr/afs/etc Directory

The directory /usr/afs/etc on every file server machine's local disk contains configuration files in ASCII and machine-independent binary format. For predictable AFS performance throughout a cell, all server machines must have the same version of each configuration file:

Cells that run the United States edition of AFS conventionally use the Update Server to distribute a common version of each file from the cell's system control machine to the other server machines (for more on the system control machine, see The System Control Machine). Run the Update Server's server portion on the system control machine, and the client portion on all other server machines. Update the files on the system control machine only, except as directed by instructions for dealing with emergencies.

Cells that run the international edition of AFS must not use the Update Server to distribute the contents of the /usr/afs/etc directory. Due to United States government regulations, the data encryption routines that AFS uses to protect the files in this directory as they cross the network are not available to the Update Server in the international edition of AFS. You must instead update the files on each server machine individually, taking extra care to issue exactly the same bos command for each machine. The necessary data encryption routines are available to the bos commands, so information is safe as it crosses the network from the machine where the bos command is issued to the server machines.

Never directly edit any of the files in the /usr/afs/etc directory, except as directed by instructions for dealing with emergencies.
In normal circumstances, use the appropriate bos commands to change the files. The following list includes pointers to instructions. The files in this directory include:

CellServDB: An ASCII file that names the cell's database server machines, which run the Authentication, Backup, Protection, and VL Server processes. You create the initial version of this file by issuing the bos setcellname command while installing your cell's first server machine. It is very important to update this file when you change the identity of your cell's database server machines.

The server CellServDB file is not the same as the CellServDB file stored in the /usr/vice/etc directory on client machines. The client version lists the database server machines for every AFS cell that you choose to make accessible from the client machine. The server CellServDB file lists only the local cell's database server machines, because server processes never contact processes in other cells. For instructions on maintaining this file, see Maintaining the Server CellServDB File.

KeyFile: A machine-independent, binary-format file that lists the server encryption keys the AFS server processes use to encrypt and decrypt tickets. The information in this file is the basis for secure communication in the cell, and so is extremely sensitive. The file is specially protected so that only privileged users can read or change it. For instructions on maintaining this file, see Managing Server Encryption Keys.

ThisCell: An ASCII file that consists of a single line defining the complete Internet domain-style name of the cell (such as abc.com). You create this file with the bos setcellname command during the installation of your cell's first file server machine, as instructed in the OpenAFS Quick Beginnings.
Note that changing this file is only one step in changing your cell's name. For discussion, see Choosing a Cell Name.

UserList: An ASCII file that lists the usernames of the system administrators authorized to issue privileged bos, vos, and backup commands. For instructions on maintaining the file, see Administering the UserList File.

Local Configuration Files in the /usr/afs/local Directory

The directory /usr/afs/local contains configuration files that are different for each file server machine in a cell. Thus, they are not updated automatically from a central source in the way that the files in the /usr/afs/bin and /usr/afs/etc directories are. The most important file is the BosConfig file, which defines the server processes that are to run on that machine.

As with the common configuration files in /usr/afs/etc, you must not edit these files directly. Use commands from the bos command suite where appropriate; some files never need to be altered. The files in this directory include the following:

BosConfig: This file lists the server processes to run on the server machine, by defining which processes the BOS Server monitors and what it does if a process fails. It also defines the times at which the BOS Server automatically restarts processes for maintenance purposes. As you create server processes during a file server machine's installation, their entries are defined in this file automatically. The OpenAFS Quick Beginnings outlines the bos commands to use. For a more complete description of the file, and instructions for controlling process status by editing the file with commands from the bos suite, see Monitoring and Controlling Server Processes.
NetInfo: This optional ASCII file lists one or more of the network interface addresses on the server machine. If it exists when the File Server initializes, the File Server uses it as the basis for the list of interfaces that it registers in its Volume Location Database (VLDB) server entry. See Managing Server IP Addresses and VLDB Server Entries.

NetRestrict: This optional ASCII file lists one or more network interface addresses. If it exists when the File Server initializes, the File Server removes the specified addresses from the list of interfaces that it registers in its VLDB server entry. See Managing Server IP Addresses and VLDB Server Entries.

NoAuth: This zero-length file instructs all AFS server processes running on the machine not to perform authorization checking. Thus, they perform any action for any user, even the anonymous user. This very insecure state is useful only in rare instances, mainly during the installation of the machine. The file is created automatically when you start the initial bosserver process with the -noauth flag, or when you issue the bos setauth command to turn off authorization checking. When you use the bos setauth command to turn authorization checking back on, the BOS Server removes this file. For more information, see Managing Authentication and Authorization Requirements.

SALVAGE.fs: This zero-length file controls how the BOS Server handles a crash of the File Server component of the fs process. The BOS Server creates this file each time it starts or restarts the fs process. If the file is present when the File Server crashes, the BOS Server runs the Salvager before restarting the File Server and Volume Server. When the File Server exits normally, the BOS Server removes the file so that the Salvager does not run.
Do not create or remove this file yourself; the BOS Server does so automatically. If necessary, you can salvage a volume or partition by using the bos salvage command; see Salvaging Volumes.

salvage.lock: This file guarantees that only one Salvager process runs on a file server machine at a time (the single process can fork multiple subprocesses to salvage multiple partitions in parallel). As the Salvager initiates (when invoked by the BOS Server or by issuing the bos salvage command), it creates this zero-length file and issues the flock system call on it. It removes the file when it completes the salvage operation. Because the Salvager must lock the file in order to run, only one Salvager can run at a time.

sysid: This file records the network interface addresses that the File Server (fileserver process) registers in its VLDB server entry. When the Cache Manager requests volume location information, the Volume Location (VL) Server provides all of the interfaces registered for each server machine that houses the volume. This enables the Cache Manager to make use of multiple addresses when accessing AFS data stored on a multihomed file server machine. For further information, see Managing Server IP Addresses and VLDB Server Entries.

Replicated Database Files in the /usr/afs/db Directory

The directory /usr/afs/db contains two types of files pertaining to the four replicated databases in the cell--the Authentication Database, Backup Database, Protection Database, and Volume Location Database (VLDB):

A file that contains each database, with a .DB0 extension.
A log file for each database, with a .DBSYS1 extension. The database server process logs each database operation in this file before performing it. If the operation is interrupted, the process consults this file to learn how to finish it.

Each database server process (Authentication, Backup, Protection, or VL Server) maintains its own database and log files. The database files are in binary format, so you must always access or alter them using commands from the kas suite (for the Authentication Database), backup suite (for the Backup Database), pts suite (for the Protection Database), or vos suite (for the VLDB).

If a cell runs more than one database server machine, each database server process keeps its own copy of its database on its machine's hard disk. However, it is important that all the copies of a given database are the same. To synchronize them, the database server processes call on AFS's distributed database technology, Ubik, as described in Replicating the OpenAFS Administrative Databases.

The files listed here appear in this directory only on database server machines. On non-database server machines, this directory is empty.

bdb.DB0: The Backup Database file.
bdb.DBSYS1: The Backup Database log file.
kaserver.DB0: The Authentication Database file.
kaserver.DBSYS1: The Authentication Database log file.
prdb.DB0: The Protection Database file.
prdb.DBSYS1: The Protection Database log file.
vldb.DB0: The Volume Location Database file.
vldb.DBSYS1: The Volume Location Database log file.
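Because the database and log files come in fixed pairs, a quick local check can flag a database file whose companion log file is missing. The following is an illustrative sketch, not an AFS tool: the directory is a parameter so the function can be tried against a scratch directory, while on a real database server machine the argument would be /usr/afs/db.

```shell
#!/usr/bin/env bash
# Check that each replicated database file present in a directory has
# its companion log file (the bdb, kaserver, prdb, and vldb pairs
# listed above).
check_db_pairs() {
  local dir=$1 db missing=0
  for db in bdb kaserver prdb vldb; do
    if [ -e "$dir/$db.DB0" ] && [ ! -e "$dir/$db.DBSYS1" ]; then
      echo "$db.DB0 present but $db.DBSYS1 missing"
      missing=1
    fi
  done
  return $missing
}

# Demo against a scratch directory with one deliberately missing log file
scratch=$(mktemp -d)
touch "$scratch/prdb.DB0" "$scratch/prdb.DBSYS1" "$scratch/vldb.DB0"
check_db_pairs "$scratch" || true   # reports the missing vldb.DBSYS1
rm -r "$scratch"
```

A missing log file is not necessarily an error (the process re-creates it), but an unexpected result here is a reasonable cue to investigate with udebug or the process's log file.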
Log Files in the /usr/afs/logs Directory

The /usr/afs/logs directory contains log files from various server processes. The files detail interesting events that occur during normal operations. For instance, the Volume Server can record volume moves in the VolserLog file. Events are recorded at completion, so the server processes do not use these files to reconstruct failed operations, unlike the log files in the /usr/afs/db directory.

The information in log files can be very useful as you evaluate process failures and other problems. For instance, if you receive a timeout message when you try to access a volume, checking the FileLog file possibly provides an explanation, showing that the File Server was unable to attach the volume. To examine a log file remotely, use the bos getlog command as described in Displaying Server Process Log Files.

This directory also contains the core image files generated if a process being monitored by the BOS Server crashes. The BOS Server attempts to add an extension to the standard core name to indicate which process generated the core file (for example, naming a core file generated by the Protection Server core.ptserver). The BOS Server cannot always assign the correct extension if two processes fail at about the same time, so it is not guaranteed to be correct.

The directory contains the following files:

AuthLog: The Authentication Server's log file.
BackupLog: The Backup Server's log file.
BosLog: The BOS Server's log file.
FileLog: The File Server's log file.
SalvageLog: The Salvager's log file.
VLLog: The Volume Location (VL) Server's log file.
VolserLog: The Volume Server's log file.
core.process: If present, a core image file produced as an AFS server process on the machine crashed (probably the process named by process).

To prevent log files from growing unmanageably large, restart the server processes periodically, particularly the database server processes. To avoid restarting the processes, use the UNIX rm command to remove a log file as the process runs; the process re-creates it automatically.

Volume Headers on Server Partitions

A partition that houses AFS volumes must be mounted at a subdirectory of the machine's root ( / ) directory (not, for instance, under the /usr directory). The file server machine's file system registry file (/etc/fstab or equivalent) must correctly map the directory name and the partition's device name. The directory name is of the form /vicepindex, where each index is one or two lowercase letters. By convention, the first AFS partition on a machine is mounted at /vicepa, the second at /vicepb, and so on. If there are more than 26 partitions, continue with /vicepaa, /vicepab, and so on. The OpenAFS Release Notes specifies the number of supported partitions per server machine.

Do not store non-AFS files on AFS partitions. The File Server and Volume Server expect to have available all of the space on the partition.

The /vicep directories contain two types of files:

Vvol_ID.vol: Each such file is a volume header. The vol_ID corresponds to the volume ID number displayed in the output from the vos examine, vos listvldb, and vos listvol commands.
FORCESALVAGE: This zero-length file triggers the Salvager to salvage the entire partition. The AFS-modified version of the fsck program creates this file if it discovers corruption.

For most system types, it is important never to run the standard fsck program provided with the operating system on an AFS file server machine. It removes all AFS volume data from server partitions because it does not recognize their format.

The Four Roles for File Server Machines

In cells that have more than one server machine, not all server machines have to perform exactly the same functions. There are four possible roles a machine can assume, determined by which server processes it is running. A machine can assume more than one role by running all of the relevant processes. The following list summarizes the four roles, which are described more completely in subsequent sections.

A simple file server machine runs only the processes that store and deliver AFS files to client machines. You can run as many simple file server machines as you need to satisfy your cell's performance and disk space requirements.

A database server machine runs the four database server processes that maintain AFS's replicated administrative databases: the Authentication, Backup, Protection, and Volume Location (VL) Server processes.

A binary distribution machine distributes the AFS server binaries for its system type to all other server machines of that system type.

The single system control machine distributes common server configuration files to all other server machines in the cell, in a cell that runs the United States edition of AFS (cells that use the international edition of AFS must not use the system control machine for this purpose). The machine conventionally also serves as the time synchronization source for the cell, adjusting its clock according to a time source outside the cell.
If a cell has a single server machine, it assumes the simple file server and database server roles. The instructions in the OpenAFS Quick Beginnings also have you configure it as the system control machine and binary distribution machine for its system type, but it does not actually perform those functions until you install another server machine. It is best to keep the binaries for all of the AFS server processes in the /usr/afs/bin directory, even if not all processes are running. You can then change which roles a machine assumes simply by starting or stopping the processes that define the role.

Simple File Server Machines

A simple file server machine runs only the server processes that store and deliver AFS files to client machines, monitor process status, and pick up binaries and configuration files from the cell's binary distribution and system control machines. In general, only cells with more than three server machines need to run simple file server machines. In cells with three or fewer machines, all of them are usually database server machines (to benefit from replicating the administrative databases); see Database Server Machines.
The following processes run on a simple file server machine:

The BOS Server (bosserver process)

The fs process, which combines the File Server, Volume Server, and Salvager processes so that they can coordinate their operations on the data in volumes and avoid the inconsistencies that can result from multiple simultaneous operations on the same data

The NTP coordinator (runntp process), which helps keep the machine's clock synchronized with the clocks on the other server machines in the cell

A client portion of the Update Server that picks up binary files from the binary distribution machine of its AFS system type (the upclientbin process)

A client portion of the Update Server that picks up common configuration files from the system control machine, in cells running the United States edition of AFS (the upclientetc process)

Database Server Machines

A database server machine runs the four processes that maintain the AFS replicated administrative databases: the Authentication Server, Backup Server, Protection Server, and Volume Location (VL) Server, which maintain the Authentication Database, Backup Database, Protection Database, and Volume Location Database (VLDB), respectively. To review the functions of these server processes and their databases, see AFS Server Processes and the Cache Manager.

If a cell has more than one server machine, it is best to run more than one database server machine, but more than three are rarely necessary. Replicating the databases in this way yields the same benefits as replicating volumes: increased availability and reliability of information. If one database server machine or process goes down, the information in the database is still available from others.
The load of requests for database information is spread across multiple machines, preventing any one from becoming overloaded. Unlike replicated volumes, however, replicated databases do change frequently. Consistent system performance demands that all copies of a database always be identical, so it is not possible to record changes in only some of them. To synchronize the copies of a database, the database server processes use AFS's distributed database technology, Ubik. See Replicating the OpenAFS Administrative Databases.

It is critical that the AFS server processes on every server machine in a cell know which machines are the database server machines. The database server processes in particular must maintain constant contact with their peers in order to coordinate the copies of the database. The other server processes often need information from the databases. Every file server machine keeps a list of its cell's database server machines in its local /usr/afs/etc/CellServDB file. Cells that use the United States edition of AFS can use the system control machine to distribute this file (see The System Control Machine).

The following processes define a database server machine:

The Authentication Server (kaserver process)
The Backup Server (buserver process)
The Protection Server (ptserver process)
The VL Server (vlserver process)

Database server machines can also run the processes that define a simple file server machine, as listed in Simple File Server Machines. One database server machine can act as the cell's system control machine, and any database server machine can serve as the binary distribution machine for its system type; see The System Control Machine and Binary Distribution Machines.
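The server CellServDB file mentioned above is plain ASCII: a line beginning with ">" names the cell, and each following line pairs a database server machine's IP address with its hostname in a comment. A small sketch (the cell name and addresses below are made up, not real machines) that extracts the database server hostnames from a file in this format:

```shell
#!/usr/bin/env bash
# Build a sample server CellServDB file. The cell name and addresses
# here are hypothetical placeholders.
sample=$(mktemp)
cat > "$sample" <<'EOF'
>example.com        #Cell name
192.0.2.10          #db1.example.com
192.0.2.11          #db2.example.com
192.0.2.12          #db3.example.com
EOF

# Print the database server hostnames: skip the ">" cell-name line and
# strip the "#" that introduces each hostname comment.
awk '!/^>/ { sub(/^#/, "", $2); print $2 }' "$sample"

rm -f "$sample"
```

Remember that in normal operation you maintain this file with bos addhost and bos removehost rather than by editing it; a read-only parse like this is only a convenience for scripting checks.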
Binary Distribution Machines

A binary distribution machine stores and distributes the binary files for the AFS processes and command suites to all other server machines of its system type. Each file server machine keeps its own copy of the AFS server process binaries on its local disk, by convention in the /usr/afs/bin directory. For consistent system performance, however, all server machines must run the same version (build level) of a process. For instructions for checking a binary's build level, see Displaying A Binary File's Build Level. The easiest way to keep the binaries consistent is to have a binary distribution machine of each system type distribute them to its system-type peers.

The process that defines a binary distribution machine is the server portion of the Update Server (upserver process). The client portion of the Update Server (upclientbin process) runs on the other server machines of that system type and references the binary distribution machine.

Binary distribution machines usually also run the processes that define a simple file server machine, as listed in Simple File Server Machines. One binary distribution machine can act as the cell's system control machine, and any binary distribution machine can serve as a database server machine; see The System Control Machine and Database Server Machines.

The System Control Machine

In cells that run the United States edition of AFS, the system control machine stores and distributes system configuration files shared by all of the server machines in the cell. Each file server machine keeps its own copy of the configuration files on its local disk, by convention in the /usr/afs/etc directory.
For consistent system performance, however, all server machines must use the same files. The easiest way to keep the files consistent is to have the system control machine distribute them. You make changes only to the copy stored on the system control machine, as directed by the instructions in this document. The United States edition of AFS is available to cells in the United States and Canada and to selected institutions in other countries, as determined by United States government regulations. Cells that run the international version of AFS do not use the system control machine to distribute system configuration files. Some of the files contain information that is too sensitive to cross the network unencrypted, and United States government regulations forbid the export of the necessary encryption routines in the form that the Update Server uses. You must instead update the configuration files on each file server machine individually. The bos commands that you use to update the files encrypt the information using an exportable form of the encryption routines. For a list of the configuration files stored in the /usr/afs/etc directory, see Common Configuration Files in the /usr/afs/etc Directory. The OpenAFS Quick Beginnings configures a cell's first server machine as the system control machine. If you wish, you can reassign the role to a different machine that you install later, but you must then change the client portion of the Update Server (upclientetc) process running on all other server machines to refer to the new system control machine. The following processes define the system control machine: The server portion of the Update Server (upserver) process, in cells using the United States edition of AFS. The client portion of the Update Server (upclientetc process) runs on the other server machines and references the system control machine.
The NTP coordinator (runntp process), which points to a time source outside the cell, if the cell uses NTPD to synchronize its clocks. The runntp processes on other machines reference the system control machine as their main time source. The system control machine can also run the processes that define a simple file server machine, as listed in Simple File Server Machines. It can also serve as a database server machine, and by convention acts as the binary distribution machine for its system type. A single upserver process can distribute both configuration files and binaries. See Database Server Machines and Binary Distribution Machines. To locate database server machines Issue the bos listhosts command. % bos listhosts <machine name> The machines listed in the output are the cell's database server machines. For complete instructions and example output, see To display a cell's database server machines. (Optional) Issue the bos status command to verify that a machine listed in the output of the bos listhosts command is actually running the processes that define it as a database server machine. For complete instructions, see Displaying Process Status and Information from the BosConfig File. % bos status <machine name> buserver kaserver ptserver vlserver If the specified machine is a database server machine, the output from the bos status command includes the following lines: Instance buserver, currently running normally. Instance kaserver, currently running normally. Instance ptserver, currently running normally. Instance vlserver, currently running normally.
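This verification step can be scripted by scanning the bos status output for all four instance lines. The sketch below applies the test to a captured sample of the output (an assumption for illustration) rather than contacting a live machine.

```shell
# Hypothetical captured output of:
#   bos status <machine name> buserver kaserver ptserver vlserver
sample_output='Instance buserver, currently running normally.
Instance kaserver, currently running normally.
Instance ptserver, currently running normally.
Instance vlserver, currently running normally.'

# A machine fills the database server role only if all four
# database server processes are reported as running.
is_db_server() {
    for proc in buserver kaserver ptserver vlserver; do
        printf '%s\n' "$1" | grep -q "Instance $proc, currently running" || return 1
    done
}

if is_db_server "$sample_output"; then
    echo "all four database server processes are running"
fi
```

If even one of the four instance lines is missing, the function fails, matching the rule that there is no such thing as a partial database server machine.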
To locate the system control machine Issue the bos status command for any server machine. Complete instructions appear in Displaying Process Status and Information from the BosConfig File. % bos status <machine name> upserver upclientbin upclientetc -long The output you see depends on the machine you have contacted: a simple file server machine, the system control machine, or a binary distribution machine. See Interpreting the Output from the bos status Command. To locate the binary distribution machine for a system type Issue the bos status command for a file server machine of the system type you are checking (to determine a machine's system type, issue the fs sysname or sys command as described in Displaying and Setting the System Type Name). Complete instructions for the bos status command appear in Displaying Process Status and Information from the BosConfig File. % bos status <machine name> upserver upclientbin upclientetc -long The output you see depends on the machine you have contacted: a simple file server machine, the system control machine, or a binary distribution machine. See Interpreting the Output from the bos status Command. Interpreting the Output from the bos status Command Interpreting the output of the bos status command is most straightforward for a simple file server machine. There is no upserver process, so the output includes the following message: bos: failed to get instance info for 'upserver' (no such entity) A simple file server machine runs the upclientbin process, so the output includes a message like the following.
It indicates that fs7.abc.com is the binary distribution machine for this system type. Instance upclientbin, (type is simple) currently running normally. Process last started at Wed Mar 10 23:37:09 1999 (1 proc start) Command 1 is '/usr/afs/bin/upclient fs7.abc.com -t 60 /usr/afs/bin' If you run the United States edition of AFS, a simple file server machine also runs the upclientetc process, so the output includes a message like the following. It indicates that fs1.abc.com is the system control machine. Instance upclientetc, (type is simple) currently running normally. Process last started at Mon Mar 22 05:23:49 1999 (1 proc start) Command 1 is '/usr/afs/bin/upclient fs1.abc.com -t 60 /usr/afs/etc' The Output on the System Control Machine If you run the United States edition of AFS and have issued the bos status command for the system control machine, the output includes an entry for the upserver process similar to the following: Instance upserver, (type is simple) currently running normally. Process last started at Mon Mar 22 05:23:54 1999 (1 proc start) Command 1 is '/usr/afs/bin/upserver' If you are using the default configuration recommended in the OpenAFS Quick Beginnings, the system control machine is also the binary distribution machine for its system type, and a single upserver process distributes both kinds of updates. In that case, the output includes the following messages: bos: failed to get instance info for 'upclientbin' (no such entity) bos: failed to get instance info for 'upclientetc' (no such entity) If the system control machine is not a binary distribution machine, the output includes an error message for the upclientetc process, but a complete listing for the upclientbin process (in this case it refers to the machine fs5.abc.com as the binary distribution machine): Instance upclientbin, (type is simple) currently running normally.
Process last started at Mon Mar 22 05:23:49 1999 (1 proc start) Command 1 is '/usr/afs/bin/upclient fs5.abc.com -t 60 /usr/afs/bin' bos: failed to get instance info for 'upclientetc' (no such entity) The Output on a Binary Distribution Machine If you have issued the bos status command for a binary distribution machine, the output includes an entry for the upserver process similar to the following and an error message for the upclientbin process: Instance upserver, (type is simple) currently running normally. Process last started at Mon Apr 5 05:23:54 1999 (1 proc start) Command 1 is '/usr/afs/bin/upserver' bos: failed to get instance info for 'upclientbin' (no such entity) Unless this machine also happens to be the system control machine, a message like the following references the system control machine (in this case, fs3.abc.com): Instance upclientetc, (type is simple) currently running normally. Process last started at Mon Apr 5 05:23:49 1999 (1 proc start) Command 1 is '/usr/afs/bin/upclient fs3.abc.com -t 60 /usr/afs/etc' Administering Database Server Machines This section explains how to administer database server machines. For installation instructions, see the OpenAFS Quick Beginnings. Replicating the OpenAFS Administrative Databases There are several benefits to replicating the AFS administrative databases (the Authentication, Backup, Protection, and Volume Location Databases), as discussed in Replicating the OpenAFS Administrative Databases. For correct cell functioning, the copies of each database must be identical at all times.
To keep the databases synchronized, AFS uses a library of utilities called Ubik. Each database server process runs an associated lightweight Ubik process, and client-side programs call Ubik's client-side subroutines when they submit requests to read and change the databases. Ubik is designed to work with minimal administrator intervention, but there are several configuration requirements, as detailed in Configuring the Cell for Proper Ubik Operation. The following brief overview of Ubik's operation is helpful for understanding the requirements. For more details, see How Ubik Operates Automatically. Ubik is designed to distribute changes made in an AFS administrative database to all copies as quickly as possible. Only one copy of the database, the synchronization site, accepts change requests from clients; the lightweight Ubik process running there is the Ubik coordinator. To maintain maximum availability, there is a separate Ubik coordinator for each database, and the synchronization site for each of the four databases can be on a different machine. The synchronization site for a database can also move from machine to machine in response to process, machine, or network outages. The other copies of a database, and the Ubik processes that maintain them, are termed secondary. The secondary sites do not accept database changes directly from client-side programs, but only from the synchronization site. After the Ubik coordinator records a change in its copy of a database, it immediately sends the change to the secondary sites. During the brief distribution period, clients cannot access any of the copies of the database, even for reading. If the coordinator cannot reach a majority of the secondary sites, it halts the distribution and informs the client that the attempted change failed. To avoid distribution failures, the Ubik processes maintain constant contact by exchanging time-stamped messages.
As long as a majority of the secondary sites respond to the coordinator's messages, there is a quorum of sites that are synchronized with the coordinator. If a process, machine, or network outage breaks the quorum, the Ubik processes attempt to elect a new coordinator in order to establish a new quorum among the highest possible number of sites. See A Flexible Coordinator Boosts Availability. Configuring the Cell for Proper Ubik Operation This section describes how to configure your cell to maintain proper Ubik operation. Run all four database server processes--Authentication Server, Backup Server, Protection Server, and VL Server--on all database server machines. Both the client and server portions of Ubik expect that all the database server machines listed in the CellServDB file are running all of the database server processes. There is no mechanism for indicating that only some database server processes are running on a machine. Maintain correct information in the /usr/afs/etc/CellServDB file at all times. Ubik consults the /usr/afs/etc/CellServDB file to determine the sites with which to establish and maintain a quorum. Incorrect information can result in unsynchronized databases or election of a coordinator in each of several subgroups of machines, because the Ubik processes on various machines do not agree on which machines need to participate in the quorum. If you run the United States edition of AFS and use the Update Server, it is simplest to maintain the /usr/afs/etc/CellServDB file on the system control machine, which distributes its copy to all other server machines. The OpenAFS Quick Beginnings explains how to configure the Update Server. If you run the international version of AFS, you must update the file on each machine individually.
The only reason to alter the file is when configuring or decommissioning a database server machine. Use the appropriate bos commands rather than editing the file by hand. For instructions, see Maintaining the Server CellServDB File. The instructions in Monitoring and Controlling Server Processes for stopping and starting processes remind you to alter the CellServDB file when appropriate, as do the instructions in the OpenAFS Quick Beginnings for installing or decommissioning a database server machine. (Client processes and the server processes that do not maintain databases also rely on correct information in the CellServDB file for proper operation, but their use of the information does not affect Ubik's operation. See Maintaining the Server CellServDB File and Maintaining Knowledge of Database Server Machines.) Keep the clocks synchronized on all machines in the cell, especially the database server machines. In the conventional configuration specified in the OpenAFS Quick Beginnings, you run the runntp process to supervise the local Network Time Protocol Daemon (NTPD) on every AFS server machine. The NTPD on the system control machine synchronizes its clock with a reliable source outside the cell and broadcasts the time to the NTPDs on the other server machines. You can choose to run a different time synchronization protocol if you wish. Keeping clocks synchronized is important because the Ubik processes at a database's sites timestamp the messages which they exchange to maintain constant contact. Timestamping the messages is necessary because in a networked environment it is not safe to assume that a message reaches its destination instantly. Ubik compares the timestamp on an incoming message with the current time. If the difference is too great, it is possible that an outage is preventing reliable communication between the Ubik sites, which can possibly result in unsynchronized databases.
Ubik considers the message invalid, which can prompt it to attempt election of a different coordinator. Electing a new coordinator is appropriate if a timestamped message has expired because of an actual interruption of communication, but not if a message appears expired only because the sender and recipient do not share the same time. For detailed examples of how unsynchronized clocks can destabilize Ubik operation, see How Ubik Uses Timestamped Messages. How Ubik Operates Automatically The following Ubik features help keep its maintenance requirements to a minimum: Ubik's server and client portions operate automatically. Each database server process runs a lightweight process to call on the server portion of the Ubik library. It is common to refer to this lightweight process itself as Ubik. Because it is lightweight, the Ubik process does not appear in process listings such as those generated by the UNIX ps command. Client-side programs that need to read and change the databases directly call the subroutines in the Ubik library's client portion, rather than running a separate lightweight process. Examples of such programs are the klog command and the commands in the pts suite. Ubik tracks database version numbers. As the coordinator records a change to a database, it increments the database's version number. The version number makes it easy for the coordinator to determine if a site has the most recent version or not. The version number speeds the return to normal functioning after election of a new coordinator or when communication is restored after an outage, because it makes it easy to determine which site has the most current database and which sites need to be updated. Ubik's use of timestamped messages guarantees that database copies are always synchronized during normal operation.
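The role the version number plays after an outage can be shown with a toy model (an assumption for illustration: versions are modeled as single integers, whereas Ubik actually tracks richer version state). The site reporting the highest version holds the most current database; the others need to be updated from it.

```shell
# Toy model: one "site version" pair per line on standard input.
# Sort numerically on the version column and report the site with
# the highest (most current) version number.
most_current_site() {
    sort -k2,2n | tail -1 | awk '{ print $1 }'
}

# db2 has recorded the most changes, so it is the current site.
printf '%s\n' 'db1 41' 'db2 43' 'db3 42' | most_current_site
```

The sketch prints db2; db1 and db3 would then be brought up to version 43.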
Replicating a database to increase data availability is pointless if all copies of the database are not the same. Inconsistent performance can result if clients receive different information depending on which copy of the database they access. As previously noted, Ubik sites constantly track the status of their peers by exchanging timestamped messages. For a detailed description, see How Ubik Uses Timestamped Messages. The ability to move the coordinator maximizes database availability. Suppose, for example, that in a cell with three database server machines a network partition separates the two secondary sites from the coordinator. The coordinator retires because it is no longer in contact with a majority of the sites listed in the CellServDB file. The two sites on the other side of the partition can elect a new coordinator among themselves, and it can then accept database changes from clients. If the coordinator cannot move in this way, the database has to be read-only until the network partition is repaired. For a detailed description of Ubik's election procedure, see A Flexible Coordinator Boosts Availability. How Ubik Uses Timestamped Messages Ubik synchronizes the copies of a database by maintaining constant contact between the synchronization site and the secondary sites. The Ubik coordinator frequently sends a time-stamped guarantee message to each of the secondary sites. When the secondary site receives the message, it concludes that it is in contact with the coordinator. It considers its copy of the database to be valid until time T, which is usually 60 seconds from the time the coordinator sent the message. In response, the secondary site returns a vote message that acknowledges the coordinator as valid until a certain time X, which is usually 120 seconds in the future.
The coordinator sends guarantee messages more frequently than every T seconds, so that the expiration periods overlap. There is no danger of expiration unless a network partition or other outage actually interrupts communication. If the guarantee expires, the secondary site's copy of the database is not necessarily current. Nonetheless, the database server continues to service client requests. It is considered better for overall cell functioning that a secondary site remains accessible even if the information it is distributing is possibly out of date. Most of the AFS administrative databases do not change that frequently, in any case, and making a database inaccessible causes a timeout for clients that happen to access that copy. As previously mentioned, Ubik's use of timestamped messages makes it vital to synchronize the clocks on database server machines. There are two ways that skewed clocks can interrupt normal Ubik functioning, depending on which clock is ahead of the others. Suppose, for example, that the Ubik coordinator's clock is ahead of the secondary sites: the coordinator's clock says 9:35:30, but the secondary clocks say 9:31:30. The secondary sites send vote messages that acknowledge the coordinator as valid until 9:33:30. This is two minutes in the future according to the secondary clocks, but is already in the past from the coordinator's perspective. The coordinator concludes that it no longer has enough support to remain coordinator and forces election of a new coordinator. Election takes about three minutes, during which time no copy of the database accepts changes. The opposite possibility is that a secondary site's clock (14:50:00) is ahead of the coordinator's (14:46:30). When the coordinator sends a guarantee message good until 14:47:30, it has already expired according to the secondary clock.
Believing that it is out of contact with the coordinator, the secondary site stops sending votes for the coordinator and tries to get itself elected as coordinator. This is appropriate if the coordinator has actually failed, but is inappropriate when there is no actual outage. The attempt of a single secondary site to get elected as the new coordinator usually does not affect the performance of the other sites. As long as their clocks agree with the coordinator's, they ignore the other secondary site's request for votes and continue voting for the current coordinator. However, if enough of the secondary sites' clocks get ahead of the coordinator's, they can force election of a new coordinator even though the current one is actually working fine. A Flexible Coordinator Boosts Availability Ubik uses timestamped messages to determine when coordinator election is necessary, just as it does to keep the database copies synchronized. As long as the coordinator receives vote messages from a majority of the sites (it implicitly votes for itself), it is appropriate for it to continue as coordinator because it is successfully distributing database changes. A majority is defined as more than 50% of all database sites when there are an odd number of sites; with an even number of sites, the site with the lowest Internet address has an extra vote for breaking ties as necessary. If the coordinator is not receiving sufficient votes, it retires and the Ubik sites elect a new coordinator. This does not happen spontaneously, but only when the coordinator really fails or stops receiving a majority of the votes.
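The majority rule above can be expressed as a small predicate (a simplified sketch of the rule as stated, not Ubik's actual implementation): a coordinator keeps quorum with a strict majority of votes, or with exactly half the votes when the lowest-address site is among the voters.

```shell
# has_quorum VOTES SITES LOWEST_IP_VOTED
#   VOTES includes the coordinator's implicit vote for itself;
#   LOWEST_IP_VOTED is 1 if the lowest-address site's vote is included.
has_quorum() {
    votes=$1; sites=$2; lowest_ip_voted=$3
    if [ $((2 * votes)) -gt "$sites" ]; then
        return 0    # strict majority
    fi
    if [ $((2 * votes)) -eq "$sites" ] && [ "$lowest_ip_voted" -eq 1 ]; then
        return 0    # even split, tie broken by the lowest-address site
    fi
    return 1
}

has_quorum 2 3 0 && echo "2 of 3 sites: quorum"
has_quorum 2 4 1 && echo "2 of 4 sites with lowest address: quorum"
has_quorum 2 4 0 || echo "2 of 4 sites without lowest address: no quorum"
```

This matches the three-machine partition example earlier: the two sites cut off from the coordinator form a majority (2 of 3) and can elect a new coordinator, while the isolated old coordinator, with only its own vote, cannot.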
The secondary sites have a built-in bias to continue voting for an existing coordinator, which prevents undue elections. The election of the new coordinator is by majority vote. The Ubik subprocesses have a bias to vote for the site with the lowest Internet address, which helps it gather the necessary majority more quickly than if all the sites were competing to receive votes themselves. During the election (which normally lasts less than three minutes), clients can read information from the database, but cannot make any changes. Ubik's election procedure makes it possible for each database server process's coordinator to be on a different machine. For example, if the Ubik coordinators for all four processes start out on machine A and the Protection Server on machine A fails for some reason, then a different site (say machine B) must be elected as the new Protection Database Ubik coordinator. Machine B remains the coordinator for the Protection Database even after the Protection Server on machine A is working again. The failure of the Protection Server has no effect on the Authentication, Backup, or VL Servers, so their coordinators remain on machine A. Backing Up and Restoring the Administrative Databases The AFS administrative databases store information that is critical for AFS operation in your cell. If a database becomes corrupted due to a hardware failure or other problem on a database server machine, it is likely to be difficult and time-consuming to recreate all of the information from scratch. To protect yourself against loss of data, back up the administrative databases to permanent media, such as tape, on a regular basis. The recommended method is to use a standard local disk backup utility such as the UNIX tar command. When deciding how often to back up a database, consider the amount of data that you are willing to recreate by hand if it becomes necessary to restore the database from a backup copy.
In most cells, the databases differ quite a bit in how often and how much they change. Changes to the Authentication Database are probably the least frequent, and consist mostly of changed user passwords. Protection Database and VLDB changes are probably more frequent, as users add or delete groups and change group memberships, and as you and other administrators create or move volumes. The number and frequency of changes is probably greatest in the Backup Database, particularly if you perform backups every day. The ease with which you can recapture lost changes also differs for the different databases: If regular users make a large proportion of the changes to the Authentication Database and Protection Database in your cell, then recovering them possibly requires a large amount of detective work and interviewing of users, assuming that they can even remember what changes they made at what time. Recovering lost changes to the VLDB is more straightforward, because you can use the vos syncserv and vos syncvldb commands to correct any discrepancies between the VLDB and the actual state of volumes on server machines. Running these commands can be time-consuming, however. The configuration information in the Backup Database (Tape Coordinator port offsets, volume sets and entries, the dump hierarchy, and so on) probably does not change that often, in which case it is not that hard to recover a few recent changes. In contrast, there are likely to be a large number of new dump records resulting from dump operations. You can recover these records by using the -dbadd argument to the backup scantape command, reading in information from the backup tapes themselves. This can take a long time and require numerous tape changes, however, depending on how much data you back up in your cell and how you append dumps. Furthermore, the backup scantape command is subject to several restrictions. 
The most basic is that it halts if it finds that an existing dump record in the database has the same dump ID number as a dump on the tape it is scanning. If you want to continue with the scanning operation, you must locate and remove the existing record from the database. For further discussion, see the backup scantape command's reference page in the OpenAFS Administration Reference. These differences between the databases possibly suggest backing up the databases at different frequencies, ranging from every few days or weekly for the Backup Database to every few weeks for the Authentication Database. On the other hand, it is probably simpler from a logistical standpoint to back them all up at the same time (and frequently), particularly if tape consumption is not a major concern. Also, it is not generally necessary to keep backup copies of the databases for a long time, so you can recycle the tapes fairly frequently. To back up the administrative databases Log in as the local superuser root on a database server machine that is not the synchronization site. The machine with the highest IP address is normally the best choice, since it is least likely to become the synchronization site in an election. Issue the bos shutdown command to shut down the relevant server process on the local machine. For a complete description of the command, see To stop processes temporarily. For the -instance argument, specify one or more database server process names (buserver for the Backup Server, kaserver for the Authentication Server, ptserver for the Protection Server, or vlserver for the Volume Location Server). Include the -localauth flag because you are logged in as the local superuser root but do not necessarily have administrative tokens.
# bos shutdown <machine name> -instance <instances>+ -localauth [-wait] Use a local disk backup utility, such as the UNIX tar command, to transfer one or more database files to tape. If the local database server machine does not have a tape device attached, use a remote copy command to transfer the file to a machine with a tape device, then use the tar command there. The following command sequence backs up the complete contents of the /usr/afs/db directory: # cd /usr/afs/db # tar cvf tape_device . To back up individual database files, substitute their names for the period in the preceding tar command: bdb.DB0 for the Backup Database kaserver.DB0 for the Authentication Database prdb.DB0 for the Protection Database vldb.DB0 for the VLDB Issue the bos start command to restart the server processes on the local machine. For a complete description of the command, see To start processes by changing their status flags to Run. Provide the same values for the -instance argument as in Step 2, and the -localauth flag for the same reason. # bos start <machine name> -instance <server process name>+ -localauth To restore an administrative database Log in as the local superuser root on each database server machine in the cell. Working on one of the machines, issue the bos shutdown command once for each database server machine, to shut down the relevant server process on all of them. For a complete description of the command, see To stop processes temporarily. For the -instance argument, specify one or more database server process names (buserver for the Backup Server, kaserver for the Authentication Server, ptserver for the Protection Server, or vlserver for the Volume Location Server). Include the -localauth flag because you are logged in as the local superuser root but do not necessarily have administrative tokens.
# bos shutdown <machine name> -instance <instances>+ -localauth [-wait] Remove the database from each database server machine, by issuing the following commands on each one. # cd /usr/afs/db For the Backup Database: # rm bdb.DB0 # rm bdb.DBSYS1 For the Authentication Database: # rm kaserver.DB0 # rm kaserver.DBSYS1 For the Protection Database: # rm prdb.DB0 # rm prdb.DBSYS1 For the VLDB: # rm vldb.DB0 # rm vldb.DBSYS1 Using the local disk backup utility that you used to back up the database, copy the most recently backed-up version of it to the appropriate file on the database server machine with the lowest IP address. The following is an appropriate tar command if the synchronization site has a tape device attached: # cd /usr/afs/db # tar xvf tape_device database_file where database_file is one of the following: bdb.DB0 for the Backup Database kaserver.DB0 for the Authentication Database prdb.DB0 for the Protection Database vldb.DB0 for the VLDB Working on one of the machines, issue the bos start command to restart the server process on each of the database server machines in turn. Start with the machine with the lowest IP address, which becomes the synchronization site for the database. Wait for it to establish itself as the synchronization site before repeating the command to restart the process on the other database server machines. For a complete description of the command, see To start processes by changing their status flags to Run. Provide the same values for the -instance argument as in Step 2, and the -localauth flag for the same reason. # bos start <machine name> -instance <server process name>+ -localauth If the database has changed since you last backed it up, issue the appropriate commands from the instructions in the indicated sections to recreate the information in the restored database. If issuing pts commands, you must first obtain administrative tokens.
The backup and vos commands accept the -localauth flag if you are logged in as the local superuser root, so you do not need administrative tokens. The Authentication Server always performs a separate authentication anyway, so you only need to include the -admin argument if issuing kas commands. To define or remove volume sets and volume entries in the Backup Database, see Defining and Displaying Volume Sets and Volume Entries. To edit the dump hierarchy in the Backup Database, see Defining and Displaying the Dump Hierarchy. To define or remove Tape Coordinator port offset entries in the Backup Database, see Configuring Tape Coordinator Machines and Tape Devices. To restore dump records in the Backup Database, see To scan the contents of a tape. To recreate Authentication Database entries or password changes for users, see the appropriate section of Administering User Accounts. To recreate Protection Database entries or group membership information, see the appropriate section of Administering the Protection Database. To synchronize the VLDB with volume headers, see Synchronizing the VLDB and Volume Headers. installing server process binaries, about server process binaries installing BOS Server maintainer of server process binaries server process binaries server process binaries directories (server) /usr/afs/bin server machine need for consistent version of software Installing Server Process Software This section explains how to install new server process binaries on file server machines, how to revert to a previous version if the current version is not working properly, and how to install new disks to house AFS volumes on a file server machine. The most frequent reason to replace a server process's binaries is to upgrade AFS to a new version. In general, installation instructions accompany the updated software, but this chapter provides an additional reference. 
Each AFS server machine must store the server process binaries in a local disk directory, called /usr/afs/bin by convention. For predictable system performance, it is best that all server machines run the same build level, or at least the same version, of the server software. For instructions on checking AFS build level, see Displaying A Binary File's Build Level. The Update Server makes it easy to distribute a consistent version of software to all server machines. You designate one server machine of each system type as the binary distribution machine by running the server portion of the Update Server (upserver process) on it. All other server machines of that system type run the client portion of the Update Server (upclientbin process) to retrieve updated software from the binary distribution machine. The OpenAFS Quick Beginnings explains how to install the appropriate processes. For more on binary distribution machines, see Binary Distribution Machines. When you use the Update Server, you install new binaries on binary distribution machines only. If you install binaries directly on a machine that is running the upclientbin process, they are overwritten the next time the process compares the contents of the local /usr/afs/bin directory to the contents on the binary distribution machine, by default within five minutes. The following instructions explain how to use the appropriate commands from the bos suite to install and uninstall server binaries. installing server binaries server process binaries installing command suite binaries installing file server machine installing command and process binaries server process restarting for changed binaries Installing New Binaries An AFS server process does not automatically switch to a new process binary file as soon as it is installed in the /usr/afs/bin directory. The process continues to use the previous version of the binary file until it (the process) next restarts. 
By default, the BOS Server restarts processes for which there are new binary files every day at 5:00 a.m., as specified in the /usr/afs/local/BosConfig file. To display or change this binary restart time, use the bos getrestart and bos setrestart commands, as described in Setting the BOS Server's Restart Times. You can force the server machine to start using new server process binaries immediately by issuing the bos restart command as described in the following instructions. You do not need to restart processes when you install new command suite binaries. The new binary is invoked automatically the next time a command from the suite is issued. file extension .BAK file extension .OLD BAK version of binary file created by bos install command OLD version of binary file created by bos install command saving previous version of server binaries When you use the bos install command, the BOS Server automatically saves the current version of a binary file by adding a .BAK extension to its name. It renames the current .BAK version, if any, to the .OLD version, if there is no .OLD version already. If there is a current .OLD version, the current .BAK version must be at least seven days old to replace it. It is best to store AFS binaries in the /usr/afs/bin directory, because that is the only directory the BOS Server automatically checks for new binaries. You can, however, use the bos install command's -dir argument to install non-AFS binaries into other directories on a server machine's local disk. See the command's reference page in the OpenAFS Administration Reference for further information. bos commands install commands bos install To install new server binaries Verify that you are listed in the /usr/afs/etc/UserList file. If necessary, issue the bos listusers command, which is fully described in To display the users in the UserList file. % bos listusers <machine name> Verify that the binaries are available in the source directory from which you are installing them. 
If the machine is also an AFS client, you can retrieve the binaries from a central directory in AFS. Otherwise, you can obtain them directly from the AFS distribution media, from a local disk directory where you previously installed them, or from a remote machine using a transfer utility such as the ftp command. Issue the bos install command for the binary distribution machine. (If you have forgotten which machine is performing that role, see To locate the binary distribution machine for a system type.) % bos install <machine name> <files to install>+ where i Is the shortest acceptable abbreviation of install. machine name Names the binary distribution machine. files to install Names each binary file to install into the local /usr/afs/bin directory. Partial pathnames are interpreted relative to the current working directory. The last element in each pathname (the filename itself) matches the name of the file it is replacing, such as bosserver or volserver for server processes, bos or vos for commands. Each AFS server process other than the fs process uses a single binary file. The fs process uses three binary files: fileserver, volserver, and salvager. Installing a new version of one component does not necessarily mean that you need to replace all three. Repeat Step 3 for each binary distribution machine. (Optional) If you want to restart processes to use the new binaries immediately, wait until the upclientbin process retrieves them from the binary distribution machine. You can verify the timestamps on binary files by using the bos getdate command as described in Displaying Binary Version Dates. When the binary files are available on each server machine, issue the bos restart command, for which complete instructions appear in Stopping and Immediately Restarting Processes. If you are working on an AFS client machine, it is a wise precaution to have a copy of the bos command suite binaries on the local disk before restarting server processes. 
In the conventional configuration, the /usr/afsws/bin directory that houses the bos command binary on client machines is a symbolic link into AFS, which conserves local disk space. However, restarting certain processes (particularly the database server processes) can make the AFS filespace inaccessible, particularly if a problem arises during the restart. Having a local copy of the bos binary enables you to uninstall or reinstall process binaries or restart processes even in this case. Use the cp command to copy the bos command binary from the /usr/afsws/bin directory to a local directory such as /tmp. Restarting a process causes a service outage. It is best to perform the restart at times of low system usage if possible. % bos restart <machine name> <instances>+ uninstalling server process and command suite binaries reverting to old version of server process and command binaries server process binaries uninstalling server process binaries reverting to old version command suite binaries uninstalling server machine uninstalling command & process binaries BAK version of binary file used by bos uninstall command OLD version of binary file used by bos uninstall command Reverting to the Previous Version of Binaries In rare cases, installing a new binary can cause problems serious enough to require reverting to the previous version. Just as with installing binaries, consistent system performance requires reverting every server machine back to the same version. Issue the bos uninstall command described here on each binary distribution machine. When you use the bos uninstall command, the BOS Server discards the current version of a binary file and promotes the .BAK version of the file by removing the extension. It renames the current .OLD version, if any, to .BAK. If there is no current .BAK version, the bos uninstall command operation fails and generates an error message. 
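The promotion that the bos uninstall command performs can be pictured as a simple rename sequence. The following sketch simulates it on scratch files; the directory, file contents, and error handling are illustrative only, since on a real server the BOS Server itself performs these renames in /usr/afs/bin:

```shell
# Simulate bos uninstall's promotion of .BAK and .OLD versions
# in a scratch directory (illustration only; not the real BOS Server).
dir=$(mktemp -d)
printf 'old' > "$dir/vlserver.OLD"
printf 'bak' > "$dir/vlserver.BAK"
printf 'new' > "$dir/vlserver"

uninstall_sim() {
    # Fail, as bos uninstall does, when no .BAK version exists.
    [ -f "$1.BAK" ] || { echo "error: no .BAK version of $1" >&2; return 1; }
    rm "$1"                                    # discard the current version
    mv "$1.BAK" "$1"                           # promote .BAK to current
    [ -f "$1.OLD" ] && mv "$1.OLD" "$1.BAK"    # promote .OLD to .BAK, if any
    return 0
}

uninstall_sim "$dir/vlserver"
cat "$dir/vlserver"    # prints "bak": the former .BAK version is now current
```

A second invocation would promote the former .OLD version to current, and a third would fail for lack of a .BAK version, mirroring the error behavior described above.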
If a .OLD version still exists, issue the mv command to rename it to .BAK before reissuing the bos uninstall command. Just as when you install new binaries, the server processes do not start using a reverted version immediately. Presumably you are reverting because the current binaries do not work, so the following instructions have you restart the relevant processes. bos commands uninstall commands bos uninstall To revert to the previous version of binaries Verify that you are listed in the /usr/afs/etc/UserList file. If necessary, issue the bos listusers command, which is fully described in To display the users in the UserList file. % bos listusers <machine name> Verify that the .BAK version of each relevant binary is available in the /usr/afs/bin directory on each binary distribution machine. If necessary, you can use the bos getdate command as described in Displaying Binary Version Dates. If necessary, rename the .OLD version to .BAK. Issue the bos uninstall command for a binary distribution machine. (If you have forgotten which machine is performing that role, see To locate the binary distribution machine for a system type.) % bos uninstall <machine name> <files to uninstall>+ where u Is the shortest acceptable abbreviation of uninstall. machine name Names the binary distribution machine. files to uninstall Names each binary file in the /usr/afs/bin directory to replace with its .BAK version. The file name alone is sufficient, because the /usr/afs/bin directory is assumed. Repeat Step 3 for each binary distribution machine. Wait until the upclientbin process on each server machine retrieves the reverted binaries from the binary distribution machine. You can verify the timestamps on binary files by using the bos getdate command as described in Displaying Binary Version Dates. When the binary files are available on each server machine, issue the bos restart command, for which complete instructions appear in Stopping and Immediately Restarting Processes. 
If you are working on an AFS client machine, it is a wise precaution to have a copy of the bos command suite binaries on the local disk before restarting server processes. In the conventional configuration, the /usr/afsws/bin directory that houses the bos command binary on client machines is a symbolic link into AFS, which conserves local disk space. However, restarting certain processes (particularly the database server processes) can make the AFS filespace inaccessible, particularly if a problem arises during the restart. Having a local copy of the bos binary enables you to uninstall or reinstall process binaries or restart processes even in this case. Use the cp command to copy the bos command binary from the /usr/afsws/bin directory to a local directory such as /tmp. % bos restart <machine name> <instances>+ server process binaries displaying time stamp command suite binaries displaying time stamp time stamp on binary file, listing date on binary file, listing compilation date of, listing on binary file displaying time stamp on binary file Displaying Binary Version Dates You can check the compilation dates for all three versions of a binary file in the /usr/afs/bin directory--the current, .BAK, and .OLD versions. This is useful for verifying that new binaries have been copied to a file server machine from its binary distribution machine before restarting a server process to use the new binaries. To check dates on binaries in a directory other than /usr/afs/bin, add the -dir argument. See the OpenAFS Administration Reference. bos commands getdate commands bos getdate To display binary version dates Issue the bos getdate command. % bos getdate <machine name> <files to check>+ where getd Is the shortest acceptable abbreviation of getdate. machine name Names the file server machine for which to display binary dates. files to check Names each binary file whose dates to display. 
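On a server machine, bos getdate is the right tool for this check. As a purely local illustration of the ordering you expect to see after a successful installation, the shell's -nt (newer than) operator can compare the three versions; scratch files stand in for real binaries here:

```shell
# Create scratch stand-ins for the three versions of a server binary,
# oldest first, so their modification times mirror a normal install history.
dir=$(mktemp -d)
touch "$dir/bosserver.OLD"
sleep 1
touch "$dir/bosserver.BAK"
sleep 1
touch "$dir/bosserver"

# After bos install, the current version is newer than its .BAK version,
# which in turn is newer than the .OLD version.
if [ "$dir/bosserver" -nt "$dir/bosserver.BAK" ] &&
   [ "$dir/bosserver.BAK" -nt "$dir/bosserver.OLD" ]; then
    echo "version dates in expected order"
fi
```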
BAK version of binary file removing obsolete OLD version of binary file removing obsolete removing obsolete .BAK and .OLD version of binaries server process binaries removing obsolete command suite binaries removing obsolete removing core files from /usr/afs/logs core files removing from /usr/afs/logs directory usr/afs/bin directory removing obsolete .BAK and .OLD files Removing Obsolete Binary Files When processes with new binaries have been running without problems for a number of days, it is generally safe to remove the .BAK and .OLD versions from the /usr/afs/bin directory, both to reduce clutter and to free space on the file server machine's local disk. You can use the bos prune command's flags to remove the following types of files: To remove files in the /usr/afs/bin directory with a .BAK extension, use the -bak flag. To remove files in the /usr/afs/bin directory with a .OLD extension, use the -old flag. To remove files in the /usr/afs/logs directory called core, with any extension, use the -core flag. To remove all three types of files, use the -all flag. commands bos prune bos commands prune To remove obsolete binaries Verify that you are listed in the /usr/afs/etc/UserList file. If necessary, issue the bos listusers command, which is fully described in To display the users in the UserList file. % bos listusers <machine name> Issue the bos prune command with one or more of its flags. % bos prune <machine name> [-bak] [-old] [-core] [-all] where p Is the shortest acceptable abbreviation of prune. machine name Names the file server machine on which to remove obsolete files. -bak Removes all the files with a .BAK extension from the /usr/afs/bin directory. Do not combine this flag with the -all flag. -old Removes all the files with a .OLD extension from the /usr/afs/bin directory. Do not combine this flag with the -all flag. -core Removes all core files from the /usr/afs/logs directory. 
Do not combine this flag with the -all flag. -all Combines the effect of the other three flags. Do not combine it with the other three flags. Displaying A Binary File's Build Level For the most consistent performance on a server machine, and cell-wide, it is best for all server processes to come from the same AFS distribution. Every AFS binary includes an ASCII string that specifies its version, or build level. To display it, use the strings and grep commands, which are included in most UNIX distributions. commands which commands strings strings command which command To display an AFS binary's build level Change to the directory that houses the binary file. If you are not sure where the binary resides, issue the which command. % which binary_file /bin_dir_path/binary_file % cd bin_dir_path Issue the strings command to extract all ASCII strings from the binary file. Pipe the output to the grep command to locate the relevant line. % strings ./binary_file | grep Base The output reports the AFS build level in a format like the following: @(#)Base configuration afsversion build_level For example, the following string indicates the binary is from AFS M.m build 3.0: @(#)Base configuration afsM.m 3.0 CellServDB file (server) maintaining files CellServDB (server) database server process use of CellServDB file Ubik use of CellServDB file server process use of CellServDB file Maintaining the Server CellServDB File Every file server machine maintains a list of its home cell's database server machines in the file /usr/afs/etc/CellServDB on its local disk. Both database server processes and non-database server processes consult the file: The database server processes (the Authentication, Backup, Protection, and Volume Location Servers) maintain constant contact with their peers in order to keep their copies of the replicated administrative databases synchronized. 
As detailed in Replicating the OpenAFS Administrative Databases, the database server processes use the Ubik utility to synchronize the information in the databases they maintain. The Ubik coordinator at the synchronization site for each database maintains the single read/write copy of the database and distributes changes to the secondary sites as necessary. It must maintain contact with a majority of the secondary sites to remain the coordinator, and consults the CellServDB file to learn how many peers it has and on which machines they are running. If the coordinator loses contact with the majority of its peers, they all cooperate to elect a new coordinator by majority vote. During the election, all of the Ubik processes consult the CellServDB file to learn where to send their votes, and what number constitutes a majority. The non-database server processes must know which machines are running the database server processes in order to retrieve information from the databases. For example, the first time that a user accesses an AFS file, the File Server that houses it contacts the Protection Server to obtain a list of the user's group memberships (the list is called a current protection subgroup, or CPS). The File Server uses the CPS as it determines if the access control list (ACL) protecting the file grants the required permissions to the user (for more details, see About the Protection Database). CellServDB file (server) effect of wrong information in The consequences of missing or incorrect information in the CellServDB file are as follows: If the file does not list a machine, then it is effectively not a database server machine even if the database server processes are running. The Ubik coordinator does not send it database updates or include it in the count that establishes a majority. 
It does not participate in Ubik elections, and so refuses to distribute database information to any client machines that happen to contact it (which they can do if their /usr/vice/etc/CellServDB file lists it). Users of the client machine must wait for a timeout before they can contact a correctly functioning database server machine. If the file lists a machine that is not running the database server processes, the consequences can be serious. The Ubik coordinator cannot send it database updates, but includes it in the count that establishes a majority. If valid secondary sites go down and stop sending their votes to the coordinator, it can wrongly appear that the coordinator no longer has the majority it needs. The resulting election of a new coordinator causes a service outage during which information from the database becomes unavailable. Furthermore, the lack of a vote from the incorrectly listed site can disturb the election, if it makes the other sites believe that a majority of sites are not voting for the new coordinator. A more minor consequence is that non-database server processes attempt to contact the database server processes on the machine. They experience a timeout delay because the processes are not running. Note that the /usr/afs/etc/CellServDB file on a server machine is not the same as the /usr/vice/etc/CellServDB file on a client machine. The client version includes entries for foreign cells as well as the local cell. However, it is important to update both versions of the file whenever you change your cell's database server machines. A server machine that is also a client needs to have both files, and you need to update them both. For more information on maintaining the client version of the CellServDB file, see Maintaining Knowledge of Database Server Machines. 
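The majority that Ubik requires is more than half of the machines listed in the CellServDB file, which is why an incorrectly listed machine inflates the count. A small sketch of the arithmetic (this ignores the tie-breaking details of a real Ubik election):

```shell
# Number of votes needed for a Ubik majority, given the number of
# database server machines listed in CellServDB: floor(n/2) + 1.
majority() {
    echo $(( $1 / 2 + 1 ))
}

majority 3   # prints 2: two of three sites must vote together
majority 4   # prints 3: a stale fourth entry raises the bar
majority 5   # prints 3
```

With three correctly listed machines, two must agree; adding a fourth, nonfunctional entry raises the majority to three of the three working sites, so a single additional failure triggers the kind of unnecessary election described above.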
system control machine CellServDB file, distributing to server machines distribution of CellServDB file (server) Update Server CellServDB file (server), distributing Distributing the Server CellServDB File To avoid the negative consequences of incorrect information in the /usr/afs/etc/CellServDB file, you must update it on all of your cell's server machines every time you add or remove a database server machine. The OpenAFS Quick Beginnings provides complete instructions for installing or removing a database server machine and for updating the CellServDB file in that context. This section explains how to distribute the file to your server machines and how to make other cells aware of the changes if you participate in the AFS global name space. If you use the United States edition of AFS, use the Update Server to distribute the central copy of the server CellServDB file stored on the cell's system control machine. If you use the international edition of AFS, instead change the file on each server machine individually. For further discussion of the system control machine and why international cells must not use it for files in the /usr/afs/etc directory, see The System Control Machine. For instructions on configuring the Update Server when using the United States version of AFS, see the OpenAFS Quick Beginnings. To avoid formatting errors that can cause problems, always use the bos addhost and bos removehost commands, rather than editing the file directly. You must also restart the database server processes running on the machine, to initiate a coordinator election among the new set of database server machines. This step is included in the instructions that appear in To add a database server machine to the CellServDB file and To remove a database server machine from the CellServDB file. For instructions on displaying the contents of the file, see To display a cell's database server machines. 
If you make your cell accessible to foreign users as part of the AFS global name space, you also need to inform other cells when you change your cell's database server machines. The AFS Support group maintains a CellServDB file that lists all cells that participate in the AFS global name space, and can change your cell's entry at your request. For further details, see Making Your Cell Visible to Others. Another way to advertise your cell's database server machines is to maintain a copy of the file at the conventional location in your AFS filespace, /afs/cellname/service/etc/CellServDB.local. For further discussion, see The Third Level. bos commands listhosts commands bos listhosts CellServDB file (server) displaying displaying CellServDB file (server) database server machine displaying list in server CellServDB file displaying database server machines in server CellServDB file To display a cell's database server machines Issue the bos listhosts command. If you have maintained the file properly, the output is the same on every server machine, but the machine name argument enables you to check various machines if you wish. % bos listhosts <machine name> [<cell name>] where listh Is the shortest acceptable abbreviation of listhosts. machine name Specifies the server machine from which to display the /usr/afs/etc/CellServDB file. cell name Specifies the complete Internet domain name of a foreign cell. You must already know the name of at least one server machine in the cell, to provide as the machine name argument. The output lists the machines in the order they appear in the CellServDB file on the specified server machine. It assigns each one a Host index number, as in the following example. There is no implied relationship between the index and a machine's IP address, name, or role as Ubik coordinator or secondary site. 
% bos listhosts fs1.abc.com Cell name is abc.com Host 1 is fs1.abc.com Host 2 is fs7.abc.com Host 3 is fs4.abc.com The output lists machines by name rather than IP address as long as the naming service (such as the Domain Name Service or local host table) is functioning properly. To display IP addresses, log in to a server machine as the local superuser root and use a text editor or display command, such as the cat command, to view the /usr/afs/etc/CellServDB file. adding database server machine to server CellServDB file database server machine adding to server CellServDB file CellServDB file (server) adding database server machine adding CellServDB file (server) entry for database server machine database server machine CellServDB file (server) entry adding database server process restarting after adding entry to server CellServDB file Protection Server restarting after adding entry to server CellServDB file Authentication Server restarting after adding entry to server CellServDB file VL Server restarting after adding entry to server CellServDB file Backup Server restarting after adding entry to server CellServDB file bos commands addhost commands bos addhost To add a database server machine to the CellServDB file Verify that you are listed in the /usr/afs/etc/UserList file. If necessary, issue the bos listusers command, which is fully described in To display the users in the UserList file. % bos listusers <machine name> Issue the bos addhost command to add each new database server machine to the CellServDB file. If you use the United States edition of AFS, specify the system control machine as machine name. (If you have forgotten which machine is the system control machine, see The Output on the System Control Machine.) If you use the international edition of AFS, repeat the command on each of your cell's server machines in turn by substituting its name for machine name. 
% bos addhost <machine name> <host name>+ where addh Is the shortest acceptable abbreviation of addhost. machine name Names the system control machine, if you are using the United States edition of AFS. If you are using the international edition of AFS, it names each of your server machines in turn. host name Specifies the fully qualified hostname of each database server machine to add to the CellServDB file (for example: fs4.abc.com). The BOS Server uses the gethostbyname() routine to obtain each machine's IP address and records both the name and address automatically. Restart the Authentication Server, Backup Server, Protection Server, and VL Server on every database server machine, so that the new set of machines participate in the election of a new Ubik coordinator. The instruction uses the conventional names for the processes; make the appropriate substitution if you use different process names. For complete syntax, see Stopping and Immediately Restarting Processes. Important: Repeat the following command in quick succession on all of the database server machines. % bos restart <machine name> buserver kaserver ptserver vlserver Edit the /usr/vice/etc/CellServDB file on each of your cell's client machines. For instructions, see Maintaining Knowledge of Database Server Machines. If you participate in the AFS global name space, please have one of your cell's designated site contacts register the changes you have made with the AFS Product Support group. If you maintain a central copy of your cell's server CellServDB file in the conventional location (/afs/cellname/service/etc/CellServDB.local), edit the file to reflect the change. 
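After updating the file everywhere, you can confirm that every server machine lists the same set of hosts by capturing and comparing bos listhosts output from each machine. A sketch using the output format shown earlier (the saved-output files and awk field positions are assumptions for illustration):

```shell
# Save "bos listhosts" output from two machines (the sample text here
# mirrors the example output shown earlier in this section).
dir=$(mktemp -d)
cat > "$dir/listhosts.fs1" <<'EOF'
Cell name is abc.com
Host 1 is fs1.abc.com
Host 2 is fs7.abc.com
Host 3 is fs4.abc.com
EOF
cp "$dir/listhosts.fs1" "$dir/listhosts.fs2"

# Extract just the host names from one capture.
hosts=$(awk '$1 == "Host" && $3 == "is" {print $4}' "$dir/listhosts.fs1")
echo "$hosts"

# Identical captures mean the server CellServDB files agree.
diff "$dir/listhosts.fs1" "$dir/listhosts.fs2" >/dev/null \
    && echo "CellServDB consistent"
```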
removing database server machine from server CellServDB file database server machine removing from server CellServDB file CellServDB file (server) removing database server machine database server machine CellServDB file (server) entry removing database server process restarting after removing entry from server CellServDB file Protection Server restarting after removing entry from server CellServDB file Authentication Server restarting after removing entry from server CellServDB file VL Server restarting after removing entry from server CellServDB file Backup Server restarting after removing entry from server CellServDB file bos commands removehost commands bos removehost To remove a database server machine from the CellServDB file Verify that you are listed in the /usr/afs/etc/UserList file. If necessary, issue the bos listusers command, which is fully described in To display the users in the UserList file. % bos listusers <machine name> Issue the bos removehost command to remove each database server machine from the CellServDB file. If you use the United States edition of AFS, specify the system control machine as machine name. (If you have forgotten which machine is the system control machine, see The Output on the System Control Machine.) If you use the international edition of AFS, repeat the command on each of your cell's server machines in turn by substituting its name for machine name. % bos removehost <machine name> <host name>+ where removeh Is the shortest acceptable abbreviation of removehost. machine name Names the system control machine, if you are using the United States edition of AFS. If you are using the international edition of AFS, it names each of your server machines in turn. host name Specifies the fully qualified hostname of each database server machine to remove from the CellServDB file (for example: fs4.abc.com). 
Restart the Authentication Server, Backup Server, Protection Server, and VL Server on every database server machine, so that the new set of machines participate in the election of a new Ubik coordinator. The instruction uses the conventional names for the processes; make the appropriate substitution if you use different process names. For complete syntax, see Stopping and Immediately Restarting Processes. Important: Repeat the following command in quick succession on all of the database server machines. % bos restart <machine name> buserver kaserver ptserver vlserver Edit the /usr/vice/etc/CellServDB file on each of your cell's client machines. For instructions, see Maintaining Knowledge of Database Server Machines. If you participate in the AFS global name space, please have one of your cell's designated site contacts register the changes you have made with the AFS Product Support group. If you maintain a central copy of your cell's server CellServDB file in the conventional location (/afs/cellname/service/etc/CellServDB.local), edit the file to reflect the change. Managing Authentication and Authorization Requirements This section describes how the AFS server processes guarantee that only properly authorized users perform privileged commands, by performing authorization checking and mutually authenticating with their clients. It explains how you can control authorization checking requirements on a per-machine or per-cell basis, and how to bypass mutual authentication when issuing commands. authorization checking compared to authentication authentication compared to authorization checking privileged commands commands privileged, defined anonymous user identity assigned to unauthenticated user authorization checking defined Authentication versus Authorization Many AFS commands are privileged in that the AFS server process invoked by the command performs it only for a properly authorized user. 
The server process performs the following two tests to determine if someone is properly authorized: In the authentication test, the server process mutually authenticates with the command interpreter, Cache Manager, or other client process that is acting on behalf of a user or application. The goal of this test is to determine who is issuing the command. The server process verifies that the issuer really is who he or she claims to be, by examining the server ticket and other components of the issuer's token. (Secondarily, it allows the client process to verify that the server process is genuine.) If the issuer has no token or otherwise fails the test, the server process assigns him or her the identity anonymous, a completely unprivileged user. For a more complete description of mutual authentication, see A More Detailed Look at Mutual Authentication. Many individual commands enable you to bypass the authentication test by assuming the anonymous identity without even attempting to mutually authenticate. Note, however, that this is futile if the command is privileged and the server process is still performing the authorization test, because in that case the process refuses to execute privileged commands for the anonymous user. In the authorization test, the server process determines if the issuer is authorized to use the command by consulting a list of privileged users. The goal of this test is to determine what the issuer is allowed to do. Different server processes consult different lists of users, as described in Managing Administrative Privilege. The server process refuses to execute any privileged command for an unauthorized issuer. If a command has no privilege requirements, the server process skips this step and executes it immediately. Never place the anonymous user or the system:anyuser group on a privilege list; it makes authorization checking meaningless. 
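The two tests can be summarized in a short sketch. This is a hypothetical illustration, not AFS code: the `is_authorized` function and the sample user list are invented stand-ins for the server process's own logic and its privilege list (such as the /usr/afs/etc/UserList file).

```shell
#!/bin/sh
# Sketch of the authorization test. By this point the authentication
# test has already run: an issuer who presented no valid token was
# assigned the identity "anonymous".
is_authorized() {
  identity="$1"   # identity taken from the issuer's token, or "anonymous"
  userlist="$2"   # file listing privileged users, one per line
  # The command runs only for an issuer on the privilege list.
  grep -qxF "$identity" "$userlist"
}

userlist=$(mktemp)
printf 'admin\n' > "$userlist"
is_authorized admin "$userlist"     && echo "admin: allowed"
is_authorized anonymous "$userlist" || echo "anonymous: denied"
rm -f "$userlist"
```

Because anonymous is never on a privilege list, bypassing authentication gains nothing while authorization checking is enabled, exactly as the text above notes.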
You can use the bos setauth command to control whether the server processes on a server machine check for authorization. Other server machines are not affected. Keep in mind that turning off authorization checking is a grave security risk, because the server processes on that machine perform any action for any user. Controlling Authorization Checking on a Server Machine Disabling authorization checking is a serious breach of security because it means that the AFS server processes on a file server machine perform any action for any user, even the anonymous user. The only time it is common to disable authorization checking is when installing a new file server machine (see the OpenAFS Quick Beginnings). It is necessary then because it is not possible to configure all of the necessary security mechanisms before performing other actions that normally make use of them. For greatest security, work at the console of the machine you are installing and enable authorization checking as soon as possible. During normal operation, the only reason to disable authorization checking is if an error occurs with the server encryption keys, leaving the servers unable to authenticate users properly. For instructions on handling key-related emergencies, see Handling Server Encryption Key Emergencies. You control authorization checking on each file server machine separately; turning it on or off on one machine does not affect the others. Because client machines generally choose a server process at random, it is hard to predict what authorization checking conditions prevail for a given command unless you make the requirement the same on all machines. To turn authorization checking on or off for the entire cell, you must repeat the appropriate command on every file server machine. 
The server processes constantly monitor the directory /usr/afs/local on their local disks to determine if they need to check for authorization. If the file called NoAuth appears in that directory, then the servers do not check for authorization. When it is not present (the usual case), they perform authorization checking. Control the presence of the NoAuth file through the BOS Server. When you disable authorization checking with the bos setauth command (or, during installation, by putting the -noauth flag on the command that starts up the BOS Server), the BOS Server creates the zero-length NoAuth file. When you reenable authorization checking, the BOS Server removes the file. To disable authorization checking on a server machine Verify that you are listed in the /usr/afs/etc/UserList file. If necessary, issue the bos listusers command, which is fully described in To display the users in the UserList file. % bos listusers <machine name> Issue the bos setauth command to disable authorization checking. % bos setauth <machine name> off where seta Is the shortest acceptable abbreviation of setauth. machine name Specifies the file server machine on which server processes do not check for authorization. To enable authorization checking on a server machine Reenable authorization checking. (No privilege is required because the machine is not currently checking for authorization.) For detailed syntax information, see the preceding section. % bos setauth <machine name> on Bypassing Mutual Authentication for an Individual Command Several of the server processes allow any user (not just system administrators) to disable mutual authentication when issuing a command. 
The server process treats the issuer as the unauthenticated user anonymous. The facilities for preventing mutual authentication are provided for use in emergencies (such as the key emergency discussed in Handling Server Encryption Key Emergencies). During normal circumstances, authorization checking is turned on, making it useless to prevent authentication: the server processes refuse to perform privileged commands for the user anonymous. It can be useful to prevent authentication when authorization checking is turned off. The very act of trying to authenticate can cause problems if the server cannot understand a particular encryption key, as is likely to happen in a key emergency. To bypass mutual authentication for bos, kas, pts, and vos commands Provide the -noauth flag, which is available on many of the commands in the suites. To verify that a command accepts the flag, issue the help command in its suite, or consult the command's reference page in the OpenAFS Administration Reference (the reference page also specifies the shortest acceptable abbreviation for the flag on each command). The suites' apropos and help commands do not themselves accept the flag. You can bypass mutual authentication for all kas commands issued during an interactive session by including the -noauth flag on the kas interactive command. If you have already entered interactive mode with an authenticated identity, issue the (kas) noauthentication command to assume the anonymous identity. To bypass mutual authentication for fs commands This is not possible, except by issuing the unlog command to discard your tokens before issuing the fs command. 
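The NoAuth mechanism described in this section amounts to a simple file-existence check. The following is a minimal sketch, assuming a temporary directory as a stand-in for /usr/afs/local; the function name `auth_required` is invented for illustration.

```shell
#!/bin/sh
# Sketch of the check the server processes perform. LOCALDIR stands in
# for /usr/afs/local so the sketch can run anywhere.
LOCALDIR=$(mktemp -d)

auth_required() {
  # Authorization checking is in force unless the zero-length NoAuth
  # file exists in the local directory.
  [ ! -f "$LOCALDIR/NoAuth" ]
}

auth_required && echo "authorization checking enabled"
touch "$LOCALDIR/NoAuth"       # effect of "bos setauth <machine> off"
auth_required || echo "authorization checking disabled"
rm -f "$LOCALDIR/NoAuth"       # effect of "bos setauth <machine> on"
auth_required && echo "authorization checking enabled again"
rmdir "$LOCALDIR"
```

On a real server machine you would never manipulate NoAuth directly; the bos setauth command directs the BOS Server to create or remove it, which keeps the change visible and auditable through the BOS Server.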
Adding or Removing Disks and Partitions AFS makes it very easy to add storage space to your cell, just by adding disks to existing file server machines. This section explains how to install or remove a disk used to store AFS volumes. (Another way to add storage space is to install additional server machines, as instructed in the OpenAFS Quick Beginnings.) Both adding and removing a disk cause at least a brief file system outage, because you must restart the fs process to have it recognize the new set of server partitions. Some operating systems require that you shut the machine off before adding or removing a disk, in which case you must shut down all of the AFS server processes first. Otherwise, the AFS-related aspects of adding or removing a disk are not complicated, so the duration of the outage depends mostly on how long it takes to install or remove the disk itself. The following instructions for installing a new disk completely prepare it to house AFS volumes. You can then use the vos create command to create new volumes, or the vos move command to move existing ones from other partitions. For instructions, see Creating Read/write Volumes and Moving Volumes. The instructions for removing a disk are basically the reverse of the installation instructions, but include extra steps that protect against data loss. A server machine can house up to 256 AFS server partitions, each one mounted at a directory with a name of the form /vicepindex, where index is one or two lowercase letters. By convention, the first partition on a machine is mounted at /vicepa, the second at /vicepb, and so on to the twenty-sixth at /vicepz. Additional partitions are mounted at /vicepaa through /vicepaz and so on up to /vicepiv. Using the letters consecutively is not required, but is simpler. Mount each /vicep directory directly under the local file system's root directory ( / ), not as a subdirectory of any other directory; for example, /usr/vicepa is not an acceptable location. 
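The naming convention can be illustrated by generating the full sequence of valid partition names. This is an illustrative sketch, not an AFS tool; the function name `list_vicep` is invented.

```shell
#!/bin/sh
# Print the 256 conventional AFS server partition names in order:
# /vicepa ... /vicepz, then /vicepaa ... /vicepiv.
list_vicep() {
  awk 'BEGIN {
    s = "abcdefghijklmnopqrstuvwxyz"
    n = 0
    # Single-letter names: /vicepa through /vicepz (26 names)
    for (i = 1; i <= 26; i++) { print "/vicep" substr(s, i, 1); n++ }
    # Double-letter names, stopping at the 256-partition limit
    for (i = 1; i <= 26 && n < 256; i++)
      for (j = 1; j <= 26 && n < 256; j++) {
        print "/vicep" substr(s, i, 1) substr(s, j, 1); n++
      }
  }'
}

# Show the boundaries of the sequence: lines 1, 26, 27, and 256.
list_vicep | sed -n '1p;26p;27p;256p'
```

Running the last line prints /vicepa, /vicepz, /vicepaa, and /vicepiv, confirming that the two-letter names end at /vicepiv when the 256-partition limit is reached.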
You must also map the directory to the partition's device name in the file server machine's file systems registry file (/etc/fstab or equivalent). These instructions assume that the machine's AFS initialization file includes the following command to restart the BOS Server after each reboot. The BOS Server starts the other AFS server processes listed in the local /usr/afs/local/BosConfig file. For information on the bosserver command's optional arguments, see its reference page in the OpenAFS Administration Reference. /usr/afs/bin/bosserver & To add and mount a new disk to house AFS volumes Become the local superuser root on the machine, if you are not already, by issuing the su command. % su root Password: root_password Decide how many AFS partitions to divide the new disk into and the names of the directories at which to mount them (the introduction to this section describes the naming conventions). To display the names of the existing server partitions on the machine, issue the vos listpart command. Include the -localauth flag because you are logged in as the local superuser root but do not necessarily have administrative tokens. # vos listpart <machine name> -localauth where listp Is the shortest acceptable abbreviation of listpart. machine name Names the local file server machine. -localauth Constructs a server ticket using a key from the local /usr/afs/etc/KeyFile file. The vos command interpreter presents it to the Volume Server and Volume Location Server during mutual authentication. Create each directory at which to mount a partition. 
# mkdir /vicepxx Using a text editor, create an entry in the machine's file systems registry file (/etc/fstab or equivalent) for each new disk partition, mapping its device name to the directory you created in the previous step. Refer to existing entries in the file to learn the proper format, which varies for different operating systems. If the operating system requires that you shut off the machine to install a new disk, issue the bos shutdown command to shut down all AFS server processes other than the BOS Server (it terminates safely when you shut off the machine). Include the -localauth flag because you are logged in as the local superuser root but do not necessarily have administrative tokens. For a complete description of the command, see To stop processes temporarily. # bos shutdown <machine name> -localauth [-wait] If necessary, shut off the machine. Install and format the new disk according to the instructions provided by the disk and operating system vendors. If necessary, edit the disk's partition table to reflect the changes you made to the file systems registry file in step 4; consult the operating system documentation for instructions. If you shut the machine off in step 6, turn it on. Otherwise, issue the bos restart command to restart the fs process, forcing it to recognize the new set of server partitions. Include the -localauth flag because you are logged in as the local superuser root but do not necessarily have administrative tokens. For complete instructions for the bos restart command, see Stopping and Immediately Restarting Processes. # bos restart <machine name> fs -localauth Issue the bos status command to verify that all server processes are running correctly. 
For more detailed instructions, see Displaying Process Status and Information from the BosConfig File. # bos status <machine name> To unmount and remove a disk housing AFS volumes Verify that you are listed in the /usr/afs/etc/UserList file. If necessary, issue the bos listusers command, which is fully described in To display the users in the UserList file. % bos listusers <machine name> Issue the vos listvol command to list the volumes housed on each partition of each disk you are about to remove, in preparation for removing them or moving them to other partitions. For detailed instructions, see Displaying Volume Headers. % vos listvol <machine name> [<partition name>] Move any volume you wish to retain in the file system to another partition. You can move only read/write volumes. For more detailed instructions, and for instructions on moving read-only and backup volumes, see Moving Volumes. % vos move <volume name or ID> \ <machine name on source> <partition name on source> \ <machine name on destination> <partition name on destination> (Optional) If there are any volumes you do not wish to retain, back them up using the vos dump command or the AFS Backup System. See Dumping and Restoring Volumes or Backing Up Data, respectively. Become the local superuser root on the machine, if you are not already, by issuing the su command. % su root Password: root_password Issue the umount command, repeating it for each partition on the disk to be removed. # cd / # umount /dev/<partition_block_device_name> Using a text editor, remove or comment out each partition's entry from the machine's file systems registry file (/etc/fstab or equivalent). 
Remove the /vicep directory associated with each partition. # rmdir /vicepxx If the operating system requires that you shut off the machine to remove a disk, issue the bos shutdown command to shut down all AFS server processes other than the BOS Server (it terminates safely when you shut off the machine). Include the -localauth flag because you are logged in as the local superuser root but do not necessarily have administrative tokens. For a complete description of the command, see To stop processes temporarily. # bos shutdown <machine name> -localauth [-wait] If necessary, shut off the machine. Remove the disk according to the instructions provided by the disk and operating system vendors. If necessary, edit the disk's partition table to reflect the changes you made to the file systems registry file in step 7; consult the operating system documentation for instructions. If you shut the machine off in step 10, turn it on. Otherwise, issue the bos restart command to restart the fs process, forcing it to recognize the new set of server partitions. Include the -localauth flag because you are logged in as the local superuser root but do not necessarily have administrative tokens. For complete instructions for the bos restart command, see Stopping and Immediately Restarting Processes. # bos restart <machine name> fs -localauth Issue the bos status command to verify that all server processes are running correctly. For more detailed instructions, see Displaying Process Status and Information from the BosConfig File. 
# bos status <machine name> Managing Server IP Addresses and VLDB Server Entries The AFS support for multihomed file server machines is largely automatic. The File Server process records the IP addresses of its file server machine's network interfaces in the local /usr/afs/local/sysid file and also registers them in a server entry in the Volume Location Database (VLDB). The sysid file and server entry are identified by the same unique number, which creates an association between them. When the Cache Manager requests volume location information, the Volume Location (VL) Server provides all of the interfaces registered for each server machine that houses the volume. This enables the Cache Manager to make use of multiple addresses when accessing AFS data stored on a multihomed file server machine. If you wish, you can control which interfaces the File Server registers in its VLDB server entry by creating two files in the local /usr/afs/local directory: NetInfo and NetRestrict. Each time the File Server restarts, it builds a list of the local machine's interfaces by reading the NetInfo file, if it exists. If you do not create the file, the File Server uses the list of network interfaces configured with the operating system. It then removes from the list any addresses that appear in the NetRestrict file, if it exists. The File Server records the resulting list in the sysid file and registers the interfaces in the VLDB server entry that has the same unique identifier. 
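The filtering the File Server performs can be approximated as set subtraction over the two files. The sketch below is a simplification using invented sample addresses and a temporary directory as a stand-in for /usr/afs/local; it also does not implement the wildcard value 255 that the NetRestrict file accepts (described later in this section).

```shell
#!/bin/sh
# Sketch: derive the interface list the way the File Server does at
# startup. Start from NetInfo (or, absent that file, the OS interface
# list), then drop every address listed in NetRestrict.
dir=$(mktemp -d)
printf '192.12.107.33\n192.12.105.100\n138.255.0.1\n' > "$dir/NetInfo"
printf '192.12.105.100\n' > "$dir/NetRestrict"

# Keep each NetInfo address that does not appear in NetRestrict:
# -F fixed strings, -x whole-line match, -v invert, -f pattern file.
grep -vxFf "$dir/NetRestrict" "$dir/NetInfo"

rm -rf "$dir"
```

The result here is 192.12.107.33 and 138.255.0.1; on a real machine the File Server would record such a list in the sysid file and register it in its VLDB server entry.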
On database server machines, the NetInfo and NetRestrict files also determine which interfaces the Ubik database synchronization library uses when communicating with the database server processes running on other database server machines. There is a maximum number of IP addresses in each server entry, as documented in the OpenAFS Release Notes. If a multihomed file server machine has more interfaces than the maximum, AFS simply ignores the excess ones. It is probably appropriate for such machines to use the NetInfo and NetRestrict files to control which interfaces are registered. If for some reason the sysid file no longer exists, the File Server creates a new one with a new unique identifier. When the File Server registers the contents of the new file, the Volume Location (VL) Server normally recognizes automatically that the new file corresponds to an existing server entry, and overwrites the existing server entry with the new file contents and identifier. However, it is best not to remove the sysid file if that can be avoided. Similarly, it is important not to copy the sysid file from one file server machine to another. If you commonly copy the contents of the /usr/afs directory from an existing machine as part of installing a new file server machine, be sure to remove the sysid file from the /usr/afs/local directory on the new machine before starting the File Server. There are certain cases where the VL Server cannot determine whether it is appropriate to overwrite an existing server entry with a new sysid file's contents and identifier. It then refuses to allow the File Server to register the interfaces, which prevents the File Server from starting. This can happen if, for example, a new sysid file includes two interfaces that currently are registered by themselves in separate server entries. 
In such cases, error messages in the /usr/afs/log/VLLog file on the VL Server machine and in the /usr/afs/log/FileLog file on the file server machine indicate that you need to use the vos changeaddr command to resolve the problem. Contact the AFS Product Support group for instructions and assistance. Except in this type of rare error case, the only appropriate use of the vos changeaddr command is to remove a VLDB server entry completely when you remove a file server machine from service. The VLDB can accommodate a maximum number of server entries, as specified in the OpenAFS Release Notes. Removing obsolete entries makes it possible to allocate server entries for new file server machines as required. See the instructions that follow. Do not use the vos changeaddr command to change the list of interfaces registered in a VLDB server entry. To change a file server machine's IP addresses and server entry, see the instructions that follow. To create or edit the server NetInfo file Become the local superuser root on the machine, if you are not already, by issuing the su command. % su root Password: root_password Using a text editor, open the /usr/afs/local/NetInfo file. Place one IP address in dotted decimal format (for example, 192.12.107.33) on each line. The order of entries is not significant. If you want the File Server to start using the revised list immediately, use the bos restart command to restart the fs process. For instructions, see Stopping and Immediately Restarting Processes. To create or edit the server NetRestrict file Become the local superuser root on the machine, if you are not already, by issuing the su command. 
% su root Password: root_password Using a text editor, open the /usr/afs/local/NetRestrict file. Place one IP address in dotted decimal format on each line. The order of the addresses is not significant. Use the value 255 as a wildcard that represents all possible addresses in that field. For example, the entry 192.12.105.255 indicates that the File Server does not register any of the addresses in the 192.12.105 subnet. If you want the File Server to start using the revised list immediately, use the bos restart command to restart the fs process. For instructions, see Stopping and Immediately Restarting Processes. To display all server entries from the VLDB Issue the vos listaddrs command to display all server entries from the VLDB. % vos listaddrs where lista is the shortest acceptable abbreviation of listaddrs. The output displays all server entries from the VLDB, each on its own line. If a file server machine is multihomed, all of its registered addresses appear on the line. The first one is the one reported as a volume's site in the output from the vos examine and vos listvldb commands. VLDB server entries record IP addresses, and the command interpreter has the local name service (either a process like the Domain Name Service or a local host table) translate them to hostnames before displaying them. If an IP address appears in the output, it is not possible to translate it. The existence of an entry does not necessarily indicate that the machine is still an active file server machine. To remove obsolete server entries, see the following instructions. To remove obsolete server entries from the VLDB Verify that you are listed in the /usr/afs/etc/UserList file. If necessary, issue the bos listusers command, which is fully described in To display the users in the UserList file. 
% bos listusers <machine name> Issue the vos changeaddr command to remove a server entry from the VLDB. % vos changeaddr <original IP address> -remove where ch Is the shortest acceptable abbreviation of changeaddr. original IP address Specifies one of the IP addresses currently registered for the file server machine in the VLDB. Any of a multihomed file server machine's addresses are acceptable to identify it. -remove Removes the server entry. To change a server machine's IP addresses Verify that you are listed in the /usr/afs/etc/UserList file. If necessary, issue the bos listusers command, which is fully described in To display the users in the UserList file. % bos listusers <machine name> If the machine is the system control machine or a binary distribution machine, and you are also changing its hostname, redefine all relevant upclient processes on other server machines to refer to the new hostname. Use the bos delete and bos create commands as instructed in Creating and Removing Processes. If the machine is a database server machine, edit its entry in the /usr/afs/etc/CellServDB file on every server machine in the cell to list one of the new IP addresses. If you use the United States edition of AFS, you can edit the file on the system control machine and wait the required time (by default, five minutes) for the Update Server to distribute the changed file to all server machines. If the machine is a database server machine, issue the bos shutdown command to stop all server processes. If the machine is also a file server, the volumes on it are inaccessible during this time. For a complete description of the command, see To stop processes temporarily. % bos shutdown <machine name> Use the utilities provided with the operating system to change one or more of the machine's IP addresses. If appropriate, edit the /usr/afs/local/NetInfo file, the /usr/afs/local/NetRestrict file, or both, to reflect the changed addresses. Instructions appear earlier in this section. 
If the machine is a database server machine, issue the bos restart command to restart all server processes on the machine. For complete instructions for the bos restart command, see Stopping and Immediately Restarting Processes. % bos restart <machine name> -all At the same time, issue the bos restart command on all other database server machines in the cell to restart the database server processes only (the Authentication, Backup, Protection, and Volume Location Servers). Issue the commands in quick succession so that all of the database server processes vote in the quorum election. % bos restart <machine name> kaserver buserver ptserver vlserver If you are changing IP addresses on every database server machine in the cell, you must also issue the bos restart command on every file server machine in the cell to restart the fs process. If the machine is not a database server machine, issue the bos restart command to restart the fs process (if the machine is a database server, you already restarted the process in the previous step). The File Server automatically compiles a new list of interfaces, records them in the /usr/afs/local/sysid file, and registers them in its VLDB server entry. % bos restart <machine name> fs If the machine is a database server machine, edit its entry in the /usr/vice/etc/CellServDB file on every client machine in the cell to list one of the new IP addresses. Instructions appear in Maintaining Knowledge of Database Server Machines. If there are machine entries in the Protection Database for the machine's previous IP addresses, use the pts rename command to change them to the new addresses. For instructions, see Changing a Protection Database Entry's Name. Rebooting a Server Machine You can reboot a server machine either by typing the appropriate commands at its console or by issuing the bos exec command on a remote machine. 
Remote rebooting can be more convenient, because you do not need to leave your present location, but you cannot track the progress of the reboot as you can at the console. Remote rebooting is possible because the server machine's operating system recognizes the BOS Server, which executes the bos exec command, as the local superuser root. Rebooting server machines is part of routine maintenance in some cells, and some instructions in the AFS documentation include it as a step. It is certainly not intended to be the standard method for recovering from AFS-related problems, however, but only a last resort when the machine is unresponsive and you have tried all other reasonable options. Rebooting causes a service outage. If the machine stores volumes, they are all inaccessible until the reboot completes and the File Server reattaches them. If the machine is a database server machine, information from the databases can become unavailable during the reelection of the synchronization site for each database server process; the VL Server outage generally has the greatest impact, because the Cache Manager must be able to access the VLDB to fetch AFS data. By convention, a server machine's AFS initialization file includes the following command to restart the BOS Server after each reboot. It starts the other AFS server processes listed in the local /usr/afs/local/BosConfig file. These instructions assume that the initialization file includes the command. /usr/afs/bin/bosserver & To reboot a file server machine from its console Become the local superuser root on the machine, if you are not already, by issuing the su command. % su root Password: root_password Issue the bos shutdown command to shut down all AFS server processes other than the BOS Server, which terminates safely when you reboot the machine. Include the -localauth flag because you are logged in as the local superuser root but do not necessarily have administrative tokens. 
For a complete description of the command, see To stop processes temporarily. # bos shutdown <machine name> -localauth [-wait] Reboot the machine. On many system types, the appropriate command is shutdown, but the appropriate options vary; consult your UNIX administrator's guide. # shutdown To reboot a file server machine remotely Verify that you are listed in the /usr/afs/etc/UserList file on the machine you are rebooting. If necessary, issue the bos listusers command, which is fully described in To display the users in the UserList file. % bos listusers <machine name> Issue the bos shutdown command to halt AFS server processes other than the BOS Server, which terminates safely when you turn off the machine. For a complete description of the command, see To stop processes temporarily. % bos shutdown <machine name> [-wait] Issue the bos exec command to reboot the machine remotely. % bos exec <machine name> reboot_command where machine name Names the file server machine to reboot. reboot_command Is the rebooting command for the machine's operating system. The shutdown command is appropriate on many system types, but consult your operating system documentation.