Installing the First AFS Machine

This chapter describes how to install the first AFS machine in your cell, configuring it as both a file server machine and a client machine. After completing all procedures in this chapter, you can remove the client functionality if you wish, as described in Removing Client Functionality.

To install additional file server machines after completing this chapter, see Installing Additional Server Machines. To install additional client machines after completing this chapter, see Installing Additional Client Machines.

Requirements and Configuration Decisions

The instructions in this chapter assume that you meet the following requirements.

You are logged onto the machine's console as the local superuser root
A standard version of one of the operating systems supported by the current version of AFS is running on the machine
You have either installed the provided OpenAFS packages for your system, have access to a binary distribution tarball, or have successfully built OpenAFS from source
You have a Kerberos v5 realm running for your site. If you are working with an existing cell which uses kaserver or Kerberos v4 for authentication, please see kaserver and Legacy Kerberos v4 Authentication for the modifications required to this installation procedure.
You have NTP or a similar time service deployed to ensure rough clock synchronisation between your clients and servers.

You must make the following configuration decisions while installing the first AFS machine. To speed the installation itself, it is best to make the decisions before beginning. See the chapter in the OpenAFS Administration Guide about issues in cell administration and configuration for detailed guidelines.

Select the first AFS machine
Select the cell name
Decide which partitions or logical volumes to configure as AFS server partitions, and choose the directory names on which to mount them
Decide how big to make the client cache
Decide how to configure the top levels of your cell's AFS filespace

This chapter is divided into three large sections corresponding to the three parts of installing the first AFS machine. Perform all of the steps in the order they appear. Each functional section begins with a summary of the procedures to perform.
The sections are as follows:

Installing server functionality (begins in Overview: Installing Server Functionality)
Installing client functionality (begins in Overview: Installing Client Functionality)
Configuring your cell's filespace, establishing further security mechanisms, and enabling access to foreign cells (begins in Overview: Completing the Installation of the First AFS Machine)

Overview: Installing Server Functionality

In the first phase of installing your cell's first AFS machine, you install file server and database server functionality by performing the following procedures:

Choose which machine to install as the first AFS machine
Create AFS-related directories on the local disk
Incorporate AFS modifications into the machine's kernel
Configure partitions or logical volumes for storing AFS volumes
On some system types, install and configure an AFS-modified version of the fsck program
If the machine is to remain a client machine, incorporate AFS into its authentication system
Start the Basic OverSeer (BOS) Server
Define the cell name and the machine's cell membership
Start the database server processes: Backup Server, Protection Server, and Volume Location (VL) Server
Configure initial security mechanisms
Start the fs process, which incorporates three component processes: the File Server, Volume Server, and Salvager
Optionally, start the server portion of the Update Server

Choosing the First AFS Machine

The first AFS machine you install must have sufficient disk space to store AFS volumes. To take best advantage of AFS's capabilities, store client-side binaries as well as user files in volumes. When you later install additional file server machines in your cell, you can distribute these volumes among the different machines as you see fit.

These instructions configure the first AFS machine as a database server machine, the binary distribution machine for its system type, and the cell's system control machine. For a description of these roles, see the OpenAFS Administration Guide.

Installation of additional machines is simplest if the first machine has the lowest IP address of any database server machine you currently plan to install. If you later install database server functionality on a machine with a lower IP address, you must first update the /usr/vice/etc/CellServDB file on all of your cell's client machines. For more details, see Installing Database Server Functionality.

Creating AFS Directories

If you are installing from packages (such as Debian .deb or Fedora/SuSe .rpm files), you should now install all of the available OpenAFS packages for your system type. Typically, these will include packages for client and server functionality, and a separate package containing a suitable kernel module for your running kernel. Consult the package lists on the OpenAFS website to determine the packages appropriate for your system.
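As an illustration only, a packaged installation on a Debian-derived system might look roughly like the following. The package names shown here are an assumption and vary by distribution and release, so rely on your distribution's package lists rather than treating this sketch as definitive.

# apt-get install openafs-client openafs-fileserver openafs-dbserver openafs-krb5 openafs-modules-dkms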
If you are installing from a tarfile, or from a locally compiled source tree, you should create the /usr/afs and /usr/vice/etc directories on the local disk, to house server and client files respectively. Subsequent instructions copy files from the distribution tarfile into them.

# mkdir /usr/afs
# mkdir /usr/vice
# mkdir /usr/vice/etc

Performing Platform-Specific Procedures

Several of the initial procedures for installing a file server machine differ for each system type. For convenience, the following sections group them together for each system type:

Incorporate AFS modifications into the kernel. The kernel on every AFS client machine, and on some system types the kernel on AFS file server machines, must incorporate AFS extensions. On machines that use a dynamic kernel module loader, it is conventional to alter the machine's initialization script to load the AFS extensions at each reboot.

Configure server partitions or logical volumes to house AFS volumes. Every AFS file server machine must have at least one partition or logical volume dedicated to storing AFS volumes (for convenience, the documentation hereafter refers to partitions only). Each server partition is mounted at a directory named /vicepxx, where xx is one or two lowercase letters. By convention, the first 26 partitions are mounted on the directories called /vicepa through /vicepz, the 27th one is mounted on the /vicepaa directory, and so on through /vicepaz and /vicepba, continuing up to the index corresponding to the maximum number of server partitions supported in the current version of AFS (which is specified in the OpenAFS Release Notes).

The /vicepxx directories must reside in the file server machine's root directory, not in one of its subdirectories (for example, /usr/vicepa is not an acceptable directory location). The fileserver will refuse to mount any /vicepxx directory that is not a separate partition. The separate partition requirement may be overridden by creating a file named /vicepxx/AlwaysAttach (a brief example appears below); however, a mixed-use partition, whether used for the cache or the fileserver, carries the risk that a non-AFS use will fill the partition and not leave enough free space for AFS. Even though it is allowed, do not configure a mixed-use partition unless you understand how the other workload on that file system affects the space available to AFS.

You can also add or remove server partitions on an existing file server machine. For instructions, see the chapter in the OpenAFS Administration Guide about maintaining server machines.

Not all file system types supported by an operating system are necessarily supported as AFS server partitions. For possible restrictions, see the OpenAFS Release Notes.

On system types using the inode storage format, install and configure a modified fsck program which recognizes the structures that the File Server uses to organize volume data on AFS server partitions. The fsck program provided with the operating system does not understand the AFS data structures, and so removes them to the lost+found directory.
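As a brief illustration of the server partition conventions described above, the following sketch prepares one dedicated partition; the device name /dev/sdb1 is purely hypothetical, so substitute whatever partition you have actually set aside.

# mkdir /vicepa
# mount /dev/sdb1 /vicepa

If, despite the caveats above, you need the fileserver to attach a directory that is not a separate partition, create the override file on it:

# mkdir /vicepb
# touch /vicepb/AlwaysAttach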
If the machine is to remain an AFS client machine, modify the machine's authentication system so that users obtain an AFS token as they log into the local file system. Using AFS is simpler and more convenient for your users if you make the modifications on all client machines. Otherwise, users must perform a two- or three-step login procedure (log in to the local system, then obtain Kerberos credentials, and then issue the aklog command). For further discussion of AFS authentication, see the chapter in the OpenAFS Administration Guide about cell configuration and administration issues.

To continue, proceed to the appropriate section:

Getting Started on AIX Systems
Getting Started on HP-UX Systems
Getting Started on IRIX Systems
Getting Started on Linux Systems
Getting Started on Solaris Systems

Getting Started on AIX Systems

Begin by running the AFS initialization script to call the AIX kernel extension facility, which dynamically loads AFS modifications into the kernel. Then use the SMIT program to configure partitions for storing AFS volumes, and replace the AIX fsck program helper with a version that correctly handles AFS volumes. If the machine is to remain an AFS client machine, incorporate AFS into the AIX secondary authentication system.

Loading AFS into the AIX Kernel

The AIX kernel extension facility is the dynamic kernel loader provided by IBM Corporation. AIX does not support incorporation of AFS modifications during a kernel build. For AFS to function correctly, the kernel extension facility must run each time the machine reboots, so the AFS initialization script (included in the AFS distribution) invokes it automatically. In this section you copy the script to the conventional location and edit it to select the appropriate options depending on whether NFS is also to run. After editing the script, you run it to incorporate AFS into the kernel. In later sections you verify that the script correctly initializes all AFS components, then configure the AIX inittab file so that the script runs automatically at reboot.

Unpack the distribution tarball. The examples below assume that you have unpacked the files into the /tmp/afsdist directory. If you pick a different location, substitute this in all of the following examples. Once you have unpacked the distribution, change directory as indicated.

# cd /tmp/afsdist/rs_aix42/dest/root.client/usr/vice/etc

Copy the AFS kernel library files to the local /usr/vice/etc/dkload directory, and the AFS initialization script to the /etc directory.

# cp -rp dkload /usr/vice/etc
# cp -p rc.afs /etc/rc.afs

Edit the /etc/rc.afs script, setting the NFS variable as indicated.

If the machine is not to function as an NFS/AFS Translator, set the NFS variable as follows.

NFS=$NFS_NONE

If the machine is to function as an NFS/AFS Translator and is running AIX 4.2.1 or higher, set the NFS variable as follows. Note that NFS must already be loaded into the kernel, which happens automatically on systems running AIX 4.1.1 and later, as long as the file /etc/exports exists.

NFS=$NFS_IAUTH

Invoke the /etc/rc.afs script to load AFS modifications into the kernel. You can ignore any error messages about the inability to start the BOS Server or the Cache Manager or AFS client.
# /etc/rc.afs

Configuring Server Partitions on AIX Systems

Every AFS file server machine must have at least one partition or logical volume dedicated to storing AFS volumes. Each server partition is mounted at a directory named /vicepxx, where xx is one or two lowercase letters. The /vicepxx directories must reside in the file server machine's root directory, not in one of its subdirectories (for example, /usr/vicepa is not an acceptable directory location). For additional information, see Performing Platform-Specific Procedures.

To configure server partitions on an AIX system, perform the following procedures:

Create a directory called /vicepxx for each AFS server partition you are configuring (there must be at least one). Repeat the command for each partition.

# mkdir /vicepxx

Use the SMIT program to create a journaling file system on each partition to be configured as an AFS server partition.

Mount each partition at one of the /vicepxx directories. Choose one of the following three methods:

Use the SMIT program
Use the mount -a command to mount all partitions at once
Use the mount command on each partition in turn

Also configure the partitions so that they are mounted automatically at each reboot. For more information, refer to the AIX documentation.

Replacing the fsck Program Helper on AIX Systems

The AFS-modified fsck program is not required on AIX 5.1 systems, and the v3fshelper program referred to below is not shipped for those systems.

In this section, you make modifications to guarantee that the appropriate fsck program runs on AFS server partitions. The fsck program provided with the operating system must never run on AFS server partitions. Because it does not recognize the structures that the File Server uses to organize volume data, it removes all of the data. To repeat:

Never run the standard fsck program on AFS server partitions. It discards AFS volumes.

On AIX systems, you do not replace the fsck binary itself, but rather the program helper file included in the AIX distribution as /sbin/helpers/v3fshelper.

Move the AIX fsck program helper to a safe location and install the version from the AFS distribution in its place.

# cd /sbin/helpers
# mv v3fshelper v3fshelper.noafs
# cp -p /tmp/afsdist/rs_aix42/dest/root.server/etc/v3fshelper v3fshelper

If you plan to retain client functionality on this machine after completing the installation, proceed to Enabling AFS Login on AIX Systems. Otherwise, proceed to Starting the BOS Server.

Enabling AFS Login on AIX Systems

If you plan to remove client functionality from this machine after completing the installation, skip this section and proceed to Starting the BOS Server.

In modern AFS installations, you should be using Kerberos v5 for user login, and obtaining AFS tokens following this authentication step. There are currently no instructions available on configuring AIX to automatically obtain AFS tokens at login.
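Until such instructions exist, users can acquire AFS credentials manually once they have logged in. A minimal sketch of that manual procedure follows; the principal name and realm are placeholders for your own Kerberos v5 realm, and the tokens command simply displays the result.

% kinit username@EXAMPLE.COM
% aklog
% tokens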
Following login, users can obtain tokens by running the aklog command, as sketched above.

Sites which still require kaserver or external Kerberos v4 authentication should consult Enabling kaserver based AFS login on AIX systems for details of how to enable AIX login.

Proceed to Starting the BOS Server (or if referring to these instructions while installing an additional file server machine, return to Starting Server Programs).

Getting Started on HP-UX Systems

Begin by building AFS modifications into a new kernel; HP-UX does not support dynamic loading. Then create partitions for storing AFS volumes, and install and configure the AFS-modified fsck program to run on AFS server partitions. If the machine is to remain an AFS client machine, incorporate AFS into the machine's Pluggable Authentication Module (PAM) scheme.

Building AFS into the HP-UX Kernel

Use the following instructions to build AFS modifications into the kernel on an HP-UX system.

Move the existing kernel-related files to a safe location.

# cp /stand/vmunix /stand/vmunix.noafs
# cp /stand/system /stand/system.noafs

Unpack the OpenAFS HP-UX distribution tarball. The examples below assume that you have unpacked the files into the /tmp/afsdist directory. If you pick a different location, substitute this in all of the following examples. Once you have unpacked the distribution, change directory as indicated.

# cd /tmp/afsdist/hp_ux110/dest/root.client

Copy the AFS initialization file to the local directory for initialization files (by convention, /sbin/init.d on HP-UX machines). Note the removal of the .rc extension as you copy the file.

# cp usr/vice/etc/afs.rc /sbin/init.d/afs

Copy the file afs.driver to the local /usr/conf/master.d directory, changing its name to afs as you do.

# cp usr/vice/etc/afs.driver /usr/conf/master.d/afs

Copy the AFS kernel module to the local /usr/conf/lib directory.

If the machine's kernel supports NFS server functionality:

# cp bin/libafs.a /usr/conf/lib

If the machine's kernel does not support NFS server functionality, change the file's name as you copy it:

# cp bin/libafs.nonfs.a /usr/conf/lib/libafs.a

Incorporate the AFS driver into the kernel, either using the SAM program or a series of individual commands.

To use the SAM program:

Invoke the SAM program, specifying the hostname of the local machine as local_hostname. The SAM graphical user interface pops up.

# sam -display local_hostname:0

Choose the Kernel Configuration icon, then the Drivers icon. From the list of drivers, select afs.

Open the pull-down Actions menu and choose the Add Driver to Kernel option.

Open the Actions menu again and choose the Create a New Kernel option.

Confirm your choices by choosing Yes and OK when prompted by subsequent pop-up windows. The SAM program builds the kernel and reboots the system.

Log in again as the superuser root.

login: root
Password: root_password

To use individual commands:

Edit the file /stand/system, adding an entry for afs to the Subsystems section.

Change to the /stand/build directory and issue the mk_kernel command to build the kernel.

# cd /stand/build
# mk_kernel

Move the new kernel to the standard location (/stand/vmunix), reboot the machine to start using it, and log in again as the superuser root.
# mv /stand/build/vmunix_test /stand/vmunix
# cd /
# shutdown -r now

login: root
Password: root_password

Configuring Server Partitions on HP-UX Systems

Every AFS file server machine must have at least one partition or logical volume dedicated to storing AFS volumes. Each server partition is mounted at a directory named /vicepxx, where xx is one or two lowercase letters. The /vicepxx directories must reside in the file server machine's root directory, not in one of its subdirectories (for example, /usr/vicepa is not an acceptable directory location). For additional information, see Performing Platform-Specific Procedures.

Create a directory called /vicepxx for each AFS server partition you are configuring (there must be at least one). Repeat the command for each partition.

# mkdir /vicepxx

Use the SAM program to create a file system on each partition. For instructions, consult the HP-UX documentation.

On some HP-UX systems that use logical volumes, the SAM program automatically mounts the partitions. If it has not, mount each partition by issuing either the mount -a command to mount all partitions at once or the mount command to mount each partition in turn.

Configuring the AFS-modified fsck Program on HP-UX Systems

In this section, you make modifications to guarantee that the appropriate fsck program runs on AFS server partitions. The fsck program provided with the operating system must never run on AFS server partitions. Because it does not recognize the structures that the File Server uses to organize volume data, it removes all of the data. To repeat:

Never run the standard fsck program on AFS server partitions. It discards AFS volumes.

On HP-UX systems, there are several configuration files to install in addition to the AFS-modified fsck program (the vfsck binary).

Create the command configuration file /sbin/lib/mfsconfig.d/afs. Use a text editor to place the indicated two lines in it:

format_revision 1
fsck 0 m,P,p,d,f,b:c:y,n,Y,N,q,

Create and change directory to an AFS-specific command directory called /sbin/fs/afs.

# mkdir /sbin/fs/afs
# cd /sbin/fs/afs

Copy the AFS-modified version of the fsck program (the vfsck binary) and related files from the distribution directory to the new AFS-specific command directory.

# cp -p /tmp/afsdist/hp_ux110/dest/root.server/etc/* .

Change the vfsck binary's name to fsck and set the mode bits appropriately on all of the files in the /sbin/fs/afs directory.

# mv vfsck fsck
# chmod 755 *

Edit the /etc/fstab file, changing the file system type for each AFS server partition from hfs to afs. This ensures that the AFS-modified fsck program runs on the appropriate partitions. The sixth line in the following example of an edited file shows an AFS server partition, /vicepa.
/dev/vg00/lvol1 / hfs defaults 0 1
/dev/vg00/lvol4 /opt hfs defaults 0 2
/dev/vg00/lvol5 /tmp hfs defaults 0 2
/dev/vg00/lvol6 /usr hfs defaults 0 2
/dev/vg00/lvol8 /var hfs defaults 0 2
/dev/vg00/lvol9 /vicepa afs defaults 0 2
/dev/vg00/lvol7 /usr/vice/cache hfs defaults 0 2

If you plan to retain client functionality on this machine after completing the installation, proceed to Enabling AFS Login on HP-UX Systems. Otherwise, proceed to Starting the BOS Server.

Enabling AFS Login on HP-UX Systems

If you plan to remove client functionality from this machine after completing the installation, skip this section and proceed to Starting the BOS Server.

At this point you incorporate AFS into the operating system's Pluggable Authentication Module (PAM) scheme. PAM integrates all authentication mechanisms on the machine, including login, to provide the security infrastructure for authenticated access to and from the machine.

In modern AFS installations, you should be using Kerberos v5 for user login, and obtaining AFS tokens subsequent to this authentication step. OpenAFS does not currently distribute a PAM module allowing AFS tokens to be automatically gained at login. Whilst there are a number of third-party modules providing this functionality, it is not known whether they have been tested with HP-UX.

Following login, users can obtain tokens by running the aklog command.

Sites which still require kaserver or external Kerberos v4 authentication should consult Enabling kaserver based AFS login on HP-UX systems for details of how to enable HP-UX login.

Proceed to Starting the BOS Server (or if referring to these instructions while installing an additional file server machine, return to Starting Server Programs).

Getting Started on IRIX Systems

To incorporate AFS into the kernel on IRIX systems, choose one of two methods:

Run the AFS initialization script to invoke the ml program distributed by Silicon Graphics, Incorporated (SGI), which dynamically loads AFS modifications into the kernel
Build a new static kernel

Then create partitions for storing AFS volumes. You do not need to replace the IRIX fsck program because SGI has already modified it to handle AFS volumes properly. If the machine is to remain an AFS client machine, verify that the IRIX login utility installed on the machine grants an AFS token.

In preparation for either dynamic loading or kernel building, perform the following procedures:

Unpack the OpenAFS IRIX distribution tarball. The examples below assume that you have unpacked the files into the /tmp/afsdist directory. If you pick a different location, substitute this in all of the following examples. Once you have unpacked the distribution, change directory as indicated.

# cd /tmp/afsdist/sgi_65/dest/root.client

Copy the AFS initialization script to the local directory for initialization files (by convention, /etc/init.d on IRIX machines). Note the removal of the .rc extension as you copy the script.
# cp -p usr/vice/etc/afs.rc /etc/init.d/afs

Issue the uname -m command to determine the machine's CPU board type. The IPxx value in the output must match one of the supported CPU board types listed in the OpenAFS Release Notes for the current version of AFS.

# uname -m

Proceed to either Loading AFS into the IRIX Kernel or Building AFS into the IRIX Kernel.

Loading AFS into the IRIX Kernel

The ml program is the dynamic kernel loader provided by SGI for IRIX systems. If you use it rather than building AFS modifications into a static kernel, then for AFS to function correctly the ml program must run each time the machine reboots. Therefore, the AFS initialization script (included in the AFS distribution) invokes it automatically when the afsml configuration variable is activated. In this section you activate the variable and run the script. In later sections you verify that the script correctly initializes all AFS components, then create the links that incorporate AFS into the IRIX startup and shutdown sequence.

Create the local /usr/vice/etc/sgiload directory to house the AFS kernel library file.

# mkdir /usr/vice/etc/sgiload

Copy the appropriate AFS kernel library file to the /usr/vice/etc/sgiload directory. The IPxx portion of the library file name must match the value previously returned by the uname -m command. Also choose the file appropriate to whether the machine's kernel supports NFS server functionality (NFS must be supported for the machine to act as an NFS/AFS Translator). Single- and multiprocessor machines use the same library file. (You can choose to copy all of the kernel library files into the /usr/vice/etc/sgiload directory, but they require a significant amount of space.)

If the machine's kernel supports NFS server functionality:

# cp -p usr/vice/etc/sgiload/libafs.IPxx.o /usr/vice/etc/sgiload

If the machine's kernel does not support NFS server functionality:

# cp -p usr/vice/etc/sgiload/libafs.IPxx.nonfs.o \
  /usr/vice/etc/sgiload

Issue the chkconfig command to activate the afsml configuration variable.

# /etc/chkconfig -f afsml on

If the machine is to function as an NFS/AFS Translator and the kernel supports NFS server functionality, activate the afsxnfs variable.

# /etc/chkconfig -f afsxnfs on

Run the /etc/init.d/afs script to load AFS extensions into the kernel. The script invokes the ml command, automatically determining which kernel library file to use based on this machine's CPU type and the activation state of the afsxnfs variable. You can ignore any error messages about the inability to start the BOS Server or the Cache Manager or AFS client.

# /etc/init.d/afs start

Proceed to Configuring Server Partitions on IRIX Systems.

Building AFS into the IRIX Kernel

Use the following instructions to build AFS modifications into the kernel on an IRIX system.

Copy the kernel initialization file afs.sm to the local /var/sysgen/system directory, and the kernel master file afs to the local /var/sysgen/master.d directory.
# cp -p bin/afs.sm /var/sysgen/system
# cp -p bin/afs /var/sysgen/master.d

Copy the appropriate AFS kernel library file to the local file /var/sysgen/boot/afs.a; the IPxx portion of the library file name must match the value previously returned by the uname -m command. Also choose the file appropriate to whether the machine's kernel supports NFS server functionality (NFS must be supported for the machine to act as an NFS/AFS Translator). Single- and multiprocessor machines use the same library file.

If the machine's kernel supports NFS server functionality:

# cp -p bin/libafs.IPxx.a /var/sysgen/boot/afs.a

If the machine's kernel does not support NFS server functionality:

# cp -p bin/libafs.IPxx.nonfs.a /var/sysgen/boot/afs.a

Issue the chkconfig command to deactivate the afsml configuration variable.

# /etc/chkconfig -f afsml off

If the machine is to function as an NFS/AFS Translator and the kernel supports NFS server functionality, activate the afsxnfs variable.

# /etc/chkconfig -f afsxnfs on

Copy the existing kernel file, /unix, to a safe location. Compile the new kernel, which is created in the file /unix.install. It overwrites the existing /unix file when the machine reboots in the next step.

# cp /unix /unix_noafs
# autoconfig

Reboot the machine to start using the new kernel, and log in again as the superuser root.

# cd /
# shutdown -i6 -g0 -y

login: root
Password: root_password

Configuring Server Partitions on IRIX Systems

Every AFS file server machine must have at least one partition or logical volume dedicated to storing AFS volumes. Each server partition is mounted at a directory named /vicepxx, where xx is one or two lowercase letters. The /vicepxx directories must reside in the file server machine's root directory, not in one of its subdirectories (for example, /usr/vicepa is not an acceptable directory location). For additional information, see Performing Platform-Specific Procedures.

AFS supports use of both EFS and XFS partitions for housing AFS volumes. SGI encourages use of XFS partitions.

Create a directory called /vicepxx for each AFS server partition you are configuring (there must be at least one). Repeat the command for each partition.

# mkdir /vicepxx

Add a line with the following format to the file systems registry file, /etc/fstab, for each partition (or logical volume created with the XLV volume manager) to be mounted on one of the directories created in the previous step.

For an XFS partition or logical volume:

/dev/dsk/disk /vicepxx xfs rw,raw=/dev/rdsk/disk 0 0

For an EFS partition:

/dev/dsk/disk /vicepxx efs rw,raw=/dev/rdsk/disk 0 0

The following are examples of an entry for each file system type:

/dev/dsk/dks0d2s6 /vicepa xfs rw,raw=/dev/rdsk/dks0d2s6 0 0
/dev/dsk/dks0d3s1 /vicepb efs rw,raw=/dev/rdsk/dks0d3s1 0 0

Create a file system on each partition that is to be mounted on a /vicepxx directory. The following commands are probably appropriate, but consult the IRIX documentation for more information. In both cases, raw_device is a raw device name like /dev/rdsk/dks0d0s0 for a single disk partition or /dev/rxlv/xlv0 for a logical volume.
For XFS file systems, include the indicated options to configure the partition or logical volume with inodes large enough to accommodate AFS-specific information:

# mkfs -t xfs -i size=512 -l size=4000b raw_device

For EFS file systems:

# mkfs -t efs raw_device

Mount each partition by issuing either the mount -a command to mount all partitions at once or the mount command to mount each partition in turn.

(Optional) If you have configured partitions or logical volumes to use XFS, issue the following command to verify that the inodes are configured properly (are large enough to accommodate AFS-specific information). If the configuration is correct, the command returns no output. Otherwise, it specifies the command to run in order to configure each partition or logical volume properly.

# /usr/afs/bin/xfs_size_check

If you plan to retain client functionality on this machine after completing the installation, proceed to Enabling AFS Login on IRIX Systems. Otherwise, proceed to Starting the BOS Server.

Enabling AFS Login on IRIX Systems

If you plan to remove client functionality from this machine after completing the installation, skip this section and proceed to Starting the BOS Server.

Whilst the standard IRIX command-line login program and the graphical xdm login program both have the ability to grant AFS tokens, this ability relies upon the deprecated kaserver authentication system. Users who have been successfully authenticated via Kerberos 5 authentication may obtain AFS tokens following login by running the aklog command.

Sites which still require kaserver or external Kerberos v4 authentication should consult Enabling kaserver based AFS Login on IRIX Systems for details of how to enable IRIX login.

After taking any necessary action, proceed to Starting the BOS Server.

Getting Started on Linux Systems

Since this guide was originally written, the procedure for starting OpenAFS has diverged significantly between different Linux distributions. The instructions that follow are appropriate for both the Fedora and RedHat Enterprise Linux packages distributed by OpenAFS. Additional instructions are provided for those building from source.

Begin by running the AFS client startup scripts, which call the modprobe program to dynamically load the AFS modifications into the kernel. Then create partitions for storing AFS volumes. You do not need to replace the Linux fsck program. If the machine is to remain an AFS client machine, incorporate AFS into the machine's Pluggable Authentication Module (PAM) scheme.

Loading AFS into the Linux Kernel

The modprobe program is the dynamic kernel loader for Linux. Linux does not support incorporation of AFS modifications during a kernel build. For AFS to function correctly, the modprobe program must run each time the machine reboots, so your distribution's AFS initialization script invokes it automatically. The script also includes commands that select the appropriate AFS library file automatically. In this section you run the script.
In later sections you verify that the script correctly initializes all AFS components, then activate a configuration variable, which results in the script being incorporated into the Linux startup and shutdown sequence.

The procedure for starting up OpenAFS depends upon your distribution.

Fedora and RedHat Enterprise Linux

OpenAFS provides RPMs for all current Fedora and RedHat Enterprise Linux (RHEL) releases on the OpenAFS web site and the OpenAFS yum repository.

Browse to http://dl.openafs.org/dl/openafs/VERSION, where VERSION is the latest stable release of OpenAFS. Download the openafs-repository-VERSION.noarch.rpm file for Fedora systems or the openafs-repository-rhel-VERSION.noarch.rpm file for RedHat-based systems.

Install the downloaded RPM file using the following command:

# rpm -U openafs-repository*.rpm

Install the RPM set for your operating system using the yum command as follows:

# yum -y install openafs-client openafs-server openafs-krb5 kmod-openafs

Alternatively, you may use dynamically-compiled kernel modules if you have the kernel headers, a compiler, and the dkms package from EPEL installed. To use dynamically-compiled kernel modules instead of statically compiled modules, use the following command instead of the kmod-openafs command shown above:

# yum install openafs-client openafs-server openafs-krb5 dkms-openafs

Systems packaged as tar files

If you are running a system where the OpenAFS Binary Distribution is provided as a tar file, or where you have built the system from source yourself, you need to install the relevant components by hand.

Unpack the distribution tarball. The examples below assume that you have unpacked the files into the /tmp/afsdist directory. If you pick a different location, substitute this in all of the following examples. Once you have unpacked the distribution, change directory as indicated.

# cd /tmp/afsdist/linux/dest/root.client/usr/vice/etc

Copy the AFS kernel library files to the local /usr/vice/etc/modload directory. The filenames for the libraries have the format libafs-version.o, where version indicates the kernel build level. The string .mp in the version indicates that the file is appropriate for machines running a multiprocessor kernel.

# cp -rp modload /usr/vice/etc

Copy the AFS initialization script to the local directory for initialization files (by convention, /etc/rc.d/init.d on Linux machines). Note the removal of the .rc extension as you copy the script.

# cp -p afs.rc /etc/rc.d/init.d/afs

Configuring Server Partitions on Linux Systems

Every AFS file server machine must have at least one partition or logical volume dedicated to storing AFS volumes. Each server partition is mounted at a directory named /vicepxx, where xx is one or two lowercase letters. The /vicepxx directories must reside in the file server machine's root directory, not in one of its subdirectories (for example, /usr/vicepa is not an acceptable directory location). For additional information, see Performing Platform-Specific Procedures.

Create a directory called /vicepxx for each AFS server partition you are configuring (there must be at least one). Repeat the command for each partition.

# mkdir /vicepxx

Add a line with the following format to the file systems registry file, /etc/fstab, for each directory just created.
The entry maps the directory name to the disk partition to be mounted on it.

/dev/disk /vicepxx ext2 defaults 0 2

The following is an example for the first partition being configured.

/dev/sda8 /vicepa ext2 defaults 0 2

Create a file system on each partition that is to be mounted at a /vicepxx directory. The following command is probably appropriate, but consult the Linux documentation for more information.

# mkfs -v /dev/disk

Mount each partition by issuing either the mount -a command to mount all partitions at once or the mount command to mount each partition in turn.

If you plan to retain client functionality on this machine after completing the installation, proceed to Enabling AFS Login on Linux Systems. Otherwise, proceed to Starting the BOS Server.

Enabling AFS Login on Linux Systems

If you plan to remove client functionality from this machine after completing the installation, skip this section and proceed to Starting the BOS Server.

At this point you incorporate AFS into the operating system's Pluggable Authentication Module (PAM) scheme. PAM integrates all authentication mechanisms on the machine, including login, to provide the security infrastructure for authenticated access to and from the machine.

You should first configure your system to obtain Kerberos v5 tickets as part of the authentication process, and then run an AFS PAM module to obtain tokens from those tickets after authentication. Many Linux distributions come with a Kerberos v5 PAM module (usually called pam-krb5 or pam_krb5), or you can download and install Russ Allbery's Kerberos v5 PAM module, which is tested regularly with AFS. See the documentation of whatever PAM module you use for how to configure it.

Some Kerberos v5 PAM modules do come with native AFS support (usually requiring the Heimdal Kerberos implementation rather than the MIT Kerberos implementation). If you are using one of those PAM modules, you can configure it to obtain AFS tokens. It's more common, however, to separate the AFS token acquisition into a separate PAM module. The recommended AFS PAM module is Russ Allbery's pam-afs-session module. It should work with any of the Kerberos v5 PAM modules. To add it to the PAM configuration, you often only need to add configuration to the session group:

Linux PAM session example:

session  required  pam_afs_session.so

If you also want to obtain AFS tokens for scp and similar commands that don't open a session, you will also need to add the AFS PAM module to the auth group so that the PAM setcred call will obtain tokens. The pam_afs_session module will always return success for authentication so that it can be added to the auth group only for setcred, so make sure that it's not marked as sufficient.

Linux PAM auth example:

auth  [success=ok default=1]  pam_krb5.so
auth  [default=done]  pam_afs_session.so
auth  required  pam_unix.so try_first_pass

This example will work if you want to try Kerberos v5 first and then fall back to regular Unix authentication. success=ok for the Kerberos PAM module followed by default=done for the AFS PAM module will cause a successful Kerberos login to run the AFS PAM module and then skip the Unix authentication module. default=1 on the Kerberos PAM module causes failure of that module to skip the next module (the AFS PAM module) and fall back to the Unix module.
If you want to try Unix authentication first and rearrange the order, be sure to use default=die instead.

The PAM configuration is stored in different places in different Linux distributions. On Red Hat, look in /etc/pam.d/system-auth. On Debian and derivatives, look in /etc/pam.d/common-session and /etc/pam.d/common-auth.

For additional configuration examples and the configuration options of the AFS PAM module, see its documentation. For more details on the available options for the PAM configuration, see the Linux PAM documentation.

Sites which still require kaserver or external Kerberos v4 authentication should consult Enabling kaserver based AFS Login on Linux Systems for details of how to enable AFS login on Linux.

Proceed to Starting the BOS Server (or if referring to these instructions while installing an additional file server machine, return to Starting Server Programs).

Getting Started on Solaris Systems

Begin by running the AFS initialization script to call the modload program distributed by Sun Microsystems, which dynamically loads AFS modifications into the kernel. Then create partitions for storing AFS volumes, and install and configure the AFS-modified fsck program to run on AFS server partitions. If the machine is to remain an AFS client machine, incorporate AFS into the machine's Pluggable Authentication Module (PAM) scheme.

Loading AFS into the Solaris Kernel

The modload program is the dynamic kernel loader provided by Sun Microsystems for Solaris systems. Solaris does not support incorporation of AFS modifications during a kernel build. For AFS to function correctly, the modload program must run each time the machine reboots, so the AFS initialization script (included in the AFS distribution) invokes it automatically. In this section you copy the appropriate AFS library file to the location where the modload program accesses it and then run the script. In later sections you verify that the script correctly initializes all AFS components, then create the links that incorporate AFS into the Solaris startup and shutdown sequence.

Unpack the OpenAFS Solaris distribution tarball. The examples below assume that you have unpacked the files into the /tmp/afsdist directory. If you pick a different location, substitute this in all of the following examples. Once you have unpacked the distribution, change directory as indicated.

# cd /tmp/afsdist/sun4x_56/dest/root.client/usr/vice/etc

Copy the AFS initialization script to the local directory for initialization files (by convention, /etc/init.d on Solaris machines). Note the removal of the .rc extension as you copy the script.

# cp -p afs.rc /etc/init.d/afs

Copy the appropriate AFS kernel library file to the local file /kernel/fs/afs.
If the machine is running Solaris 11 on the x86_64 platform:

# cp -p modload/libafs64.o /kernel/drv/amd64/afs

If the machine is running Solaris 10 on the x86_64 platform:

# cp -p modload/libafs64.o /kernel/fs/amd64/afs

If the machine is running Solaris 2.6 or the 32-bit version of Solaris 7, its kernel supports NFS server functionality, and the nfsd process is running:

# cp -p modload/libafs.o /kernel/fs/afs

If the machine is running Solaris 2.6 or the 32-bit version of Solaris 7, and its kernel does not support NFS server functionality or the nfsd process is not running:

# cp -p modload/libafs.nonfs.o /kernel/fs/afs

If the machine is running the 64-bit version of Solaris 7, its kernel supports NFS server functionality, and the nfsd process is running:

# cp -p modload/libafs64.o /kernel/fs/sparcv9/afs

If the machine is running the 64-bit version of Solaris 7, and its kernel does not support NFS server functionality or the nfsd process is not running:

# cp -p modload/libafs64.nonfs.o /kernel/fs/sparcv9/afs

Run the AFS initialization script to load AFS modifications into the kernel. You can ignore any error messages about the inability to start the BOS Server or the Cache Manager or AFS client.

# /etc/init.d/afs start

If an entry called afs does not already exist in the local /etc/name_to_sysnum file, the script automatically creates it and reboots the machine to start using the new version of the file. If this happens, log in again as the superuser root after the reboot and run the initialization script again. This time the required entry exists in the /etc/name_to_sysnum file, and the modload program runs.

login: root
Password: root_password

# /etc/init.d/afs start

Configuring the AFS-modified fsck Program on Solaris Systems

In this section, you make modifications to guarantee that the appropriate fsck program runs on AFS server partitions. The fsck program provided with the operating system must never run on AFS server partitions. Because it does not recognize the structures that the File Server uses to organize volume data, it removes all of the data. To repeat:

Never run the standard fsck program on AFS server partitions. It discards AFS volumes.

Create the /usr/lib/fs/afs directory to house the AFS-modified fsck program and related files.

# mkdir /usr/lib/fs/afs
# cd /usr/lib/fs/afs

Copy the vfsck binary to the newly created directory, changing the name as you do so.

# cp /tmp/afsdist/sun4x_56/dest/root.server/etc/vfsck fsck

Working in the /usr/lib/fs/afs directory, create the following links to Solaris libraries:

# ln -s /usr/lib/fs/ufs/clri
# ln -s /usr/lib/fs/ufs/df
# ln -s /usr/lib/fs/ufs/edquota
# ln -s /usr/lib/fs/ufs/ff
# ln -s /usr/lib/fs/ufs/fsdb
# ln -s /usr/lib/fs/ufs/fsirand
# ln -s /usr/lib/fs/ufs/fstyp
# ln -s /usr/lib/fs/ufs/labelit
# ln -s /usr/lib/fs/ufs/lockfs
# ln -s /usr/lib/fs/ufs/mkfs
# ln -s /usr/lib/fs/ufs/mount
# ln -s /usr/lib/fs/ufs/ncheck
# ln -s /usr/lib/fs/ufs/newfs
# ln -s /usr/lib/fs/ufs/quot
# ln -s /usr/lib/fs/ufs/quota
# ln -s /usr/lib/fs/ufs/quotaoff
# ln -s /usr/lib/fs/ufs/quotaon
# ln -s /usr/lib/fs/ufs/repquota
# ln -s /usr/lib/fs/ufs/tunefs
# ln -s /usr/lib/fs/ufs/ufsdump
# ln -s /usr/lib/fs/ufs/ufsrestore
# ln -s /usr/lib/fs/ufs/volcopy

Append the following line to the end of the file /etc/dfs/fstypes.

afs AFS Utilities

Edit the /sbin/mountall file, making two changes.
Add an entry for AFS to the case statement for option 2, so that it reads as follows:

case "$2" in
ufs)    foptions="-o p"
        ;;
afs)    foptions="-o p"
        ;;
s5)     foptions="-y -t /var/tmp/tmp$$ -D"
        ;;
*)      foptions="-y"
        ;;

Edit the file so that all AFS and UFS partitions are checked in parallel. Replace the following section of code:

# For fsck purposes, we make a distinction between ufs and
# other file systems
#
if [ "$fstype" = "ufs" ]; then
        ufs_fscklist="$ufs_fscklist $fsckdev"
        saveentry $fstype "$OPTIONS" $special $mountp
        continue
fi

with the following section of code:

# For fsck purposes, we make a distinction between ufs/afs
# and other file systems.
#
if [ "$fstype" = "ufs" -o "$fstype" = "afs" ]; then
        ufs_fscklist="$ufs_fscklist $fsckdev"
        saveentry $fstype "$OPTIONS" $special $mountp
        continue
fi

Configuring Server Partitions on Solaris Systems

Every AFS file server machine must have at least one partition or logical volume dedicated to storing AFS volumes. Each server partition is mounted at a directory named /vicepxx, where xx is one or two lowercase letters. The /vicepxx directories must reside in the file server machine's root directory, not in one of its subdirectories (for example, /usr/vicepa is not an acceptable directory location). For additional information, see Performing Platform-Specific Procedures.

Create a directory called /vicepxx for each AFS server partition you are configuring (there must be at least one). Repeat the command for each partition.

# mkdir /vicepxx

Add a line with the following format to the file systems registry file, /etc/vfstab, for each partition to be mounted on a directory created in the previous step. Note the value afs in the fourth field, which tells Solaris to use the AFS-modified fsck program on this partition.

/dev/dsk/disk /dev/rdsk/disk /vicepxx afs boot_order yes

The following is an example for the first partition being configured.

/dev/dsk/c0t6d0s1 /dev/rdsk/c0t6d0s1 /vicepa afs 3 yes

Create a file system on each partition that is to be mounted at a /vicepxx directory. The following command is probably appropriate, but consult the Solaris documentation for more information.

# newfs -v /dev/rdsk/disk

Issue the mountall command to mount all partitions at once.

If you plan to retain client functionality on this machine after completing the installation, proceed to Enabling AFS Login and Editing the File Systems Clean-up Script on Solaris Systems. Otherwise, proceed to Starting the BOS Server.

Enabling AFS Login on Solaris Systems

If you plan to remove client functionality from this machine after completing the installation, skip this section and proceed to Starting the BOS Server.

At this point you incorporate AFS into the operating system's Pluggable Authentication Module (PAM) scheme. PAM integrates all authentication mechanisms on the machine, including login, to provide the security infrastructure for authenticated access to and from the machine. Explaining PAM is beyond the scope of this document.
It is assumed that you understand the syntax and meanings of settings in the PAM configuration file (for example, how the other entry works, the effect of marking an entry as required, optional, or sufficient, and so on).

You should first configure your system to obtain Kerberos v5 tickets as part of the authentication process, and then run an AFS PAM module to obtain tokens from those tickets after authentication. Current versions of Solaris come with a Kerberos v5 PAM module that will work, or you can download and install Russ Allbery's Kerberos v5 PAM module, which is tested regularly with AFS. See the documentation of whatever PAM module you use for how to configure it.

Some Kerberos v5 PAM modules do come with native AFS support (usually requiring the Heimdal Kerberos implementation rather than the MIT Kerberos implementation). If you are using one of those PAM modules, you can configure it to obtain AFS tokens. It's more common, however, to separate the AFS token acquisition into a separate PAM module. The recommended AFS PAM module is Russ Allbery's pam-afs-session module. It should work with any of the Kerberos v5 PAM modules. To add it to the PAM configuration, you often only need to add configuration to the session group in pam.conf:

Solaris PAM session example:

login session required pam_afs_session.so

This example enables PAM authentication only for console login. You may want to add a similar line for the ssh service and for any other login service that you use, including possibly the other service (which serves as a catch-all). You may also want to add options to the AFS PAM session module (particularly retain_after_close, which is necessary for some versions of Solaris).

For additional configuration examples and the configuration options of the AFS PAM module, see its documentation. For more details on the available options for the PAM configuration, see the pam.conf manual page.

Sites which still require kaserver or external Kerberos v4 authentication should consult Enabling kaserver based AFS Login on Solaris Systems for details of how to enable AFS login on Solaris.

Proceed to Editing the File Systems Clean-up Script on Solaris Systems.

Editing the File Systems Clean-up Script on Solaris Systems

Some Solaris distributions include a script that locates and removes unneeded files from various file systems. Its conventional location is /usr/lib/fs/nfs/nfsfind. The script generally uses an argument to the find command to define which file systems to search. In this step you modify the command to exclude the /afs directory. Otherwise, the command traverses the AFS filespace of every cell that is accessible from the machine, which can take many hours. The following alterations are possibilities, but you must verify that they are appropriate for your cell.

The first possible alteration is to add the -local flag to the existing command, so that it looks like the following:

find $dir -local -name .nfs\* -mtime +7 -mount -exec rm -f {} \;

Another alternative is to exclude any directories whose names begin with the lowercase letter a or a non-alphabetic character.

find /[A-Zb-z]* remainder of existing command

Do not use the following command, which still searches under the /afs directory, looking for a subdirectory of type 4.2.
find / -fstype 4.2 /* do not use */ Proceed to Starting the BOS Server (or if referring to these instructions while installing an additional file server machine, return to Starting Server Programs). Basic OverSeer Server BOS Server BOS Server starting first AFS machine starting BOS Server first AFS machine first AFS machine BOS Server authorization checking (disabling) first AFS machine disabling authorization checking first AFS machine first AFS machine authorization checking (disabling) Starting the BOS Server You are now ready to start the AFS server processes on this machine. If you are not working from a packaged distribution, begin by copying the AFS server binaries from the distribution to the conventional local disk location, the /usr/afs/bin directory. The following instructions also create files in other subdirectories of the /usr/afs directory. Then issue the bosserver command to initialize the Basic OverSeer (BOS) Server, which monitors and controls other AFS server processes on its server machine. Include the -noauth flag to disable authorization checking. Because you have not yet configured your cell's AFS authentication and authorization mechanisms, the BOS Server cannot perform authorization checking as it does during normal operation. In no-authorization mode, it does not verify the identity or privilege of the issuer of a bos command, and so performs any operation for anyone. Disabling authorization checking gravely compromises cell security. You must complete all subsequent steps in one uninterrupted pass and must not leave the machine unattended until you restart the BOS Server with authorization checking enabled, in Verifying the AFS Initialization Script. As it initializes for the first time, the BOS Server creates the following directories and files, setting the owner to the local superuser root and the mode bits to limit the ability to write (and in some cases, read) them. For a description of the contents and function of these directories and files, see the chapter in the OpenAFS Administration Guide about administering server machines. For further discussion of the mode bit settings, see Protecting Sensitive AFS Directories. Binary Distribution copying server files from first AFS machine first AFS machine subdirectories of /usr/afs creating /usr/afs/bin directory first AFS machine creating /usr/afs/etc directory first AFS machine copying server files to local disk first AFS machine first AFS machine copying server files to local disk usr/afs/bin directory first AFS machine usr/afs/etc directory first AFS machine usr/afs/db directory usr/afs/local directory usr/afs/logs directory /usr/afs/db /usr/afs/etc/CellServDB /usr/afs/etc/ThisCell /usr/afs/local /usr/afs/logs The BOS Server also creates symbolic links called /usr/vice/etc/ThisCell and /usr/vice/etc/CellServDB to the corresponding files in the /usr/afs/etc directory. The AFS command interpreters consult the CellServDB and ThisCell files in the /usr/vice/etc directory because they generally run on client machines. On machines that are AFS servers only (as this machine currently is), the files reside only in the /usr/afs/etc directory; the links enable the command interpreters to retrieve the information they need. Later instructions for installing the client functionality replace the links with actual files. If you are not working from a packaged distribution, you may need to copy files from the distribution media to the local /usr/afs directory. 
# cd /tmp/afsdist/sysname/root.server/usr/afs # cp -rp * /usr/afs commands bosserver bosserver command Issue the bosserver command. Include the -noauth flag to disable authorization checking. # /usr/afs/bin/bosserver -noauth & Verify that the BOS Server created /usr/vice/etc/ThisCell and /usr/vice/etc/CellServDB as symbolic links to the corresponding files in the /usr/afs/etc directory. # ls -l /usr/vice/etc If either or both of /usr/vice/etc/ThisCell and /usr/vice/etc/CellServDB do not exist, or are not links, issue the following commands. # cd /usr/vice/etc # ln -s /usr/afs/etc/ThisCell # ln -s /usr/afs/etc/CellServDB cell name defining during installation of first machine defining cell name during installation of first machine cell name setting in server ThisCell file first AFS machine setting cell name in server ThisCell file first AFS machine first AFS machine ThisCell file (server) usr/afs/etc/ThisCell ThisCell file (server) ThisCell file (server) first AFS machine files ThisCell (server) database server machine entry in server CellServDB file on first AFS machine first AFS machine cell membership, defining for server processes usr/afs/etc/CellServDB file CellServDB file (server) CellServDB file (server) creating on first AFS machine creating CellServDB file (server) first AFS machine files CellServDB (server) first AFS machine CellServDB file (server) first AFS machine defining as database server defining first AFS machine as database server Defining Cell Name and Membership for Server Processes Now assign your cell's name. The chapter in the OpenAFS Administration Guide about cell configuration and administration issues discusses the important considerations, explains why changing the name is difficult, and outlines the restrictions on name format. Two of the most important restrictions are that the name cannot include uppercase letters or more than 64 characters. Use the bos setcellname command to assign the cell name. It creates two files: /usr/afs/etc/ThisCell, which defines this machine's cell membership /usr/afs/etc/CellServDB, which lists the cell's database server machines; the machine named on the command line is placed on the list automatically In the following and every instruction in this guide, for the machine name argument substitute the fully-qualified hostname (such as fs1.example.com) of the machine you are installing. For the cell name argument substitute your cell's complete name (such as example.com). commands bos setcellname bos commands setcellname If necessary, add the directory containing the bos command to your path. # export PATH=$PATH:/usr/afs/bin Issue the bos setcellname command to set the cell name. # bos setcellname <machine name> <cell name> -noauth Because you are not authenticated and authorization checking is disabled, the bos command interpreter possibly produces error messages about being unable to obtain tickets and running unauthenticated. You can safely ignore the messages. commands bos listhosts bos commands listhosts CellServDB file (server) displaying entries displaying CellServDB file (server) entries Issue the bos listhosts command to verify that the machine you are installing is now registered as the cell's first database server machine. 
# bos listhosts <machine name> -noauth Cell name is cell_name Host 1 is machine_name database server machine installing first instructions database server machine, installing first installing database server machine first Backup Server starting first AFS machine buserver process Backup Server starting Backup Server first AFS machine first AFS machine Backup Server Protection Server starting first AFS machine ptserver process Protection Server starting Protection Server first AFS machine first AFS machine Protection Server VL Server (vlserver process) starting first AFS machine Volume Location Server VL Server starting VL Server first AFS machine first AFS machine VL Server usr/afs/local/BosConfig BosConfig file BosConfig file adding entries first AFS machine adding entries to BosConfig file first AFS machine files BosConfig initializing server process starting server process see also entry for each server's name Starting the Database Server Processes Next use the bos create command to create entries for the three database server processes in the /usr/afs/local/BosConfig file and start them running. The three processes run on database server machines only: The Backup Server (the buserver process) maintains the Backup Database The Protection Server (the ptserver process) maintains the Protection Database The Volume Location (VL) Server (the vlserver process) maintains the Volume Location Database (VLDB) Kerberos AFS ships with an additional database server named 'kaserver', which was historically used to provide authentication services to AFS cells. kaserver is based on Kerberos v4 and, as such, is not recommended for new cells. This guide assumes you have already configured a Kerberos v5 realm for your site, and details the procedures required to use AFS with this realm. If you do wish to use kaserver, please see the modifications to these instructions detailed in Starting the kaserver Database Server Process. The remaining instructions in this chapter include the -cell argument on all applicable commands. Provide the cell name you assigned in Defining Cell Name and Membership for Server Processes. If a command appears on multiple lines, it is only for legibility. commands bos create bos commands create Issue the bos create command to start the Backup Server. # ./bos create <machine name> buserver simple /usr/afs/bin/buserver -noauth Issue the bos create command to start the Protection Server. # ./bos create <machine name> ptserver simple /usr/afs/bin/ptserver -noauth Issue the bos create command to start the VL Server. # ./bos create <machine name> vlserver simple /usr/afs/bin/vlserver -noauth admin account creating afs entry in Kerberos Database Kerberos Database creating afs entry in Kerberos Database creating admin account in Kerberos Database security initializing cell-wide cell initializing security mechanisms initializing cell security mechanisms usr/afs/etc/KeyFile KeyFile file KeyFile file first AFS machine files KeyFile key server encryption key encryption key server encryption key Initializing Cell Security If you are working with an existing cell which uses kaserver or Kerberos v4 for authentication, please see Initializing Cell Security with kaserver for installation instructions which replace this section. Now initialize the cell's security mechanisms. Begin by creating the following two entries in your site's Kerberos database: A generic administrative account, called admin by convention.
If you choose to assign a different name, substitute it throughout the remainder of this document. After you complete the installation of the first machine, you can continue to have all administrators use the admin account, or you can create a separate administrative account for each of them. The latter scheme implies somewhat more overhead, but provides a more informative audit trail for administrative operations. The entry for AFS server processes, called either afs or afs/cell. The latter form is preferred since it works regardless of whether your cell name matches your Kerberos realm name and allows multiple AFS cells to be served from a single Kerberos realm. No user logs in under this identity, but it is used to encrypt the server tickets that are granted to AFS clients for presentation to server processes during mutual authentication. (The chapter in the OpenAFS Administration Guide about cell configuration and administration describes the role of server encryption keys in mutual authentication.) In Step 7, you also place the initial AFS server encryption key into the /usr/afs/etc/KeyFile file. The AFS server processes refer to this file to learn the server encryption key when they need to decrypt server tickets. You also issue several commands that enable the new admin user to issue privileged commands in all of the AFS suites. The following instructions do not configure all of the security mechanisms related to the AFS Backup System. See the chapter in the OpenAFS Administration Guide about configuring the Backup System. The examples below assume you are using MIT Kerberos. Please refer to the documentation for your KDC's administrative interface if you are using a different vendor. Enter kadmin interactive mode. # kadmin Authenticating as principal you/admin@YOUR REALM with password Password for you/admin@REALM: your_password server encryption key in Kerberos Database creating server encryption key Kerberos Database Issue the add_principal command to create Kerberos Database entries called admin and afs/<cell name>. You should make the admin_passwd as long and complex as possible, but keep in mind that administrators need to enter it often. It must be at least six characters long. Note that when creating the afs/<cell name> entry, the encryption types should be restricted to des-cbc-crc:v4. For more details regarding encryption types, see the documentation for your Kerberos installation. kadmin: add_principal -randkey -e des-cbc-crc:v4 afs/<cell name> Principal "afs/cell name@REALM" created. kadmin: add_principal admin Enter password for principal "admin@REALM": admin_password Principal "admin@REALM" created. commands kas examine kas commands examine displaying server encryption key Authentication Database Issue the kadmin get_principal command to display the afs/<cell name> entry. kadmin: get_principal afs/<cell name> Principal: afs/cell [ ... ] Key: vno 2, DES cbc mode with CRC-32, no salt [ ... ] Extract the newly created key for afs/cell to a keytab on the local machine. We will use /etc/afs.keytab as the location for this keytab. The keytab contains the key material that ensures the security of your AFS cell. You should ensure that it is kept in a secure location at all times.
kadmin: ktadd -k /etc/afs.keytab -e des-cbc-crc:v4 afs/<cell name> Entry for principal afs/<cell name> with kvno 3, encryption type DES cbc mode with CRC-32 added to keytab WRFILE:/etc/afs.keytab Make a note of the key version number (kvno) given in the response, as you will need it to load the key into the KeyFile in a later step. Note that each time you run ktadd a new key is generated for the item being extracted. This means that you cannot run ktadd multiple times and end up with the same key material each time. Issue the quit command to leave kadmin interactive mode. kadmin: quit commands bos adduser bos commands adduser usr/afs/etc/UserList UserList file UserList file first AFS machine files UserList creating UserList file entry admin account adding to UserList file Issue the bos adduser command to add the admin user to the /usr/afs/etc/UserList file. This enables the admin user to issue privileged bos and vos commands. # ./bos adduser <machine name> admin -noauth commands asetkey creating server encryption key KeyFile file server encryption key in KeyFile file Issue the asetkey command to set the AFS server encryption key in the /usr/afs/etc/KeyFile file. This key is created from the /etc/afs.keytab file created earlier. asetkey requires the key version number (or kvno) of the afs/cell key. You should have made note of the kvno when creating the key earlier. The key version number can also be found by running the kvno command. # kvno -k /etc/afs.keytab afs/<cell name> Once the kvno is known, the key can then be extracted from the keytab and added to the KeyFile using asetkey. # asetkey add <kvno> /etc/afs.keytab afs/<cell name> commands bos listkeys bos commands listkeys displaying server encryption key KeyFile file Issue the bos listkeys command to verify that the key version number for the new key in the KeyFile file is the same as the key version number in the Authentication Database's afs/cell name entry, which you displayed in Step 3. # ./bos listkeys <machine name> -noauth key 0 has cksum checksum You can safely ignore any error messages indicating that bos failed to get tickets or that authentication failed. Initializing the Protection Database Now continue to configure your cell's security systems by populating the Protection Database with the newly created admin user, and permitting it to issue privileged commands on the AFS filesystem. commands pts createuser pts commands createuser Protection Database Issue the pts createuser command to create a Protection Database entry for the admin user. By default, the Protection Server assigns AFS UID 1 (one) to the admin user, because it is the first user entry you are creating. If the local password file (/etc/passwd or equivalent) already has an entry for admin that assigns it a UNIX UID other than 1, it is best to use the -id argument to the pts createuser command to make the new AFS UID match the existing UNIX UID. Otherwise, it is best to accept the default. # pts createuser -name admin [-id <AFS UID>] -noauth User admin has id AFS UID commands pts adduser pts commands adduser system:administrators group admin account adding to system:administrators group Issue the pts adduser command to make the admin user a member of the system:administrators group, and the pts membership command to verify the new membership. Membership in the group enables the admin user to issue privileged pts commands and some privileged fs commands.
# ./pts adduser admin system:administrators -noauth # ./pts membership admin -noauth Groups admin (id: 1) is a member of: system:administrators commands bos restart on first AFS machine bos commands restart on first AFS machine restarting server process on first AFS machine server process restarting on first AFS machine Issue the bos restart command with the -all flag to restart the database server processes, so that they start using the new server encryption key. # ./bos restart <machine name> -all -noauth File Server first AFS machine fileserver process File Server starting File Server first AFS machine first AFS machine File Server, fs process Volume Server first AFS machine volserver process Volume Server starting Volume Server first AFS machine first AFS machine Volume Server Salvager (salvager process) first AFS machine fs process first AFS machine starting fs process first AFS machine first AFS machine Salvager Starting the File Server processes Start either the fs process or, if you want to run the Demand-Attach File Server, the dafs process. The fs process consists of the File Server, Volume Server, and Salvager (fileserver, volserver and salvager processes). The dafs process consists of the Demand-Attach File Server, Volume Server, Salvage Server, and Salvager (dafileserver, davolserver, salvageserver, and dasalvager processes). For information about the Demand-Attach File Server and to see whether or not you should run it, see Appendix C, The Demand-Attach File Server. Issue the bos create command to start the fs process or the dafs process. The commands appear here on multiple lines only for legibility. If you are not planning on running the Demand-Attach File Server, create the fs process: # ./bos create <machine name> fs fs /usr/afs/bin/fileserver \ /usr/afs/bin/volserver /usr/afs/bin/salvager \ -noauth If you are planning on running the Demand-Attach File Server, create the dafs process: # ./bos create <machine name> dafs dafs /usr/afs/bin/dafileserver \ /usr/afs/bin/davolserver /usr/afs/bin/salvageserver \ /usr/afs/bin/dasalvager -noauth Sometimes a message about Volume Location Database (VLDB) initialization appears, along with one or more instances of an error message similar to the following: FSYNC_clientInit temporary failure (will retry) This message appears when the volserver process tries to start before the fileserver process has completed its initialization. Wait a few minutes after the last such message before continuing, to guarantee that both processes have started successfully. commands bos status bos commands status You can verify that the fs or dafs process has started successfully by issuing the bos status command. Its output mentions two proc starts. If you are not running the Demand-Attach File Server: # ./bos status <machine name> fs -long -noauth If you are running the Demand-Attach File Server: # ./bos status <machine name> dafs -long -noauth Your next action depends on whether you have ever run AFS file server machines in the cell: commands vos create root.afs volume vos commands create root.afs volume root.afs volume creating volume creating root.afs creating root.afs volume If you are installing the first AFS server machine ever in the cell (that is, you are not upgrading the AFS software from a previous version), create the first AFS volume, root.afs. For the partition name argument, substitute the name of one of the machine's AFS server partitions (such as /vicepa). 
# ./vos create <machine name> <partition name> root.afs \ -noauth The Volume Server produces a message confirming that it created the volume on the specified partition. You can ignore error messages indicating that tokens are missing, or that authentication failed. commands vos syncvldb vos commands syncvldb commands vos syncserv vos commands syncserv If there are existing AFS file server machines and volumes in the cell, issue the vos syncvldb and vos syncserv commands to synchronize the VLDB with the actual state of volumes on the local machine. To follow the progress of the synchronization operation, which can take several minutes, use the -verbose flag. # ./vos syncvldb <machine name> -verbose -noauth # ./vos syncserv <machine name> -verbose -noauth You can ignore error messages indicating that tokens are missing, or that authentication failed. Update Server starting server portion first AFS machine upserver process Update Server starting Update Server server portion first AFS machine first AFS machine Update Server server portion first AFS machine defining as binary distribution machine first AFS machine defining as system control machine system control machine binary distribution machine Starting the Server Portion of the Update Server Start the server portion of the Update Server (the upserver process), to distribute the contents of directories on this machine to other server machines in the cell. It becomes active when you configure the client portion of the Update Server on additional server machines. Distributing the contents of its /usr/afs/etc directory makes this machine the cell's system control machine. The other server machines in the cell run the upclientetc process (an instance of the client portion of the Update Server) to retrieve the configuration files. Use the -crypt argument to the upserver initialization command to specify that the Update Server distributes the contents of the /usr/afs/etc directory only in encrypted form, as shown in the following instruction. Several of the files in the directory, particularly the KeyFile file, are crucial to cell security and so must never cross the network unencrypted. (You can choose not to configure a system control machine, in which case you must update the configuration files in each server machine's /usr/afs/etc directory individually. The bos commands used for this purpose also encrypt data before sending it across the network.) Distributing the contents of its /usr/afs/bin directory to other server machines of its system type makes this machine a binary distribution machine. The other server machines of its system type run the upclientbin process (an instance of the client portion of the Update Server) to retrieve the binaries. If your platform has a package management system, such as 'rpm' or 'apt', running the Update Server to distribute binaries may interfere with this system. The binaries in the /usr/afs/bin directory are not sensitive, so it is not necessary to encrypt them before transfer across the network. Include the -clear argument to the upserver initialization command to specify that the Update Server distributes the contents of the /usr/afs/bin directory in unencrypted form unless an upclientbin process requests encrypted transfer. Note that the server and client portions of the Update Server always mutually authenticate with one another, regardless of whether you use the -clear or -crypt arguments. This protects their communications from eavesdropping to some degree. 
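For illustration only, the matching client side is configured later, on each additional server machine rather than on this one. A minimal sketch of those commands, assuming the upclientetc and upclientbin process names used above (the authoritative commands appear in Installing Additional Server Machines), looks like the following; do not run them now. # ./bos create <additional machine> upclientetc simple "/usr/afs/bin/upclient <machine name> -crypt /usr/afs/etc" -cell <cell name> -noauth # ./bos create <additional machine> upclientbin simple "/usr/afs/bin/upclient <machine name> -clear /usr/afs/bin" -cell <cell name> -noauth The sketch is shown only to clarify how the -crypt and -clear choices made here are mirrored on the client side of the Update Server.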
For more information on the upclient and upserver processes, see their reference pages in the OpenAFS Administration Reference. The commands appear on multiple lines here only for legibility. Issue the bos create command to start the upserver process. # ./bos create <machine name> upserver simple \ "/usr/afs/bin/upserver -crypt /usr/afs/etc \ -clear /usr/afs/bin" -noauth Clock Sync Considerations Keeping the clocks on all server and client machines in your cell synchronized is crucial to several functions, and in particular to the correct operation of AFS's distributed database technology, Ubik. The chapter in the OpenAFS Administration Guide about administering server machines explains how time skew can disturb Ubik's performance and cause service outages in your cell. You should install and configure your time service independently of AFS. Your Kerberos realm will also require a reliable time source, so your site may already have one available. overview installing client functionality on first machine first AFS machine client functionality installing installing client functionality first AFS machine Overview: Installing Client Functionality The machine you are installing is now an AFS file server machine, database server machine, system control machine, and binary distribution machine. Now make it a client machine by completing the following tasks: Define the machine's cell membership for client processes Create the client version of the CellServDB file Define cache location and size Create the /afs directory and start the Cache Manager Distribution copying client files from first AFS machine first AFS machine copying client files to local disk copying client files to local disk first AFS machine Copying Client Files to the Local Disk You need only undertake the steps in this section if you are using a tar file distribution, or one built from scratch. Packaged distributions, such as RPMs or DEBs, will already have installed the necessary files in the correct locations. Before installing and configuring the AFS client, copy the necessary files from the tarball to the local /usr/vice/etc directory. If you have not already done so, unpack the distribution tarball for this machine's system type into a suitable location on the filesystem, such as /tmp/afsdist. If you use a different location, substitute that in the examples that follow. Copy files to the local /usr/vice/etc directory. This step places a copy of the AFS initialization script (and related files, if applicable) into the /usr/vice/etc directory. In the preceding instructions for incorporating AFS into the kernel, you copied the script directly to the operating system's conventional location for initialization files. When you incorporate AFS into the machine's startup sequence in a later step, you can choose to link the two files. On some system types that use a dynamic kernel loader program, you previously copied AFS library files into a subdirectory of the /usr/vice/etc directory. On other system types, you copied the appropriate AFS library file directly to the directory where the operating system accesses it. The following commands do not copy or recopy the AFS library files into the /usr/vice/etc directory, because on some system types the library files consume a large amount of space. If you want to copy them, add the -r flag to the first cp command and skip the second cp command.
# cd /tmp/afsdist/sysname/root.client/usr/vice/etc # cp -p * /usr/vice/etc # cp -rp C /usr/vice/etc cell name setting in client ThisCell file first AFS machine setting cell name in client ThisCell file first AFS machine first AFS machine ThisCell file (client) first AFS machine cell membership, defining for client processes usr/vice/etc/ThisCell ThisCell file (client) ThisCell file (client) first AFS machine files ThisCell (client) Defining Cell Membership for Client Processes Every AFS client machine has a copy of the /usr/vice/etc/ThisCell file on its local disk to define the machine's cell membership for the AFS client programs that run on it. The ThisCell file you created in the /usr/afs/etc directory (in Defining Cell Name and Membership for Server Processes) is used only by server processes. Among other functions, the ThisCell file on a client machine determines the following: The cell in which users gain tokens when they log onto the machine, assuming it is using an AFS-modified login utility The cell in which users gain tokens by default when they issue the aklog command The cell membership of the AFS server processes that the AFS command interpreters on this machine contact by default Change to the /usr/vice/etc directory and remove the symbolic link created in Starting the BOS Server. # cd /usr/vice/etc # rm ThisCell Create the ThisCell file as a copy of the /usr/afs/etc/ThisCell file. Defining the same local cell for both server and client processes leads to the most consistent AFS performance. # cp /usr/afs/etc/ThisCell ThisCell database server machine entry in client CellServDB file on first AFS machine usr/vice/etc/CellServDB CellServDB file (client) CellServDB file (client) creating on first AFS machine creating CellServDB file (client) first AFS machine CellServDB file (client) required format requirements CellServDB file format (client version) files CellServDB (client) first AFS machine CellServDB file (client) Creating the Client CellServDB File The /usr/vice/etc/CellServDB file on a client machine's local disk lists the database server machines for each cell that the local Cache Manager can contact. If there is no entry in the file for a cell, or if the list of database server machines is wrong, then users working on this machine cannot access the cell. The chapter in the OpenAFS Administration Guide about administering client machines explains how to maintain the file after creating it. As the afsd program initializes the Cache Manager, it copies the contents of the CellServDB file into kernel memory. The Cache Manager always consults the list in kernel memory rather than the CellServDB file itself. Between reboots of the machine, you can use the fs newcell command to update the list in kernel memory directly; see the chapter in the OpenAFS Administration Guide about administering client machines. The AFS distribution includes the file CellServDB.dist. It includes an entry for all AFS cells that agreed to share their database server machine information at the time the distribution was created. The definitive copy of this file is maintained at grand.central.org, and updates may be obtained from /afs/grand.central.org/service/CellServDB or http://grand.central.org/dl/cellservdb/CellServDB The CellServDB.dist file can be a good basis for the client CellServDB file, because all of the entries in it use the correct format. You can add or remove cell entries as you see fit. 
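If this machine already has outbound network access, one convenient way to obtain the current master copy is to download it from the URL given above. The following is a sketch only; it assumes the wget utility is installed, and any HTTP client will do. # wget -O /usr/vice/etc/CellServDB.dist http://grand.central.org/dl/cellservdb/CellServDB You can then merge whichever of its entries you want into the client CellServDB file.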
Depending on your cache manager configuration, additional steps (as detailed in Enabling Access to Foreign Cells) may be required to enable the Cache Manager to actually reach the cells. In this section, you add an entry for the local cell to the local CellServDB file. The current working directory is still /usr/vice/etc. Remove the symbolic link created in Starting the BOS Server and rename the CellServDB.sample file to CellServDB. # rm CellServDB # mv CellServDB.sample CellServDB Add an entry for the local cell to the CellServDB file. One easy method is to use the cat command to append the contents of the server /usr/afs/etc/CellServDB file to the client version. # cat /usr/afs/etc/CellServDB >> CellServDB Then open the file in a text editor to verify that there are no blank lines, and that all entries have the required format, which is described just following. The ordering of cells is not significant, but it can be convenient to have the client machine's home cell at the top; move it there now if you wish. The first line of a cell's entry has the following format: >cell_name #organization where cell_name is the cell's complete Internet domain name (for example, example.com) and organization is an optional field that follows any number of spaces and the number sign (#). By convention it names the organization to which the cell corresponds (for example, the Example Corporation). After the first line comes a separate line for each database server machine. Each line has the following format: IP_address #machine_name where IP_address is the machine's IP address in dotted decimal format (for example, 192.12.105.3). Following any number of spaces and the number sign (#) is machine_name, the machine's fully-qualified hostname (for example, db1.example.com). In this case, the number sign does not indicate a comment; machine_name is a required field. If the file includes cells that you do not wish users of this machine to access, remove their entries. The following example shows entries for two cells, each of which has three database server machines: >example.com #Example Corporation (home cell) 192.12.105.3 #db1.example.com 192.12.105.4 #db2.example.com 192.12.105.55 #db3.example.com >example.org #Example Organization cell 138.255.68.93 #serverA.example.org 138.255.68.72 #serverB.example.org 138.255.33.154 #serverC.example.org cache configuring first AFS machine configuring cache first AFS machine setting cache size and location first AFS machine first AFS machine cache size and location Configuring the Cache The Cache Manager uses a cache on the local disk or in machine memory to store local copies of files fetched from file server machines. As the afsd program initializes the Cache Manager, it sets basic cache configuration parameters according to definitions in the local /usr/vice/etc/cacheinfo file. The file has three fields: The first field names the local directory on which to mount the AFS filespace. The conventional location is the /afs directory. The second field defines the local disk directory to use for the disk cache. The conventional location is the /usr/vice/cache directory, but you can specify an alternate directory if another partition has more space available. There must always be a value in this field, but the Cache Manager ignores it if the machine uses a memory cache. The third field specifies the number of kilobyte (1024 byte) blocks to allocate for the cache. The values you define must meet the following requirements. 
On a machine using a disk cache, the Cache Manager expects always to be able to use the amount of space specified in the third field. Failure to meet this requirement can cause serious problems, some of which can be repaired only by rebooting. You must prevent non-AFS processes from filling up the cache partition. The simplest way is to devote a partition to the cache exclusively. The amount of space available in memory or on the partition housing the disk cache directory imposes an absolute limit on cache size. The maximum supported cache size can vary in each AFS release; see the OpenAFS Release Notes for the current version. For a disk cache, you cannot specify a value in the third field that exceeds 95% of the space available on the partition mounted at the directory named in the second field. If you violate this restriction, the afsd program exits without starting the Cache Manager and prints an appropriate message on the standard output stream. A value of 90% is more appropriate on most machines. Some operating systems (such as AIX) do not automatically reserve some space to prevent the partition from filling completely; for them, a smaller value (say, 80% to 85% of the space available) is more appropriate. For a memory cache, you must leave enough memory for other processes and applications to run. If you try to allocate more memory than is actually available, the afsd program exits without initializing the Cache Manager and produces the following message on the standard output stream. afsd: memCache allocation failure at number KB The number value is how many kilobytes were allocated just before the failure, and so indicates the approximate amount of memory available. Within these hard limits, the factors that determine appropriate cache size include the number of users working on the machine, the size of the files with which they work, and (for a memory cache) the number of processes that run on the machine. The higher the demand from these factors, the larger the cache needs to be to maintain good performance. Disk caches smaller than 10 MB do not generally perform well. Machines serving multiple users usually perform better with a cache of at least 60 to 70 MB. The point at which enlarging the cache further does not really improve performance depends on the factors mentioned previously and is difficult to predict. Memory caches smaller than 1 MB are nonfunctional, and the performance of caches smaller than 5 MB is usually unsatisfactory. Suitable upper limits are similar to those for disk caches but are probably determined more by the demands on memory from other sources on the machine (number of users and processes). Machines running only a few processes possibly can use a smaller memory cache. Configuring a Disk Cache Not all file system types that an operating system supports are necessarily supported for use as the cache partition. For possible restrictions, see the OpenAFS Release Notes. To configure the disk cache, perform the following procedures: Create the local directory to use for caching. The following instruction shows the conventional location, /usr/vice/cache. If you are devoting a partition exclusively to caching, as recommended, you must also configure it, make a file system on it, and mount it at the directory created in this step. # mkdir /usr/vice/cache Create the cacheinfo file to define the configuration parameters discussed previously. The following instruction shows the standard mount location, /afs, and the standard cache location, /usr/vice/cache. 
# echo "/afs:/usr/vice/cache:#blocks" > /usr/vice/etc/cacheinfo The following example defines the disk cache size as 50,000 KB: # echo "/afs:/usr/vice/cache:50000" > /usr/vice/etc/cacheinfo Configuring a Memory Cache To configure a memory cache, create the cacheinfo file to define the configuration parameters discussed previously. The following instruction shows the standard mount location, /afs, and the standard cache location, /usr/vice/cache (though the exact value of the latter is irrelevant for a memory cache). # echo "/afs:/usr/vice/cache:#blocks" > /usr/vice/etc/cacheinfo The following example allocates 25,000 KB of memory for the cache. # echo "/afs:/usr/vice/cache:25000" > /usr/vice/etc/cacheinfo Cache Manager first AFS machine configuring Cache Manager first AFS machine first AFS machine Cache Manager afs (/afs) directory creating first AFS machine AFS initialization script setting afsd parameters first AFS machine first AFS machine afsd command parameters Configuring the Cache Manager By convention, the Cache Manager mounts the AFS filespace on the local /afs directory. In this section you create that directory. The afsd program sets several cache configuration parameters as it initializes the Cache Manager, and starts daemons that improve performance. You can use the afsd command's arguments to override the parameters' default values and to change the number of some of the daemons. Depending on the machine's cache size, its amount of RAM, and how many people work on it, you can sometimes improve Cache Manager performance by overriding the default values. For a discussion of all of the afsd command's arguments, see its reference page in the OpenAFS Administration Reference. On platforms using the standard 'afs' initialisation script (this does not apply to Fedora or RHEL based distributions), the afsd command line in the AFS initialization script on each system type includes an OPTIONS variable. You can use it to set nondefault values for the command's arguments, in one of the following ways: You can create an afsd options file that sets values for arguments to the afsd command. If the file exists, its contents are automatically substituted for the OPTIONS variable in the AFS initialization script. The AFS distribution for some system types includes an options file; on other system types, you must create it. You use two variables in the AFS initialization script to specify the path to the options file: CONFIG and AFSDOPT. On system types that define a conventional directory for configuration files, the CONFIG variable indicates it by default; otherwise, the variable indicates an appropriate location. List the desired afsd options on a single line in the options file, separating each option with one or more spaces. The following example sets the -stat argument to 2500, the -daemons argument to 4, and the -volumes argument to 100. -stat 2500 -daemons 4 -volumes 100 On a machine that uses a disk cache, you can set the OPTIONS variable in the AFS initialization script to one of $SMALL, $MEDIUM, or $LARGE. The AFS initialization script uses one of these settings if the afsd options file named by the AFSDOPT variable does not exist. In the script as distributed, the OPTIONS variable is set to the value $MEDIUM. Do not set the OPTIONS variable to $SMALL, $MEDIUM, or $LARGE on a machine that uses a memory cache. The arguments it sets are appropriate only on a machine that uses a disk cache. 
The script (or on some system types the afsd options file named by the AFSDOPT variable) defines a value for each of SMALL, MEDIUM, and LARGE that sets afsd command arguments appropriately for client machines of different sizes: SMALL is suitable for a small machine that serves one or two users and has approximately 8 MB of RAM and a 20-MB cache MEDIUM is suitable for a medium-sized machine that serves two to six users and has 16 MB of RAM and a 40-MB cache LARGE is suitable for a large machine that serves five to ten users and has 32 MB of RAM and a 100-MB cache You can choose not to create an afsd options file and to set the OPTIONS variable in the initialization script to a null value rather than to the default $MEDIUM value. You can then either set arguments directly on the afsd command line in the script, or set no arguments (and so accept default values for all Cache Manager parameters). If you are running on a Fedora or RHEL based system, the openafs-client initialization script behaves differently from that described above. It sources /etc/sysconfig/openafs, in which the AFSD_ARGS variable may be set to contain any or all of the afsd options detailed above. Note that this script does not support setting an OPTIONS variable, or the SMALL, MEDIUM and LARGE methods of defining cache size. Create the local directory on which to mount the AFS filespace, by convention /afs. If the directory already exists, verify that it is empty. # mkdir /afs On AIX systems, add the following line to the /etc/vfs file. It enables AIX to unmount AFS correctly during shutdown. afs 4 none none On non-package based Linux systems, copy the afsd options file from the /usr/vice/etc directory to the /etc/sysconfig directory, removing the .conf extension as you do so. # cp /usr/vice/etc/afs.conf /etc/sysconfig/afs Edit the machine's AFS initialization script or afsd options file to set appropriate values for afsd command parameters. The script resides in the indicated location on each system type: On AIX systems, /etc/rc.afs On HP-UX systems, /sbin/init.d/afs On IRIX systems, /etc/init.d/afs On Fedora and RHEL systems, /etc/sysconfig/openafs On non-package based Linux systems, /etc/sysconfig/afs (the afsd options file) On Solaris systems, /etc/init.d/afs Use one of the methods described in the introduction to this section to add the following flags to the afsd command line. If you intend for the machine to remain an AFS client, also set any performance-related arguments you wish. Add the -memcache flag if the machine is to use a memory cache. Add the -verbose flag to display a trace of the Cache Manager's initialization on the standard output stream. In order to successfully complete the instructions in the remainder of this guide, it is important that the machine does not have a synthetic root (as discussed in Enabling Access to Foreign Cells). As some distributions ship with this enabled, it may be necessary to remove any occurrences of the -dynroot and -afsdb options from both the AFS initialisation script and options file. If this functionality is required, it may be re-enabled as detailed in Enabling Access to Foreign Cells. overview completing installation of first machine first AFS machine completion of installation Overview: Completing the Installation of the First AFS Machine The machine is now configured as an AFS file server and client machine. In this final phase of the installation, you initialize the Cache Manager and then create the upper levels of your AFS filespace, among other procedures.
The procedures are: Verify that the initialization script works correctly, and incorporate it into the operating system's startup and shutdown sequence Create and mount top-level volumes Create and mount volumes to store system binaries in AFS Enable access to foreign cells Institute additional security measures Remove client functionality if desired AFS initialization script verifying on first AFS machine AFS initialization script running first AFS machine first AFS machine AFS initialization script running/verifying running AFS init. script first AFS machine invoking AFS init. script running Verifying the AFS Initialization Script At this point you run the AFS initialization script to verify that it correctly invokes all of the necessary programs and AFS processes, and that they start correctly. The following are the relevant commands: The command that dynamically loads AFS modifications into the kernel, on some system types (not applicable if the kernel has AFS modifications built in) The bosserver command, which starts the BOS Server; it in turn starts the server processes for which you created entries in the /usr/afs/local/BosConfig file The afsd command, which initializes the Cache Manager On system types that use a dynamic loader program, you must reboot the machine before running the initialization script, so that it can freshly load AFS modifications into the kernel. If there are problems during the initialization, attempt to resolve them. The OpenAFS mailing lists can provide assistance if necessary. commands bos shutdown bos commands shutdown Issue the bos shutdown command to shut down the AFS server processes other than the BOS Server. Include the -wait flag to delay return of the command shell prompt until all processes shut down completely. # /usr/afs/bin/bos shutdown <machine name> -wait Issue the ps command to learn the bosserver process's process ID number (PID), and then the kill command to stop it. # ps appropriate_ps_options | grep bosserver # kill bosserver_PID Issue the appropriate commands to run the AFS initialization script for this system type. AIX AFS initialization script on first AFS machine On AIX systems: Reboot the machine and log in again as the local superuser root. # cd / # shutdown -r now login: root Password: root_password Run the AFS initialization script. # /etc/rc.afs HP-UX AFS initialization script on first AFS machine On HP-UX systems: Run the AFS initialization script. # /sbin/init.d/afs start IRIX AFS initialization script on first AFS machine afsclient variable (IRIX) first AFS machine variables afsclient (IRIX) first AFS machine IRIX afsclient variable first AFS machine afsserver variable (IRIX) first AFS machine variables afsserver (IRIX) first AFS machine IRIX afsserver variable first AFS machine On IRIX systems: If you have configured the machine to use the ml dynamic loader program, reboot the machine and log in again as the local superuser root. # cd / # shutdown -i6 -g0 -y login: root Password: root_password Issue the chkconfig command to activate the afsserver and afsclient configuration variables. # /etc/chkconfig -f afsserver on # /etc/chkconfig -f afsclient on Run the AFS initialization script. # /etc/init.d/afs start Linux AFS initialization script on first AFS machine On Linux systems: Reboot the machine and log in again as the local superuser root. # cd / # shutdown -r now login: root Password: root_password Run the AFS initialization scripts. 
# /etc/rc.d/init.d/openafs-client start # /etc/rc.d/init.d/openafs-server start Solaris AFS initialization script on first AFS machine On Solaris systems: Reboot the machine and log in again as the local superuser root. # cd / # shutdown -i6 -g0 -y login: root Password: root_password Run the AFS initialization script. # /etc/init.d/afs start Wait for the message that confirms that Cache Manager initialization is complete. On machines that use a disk cache, it can take a while to initialize the Cache Manager for the first time, because the afsd program must create all of the Vn files in the cache directory. Subsequent Cache Manager initializations do not take nearly as long, because the Vn files already exist. commands aklog aklog command If you are working with an existing cell which uses kaserver for authentication, please recall the note in Using this Appendix detailing the substitution of kinit and aklog with klog. As a basic test of correct AFS functioning, issue the kinit and aklog commands to authenticate as the admin user. Provide the password (admin_passwd) you defined in Initializing Cell Security. # kinit admin Password: admin_passwd # aklog commands tokens tokens command Issue the tokens command to verify that the aklog command worked correctly. If it did, the output looks similar to the following example for the example.com cell, where admin's AFS UID is 1. If the output does not seem correct, resolve the problem. Changes to the AFS initialization script are possibly necessary. The OpenAFS mailing lists can provide assistance as necessary. # tokens Tokens held by the Cache Manager: User's (AFS ID 1) tokens for afs@example.com [Expires May 22 11:52] --End of list-- Issue the bos status command to verify that the output for each process reads Currently running normally. # /usr/afs/bin/bos status <machine name> fs commands checkvolumes commands fs checkvolumes Change directory to the local file system root (/) and issue the fs checkvolumes command. # cd / # /usr/afs/bin/fs checkvolumes AFS initialization script adding to machine startup sequence first AFS machine installing AFS initialization script first AFS machine first AFS machine AFS initialization script activating activating AFS init. script installing Activating the AFS Initialization Script Now that you have confirmed that the AFS initialization script works correctly, take the action necessary to have it run automatically at each reboot. Proceed to the instructions for your system type: Activating the Script on AIX Systems Activating the Script on HP-UX Systems Activating the Script on IRIX Systems Activating the Script on Linux Systems Activating the Script on Solaris Systems AIX AFS initialization script on first AFS machine Activating the Script on AIX Systems Edit the AIX initialization file, /etc/inittab, adding the following line to invoke the AFS initialization script. Place it just after the line that starts NFS daemons. rcafs:2:wait:/etc/rc.afs > /dev/console 2>&1 # Start AFS services (Optional) There are now copies of the AFS initialization file in both the /usr/vice/etc and /etc directories. If you want to avoid potential confusion by guaranteeing that they are always the same, create a link between them. You can always retrieve the original script from the AFS CD-ROM if necessary. # cd /usr/vice/etc # rm rc.afs # ln -s /etc/rc.afs Proceed to Configuring the Top Levels of the AFS Filespace. 
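As an aside for AIX administrators: rather than editing /etc/inittab by hand, you can use the system's inittab utilities to add and verify the record. The following is a sketch only, assuming the stock mkitab and lsitab utilities and that the existing NFS record is named rcnfs; check your system before relying on it. # mkitab -i rcnfs "rcafs:2:wait:/etc/rc.afs > /dev/console 2>&1" # lsitab rcafs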
HP-UX AFS initialization script on first AFS machine Activating the Script on HP-UX Systems Change to the /sbin/init.d directory and issue the ln -s command to create symbolic links that incorporate the AFS initialization script into the HP-UX startup and shutdown sequence. # cd /sbin/init.d # ln -s ../init.d/afs /sbin/rc2.d/S460afs # ln -s ../init.d/afs /sbin/rc2.d/K800afs (Optional) There are now copies of the AFS initialization file in both the /usr/vice/etc and /sbin/init.d directories. If you want to avoid potential confusion by guaranteeing that they are always the same, create a link between them. You can always retrieve the original script from the AFS CD-ROM if necessary. # cd /usr/vice/etc # rm afs.rc # ln -s /sbin/init.d/afs afs.rc Proceed to Configuring the Top Levels of the AFS Filespace. IRIX AFS initialization script on first AFS machine Activating the Script on IRIX Systems Change to the /etc/init.d directory and issue the ln -s command to create symbolic links that incorporate the AFS initialization script into the IRIX startup and shutdown sequence. # cd /etc/init.d # ln -s ../init.d/afs /etc/rc2.d/S35afs # ln -s ../init.d/afs /etc/rc0.d/K35afs (Optional) There are now copies of the AFS initialization file in both the /usr/vice/etc and /etc/init.d directories. If you want to avoid potential confusion by guaranteeing that they are always the same, create a link between them. You can always retrieve the original script from the AFS CD-ROM if necessary. # cd /usr/vice/etc # rm afs.rc # ln -s /etc/init.d/afs afs.rc Proceed to Configuring the Top Levels of the AFS Filespace. Linux AFS initialization script on first AFS machine Activating the Script on Linux Systems Issue the chkconfig command to activate the openafs-client and openafs-server configuration variables. Based on the instruction in the AFS initialization file that begins with the string #chkconfig, the command automatically creates the symbolic links that incorporate the script into the Linux startup and shutdown sequence. # /sbin/chkconfig --add openafs-client # /sbin/chkconfig --add openafs-server (Optional) There are now copies of the AFS initialization file in both the /usr/vice/etc and /etc/rc.d/init.d directories, and copies of the afsd options file in both the /usr/vice/etc and /etc/sysconfig directories. If you want to avoid potential confusion by guaranteeing that the two copies of each file are always the same, create a link between them. You can always retrieve the original script or options file from the AFS CD-ROM if necessary. # cd /usr/vice/etc # rm afs.rc afs.conf # ln -s /etc/rc.d/init.d/afs afs.rc # ln -s /etc/sysconfig/afs afs.conf Proceed to Configuring the Top Levels of the AFS Filespace. Solaris AFS initialization script on first AFS machine Activating the Script on Solaris Systems Change to the /etc/init.d directory and issue the ln -s command to create symbolic links that incorporate the AFS initialization script into the Solaris startup and shutdown sequence. # cd /etc/init.d # ln -s ../init.d/afs /etc/rc3.d/S99afs # ln -s ../init.d/afs /etc/rc0.d/K66afs (Optional) There are now copies of the AFS initialization file in both the /usr/vice/etc and /etc/init.d directories. If you want to avoid potential confusion by guaranteeing that they are always the same, create a link between them. You can always retrieve the original script from the AFS CD-ROM if necessary. 
# cd /usr/vice/etc # rm afs.rc # ln -s /etc/init.d/afs afs.rc AFS filespace configuring top levels configuring AFS filespace (top levels) Configuring the Top Levels of the AFS Filespace If you have not previously run AFS in your cell, you now configure the top levels of your cell's AFS filespace. If you have run a previous version of AFS, the filespace is already configured. Proceed to Storing AFS Binaries in AFS. root.cell volume creating and replicating volume creating root.cell creating root.cell volume You created the root.afs volume in Starting the File Server, Volume Server, and Salvager, and the Cache Manager mounted it automatically on the local /afs directory when you ran the AFS initialization script in Verifying the AFS Initialization Script. You now set the access control list (ACL) on the /afs directory; creating, mounting, and setting the ACL are the three steps required when creating any volume. After setting the ACL on the root.afs volume, you create your cell's root.cell volume, mount it as a subdirectory of the /afs directory, and set the ACL. Create both a read/write and a regular mount point for the root.cell volume. The read/write mount point enables you to access the read/write version of replicated volumes when necessary. Creating both mount points essentially creates separate read-only and read-write copies of your filespace, and enables the Cache Manager to traverse the filespace on a read-only path or read/write path as appropriate. For further discussion of these concepts, see the chapter in the OpenAFS Administration Guide about administering volumes. root.afs volume replicating volume replicating root.afs and root.cell replicating volumes Then replicate both the root.afs and root.cell volumes. This is required if you want to replicate any other volumes in your cell, because all volumes mounted above a replicated volume must themselves be replicated in order for the Cache Manager to access the replica. When the root.afs volume is replicated, the Cache Manager is programmed to access its read-only version (root.afs.readonly) whenever possible. To make changes to the contents of the root.afs volume (when, for example, you mount another cell's root.cell volume at the second level in your filespace), you must mount the root.afs volume temporarily, make the changes, release the volume and remove the temporary mount point. For instructions, see Enabling Access to Foreign Cells. fs commands setacl commands fs setacl access control list (ACL), setting setting ACL Issue the fs setacl command to edit the ACL on the /afs directory. Add an entry that grants the l (lookup) and r (read) permissions to the system:anyuser group, to enable all AFS users who can reach your cell to traverse through the directory. If you prefer to enable access only to locally authenticated users, substitute the system:authuser group. Note that there is already an ACL entry that grants all seven access rights to the system:administrators group. It is a default entry that AFS places on every new volume's root directory. The top-level AFS directory, typically /afs, is a special case: when the client is configured to run in dynroot mode (e.g. afsd -dynroot), attempts to set the ACL on this directory will return Connection timed out. This is because the dynamically-generated root directory is not a part of the global AFS space, and cannot have an access control list set on it.
# /usr/afs/bin/fs setacl /afs system:anyuser rl commands vos create root.cell volume vos commands create root.cell volume fs commands mkmount commands fs mkmount mount point creating mount point volume mounting Issue the vos create command to create the root.cell volume. Then issue the fs mkmount command to mount it as a subdirectory of the /afs directory, where it serves as the root of your cell's local AFS filespace. Finally, issue the fs setacl command to create an ACL entry for the system:anyuser group (or system:authuser group). For the partition name argument, substitute the name of one of the machine's AFS server partitions (such as /vicepa). For the cellname argument, substitute your cell's fully-qualified Internet domain name (such as example.com). # /usr/afs/bin/vos create <machine name> <partition name> root.cell # /usr/afs/bin/fs mkmount /afs/cellname root.cell # /usr/afs/bin/fs setacl /afs/cellname system:anyuser rl creating symbolic link for abbreviated cell name symbolic link for abbreviated cell name cell name symbolic link for abbreviated (Optional) Create a symbolic link to a shortened cell name, to reduce the length of pathnames for users in the local cell. For example, in the example.com cell, /afs/example is a link to /afs/example.com. # cd /afs # ln -s full_cellname short_cellname read/write mount point for root.afs volume root.afs volume read/write mount point creating read/write mount point Issue the fs mkmount command to create a read/write mount point for the root.cell volume (you created a regular mount point in Step 2). By convention, the name of a read/write mount point begins with a period, both to distinguish it from the regular mount point and to make it visible only when the -a flag is used on the ls command. Change directory to /usr/afs/bin to make it easier to access the command binaries. # cd /usr/afs/bin # ./fs mkmount /afs/.cellname root.cell -rw commands vos addsite vos commands addsite volume defining replication site defining replication site for volume Issue the vos addsite command to define a replication site for both the root.afs and root.cell volumes. In each case, substitute for the partition name argument the partition where the volume's read/write version resides. When you install additional file server machines, it is a good idea to create replication sites on them as well. # ./vos addsite <machine name> <partition name> root.afs # ./vos addsite <machine name> <partition name> root.cell fs commands examine commands fs examine Issue the fs examine command to verify that the Cache Manager can access both the root.afs and root.cell volumes, before you attempt to replicate them. The output lists each volume's name, volume ID number, quota, size, and the size of the partition that houses them. If you get an error message instead, do not continue before taking corrective action. # ./fs examine /afs # ./fs examine /afs/cellname commands vos release vos commands release volume releasing replicated releasing replicated volume Issue the vos release command to release a replica of the root.afs and root.cell volumes to the sites you defined in Step 5. # ./vos release root.afs # ./vos release root.cell fs commands checkvolumes commands fs checkvolumes Issue the fs checkvolumes command to force the Cache Manager to notice that you have released read-only versions of the volumes, then issue the fs examine command again.
This time its output mentions the read-only version of the volumes (root.afs.readonly and root.cell.readonly) instead of the read/write versions, because of the Cache Manager's bias to access the read-only version of the root.afs volume if it exists. # ./fs checkvolumes # ./fs examine /afs # ./fs examine /afs/cellname storing AFS binaries in volumes creating volume for AFS binaries volume for AFS binaries binaries storing AFS in volume usr/afsws directory directories /usr/afsws Storing AFS Binaries in AFS Sites with existing binary distribution mechanisms, including those which use packaging systems such as RPM, may wish to skip this step, and use tools native to their operating system to manage AFS configuration information. In the conventional configuration, you make AFS client binaries and configuration files available in the subdirectories of the /usr/afsws directory on client machines (afsws is an acronym for AFS workstation). You can conserve local disk space by creating /usr/afsws as a link to an AFS volume that houses the AFS client binaries and configuration files for this system type. In this section you create the necessary volumes. The conventional location to which to link /usr/afsws is /afs/cellname/sysname/usr/afsws, where sysname is the appropriate system type name as specified in the OpenAFS Release Notes. The instructions in Installing Additional Client Machines assume that you have followed the instructions in this section. If you have previously run AFS in the cell, the volumes may already exist. If so, you need to perform Step 8 only. The current working directory is still /usr/afs/bin, which houses the fs and vos command suite binaries. In the following commands, you may still need to specify the complete pathname to the commands, depending on how your PATH environment variable is set. commands vos create volume for AFS binaries vos commands create volume for AFS binaries Issue the vos create command to create volumes for storing the AFS client binaries for this system type. The following example instruction creates volumes called sysname, sysname.usr, and sysname.usr.afsws. Refer to the OpenAFS Release Notes to learn the proper value of sysname for this system type. # vos create <machine name> <partition name> sysname # vos create <machine name> <partition name> sysname.usr # vos create <machine name> <partition name> sysname.usr.afsws Issue the fs mkmount command to mount the newly created volumes. Because the root.cell volume is replicated, you must precede the cellname part of the pathname with a period to specify the read/write mount point, as shown. Then issue the vos release command to release a new replica of the root.cell volume, and the fs checkvolumes command to force the local Cache Manager to access them. # fs mkmount -dir /afs/.cellname/sysname -vol sysname # fs mkmount -dir /afs/.cellname/sysname/usr -vol sysname.usr # fs mkmount -dir /afs/.cellname/sysname/usr/afsws -vol sysname.usr.afsws # vos release root.cell # fs checkvolumes Issue the fs setacl command to grant the l (lookup) and r (read) permissions to the system:anyuser group on each new directory's ACL. # cd /afs/.cellname/sysname # fs setacl -dir . usr usr/afsws -acl system:anyuser rl commands fs setquota fs commands setquota quota for volume volume setting quota setting volume quota Issue the fs setquota command to set an unlimited quota on the volume mounted at the /afs/cellname/sysname/usr/afsws directory.
This enables you to copy all of the appropriate files from the distribution into the volume without exceeding the volume's quota. If you wish, you can set the volume's quota to a finite value after you complete the copying operation. At that point, use the vos examine command to determine how much space the volume is occupying. Then issue the fs setquota command to set a quota that is slightly larger. # fs setquota /afs/.cellname/sysname/usr/afsws 0 Unpack the distribution tarball into the /tmp/afsdist directory, if you have not already done so. copying AFS binaries into volume CD-ROM copying AFS binaries into volume first AFS machine copying AFS binaries into volume Copy the contents of the indicated directories from the distribution into the /afs/cellname/sysname/usr/afsws directory. # cd /afs/.cellname/sysname/usr/afsws # cp -rp /tmp/afsdist/sysname/bin . # cp -rp /tmp/afsdist/sysname/etc . # cp -rp /tmp/afsdist/sysname/include . # cp -rp /tmp/afsdist/sysname/lib . creating symbolic link to AFS binaries symbolic link to AFS binaries from local disk Create /usr/afsws on the local disk as a symbolic link to the directory /afs/cellname/@sys/usr/afsws. You can specify the actual system name instead of @sys if you wish, but the advantage of using @sys is that it remains valid if you upgrade this machine to a different system type. # ln -s /afs/cellname/@sys/usr/afsws /usr/afsws PATH environment variable for users variables PATH, setting for users (Optional) To enable users to issue commands from the AFS suites (such as fs) without having to specify a pathname to their binaries, include the /usr/afsws/bin and /usr/afsws/etc directories in the PATH environment variable you define in each user's shell initialization file (such as .cshrc). storing AFS documentation in volumes creating volume for AFS documentation volume for AFS documentation documentation, creating volume for AFS usr/afsdoc directory directories /usr/afsdoc Storing AFS Documents in AFS The AFS distribution includes the following documents: OpenAFS Release Notes OpenAFS Quick Beginnings OpenAFS User Guide OpenAFS Administration Reference OpenAFS Administration Guide OpenAFS Documentation is not currently provided with all distributions, but may be downloaded separately from the OpenAFS website. The OpenAFS Documentation Distribution has a directory for each document format provided. The different formats are suitable for online viewing, printing, or both. This section explains how to create and mount a volume to house the documents, making them available to your users. The recommended mount point for the volume is /afs/cellname/afsdoc. If you wish, you can create a link to the mount point on each client machine's local disk, called /usr/afsdoc. Alternatively, you can create a link to the mount point in each user's home directory. You can also choose to permit users to access only certain documents (most probably, the OpenAFS User Guide) by creating different mount points or setting different ACLs on different document directories. The current working directory is still /usr/afs/bin, which houses the fs and vos command suite binaries you use to create and mount volumes. In the following commands, you may still need to specify the complete pathname to the commands, depending on how your PATH environment variable is set. commands vos create volume for AFS documentation vos commands create volume for AFS documentation Issue the vos create command to create a volume for storing the AFS documentation.
Include the -maxquota argument to set an unlimited quota on the volume. This enables you to copy all of the appropriate files from the distribution into the volume without exceeding the volume's quota. If you wish, you can set the volume's quota to a finite value after you complete the copying operations. At that point, use the vos examine command to determine how much space the volume is occupying. Then issue the fs setquota command to set a quota that is slightly larger. # vos create <machine name> <partition name> afsdoc -maxquota 0 Issue the fs mkmount command to mount the new volume. Because the root.cell volume is replicated, you must precede the cellname with a period to specify the read/write mount point, as shown. Then issue the vos release command to release a new replica of the root.cell volume, and the fs checkvolumes command to force the local Cache Manager to access them. # fs mkmount -dir /afs/.cellname/afsdoc -vol afsdoc # vos release root.cell # fs checkvolumes Issue the fs setacl command to grant the rl permissions to the system:anyuser group on the new directory's ACL. # cd /afs/.cellname/afsdoc # fs setacl . system:anyuser rl Unpack the OpenAFS documentation distribution into the /tmp/afsdocs directory. You may use a different directory, in which case the location you use should be substituted in the following examples. For instructions on unpacking the distribution, consult the documentation for your operating system's tar command. copying AFS documentation from distribution OpenAFS Distribution copying AFS documentation from first AFS machine copying AFS documentation from OpenAFS distribution index.htm file files index.htm Copy the AFS documents in one or more formats from the unpacked distribution into subdirectories of the /afs/cellname/afsdoc directory. Repeat the commands for each format. # mkdir format_name # cd format_name # cp -rp /tmp/afsdocs/format_name . If you choose to store the HTML version of the documents in AFS, note that in addition to a subdirectory for each document there are several files with a .gif extension, which enable readers to move easily between sections of a document. The file called index.htm is an introductory HTML page that contains a hyperlink to each of the documents. For online viewing to work properly, these files must remain in the top-level HTML directory (the one named, for example, /afs/cellname/afsdoc/html). (Optional) If you believe it is helpful to your users to access the AFS documents in a certain format via a local disk directory, create /usr/afsdoc on the local disk as a symbolic link to the documentation directory in AFS (/afs/cellname/afsdoc/format_name). # ln -s /afs/cellname/afsdoc/format_name /usr/afsdoc An alternative is to create a link in each user's home directory to the /afs/cellname/afsdoc/format_name directory. storing system binaries in volumes creating volume for system binaries volume for system binaries binaries storing system in volumes Storing System Binaries in AFS You can also choose to store other system binaries in AFS volumes, such as the standard UNIX programs conventionally located in local disk directories such as /etc, /bin, and /lib. Storing such binaries in an AFS volume not only frees local disk space, but makes it easier to update binaries on all client machines. The following is a suggested scheme for storing system binaries in AFS. It does not include instructions, but you can use the instructions in Storing AFS Binaries in AFS (which are for AFS-specific binaries) as a template.
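For example, extending that template to the sysname.usr.bin volume listed in the chart below might look like the following sketch (the machine, partition, and cell names are placeholders, and the ACL entry follows the system:authuser recommendation discussed below; adjust both to suit your cell):
# vos create <machine name> <partition name> sysname.usr.bin
# fs mkmount -dir /afs/.cellname/sysname/usr/bin -vol sysname.usr.bin
# fs setacl -dir /afs/.cellname/sysname/usr/bin -acl system:authuser rl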
Some files must remain on the local disk for use when AFS is inaccessible (during bootup and file server or network outages). The required binaries include the following: A text editor, network commands, and so on Files used during the boot sequence before the afsd program runs, such as initialization and configuration files, and binaries for commands that mount file systems Files used by dynamic kernel loader programs In most cases, it is more secure to enable only locally authenticated users to access system binaries, by granting the l (lookup) and r (read) permissions to the system:authuser group on the ACLs of directories that contain the binaries. If users need to access a binary while unauthenticated, however, the ACL on its directory must grant those permissions to the system:anyuser group. The following chart summarizes the suggested volume and mount point names for storing system binaries. It uses a separate volume for each directory. You already created a volume called sysname for this machine's system type when you followed the instructions in Storing AFS Binaries in AFS. You can name volumes in any way you wish, and mount them at other locations than those suggested here. However, this scheme has several advantages: Volume names clearly identify volume contents Using the sysname prefix on every volume makes it easy to back up all of the volumes together, because the AFS Backup System enables you to define sets of volumes based on a string included in all of their names It makes it easy to track related volumes, keeping them together on the same file server machine if desired There is a clear relationship between volume name and mount point name

Volume Name           Mount Point
sysname               /afs/cellname/sysname
sysname.bin           /afs/cellname/sysname/bin
sysname.etc           /afs/cellname/sysname/etc
sysname.usr           /afs/cellname/sysname/usr
sysname.usr.afsws     /afs/cellname/sysname/usr/afsws
sysname.usr.bin       /afs/cellname/sysname/usr/bin
sysname.usr.etc       /afs/cellname/sysname/usr/etc
sysname.usr.inc       /afs/cellname/sysname/usr/include
sysname.usr.lib       /afs/cellname/sysname/usr/lib
sysname.usr.loc       /afs/cellname/sysname/usr/local
sysname.usr.man       /afs/cellname/sysname/usr/man
sysname.usr.sys       /afs/cellname/sysname/usr/sys

foreign cell, enabling access cell enabling access to foreign access to local and foreign cells AFS filespace enabling access to foreign cells root.cell volume mounting for foreign cells in local filespace database server machine entry in client CellServDB file for foreign cell CellServDB file (client) adding entry for foreign cell Enabling Access to Foreign Cells With current OpenAFS releases, there are a number of mechanisms for providing access to foreign cells. You may add mount points in your AFS filespace for each foreign cell you wish users to access, or you can enable a 'synthetic' AFS root, which contains mount points for either all AFS cells defined in the client machine's local /usr/vice/etc/CellServDB, or for all cells providing location information in the DNS. Enabling a Synthetic AFS root When a synthetic root is enabled, the client machine's Cache Manager creates its own root.afs volume, rather than using the one provided with your cell. This allows clients to access all cells in the CellServDB file and, optionally, all cells registered in the DNS, without requiring system administrator action to enable this access.
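As a quick illustration of the difference, listing the top level of the filespace on a client running with a synthetic root typically shows one directory per cell known to that client (from its CellServDB file and, optionally, from cells already resolved through DNS), rather than only the mount points you created in your own root.afs volume:
# ls /afs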
Using a synthetic root has the additional advantage that it allows a client to start its AFS service without a network available, as it is no longer necessary to contact a file server to obtain the root volume. OpenAFS supports two complementary mechanisms for creating the synthetic root. Starting the Cache Manager with the -dynroot option adds all cells listed in /usr/vice/etc/CellServDB to the client's AFS root. Adding the -afsdb option in addition to this enables DNS lookups for any cells that are not found in the client's CellServDB file. Both of these options are added to the AFS initialization script, or options file, as detailed in Configuring the Cache Manager. Adding foreign cells to a conventional root volume In this section you create a mount point in your AFS filespace for the root.cell volume of each foreign cell that you want to enable your users to access. For users working on a client machine to access the cell, there must in addition be an entry for it in the client machine's local /usr/vice/etc/CellServDB file. (The instructions in Creating the Client CellServDB File suggest that you use the CellServDB.sample file included in the AFS distribution as the basis for your cell's client CellServDB file. The sample file lists all of the cells that had agreed to participate in the AFS global namespace at the time your AFS CD-ROM was created. As mentioned in that section, the AFS Product Support group also maintains a copy of the file, updating it as necessary.) The chapter in the OpenAFS Administration Guide about cell administration and configuration issues discusses the implications of participating in the global AFS namespace. The chapter about administering client machines explains how to maintain knowledge of foreign cells on client machines, and includes suggestions for maintaining a central version of the file in AFS. Issue the fs mkmount command to mount each foreign cell's root.cell volume on a directory called /afs/foreign_cell. Because the root.afs volume is replicated, you must create a temporary mount point for its read/write version in a directory to which you have write access (such as your cell's /afs/.cellname directory). Create the mount points, issue the vos release command to release new replicas to the read-only sites for the root.afs volume, and issue the fs checkvolumes command to force the local Cache Manager to access the new replica. You need to issue the fs mkmount command only once for each foreign cell's root.cell volume. You do not need to repeat the command on each client machine. Substitute your cell's name for cellname. # cd /afs/.cellname # /usr/afs/bin/fs mkmount temp root.afs Repeat the fs mkmount command for each foreign cell you wish to mount at this time. # /usr/afs/bin/fs mkmount temp/foreign_cell root.cell -c foreign_cell Issue the following commands only once. # /usr/afs/bin/fs rmmount temp # /usr/afs/bin/vos release root.afs # /usr/afs/bin/fs checkvolumes fs commands newcell commands fs newcell If this machine is going to remain an AFS client after you complete the installation, verify that the local /usr/vice/etc/CellServDB file includes an entry for each foreign cell. For each cell that does not already have an entry, complete the following instructions: Create an entry in the CellServDB file. Be sure to comply with the formatting instructions in Creating the Client CellServDB File. Issue the fs newcell command to add an entry for the cell directly to the list that the Cache Manager maintains in kernel memory.
Provide each database server machine's fully qualified hostname. # /usr/afs/bin/fs newcell <foreign_cell> <dbserver1> \ [<dbserver2>] [<dbserver3>] If you plan to maintain a central version of the CellServDB file (the conventional location is /afs/cellname/common/etc/CellServDB), create it now as a copy of the local /usr/vice/etc/CellServDB file. Verify that it includes an entry for each foreign cell you want your users to be able to access. # mkdir common # mkdir common/etc # cp /usr/vice/etc/CellServDB common/etc # /usr/afs/bin/vos release root.cell Issue the ls command to verify that the new cell's mount point is visible in your filespace. The output lists the directories at the top level of the new cell's AFS filespace. # ls /afs/foreign_cell If you wish to participate in the global AFS namespace, and only intend to run one database server, please register your cell with grand.central.org at this time. To do so, email the CellServDB fragment describing your cell, together with a contact name and email address for any queries, to cellservdb@grand.central.org. If you intend to deploy multiple database servers, please wait until you have installed all of them before registering your cell. If you wish to allow your cell to be located through DNS lookups, at this time you should also add the necessary configuration to your DNS. AFS database servers may be located by creating AFSDB records in the DNS for the domain name corresponding to the name of your cell. It is outside the scope of this guide to give an in-depth description of managing, or configuring, your site's DNS. You should consult the documentation for your DNS server for further details on AFSDB records. Improving Cell Security cell improving security security improving root superuser controlling access access to root and admin accounts admin account controlling access to AFS filespace controlling access by root superuser This section discusses ways to improve the security of AFS data in your cell. Also see the chapter in the OpenAFS Administration Guide about configuration and administration issues. Controlling root Access As on any machine, it is important to prevent unauthorized users from logging onto an AFS server or client machine as the local superuser root. Take care to keep the root password secret. The local root superuser does not have special access to AFS data through the Cache Manager (as members of the system:administrators group do), but it does have the following privileges: On client machines, the ability to issue commands from the fs suite that affect AFS performance On server machines, the ability to disable authorization checking, or to install rogue process binaries Controlling System Administrator Access Following are suggestions for managing AFS administrative privilege: Create an administrative account for each administrator named something like username.admin. Administrators authenticate under these identities only when performing administrative tasks, and destroy the administrative tokens immediately after finishing the task (either by issuing the unlog command, or the kinit and aklog commands to adopt their regular identity). Set a short ticket lifetime for administrator accounts (for example, 20 minutes) by using the facilities of your KDC. For instance, with an MIT Kerberos KDC, this can be performed using the -maxlife argument to the kadmin modify_principal command. Do not, however, use a short lifetime for users who issue long-running backup commands.
Limit the number of system administrators in your cell, especially those who belong to the system:administrators group. By default they have all ACL rights on all directories in the local AFS filespace, and therefore must be trusted not to examine private files. Limit the use of system administrator accounts on machines in public areas. It is especially important not to leave such machines unattended without first destroying the administrative tokens. Limit the use by administrators of standard UNIX commands that make connections to remote machines (such as the telnet utility). Many of these programs send passwords across the network without encrypting them. BOS Server checking mode bits on AFS directories mode bits on local AFS directories UNIX mode bits on local AFS directories Protecting Sensitive AFS Directories Some subdirectories of the /usr/afs directory contain files crucial to cell security. Unauthorized users must not read or write to these files because of the potential for misuse of the information they contain. As the BOS Server initializes for the first time on a server machine, it creates several files and directories (as mentioned in Starting the BOS Server). It sets their owner to the local superuser root and sets their mode bits to enable writing by the owner only; in some cases, it also restricts reading. At each subsequent restart, the BOS Server checks that the owner and mode bits on these files are still set appropriately. If they are not, it writes the following message to the /usr/afs/logs/BosLog file: Bosserver reports inappropriate access on server directories The BOS Server does not reset the mode bits, which enables you to set alternate values if you wish. The following chart lists the expected mode bit settings. A question mark indicates that the BOS Server does not check that mode bit.

/usr/afs               drwxr?xr-x
/usr/afs/backup        drwx???---
/usr/afs/bin           drwxr?xr-x
/usr/afs/db            drwx???---
/usr/afs/etc           drwxr?xr-x
/usr/afs/etc/KeyFile   -rw????---
/usr/afs/etc/UserList  -rw?????--
/usr/afs/local         drwx???---
/usr/afs/logs          drwxr?xr-x

first AFS machine client functionality removing removing client functionality from first AFS machine Removing Client Functionality Follow the instructions in this section only if you do not wish this machine to remain an AFS client. Removing client functionality means that you cannot use this machine to access AFS files. Remove the files from the /usr/vice/etc directory. The command does not remove the directory for files used by the dynamic kernel loader program, if it exists on this system type. Those files are still needed on a server-only machine. # cd /usr/vice/etc # rm * # rm -rf C Create symbolic links to the ThisCell and CellServDB files in the /usr/afs/etc directory. This makes it possible to issue commands from the AFS command suites (such as bos and fs) on this machine. # ln -s /usr/afs/etc/ThisCell ThisCell # ln -s /usr/afs/etc/CellServDB CellServDB On IRIX systems, issue the chkconfig command to deactivate the afsclient configuration variable. # /etc/chkconfig -f afsclient off Reboot the machine. Most system types use the shutdown command, but the appropriate options vary. # cd / # shutdown appropriate_options
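For example, on many Linux systems the following command reboots the machine immediately (a sketch only; verify the appropriate shutdown options for your system type before using it):
# shutdown -r now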