Installing the First AFS Machine

This chapter describes how to install the first AFS machine in your cell, configuring it as both a file server machine and a client machine. After completing all procedures in this chapter, you can remove the client functionality if you wish, as described in Removing Client Functionality.

To install additional file server machines after completing this chapter, see Installing Additional Server Machines.

To install additional client machines after completing this chapter, see Installing Additional Client Machines.

Requirements and Configuration Decisions

The instructions in this chapter assume that you meet the following requirements.
- You are logged onto the machine's console as the local superuser root.
- A standard version of one of the operating systems supported by the current version of AFS is running on the machine.
- You have either installed the provided OpenAFS packages for your system, have access to a binary distribution tarball, or have successfully built OpenAFS from source.
- You have a Kerberos v5 realm running for your site. If you are working with an existing cell which uses kaserver or Kerberos v4 for authentication, please see kaserver and Legacy Kerberos v4 Authentication for the modifications required to this installation procedure.
- You have an NTP, or similar, time service deployed to ensure rough clock synchronisation between your clients and servers. If you wish to use AFS's built-in time service (which is deprecated), please see Appendix B for the necessary modifications to this installation procedure.
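For example, on a machine running the reference ntpd implementation, you can check peer synchronisation status as follows (other time services provide equivalent query tools):

# ntpq -p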
You must make the following configuration decisions while installing the first AFS machine. To speed the installation itself, it is best to make the decisions before beginning. See the chapter in the OpenAFS Administration Guide about issues in cell administration and configuration for detailed guidelines.

- Select the first AFS machine
- Select the cell name
- Decide which partitions or logical volumes to configure as AFS server partitions, and choose the directory names on which to mount them
- Decide how big to make the client cache
- Decide how to configure the top levels of your cell's AFS filespace

This chapter is divided into three large sections corresponding to the three parts of installing the first AFS machine.
Perform all of the steps in the order they appear. Each functional section begins with a summary of the procedures to perform. The sections are as follows:

- Installing server functionality (begins in Overview: Installing Server Functionality)
- Installing client functionality (begins in Overview: Installing Client Functionality)
- Configuring your cell's filespace, establishing further security mechanisms, and enabling access to foreign cells (begins in Overview: Completing the Installation of the First AFS Machine)

Overview: Installing Server Functionality

In the first phase of installing your cell's first AFS machine, you install file server and database server functionality
by performing the following procedures:
- Choose which machine to install as the first AFS machine
- Create AFS-related directories on the local disk
- Incorporate AFS modifications into the machine's kernel
- Configure partitions or logical volumes for storing AFS volumes
- On some system types, install and configure an AFS-modified version of the fsck program
- If the machine is to remain a client machine, incorporate AFS into its authentication system
- Start the Basic OverSeer (BOS) Server
- Define the cell name and the machine's cell membership
- Start the database server processes: Backup Server, Protection Server, and Volume Location (VL) Server
- Configure initial security mechanisms
- Start the fs process, which incorporates three component processes: the File Server, Volume Server, and Salvager
- Start the server portion of the Update Server

Choosing the First AFS Machine

The first AFS machine you install must have sufficient disk space to store AFS volumes. To take best advantage of AFS's
capabilities, store client-side binaries as well as user files in volumes. When you later install additional file server
machines in your cell, you can distribute these volumes among the different machines as you see fit.

These instructions configure the first AFS machine as a database server machine, the binary distribution machine for its system type, and the cell's system control machine. For a description of these roles, see the OpenAFS Administration Guide.

Installation of additional machines is simplest if the first machine has the lowest IP address of any database server machine you currently plan to install. If you later install database server functionality on a machine with a lower IP address, you must first update the /usr/vice/etc/CellServDB file on all of your cell's client machines.
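For reference, entries in a client CellServDB file have the following shape (the cell name, addresses, and host names here are purely illustrative):

>example.com            #Example Organization cell
192.0.2.10              #db1.example.com
192.0.2.11              #db2.example.com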
For more details, see Installing Database Server Functionality.

Creating AFS Directories

If you are installing from packages (such as Debian .deb or Fedora/SuSE .rpm files), you should now install all of the available OpenAFS packages for your system type. Typically, these will include packages for client and server functionality, and a separate package containing a suitable kernel module for your running kernel. Consult the package lists on the OpenAFS website to determine the packages appropriate for your system.

If you are installing from a tarfile, or from a locally compiled source tree, you should create the /usr/afs
and /usr/vice/etc directories on the
local disk, to house server and client files respectively. Subsequent
instructions copy files from the distribution tarfile into them.
# mkdir /usr/afs
# mkdir /usr/vice
# mkdir /usr/vice/etc

Performing Platform-Specific Procedures

Several of the initial procedures for installing a file server machine differ for each system type. For convenience, the
following sections group them together for each system type:

Incorporate AFS modifications into the kernel. The kernel on every AFS client machine and, on some systems, the AFS fileservers, must incorporate AFS extensions. On machines that use a dynamic kernel module loader, it is conventional to alter the machine's initialization script to load the AFS extensions at each reboot.

Configure server partitions or logical volumes to house AFS volumes. Every AFS file server machine must have at least one partition or logical volume dedicated to storing AFS volumes
(for convenience, the documentation hereafter refers to partitions only). Each server partition is mounted at a directory
named /vicepxx, where xx is one or
two lowercase letters. By convention, the first 26 partitions are mounted on the directories called /vicepa through /vicepz, the 27th one is mounted on the /vicepaa directory, and so on through /vicepaz and /vicepba, continuing up to the index corresponding to the maximum number of server partitions
supported in the current version of AFS (which is specified in the OpenAFS Release Notes).
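For instance, on a configured file server a quick listing of the mount points looks something like this (the layout shown is illustrative):

# ls -d /vicep*
/vicepa  /vicepb  /vicepc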
The /vicepxx directories must reside in the file server machine's root directory, not in one of its subdirectories (for example, /usr/vicepa is not an acceptable directory location).

You can also add or remove server partitions on an existing file server machine. For instructions, see the chapter in the OpenAFS Administration Guide about maintaining server machines.

Not all file system types supported by an operating system are necessarily supported as AFS server partitions. For possible restrictions, see the OpenAFS Release Notes.

On some system types, install and configure a modified fsck program which recognizes the structures that the File Server uses to organize volume data on AFS server partitions. The fsck program provided with the operating system does not understand the AFS data structures, and so removes them to the lost+found directory.

If the machine is to remain an AFS client machine, modify the machine's authentication system so that users obtain
an AFS token as they log into the local file system. Using AFS is simpler and more convenient for your users if you make
the modifications on all client machines. Otherwise, users must perform a two or three step login procedure (login to the local
system, then obtain Kerberos credentials, and then issue the aklog command). For further discussion of AFS
authentication, see the chapter in the OpenAFS Administration Guide about cell configuration and
administration issues.

To continue, proceed to the appropriate section:

- Getting Started on AIX Systems
- Getting Started on HP-UX Systems
- Getting Started on IRIX Systems
- Getting Started on Linux Systems
- Getting Started on Solaris Systems

Getting Started on AIX Systems

Begin by running the AFS initialization script to call the AIX kernel extension facility, which dynamically loads AFS
modifications into the kernel. Then use the SMIT program to configure partitions for storing
AFS volumes, and replace the AIX fsck program helper with a version that correctly handles AFS
volumes. If the machine is to remain an AFS client machine, incorporate AFS into the AIX secondary authentication system.
Loading AFS into the AIX Kernel

The AIX kernel extension facility is the dynamic kernel loader provided by IBM Corporation. AIX does not support incorporation of AFS modifications during a kernel build.

For AFS to function correctly, the kernel extension facility must run each time the machine reboots, so the AFS
initialization script (included in the AFS distribution) invokes it automatically. In this section you copy the script to the
conventional location and edit it to select the appropriate options depending on whether NFS is also to run.

After editing the script, you run it to incorporate AFS into the kernel. In later sections you verify that the script
correctly initializes all AFS components, then configure the AIX inittab file so that the
script runs automatically at reboot.

Unpack the distribution tarball. The examples below assume
that you have unpacked the files into the
/tmp/afsdist directory. If you
pick a different location, substitute this in all of the following
examples. Once you have unpacked the distribution,
change directory as indicated.
# cd /tmp/afsdist/rs_aix42/root.client/usr/vice/etc

Copy the AFS kernel library files to the local /usr/vice/etc/dkload directory,
and the AFS initialization script to the /etc directory.
# cp -rp dkload /usr/vice/etc
# cp -p rc.afs /etc/rc.afs

Edit the /etc/rc.afs script, setting the NFS
variable as indicated.

If the machine is not to function as an NFS/AFS Translator, set the NFS variable
as follows.
NFS=$NFS_NONE
If the machine is to function as an NFS/AFS Translator and is running AIX 4.2.1 or higher, set the
NFS variable as follows. Note that NFS must already be loaded into the kernel, which
happens automatically on systems running AIX 4.1.1 and later, as long as the file /etc/exports exists.
NFS=$NFS_IAUTH
Invoke the /etc/rc.afs script to load AFS modifications into the kernel. You can
ignore any error messages about the inability to start the BOS Server or the Cache Manager or AFS client.
# /etc/rc.afs

Configuring Server Partitions on AIX Systems

Every AFS file server machine must have at least one partition or logical volume dedicated to storing AFS volumes. Each
server partition is mounted at a directory named /vicepxx, where
xx is one or two lowercase letters. The /vicepxx directories must reside in the file server machine's root
directory, not in one of its subdirectories (for example, /usr/vicepa is not an acceptable
directory location). For additional information, see Performing Platform-Specific
Procedures.

To configure server partitions on an AIX system, perform the following procedures:

Create a directory called /vicepxx for each AFS server
partition you are configuring (there must be at least one). Repeat the command for each partition.
# mkdir /vicepxx

Use the SMIT program to create a journaling file system on each partition to be
configured as an AFS server partition.

Mount each partition at one of the /vicepxx directories. Choose one of the following three methods:

- Use the SMIT program
- Use the mount -a command to mount all partitions at once
- Use the mount command on each partition in turn

Also configure the partitions so that they are mounted automatically at each reboot. For more information, refer to the AIX documentation.

Replacing the fsck Program Helper on AIX Systems

The AFS-modified fsck program is not required on AIX 5.1
systems, and the v3fshelper program
referred to below is not shipped for these systems.

In this section, you make modifications to guarantee that the appropriate fsck program
runs on AFS server partitions. The fsck program provided with the operating system must never
run on AFS server partitions. Because it does not recognize the structures that the File Server uses to organize volume data,
it removes all of the data. To repeat:

Never run the standard fsck program on AFS server partitions. It discards AFS
volumes.

On AIX systems, you do not replace the fsck binary itself, but rather the
program helper file included in the AIX distribution as /sbin/helpers/v3fshelper.

Move the AIX fsck program helper to a safe location and install the version from
the AFS distribution in its place.
# cd /sbin/helpers
# mv v3fshelper v3fshelper.noafs
# cp -p /tmp/afsdist/rs_aix42/root.server/etc/v3fshelper v3fshelper

If you plan to retain client functionality on this machine after completing the installation, proceed to Enabling AFS Login on AIX Systems. Otherwise, proceed to Starting the
BOS Server.

Enabling AFS Login on AIX Systems

If you plan to remove client functionality from this machine after completing the installation, skip this section and
proceed to Starting the BOS Server.

In modern AFS installations, you should be using Kerberos v5
for user login, and obtaining AFS tokens following this authentication
step.

There are currently no instructions available on configuring AIX to automatically obtain AFS tokens at login. Following login, users can obtain tokens by running the aklog command.
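A minimal token-acquisition sequence looks like the following (the principal and realm names are illustrative; substitute your own):

% kinit admin@EXAMPLE.COM
Password for admin@EXAMPLE.COM: admin_password
% aklog
% tokens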
Sites which still require kaserver or external Kerberos v4 authentication should consult Enabling kaserver based AFS login on AIX systems for details of how to enable AIX login.

Proceed to Starting the BOS Server
(or if referring to these instructions while installing an additional
file server machine, return to Starting Server
Programs).

Getting Started on HP-UX Systems

Begin by building AFS modifications into a new kernel; HP-UX
does not support dynamic loading. Then create partitions for storing
AFS volumes, and install and configure the AFS-modified fsck program to run on AFS server
partitions. If the machine is to remain an AFS client machine,
incorporate AFS into the machine's Pluggable Authentication Module
(PAM) scheme.

Building AFS into the HP-UX Kernel

Use the following instructions to build AFS modifications into the kernel on an HP-UX system.

Move the existing kernel-related files to a safe location.
# cp /stand/vmunix /stand/vmunix.noafs
# cp /stand/system /stand/system.noafs

Unpack the OpenAFS HP-UX distribution tarball. The examples
below assume that you have unpacked the files into the
/tmp/afsdist directory. If you
pick a different location, substitute this in all of the following
examples. Once you have unpacked the distribution, change directory
as indicated.
# cd /tmp/afsdist/hp_ux110/root.client

Copy the AFS initialization file to the local directory for initialization files (by convention, /sbin/init.d on HP-UX machines). Note the removal of the .rc
extension as you copy the file.
# cp usr/vice/etc/afs.rc /sbin/init.d/afs

Copy the file afs.driver to the local /usr/conf/master.d directory, changing its name to afs as you
do.
# cp usr/vice/etc/afs.driver /usr/conf/master.d/afs

Copy the AFS kernel module to the local /usr/conf/lib directory.

If the machine's kernel supports NFS server functionality:
# cp bin/libafs.a /usr/conf/lib

If the machine's kernel does not support NFS server functionality, change the file's name as you copy it:
# cp bin/libafs.nonfs.a /usr/conf/lib/libafs.a

Incorporate the AFS driver into the kernel, either using the SAM program or a
series of individual commands.

To use the SAM program:

Invoke the SAM program, specifying the hostname of the local machine as local_hostname. The SAM graphical user interface pops up.

# sam -display local_hostname:0

Choose the Kernel Configuration icon, then the Drivers icon. From the list of drivers, select afs.

Open the pull-down Actions menu and choose the Add Driver to Kernel option.

Open the Actions menu again and choose the Create a New Kernel option.

Confirm your choices by choosing Yes and OK when prompted by subsequent pop-up windows. The SAM program builds the kernel and reboots the system.

Login again as the superuser root.
login: root
Password: root_password

To use individual commands:

Edit the file /stand/system, adding an entry for afs to the Subsystems section.

Change to the /stand/build directory and issue the mk_kernel command to build the kernel.
# cd /stand/build
# mk_kernel

Move the new kernel to the standard location (/stand/vmunix), reboot
the machine to start using it, and login again as the superuser root.
# mv /stand/build/vmunix_test /stand/vmunix
# cd /
# shutdown -r now
login: root
Password: root_password

Configuring Server Partitions on HP-UX Systems

Every AFS file server machine must have at least one partition or logical volume dedicated to storing AFS volumes. Each
server partition is mounted at a directory named /vicepxx, where
xx is one or two lowercase letters. The /vicepxx directories must reside in the file server machine's root
directory, not in one of its subdirectories (for example, /usr/vicepa is not an acceptable
directory location). For additional information, see Performing Platform-Specific Procedures.
Create a directory called /vicepxx for each AFS server
partition you are configuring (there must be at least one). Repeat the command for each partition.
# mkdir /vicepxx

Use the SAM program to create a file system on each partition. For instructions,
consult the HP-UX documentation.On some HP-UX systems that use logical volumes, the SAM program automatically
mounts the partitions. If it has not, mount each partition by issuing either the mount
-a command to mount all partitions at once or the mount command to mount
each partition in turn.

Configuring the AFS-modified fsck Program on HP-UX Systems

In this section, you make modifications to guarantee that the appropriate fsck program
runs on AFS server partitions. The fsck program provided with the operating system must never
run on AFS server partitions. Because it does not recognize the structures that the File Server uses to organize volume data,
it removes all of the data. To repeat:

Never run the standard fsck program on AFS server partitions. It discards AFS
volumes.

On HP-UX systems, there are several configuration files to install in addition to the AFS-modified fsck program (the vfsck binary).

Create the command configuration file /sbin/lib/mfsconfig.d/afs. Use a text
editor to place the indicated two lines in it:
format_revision 1
fsck 0 m,P,p,d,f,b:c:y,n,Y,N,q,
Create and change directory to an AFS-specific command directory called /sbin/fs/afs.
# mkdir /sbin/fs/afs
# cd /sbin/fs/afs

Copy the AFS-modified version of the fsck program (the vfsck binary) and related files from the distribution directory to the new AFS-specific command
directory.
# cp -p /tmp/afsdist/hp_ux110/root.server/etc/* .

Change the vfsck binary's name to fsck and set
the mode bits appropriately on all of the files in the /sbin/fs/afs directory.
# mv vfsck fsck
# chmod 755 *

Edit the /etc/fstab file, changing the file system type for each AFS server
partition from hfs to afs. This ensures that the
AFS-modified fsck program runs on the appropriate partitions.

The sixth line in the following example of an edited file shows an AFS server partition, /vicepa.
/dev/vg00/lvol1 / hfs defaults 0 1
/dev/vg00/lvol4 /opt hfs defaults 0 2
/dev/vg00/lvol5 /tmp hfs defaults 0 2
/dev/vg00/lvol6 /usr hfs defaults 0 2
/dev/vg00/lvol8 /var hfs defaults 0 2
/dev/vg00/lvol9 /vicepa afs defaults 0 2
/dev/vg00/lvol7 /usr/vice/cache hfs defaults 0 2
If you plan to retain client functionality on this machine after completing the installation, proceed to Enabling AFS Login on HP-UX Systems. Otherwise, proceed to Starting the
BOS Server.

Enabling AFS Login on HP-UX Systems

If you plan to remove client functionality from this machine after completing the installation, skip this section and proceed to Starting the BOS Server.

At this point you incorporate AFS into the operating system's
Pluggable Authentication Module (PAM) scheme. PAM integrates all
authentication mechanisms on the machine, including login, to
provide the security infrastructure for authenticated access to and
from the machine.

In modern AFS installations, you should be using Kerberos v5
for user login, and obtaining AFS tokens subsequent to this
authentication step. OpenAFS does not currently distribute a PAM
module allowing AFS tokens to be automatically gained at
login. Whilst there are a number of third party modules providing
this functionality, it is not known if these have been tested with HP-UX.

Following login, users can obtain tokens by running the aklog command.

Sites which still require kaserver or external Kerberos v4
authentication should consult Enabling
kaserver based AFS login on HP-UX systems for details of how
to enable HP-UX login.

Proceed to Starting the BOS
Server (or if referring to these instructions while
installing an additional file server machine, return to Starting Server Programs).

Getting Started on IRIX Systems

To incorporate AFS into the kernel on IRIX systems, choose one of two methods:

- Run the AFS initialization script to invoke the ml program distributed by Silicon
Graphics, Incorporated (SGI), which dynamically loads AFS modifications into the kernel
- Build a new static kernel

Then create partitions for storing AFS volumes. You do not need to replace the IRIX fsck
program because SGI has already modified it to handle AFS volumes properly. If the machine is to remain an AFS client machine,
verify that the IRIX login utility installed on the machine grants an AFS token.

In preparation for either dynamic loading or kernel building, perform the following procedures:

Unpack the OpenAFS IRIX distribution tarball. The examples
below assume that you have unpacked the files into the
/tmp/afsdist directory. If you
pick a different location, substitute this in all of the following
examples. Once you have unpacked the distribution, change directory
as indicated.
# cd /tmp/afsdist/sgi_65/root.client

Copy the AFS initialization script to the local directory for initialization files (by convention, /etc/init.d on IRIX machines). Note the removal of the .rc
extension as you copy the script.
# cp -p usr/vice/etc/afs.rc /etc/init.d/afs

Issue the uname -m command to determine the machine's CPU board type. The IPxx value in the output must match one of the supported CPU board types
listed in the OpenAFS Release Notes for the current version of AFS.
# uname -m

Proceed to either Loading AFS into the IRIX Kernel or Building AFS into the IRIX Kernel.

Loading AFS into the IRIX Kernel

The ml program is the dynamic kernel loader provided by SGI for IRIX systems. If you
use it rather than building AFS modifications into a static kernel, then for AFS to function correctly the ml program must run each time the machine reboots. Therefore, the AFS initialization script (included
on the AFS CD-ROM) invokes it automatically when the afsml configuration variable is
activated. In this section you activate the variable and run the script.

In later sections you verify that the script correctly initializes all AFS components, then create the links that
incorporate AFS into the IRIX startup and shutdown sequence.

Create the local /usr/vice/etc/sgiload directory to house the AFS kernel library
file.
# mkdir /usr/vice/etc/sgiload

Copy the appropriate AFS kernel library file to the /usr/vice/etc/sgiload
directory. The IPxx portion of the library file name must
match the value previously returned by the uname -m command. Also choose the file
appropriate to whether the machine's kernel supports NFS server functionality (NFS must be supported for the machine to
act as an NFS/AFS Translator). Single- and multiprocessor machines use the same library file.

(You can choose to copy all of the kernel library files into the /usr/vice/etc/sgiload directory, but they require a significant amount of space.)

If the machine's kernel supports NFS server functionality:
# cp -p usr/vice/etc/sgiload/libafs.IPxx.o /usr/vice/etc/sgiload

If the machine's kernel does not support NFS server functionality:
# cp -p usr/vice/etc/sgiload/libafs.IPxx.nonfs.o \
/usr/vice/etc/sgiload

Issue the chkconfig command to activate the afsml configuration variable.
# /etc/chkconfig -f afsml on

If the machine is to function as an NFS/AFS Translator and the kernel supports NFS server functionality, activate
the afsxnfs variable.
# /etc/chkconfig -f afsxnfs on

Run the /etc/init.d/afs script to load AFS extensions into the kernel. The script
invokes the ml command, automatically determining which kernel library file to use
based on this machine's CPU type and the activation state of the afsxnfs
variable.

You can ignore any error messages about the inability to start the BOS Server or the Cache Manager or AFS
client.
# /etc/init.d/afs start

Proceed to Configuring Server Partitions on IRIX Systems.

Building AFS into the IRIX Kernel

Use the following instructions to build AFS modifications into the kernel on an IRIX system.

Copy the kernel initialization file afs.sm to the local /var/sysgen/system directory, and the kernel master file afs to
the local /var/sysgen/master.d directory.
# cp -p bin/afs.sm /var/sysgen/system
# cp -p bin/afs /var/sysgen/master.d

Copy the appropriate AFS kernel library file to the local file /var/sysgen/boot/afs.a; the IPxx
portion of the library file name must match the value previously returned by the uname
-m command. Also choose the file appropriate to whether the machine's kernel supports NFS server
functionality (NFS must be supported for the machine to act as an NFS/AFS Translator). Single- and multiprocessor
machines use the same library file.

If the machine's kernel supports NFS server functionality:

# cp -p bin/libafs.IPxx.a /var/sysgen/boot/afs.a

If the machine's kernel does not support NFS server functionality:

# cp -p bin/libafs.IPxx.nonfs.a /var/sysgen/boot/afs.a

Issue the chkconfig command to deactivate the afsml configuration variable.
# /etc/chkconfig -f afsml off

If the machine is to function as an NFS/AFS Translator and the kernel supports NFS server functionality, activate
the afsxnfs variable.
# /etc/chkconfig -f afsxnfs on

Copy the existing kernel file, /unix, to a safe location. Compile the new kernel,
which is created in the file /unix.install. It overwrites the existing /unix file when the machine reboots in the next step.
# cp /unix /unix_noafs
# autoconfig

Reboot the machine to start using the new kernel, and login again as the superuser root.
# cd /
# shutdown -i6 -g0 -y
login: root
Password: root_password

Configuring Server Partitions on IRIX Systems

Every AFS file server machine must have at least one partition or logical volume dedicated to storing AFS volumes. Each
server partition is mounted at a directory named /vicepxx, where
xx is one or two lowercase letters. The /vicepxx directories must reside in the file server machine's root
directory, not in one of its subdirectories (for example, /usr/vicepa is not an acceptable
directory location). For additional information, see Performing Platform-Specific
Procedures.

AFS supports use of both EFS and XFS partitions for housing AFS volumes. SGI encourages use of XFS partitions.
Create a directory called /vicepxx for each AFS server
partition you are configuring (there must be at least one). Repeat the command for each partition.
# mkdir /vicepxx

Add a line with the following format to the file systems registry file, /etc/fstab, for each partition (or logical volume created with the XLV volume manager) to be
mounted on one of the directories created in the previous step.For an XFS partition or logical volume:
/dev/dsk/disk /vicepxx xfs rw,raw=/dev/rdsk/disk 0 0
For an EFS partition:
/dev/dsk/disk /vicepxx efs rw,raw=/dev/rdsk/disk 0 0
The following are examples of an entry for each file system type:
/dev/dsk/dks0d2s6 /vicepa xfs rw,raw=/dev/rdsk/dks0d2s6 0 0
/dev/dsk/dks0d3s1 /vicepb efs rw,raw=/dev/rdsk/dks0d3s1 0 0
Create a file system on each partition that is to be mounted on a /vicepxx directory. The following commands are probably appropriate,
but consult the IRIX documentation for more information. In both cases, raw_device is a raw
device name like /dev/rdsk/dks0d0s0 for a single disk partition or /dev/rxlv/xlv0 for a logical volume.For XFS file systems, include the indicated options to configure the partition or logical volume with inodes large
enough to accommodate AFS-specific information:
# mkfs -t xfs -i size=512 -l size=4000b raw_device

For EFS file systems:

# mkfs -t efs raw_device

Mount each partition by issuing either the mount -a command to mount all
partitions at once or the mount command to mount each partition in turn.
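As an illustration, configuring a single XFS server partition end to end might look like this (the device name is an example only, and the /etc/fstab entry for /vicepa shown above is assumed to be in place):

# mkdir /vicepa
# mkfs -t xfs -i size=512 -l size=4000b /dev/rdsk/dks0d2s6
# mount /vicepa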
(Optional) If you have configured partitions or logical volumes to use XFS, issue the following command to verify that the inodes are configured properly (are large enough to accommodate AFS-specific
information). If the configuration is correct, the command returns no output. Otherwise, it specifies the command to run
in order to configure each partition or logical volume properly.
# /usr/afs/bin/xfs_size_check

If you plan to retain client functionality on this machine after completing the installation, proceed to Enabling AFS Login on IRIX Systems. Otherwise, proceed to Starting the
BOS Server.

Enabling AFS Login on IRIX Systems

If you plan to remove client functionality from this machine after completing the installation, skip this section and
proceed to Starting the BOS Server.

Whilst the standard IRIX command-line
login program and the
graphical xdm login program both have
the ability to grant AFS tokens, this ability relies upon the deprecated
kaserver authentication system.

Users who have been successfully authenticated via Kerberos 5
authentication may obtain AFS tokens following login by running the
aklog command.

Sites which still require kaserver
or external Kerberos v4 authentication should consult
Enabling kaserver based AFS Login on IRIX Systems
for details of how to enable IRIX login.

After taking any necessary action, proceed to Starting the BOS Server.

Getting Started on Linux Systems

Since this guide was originally written, the procedure for starting
OpenAFS has diverged significantly between different Linux distributions.
The instructions that follow are appropriate for both the Fedora and
RedHat Enterprise Linux packages distributed by OpenAFS. Additional
instructions are provided for those building from source.

Begin by running the AFS client startup scripts, which call the
modprobe program, which dynamically
loads AFS modifications into the kernel. Then create partitions for
storing AFS volumes. You do not need to replace the Linux fsck program. If the machine is to remain an
AFS client machine, incorporate AFS into the machine's Pluggable
Authentication Module (PAM) scheme.

Loading AFS into the Linux Kernel

The modprobe program is the dynamic kernel loader for Linux. Linux does not support
incorporation of AFS modifications during a kernel build.

For AFS to function correctly, the modprobe program must run each time the machine
reboots, so your distribution's AFS initialization script invokes it automatically. The script also includes
commands that select the appropriate AFS library file automatically. In this section you run the script.

In later sections you verify that the script correctly initializes all AFS components, then activate a configuration
variable, which results in the script being incorporated into the Linux startup and shutdown sequence.

The procedure for starting up OpenAFS depends upon your distribution.

Fedora and RedHat Enterprise Linux

OpenAFS ship RPMS for all current Fedora and RHEL releases.
Download and install the RPM set for your operating system.
RPMs are available from the OpenAFS web site. You will need the
openafs, openafs-client and openafs-server packages, along with an openafs-kernel package matching your current, running kernel.

You can find the version of your current kernel by running
# uname -r
2.6.20-1.2933.fc6

Once downloaded, the packages may be installed with the
rpm command
# rpm -U openafs-* openafs-client-* openafs-server-* openafs-kernel-*
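As an optional check, rpm's query mode can confirm what was installed:

# rpm -qa | grep openafs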
Systems packaged as tar files

If you are running a system where the OpenAFS Binary Distribution
is provided as a tar file, or where you have built the system from
source yourself, you need to install the relevant components by hand.
Unpack the distribution tarball. The examples below assume
that you have unpacked the files into the
/tmp/afsdist directory. If you
pick a different location, substitute this in all of the following
examples. Once you have unpacked the distribution,
change directory as indicated.
# cd /tmp/afsdist/linux/root.client/usr/vice/etc

Copy the AFS kernel library files to the local /usr/vice/etc/modload directory.
The filenames for the libraries have the format libafs-version.o, where
version indicates the kernel build level. The string .mp in
the version indicates that the file is appropriate for machines running a multiprocessor
kernel.
# cp -rp modload /usr/vice/etc

Copy the AFS initialization script to the local directory for initialization files (by convention, /etc/rc.d/init.d on Linux machines). Note the removal of the .rc
extension as you copy the script.
# cp -p afs.rc /etc/rc.d/init.d/afs

Configuring Server Partitions on Linux Systems

Every AFS file server machine must have at least one partition or logical volume dedicated to storing AFS volumes. Each
server partition is mounted at a directory named /vicepxx, where
xx is one or two lowercase letters. The /vicepxx directories must reside in the file server machine's root
directory, not in one of its subdirectories (for example, /usr/vicepa is not an acceptable
directory location). For additional information, see Performing Platform-Specific Procedures.
Create a directory called /vicepxx for each AFS server
partition you are configuring (there must be at least one). Repeat the command for each partition.
# mkdir /vicepxx

Add a line with the following format to the file systems registry file, /etc/fstab, for each directory just created. The entry maps the directory name to the disk
partition to be mounted on it.
/dev/disk /vicepxx ext2 defaults 0 2
The following is an example for the first partition being configured.
/dev/sda8 /vicepa ext2 defaults 0 2
Create a file system on each partition that is to be mounted at a /vicepxx directory. The following command is probably appropriate, but
consult the Linux documentation for more information.
# mkfs -v /dev/disk

Mount each partition by issuing either the mount -a command to mount all
partitions at once or the mount command to mount each partition in turn.
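Putting the Linux steps together for one hypothetical partition (the device name /dev/sdb1 is illustrative; substitute your own):

# mkdir /vicepa
# echo "/dev/sdb1 /vicepa ext2 defaults 0 2" >> /etc/fstab
# mkfs -v /dev/sdb1
# mount /vicepa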
If you plan to retain client functionality on this machine after completing the installation, proceed to Enabling AFS Login on Linux Systems. Otherwise, proceed to Starting the BOS Server.

Enabling AFS Login on Linux Systems

If you plan to remove client functionality from this machine
after completing the installation, skip this section and proceed
to Starting the BOS Server.

At this point you incorporate AFS into the operating system's
Pluggable Authentication Module (PAM) scheme. PAM integrates all
authentication mechanisms on the machine, including login, to provide
the security infrastructure for authenticated access to and from the
machine.

You should first configure your system to obtain Kerberos v5
tickets as part of the authentication process, and then run an AFS PAM
module to obtain tokens from those tickets after authentication. Many
Linux distributions come with a Kerberos v5 PAM module (usually called
pam-krb5 or pam_krb5), or you can download and install Russ Allbery's
Kerberos v5 PAM module, which is tested regularly with AFS.
See the instructions of whatever PAM module you use for how to
configure it.

Some Kerberos v5 PAM modules do come with native AFS support
(usually requiring the Heimdal Kerberos implementation rather than the
MIT Kerberos implementation). If you are using one of those PAM
modules, you can configure it to obtain AFS tokens. It's more common,
however, to separate the AFS token acquisition into a separate PAM
module.

The recommended AFS PAM module is Russ
Allbery's pam-afs-session module. It should work with any of
the Kerberos v5 PAM modules. To add it to the PAM configuration, you
often only need to add configuration to the session group:

Linux PAM session example:

session required pam_afs_session.so

If you also want to obtain AFS tokens for scp
and similar commands that don't open a session, you will also need to
add the AFS PAM module to the auth group so that the PAM
setcred call will obtain tokens. The
pam_afs_session module will always return success
for authentication so that it can be added to the auth group only for
setcred, so make sure that it's not marked as
sufficient.

Linux PAM auth example:

auth [success=ok default=1] pam_krb5.so
auth [default=done] pam_afs_session.so
auth required pam_unix.so try_first_pass

This example will work if you want to try Kerberos v5 first and
then fall back to regular Unix authentication.
success=ok for the Kerberos PAM module followed by
default=done for the AFS PAM module will cause a
successful Kerberos login to run the AFS PAM module and then skip the
Unix authentication module. default=1 on the
Kerberos PAM module causes failure of that module to skip the next
module (the AFS PAM module) and fall back to the Unix module. If you
want to try Unix authentication first and rearrange the order, be sure
to use default=die instead.

The PAM configuration is stored in different places in different
Linux distributions. On Red Hat, look in
/etc/pam.d/system-auth. On Debian and
derivatives, look in /etc/pam.d/common-session
and /etc/pam.d/common-auth.
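On a Debian-style layout, for example, the session line from above would be appended to /etc/pam.d/common-session (a sketch only; merge it with whatever your distribution already places there):

session required pam_unix.so
session required pam_afs_session.so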
For additional configuration examples and the configuration options of the AFS PAM module, see its documentation. For more
details on the available options for the PAM configuration, see the
Linux PAM documentation.

Sites which still require kaserver or
external Kerberos v4 authentication should consult Enabling kaserver based AFS Login on Linux
Systems for details of how to enable AFS login on Linux.

Proceed to Starting the BOS
Server (or if referring to these instructions while installing
an additional file server machine, return to Starting Server Programs).

Getting Started on Solaris Systems

Begin by running the AFS initialization script to call the modload program distributed by
Sun Microsystems, which dynamically loads AFS modifications into the kernel. Then create partitions for storing AFS volumes, and
install and configure the AFS-modified fsck program to run on AFS server partitions. If the
machine is to remain an AFS client machine, incorporate AFS into the machine's Pluggable Authentication Module (PAM) scheme.
Loading AFS into the Solaris Kernel

The modload program is the dynamic kernel loader provided by Sun Microsystems for
Solaris systems. Solaris does not support incorporation of AFS modifications during a kernel build.

For AFS to function correctly, the modload program must run each time the machine
reboots, so the AFS initialization script (included on the AFS CD-ROM) invokes it automatically. In this section you copy the
appropriate AFS library file to the location where the modload program accesses it and then
run the script.

In later sections you verify that the script correctly initializes all AFS components, then create the links that
incorporate AFS into the Solaris startup and shutdown sequence.

Unpack the OpenAFS Solaris distribution tarball. The examples
below assume that you have unpacked the files into the
/tmp/afsdist directory. If you
pick a different location, substitute this in all of the following
examples. Once you have unpacked the distribution, change directory
as indicated.
# cd /tmp/afsdist/sun4x_56/root.client/usr/vice/etc

Copy the AFS initialization script to the local directory for initialization files (by convention, /etc/init.d on Solaris machines). Note the removal of the .rc
extension as you copy the script.
# cp -p afs.rc /etc/init.d/afs

Copy the appropriate AFS kernel library file to the local file /kernel/fs/afs.

If the machine is running Solaris 2.6 or the 32-bit version of Solaris 7, its kernel supports NFS server
functionality, and the nfsd process is running:
# cp -p modload/libafs.o /kernel/fs/afs

If the machine is running Solaris 2.6 or the 32-bit version of Solaris 7, and its kernel does not support NFS
server functionality or the nfsd process is not running:
# cp -p modload/libafs.nonfs.o /kernel/fs/afs

If the machine is running the 64-bit version of Solaris 7, its kernel supports NFS server functionality, and the
nfsd process is running:
# cp -p modload/libafs64.o /kernel/fs/sparcv9/afs

If the machine is running the 64-bit version of Solaris 7, and its kernel does not support NFS server
functionality or the nfsd process is not running:
# cp -p modload/libafs64.nonfs.o /kernel/fs/sparcv9/afs

Run the AFS initialization script to load AFS modifications into the kernel. You can ignore any error messages
about the inability to start the BOS Server or the Cache Manager or AFS client.
# /etc/init.d/afs start

When an entry called afs does not already exist in the local /etc/name_to_sysnum file, the script automatically creates it and reboots the machine to start
using the new version of the file. If this happens, log in again as the superuser root
after the reboot and run the initialization script again. This time the required entry exists in the /etc/name_to_sysnum file, and the modload program runs.
login: root
Password: root_password
# /etc/init.d/afs start

Configuring the AFS-modified fsck Program on Solaris Systems

In this section, you make modifications to guarantee that the appropriate fsck program
runs on AFS server partitions. The fsck program provided with the operating system must never
run on AFS server partitions. Because it does not recognize the structures that the File Server uses to organize volume data,
it removes all of the data. To repeat:

Never run the standard fsck program on AFS server partitions. It discards AFS volumes.

Create the /usr/lib/fs/afs directory to house the AFS-modified fsck program and related files.
# mkdir /usr/lib/fs/afs
# cd /usr/lib/fs/afs

Copy the vfsck binary to the newly created directory, changing the name as you do
so.
# cp /tmp/afsdist/sun4x_56/root.server/etc/vfsck fsck

Working in the /usr/lib/fs/afs directory, create the following links to Solaris
libraries:
# ln -s /usr/lib/fs/ufs/clri
# ln -s /usr/lib/fs/ufs/df
# ln -s /usr/lib/fs/ufs/edquota
# ln -s /usr/lib/fs/ufs/ff
# ln -s /usr/lib/fs/ufs/fsdb
# ln -s /usr/lib/fs/ufs/fsirand
# ln -s /usr/lib/fs/ufs/fstyp
# ln -s /usr/lib/fs/ufs/labelit
# ln -s /usr/lib/fs/ufs/lockfs
# ln -s /usr/lib/fs/ufs/mkfs
# ln -s /usr/lib/fs/ufs/mount
# ln -s /usr/lib/fs/ufs/ncheck
# ln -s /usr/lib/fs/ufs/newfs
# ln -s /usr/lib/fs/ufs/quot
# ln -s /usr/lib/fs/ufs/quota
# ln -s /usr/lib/fs/ufs/quotaoff
# ln -s /usr/lib/fs/ufs/quotaon
# ln -s /usr/lib/fs/ufs/repquota
# ln -s /usr/lib/fs/ufs/tunefs
# ln -s /usr/lib/fs/ufs/ufsdump
# ln -s /usr/lib/fs/ufs/ufsrestore
# ln -s /usr/lib/fs/ufs/volcopy

Append the following line to the end of the file /etc/dfs/fstypes.
afs AFS Utilities
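If you prefer to make this change from the shell, an append such as the following works (note the use of >> rather than >):

# echo "afs AFS Utilities" >> /etc/dfs/fstypes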
Edit the /sbin/mountall file, making two changes. Add an entry for AFS to the case statement for option 2, so that it reads
as follows:
case "$2" in
ufs) foptions="-o p"
;;
afs) foptions="-o p"
;;
s5) foptions="-y -t /var/tmp/tmp$$ -D"
;;
*) foptions="-y"
;;
Edit the file so that all AFS and UFS partitions are checked in parallel. Replace the following section of
code:
# For fsck purposes, we make a distinction between ufs and
# other file systems
#
if [ "$fstype" = "ufs" ]; then
ufs_fscklist="$ufs_fscklist $fsckdev"
saveentry $fstype "$OPTIONS" $special $mountp
continue
fi
with the following section of code:
# For fsck purposes, we make a distinction between ufs/afs
# and other file systems.
#
if [ "$fstype" = "ufs" -o "$fstype" = "afs" ]; then
ufs_fscklist="$ufs_fscklist $fsckdev"
saveentry $fstype "$OPTIONS" $special $mountp
continue
fi
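After saving /sbin/mountall, a quick optional check that both edits are in place:

# grep afs /sbin/mountall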
Configuring Server Partitions on Solaris Systems

Every AFS file server machine must have at least one partition or logical volume dedicated to storing AFS volumes. Each
server partition is mounted at a directory named /vicepxx, where
xx is one or two lowercase letters. The /vicepxx directories must reside in the file server machine's root
directory, not in one of its subdirectories (for example, /usr/vicepa is not an acceptable
directory location). For additional information, see Performing Platform-Specific Procedures.
Create a directory called /vicepxx for each AFS server
partition you are configuring (there must be at least one). Repeat the command for each partition.
# mkdir /vicepxx

Add a line with the following format to the file systems registry file, /etc/vfstab, for each partition to be mounted on a directory created in the previous step. Note
the value afs in the fourth field, which tells Solaris to use the AFS-modified
fsck program on this partition.
/dev/dsk/disk /dev/rdsk/disk /vicepxx afs boot_order yes
The following is an example for the first partition being configured.
/dev/dsk/c0t6d0s1 /dev/rdsk/c0t6d0s1 /vicepa afs 3 yes
Create a file system on each partition that is to be mounted at a /vicepxx directory. The following command is probably appropriate, but
consult the Solaris documentation for more information.
# newfs -v /dev/rdsk/disk

Issue the mountall command to mount all partitions at once.
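Putting the Solaris steps together for one hypothetical partition (the device names are illustrative; add the vfstab entry with a text editor if you prefer):

# mkdir /vicepa
# echo "/dev/dsk/c0t6d0s1 /dev/rdsk/c0t6d0s1 /vicepa afs 3 yes" >> /etc/vfstab
# newfs -v /dev/rdsk/c0t6d0s1
# mountall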
If you plan to retain client functionality on this machine after completing the installation, proceed to Enabling AFS Login and Editing the File Systems Clean-up Script on Solaris Systems. Otherwise, proceed to Starting the BOS Server.

Enabling AFS Login on Solaris Systems

If you plan to remove client functionality from this machine after completing the installation, skip this section and
proceed to Starting the BOS Server.

At this point you incorporate AFS into the operating system's
Pluggable Authentication Module (PAM) scheme. PAM integrates all
authentication mechanisms on the machine, including login, to provide
the security infrastructure for authenticated access to and from the
machine.

Explaining PAM is beyond the scope of this document. It is
assumed that you understand the syntax and meanings of settings in the
PAM configuration file (for example, how the
other entry works, the effect of
marking an entry as required,
optional, or
sufficient, and so on).

You should first configure your system to obtain Kerberos v5
tickets as part of the authentication process, and then run an AFS PAM
module to obtain tokens from those tickets after authentication.
Current versions of Solaris come with a Kerberos v5 PAM module that
will work, or you can download and install Russ Allbery's
Kerberos v5 PAM module, which is tested regularly with AFS.
See the instructions of whatever PAM module you use for how to
configure it.

Some Kerberos v5 PAM modules do come with native AFS support
(usually requiring the Heimdal Kerberos implementation rather than the
MIT Kerberos implementation). If you are using one of those PAM
modules, you can configure it to obtain AFS tokens. It's more common,
however, to separate the AFS token acquisition into a separate PAM
module.

The recommended AFS PAM module is Russ
Allbery's pam-afs-session module. It should work with any of
the Kerberos v5 PAM modules. To add it to the PAM configuration, you
often only need to add configuration to the session group in
pam.conf:

Solaris PAM session example:

login session required pam_afs_session.so

This example enables PAM authentication only for console login.
You may want to add a similar line for the ssh service and for any
other login service that you use, including possibly the
other service (which serves as a catch-all). You
may also want to add options to the AFS PAM session module
(particularly retain_after_close, which is
necessary for some versions of Solaris).

For additional configuration
options of the AFS PAM module, see its documentation. For more
details on the available options for the PAM configuration, see the
pam.conf manual page.

Sites which still require kaserver or external Kerberos v4 authentication
should consult Enabling kaserver based AFS
Login on Solaris Systems for details of how to enable AFS
login on Solaris.

Proceed to Editing the File Systems Clean-up Script on Solaris Systems.

Editing the File Systems Clean-up Script on Solaris Systems

Some Solaris distributions include a script that locates and removes unneeded files from various file systems. Its
conventional location is /usr/lib/fs/nfs/nfsfind. The script generally uses an argument
to the find command to define which file systems to search. In this step you modify the
command to exclude the /afs directory. Otherwise, the command traverses the AFS
filespace of every cell that is accessible from the machine, which can take many hours. The following alterations are
possibilities, but you must verify that they are appropriate for your cell.

The first possible alteration is to add the -local flag to the existing command,
so that it looks like the following:
find $dir -local -name .nfs\* -mtime +7 -mount -exec rm -f {} \;
Another alternative is to exclude any directories whose names begin with the lowercase letter a or a non-alphabetic character.
find /[A-Zb-z]*  remainder of existing command

Do not use the following command, which still searches under the /afs directory,
looking for a subdirectory of type 4.2.
find / -fstype 4.2 /* do not use */
Proceed to Starting the BOS Server (or if referring to these instructions while
installing an additional file server machine, return to Starting Server
Programs).

Starting the BOS Server

You are now ready to start the AFS server processes on this machine.
If you are not working from a packaged distribution, begin by copying the
AFS server binaries from the distribution to the conventional local disk
location, the /usr/afs/bin directory. The
following instructions also create files in other subdirectories of the
/usr/afs directory.

Then issue the bosserver command to initialize the Basic OverSeer (BOS) Server, which
monitors and controls other AFS server processes on its server machine. Include the -noauth
flag to disable authorization checking. Because you have not yet configured your cell's AFS authentication and authorization
mechanisms, the BOS Server cannot perform authorization checking as it does during normal operation. In no-authorization mode,
it does not verify the identity or privilege of the issuer of a bos command, and so performs
any operation for anyone.

Disabling authorization checking gravely compromises cell security. You must complete all subsequent steps in one
uninterrupted pass and must not leave the machine unattended until you restart the BOS Server with authorization checking
enabled, in Verifying the AFS Initialization Script.As it initializes for the first time, the BOS Server creates the following directories and files, setting the owner to the
local superuser root and the mode bits to limit the ability to write (and in some cases, read)
them. For a description of the contents and function of these directories and files, see the chapter in the OpenAFS
Administration Guide about administering server machines. For further discussion of the mode bit settings, see Protecting Sensitive AFS Directories. Binary Distributioncopying server files fromfirst AFS machinefirst AFS machinesubdirectories of /usr/afscreating/usr/afs/bin directoryfirst AFS machinecreating/usr/afs/etc directoryfirst AFS machinecopyingserver files to local diskfirst AFS machinefirst AFS machinecopyingserver files to local diskusr/afs/bin directoryfirst AFS machineusr/afs/etc directoryfirst AFS machineusr/afs/db directoryusr/afs/local directoryusr/afs/logs directory/usr/afs/db/usr/afs/etc/CellServDB/usr/afs/etc/ThisCell/usr/afs/local/usr/afs/logsThe BOS Server also creates symbolic links called /usr/vice/etc/ThisCell and /usr/vice/etc/CellServDB to the corresponding files in the /usr/afs/etc
directory. The AFS command interpreters consult the CellServDB and ThisCell files in the /usr/vice/etc directory because they generally run
on client machines. On machines that are AFS servers only (as this machine currently is), the files reside only in the /usr/afs/etc directory; the links enable the command interpreters to retrieve the information they need.
Later instructions for installing the client functionality replace the links with actual files. If you are not working from a packaged distribution, you may need to copy files from the distribution media to the local /usr/afs directory.
# cd /tmp/afsdist/sysname/root.server/usr/afs
# cp -rp * /usr/afs
commandsbosserverbosserver commandIssue the bosserver command. Include the -noauth
flag to disable authorization checking.
# /usr/afs/bin/bosserver -noauth &Verify that the BOS Server created /usr/vice/etc/ThisCell and /usr/vice/etc/CellServDB as symbolic links to the corresponding files in the /usr/afs/etc directory.
# ls -l /usr/vice/etc
If either or both of /usr/vice/etc/ThisCell and /usr/vice/etc/CellServDB do not exist, or are not links, issue the following commands.
# cd /usr/vice/etc
# ln -s /usr/afs/etc/ThisCell
# ln -s /usr/afs/etc/CellServDB
cell namedefining during installation of first machinedefiningcell name during installation of first machinecell namesetting in server ThisCell filefirst AFS machinesettingcell name in server ThisCell filefirst AFS machinefirst AFS machineThisCell file (server)usr/afs/etc/ThisCellThisCell file (server)ThisCell file (server)first AFS machinefilesThisCell (server)database server machineentry in server CellServDB fileon first AFS machinefirst AFS machinecell membership, definingfor server processesusr/afs/etc/CellServDB fileCellServDB file (server)CellServDB file (server)creatingon first AFS machinecreatingCellServDB file (server)first AFS machinefilesCellServDB (server)first AFS machineCellServDB file (server)first AFS machinedefiningas database serverdefiningfirst AFS machine as database serverDefining Cell Name and Membership for Server ProcessesNow assign your cell's name. The chapter in the OpenAFS Administration Guide about cell configuration
and administration issues discusses the important considerations, explains why changing the name is difficult, and outlines the
restrictions on name format. Two of the most important restrictions are that the name cannot include uppercase letters or more
than 64 characters.Use the bos setcellname command to assign the cell name. It creates two files:
/usr/afs/etc/ThisCell, which defines this machine's cell membership/usr/afs/etc/CellServDB, which lists the cell's database server machines; the
machine named on the command line is placed on the list automaticallyIn the following and every instruction in this guide, for the machine name argument
substitute the fully-qualified hostname (such as fs1.example.com) of the machine you are
installing. For the cell name argument substitute your cell's complete name (such as example.com).commandsbos setcellnamebos commandssetcellnameIf necessary, add the directory containing the bos command to your path.
# export PATH=$PATH:/usr/afs/bin
Issue the bos setcellname command to set the cell name.
# bos setcellname <machine name> <cell name> -noauth
Because you are not authenticated and authorization checking is disabled, the bos
command interpreter possibly produces error messages about being unable to obtain tickets and running unauthenticated. You
can safely ignore the messages. commandsbos listhostsbos commandslisthostsCellServDB file (server)displaying entriesdisplayingCellServDB file (server) entriesIssue the bos listhosts command to verify that the machine you are installing is now
registered as the cell's first database server machine.
# bos listhosts <machine name> -noauth
Cell name is cell_name
Host 1 is machine_name
database server machineinstallingfirstinstructionsdatabase server machine, installing firstinstallingdatabase server machinefirstBackup Serverstartingfirst AFS machinebuserver processBackup ServerstartingBackup Serverfirst AFS machinefirst AFS machineBackup ServerProtection Serverstartingfirst AFS machineptserver processProtection ServerstartingProtection Serverfirst AFS machinefirst AFS machineProtection ServerVL Server (vlserver process)startingfirst AFS machineVolume Location ServerVL ServerstartingVL Serverfirst AFS machinefirst AFS machineVL Serverusr/afs/local/BosConfigBosConfig fileBosConfig fileadding entriesfirst AFS machineaddingentries to BosConfig filefirst AFS machinefilesBosConfiginitializingserver processstartingserver processsee also entry for each server's nameStarting the Database Server ProcessesNext use the bos create command to create entries for the three database server processes
in the /usr/afs/local/BosConfig file and start them running. The three processes run on database
server machines only: The Backup Server (the buserver process) maintains the Backup DatabaseThe Protection Server (the ptserver process) maintains the Protection
DatabaseThe Volume Location (VL) Server (the vlserver process) maintains the Volume
Location Database (VLDB)KerberosAFS ships with an additional database server named 'kaserver', which
was historically used to provide authentication services to AFS cells.
kaserver was based on Kerberos v4, as such, it is
not recommended for new cells. This guide assumes you have already
configured a Kerberos v5 realm for your site, and details the procedures
required to use AFS with this realm. If you do wish to use
kaserver, please see the modifications
to these instructions detailed in
Starting the kaserver Database Server Process
The remaining instructions in this chapter include the -cell argument on all applicable
commands. Provide the cell name you assigned in Defining Cell Name and Membership for Server
Processes. If a command appears on multiple lines, it is only for legibility. commandsbos createbos commandscreateIssue the bos create command to start the Backup Server.
# ./bos create <machine name> buserver simple /usr/afs/bin/buserver \
-cell <cell name> -noauthIssue the bos create command to start the Protection Server.
# ./bos create <machine name> ptserver simple /usr/afs/bin/ptserver \
-cell <cell name> -noauthIssue the bos create command to start the VL Server.
# ./bos create <machine name> vlserver simple /usr/afs/bin/vlserver \
-cell <cell name> -noauthadmin accountcreatingafs entry in Kerberos DatabaseKerberos Databasecreatingafs entry in Kerberos Databasecreatingadmin account in Kerberos Databasesecurityinitializing cell-widecellinitializing security mechanismsinitializingcell security mechanismsusr/afs/etc/KeyFileKeyFile fileKeyFile filefirst AFS machinefilesKeyFilekeyserver encryption keyencryption keyserver encryption keyInitializing Cell Security If you are working with an existing cell which uses
kaserver or Kerberos v4 for authentication,
please see
Initializing Cell Security with kaserver
for installation instructions which replace this section.Now initialize the cell's security mechanisms. Begin by creating the following two entries in your site's Kerberos database: A generic administrative account, called admin by convention. If you choose to
assign a different name, substitute it throughout the remainder of this document.After you complete the installation of the first machine, you can continue to have all administrators use the
admin account, or you can create a separate administrative account for each of them. The
latter scheme implies somewhat more overhead, but provides a more informative audit trail for administrative
operations.The entry for AFS server processes, called either
afs or
afs/cell.
No user logs in under this identity, but it is used to encrypt the
server tickets that are granted to AFS clients for presentation to
server processes during mutual authentication. (The
chapter in the OpenAFS Administration Guide about cell configuration and administration describes the
role of server encryption keys in mutual authentication.)In Step 7, you also place the initial AFS server encryption key into the /usr/afs/etc/KeyFile file. The AFS server processes refer to this file to learn the server
encryption key when they need to decrypt server tickets.You also issue several commands that enable the new admin user to issue privileged
commands in all of the AFS suites.The following instructions do not configure all of the security mechanisms related to the AFS Backup System. See the
chapter in the OpenAFS Administration Guide about configuring the Backup System.The examples below assume you are using MIT Kerberos. Please refer
to the documentation for your KDC's administrative interface if you are
using a different vendor. Enter kadmin interactive mode.
# kadmin
Authenticating as principal you/admin@YOUR REALM with password
Password for you/admin@REALM: your_password
server encryption keyin Kerberos Databasecreatingserver encryption keyKerberos DatabaseIssue the
add_principal command to create
Kerberos Database entries called
admin and
afs/<cell name>.You should make the admin_passwd as
long and complex as possible, but keep in mind that administrators
need to enter it often. It must be at least six characters long.Note that when creating the
afs/<cell name>
entry, the encryption types should be restricted to des-cbc-crc:v4.
For more details regarding encryption types, see the documentation
for your Kerberos installation.
kadmin: add_principal -randkey -e des-cbc-crc:v4 afs/<cell name>
Principal "afs/cell name@REALM" created.
kadmin: add_principal admin
Enter password for principal "admin@REALM": admin_password
Principal "admin@REALM" created.
commandskas examinekas commandsexaminedisplayingserver encryption keyAuthentication DatabaseIssue the kadmin
get_principal command to display the afs/<cell name> entry.
kadmin: get_principal afs/<cell name>
Principal: afs/cell
[ ... ]
Key: vno 2, DES cbc mode with CRC-32, no salt
[ ... ]
Extract the newly created key for afs/cell to a keytab on the local machine. We will use /etc/afs.keytab as the location for this keytab.The keytab contains the key material that ensures the security of your AFS cell. You should ensure that it is kept in a secure location at all times.
kadmin: ktadd -k /etc/afs.keytab -e des-cbc-crc:v4 afs/<cell name>
Entry for principal afs/<cell name> with kvno 3, encryption type DES cbc mode with CRC-32 added to keytab WRFILE:/etc/afs.keytab
Make a note of the key version number (kvno) given in the
response, as you will need it to load the key into bos in a later
step. Note that each time you run
ktadd a new key is generated
for the item being extracted. This means that you cannot run ktadd
multiple times and end up with the same key material each time.
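If you did not note the kvno when running ktadd, you can usually read it back from the keytab with MIT's klist command; the output below is illustrative, assuming the example.com cell:
# klist -k -e /etc/afs.keytab
Keytab name: FILE:/etc/afs.keytab
KVNO Principal
---- ------------------------------------------------------------
   3 afs/example.com@EXAMPLE.COM (DES cbc mode with CRC-32)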
Issue the kadmin quit command to leave kadmin
interactive mode.
kadmin: quit
commandsbos adduserbos commandsadduserusr/afs/etc/UserListUserList fileUserList filefirst AFS machinefilesUserListcreatingUserList file entryadmin accountaddingto UserList fileIssue the bos adduser command to add the admin user to the /usr/afs/etc/UserList file. This enables the
admin user to issue privileged bos and vos commands.
# ./bos adduser <machine name> admin -cell <cell name> -noauth
commandsasetkeycreatingserver encryption keyKeyFile fileserver encryption keyin KeyFile fileIssue the
asetkey command to set the AFS
server encryption key in the
/usr/afs/etc/KeyFile file. This key
is created from the /etc/afs.keytab
file created earlier.asetkey requires the key version number (or kvno) of the
afs/cell
key. You should have noted this down when creating the key earlier.
The key version number can also be found by running the
kvno command
# kvno afs/<cell name>
Once the kvno is known, the key can then be extracted using
asetkey
# asetkey add <kvno> /etc/afs.keytab afs/<cell name>
commandsbos listkeysbos commandslistkeysdisplayingserver encryption keyKeyFile fileIssue the
bos listkeys command to verify that
the key version number for the new key in the
KeyFile file is the same as the key
version number in the Authentication Database's
afs/cell name
entry, which you displayed in Step 3.
# ./bos listkeys <machine name> -cell <cell name> -noauth
key 0 has cksum checksum
You can safely ignore any error messages indicating that bos failed to get tickets
or that authentication failed.Initializing the Protection DatabaseNow continue to configure your cell's security systems by
populating the Protection Database with the newly created
admin user, and permitting it
to issue privileged commands on the AFS filesystem.commandspts createuserpts commandscreateuserProtection DatabaseIssue the pts createuser command to create a Protection Database entry for the
admin user.By default, the Protection Server assigns AFS UID 1 (one) to the admin user,
because it is the first user entry you are creating. If the local password file (/etc/passwd or equivalent) already has an entry for admin that
assigns it a UNIX UID other than 1, it is best to use the -id argument to the pts createuser command to make the new AFS UID match the existing UNIX UID. Otherwise, it is best
to accept the default.
# pts createuser -name admin -cell <cell name> [-id <AFS UID>] -noauth
User admin has id AFS UID
commandspts adduserpts commandsaddusersystem:administrators groupadmin accountaddingto system:administrators groupIssue the pts adduser command to make the admin
user a member of the system:administrators group, and the pts
membership command to verify the new membership. Membership in the group enables the admin user to issue privileged pts commands and some privileged
fs commands.
# ./pts adduser admin system:administrators -cell <cell name> -noauth
# ./pts membership admin -cell <cell name> -noauth
Groups admin (id: 1) is a member of:
system:administrators
commandsbos restarton first AFS machinebos commandsrestarton first AFS machinerestarting server processon first AFS machineserver processrestartingon first AFS machineIssue the bos restart command with the -all flag
to restart the database server processes, so that they start using the new server encryption key.
# ./bos restart <machine name> -all -cell <cell name> -noauth
File Serverfirst AFS machinefileserver processFile ServerstartingFile Serverfirst AFS machinefirst AFS machineFile Server, fs processVolume Serverfirst AFS machinevolserver processVolume ServerstartingVolume Serverfirst AFS machinefirst AFS machineVolume ServerSalvager (salvager process)first AFS machinefs processfirst AFS machinestartingfs processfirst AFS machinefirst AFS machineSalvagerStarting the File Server, Volume Server, and SalvagerStart the fs process, which consists of the File Server, Volume Server, and Salvager
(fileserver, volserver and salvager processes). Issue the bos create command to start the fs
process. The command appears here on multiple lines only for legibility.
# ./bos create <machine name> fs fs /usr/afs/bin/fileserver \
/usr/afs/bin/volserver /usr/afs/bin/salvager \
-cell <cell name> -noauthSometimes a message about Volume Location Database (VLDB) initialization appears, along with one or more instances
of an error message similar to the following:
FSYNC_clientInit temporary failure (will retry)
This message appears when the volserver process tries to start before the fileserver process has completed its initialization. Wait a few minutes after the last such message
before continuing, to guarantee that both processes have started successfully. commandsbos statusbos commandsstatusYou can verify that the fs process has started successfully by issuing the
bos status command. Its output mentions two proc
starts.
# ./bos status <machine name> fs -long -noauthYour next action depends on whether you have ever run AFS file server machines in the cell: commandsvos createroot.afs volumevos commandscreateroot.afs volumeroot.afs volumecreatingvolumecreatingroot.afscreatingroot.afs volumeIf you are installing the first AFS server machine ever in the cell (that is, you are not upgrading the AFS
software from a previous version), create the first AFS volume, root.afs.For the partition name argument, substitute the name of one of the machine's AFS
server partitions (such as /vicepa).
# ./vos create <machine name> <partition name> root.afs \
-cell <cell name> -noauthThe Volume Server produces a message confirming that it created the volume on the specified partition. You can
ignore error messages indicating that tokens are missing, or that authentication failed. commandsvos syncvldbvos commandssyncvldbcommandsvos syncservvos commandssyncservIf there are existing AFS file server machines and volumes in the cell, issue the vos
syncvldb and vos syncserv commands to synchronize the VLDB with the
actual state of volumes on the local machine. To follow the progress of the synchronization operation, which can
take several minutes, use the -verbose flag.
# ./vos syncvldb <machine name> -cell <cell name> -verbose -noauth
# ./vos syncserv <machine name> -cell <cell name> -verbose -noauth
You can ignore error messages indicating that tokens are missing, or that authentication failed.Update Serverstarting server portionfirst AFS machineupserver processUpdate ServerstartingUpdate Server server portionfirst AFS machinefirst AFS machineUpdate Server server portionfirst AFS machinedefiningas binary distribution machinefirst AFS machinedefiningas system control machinesystem control machinebinary distribution machineStarting the Server Portion of the Update ServerStart the server portion of the Update Server (the upserver process), to distribute the
contents of directories on this machine to other server machines in the cell. It becomes active when you configure the client
portion of the Update Server on additional server machines.Distributing the contents of its /usr/afs/etc directory makes this machine the cell's
system control machine. The other server machines in the cell run the upclientetc process (an instance of the client portion of the Update Server) to retrieve the
configuration files. Use the -crypt argument to the upserver
initialization command to specify that the Update Server distributes the contents of the /usr/afs/etc directory only in encrypted form, as shown in the following instruction. Several of the
files in the directory, particularly the KeyFile file, are crucial to cell security and so must
never cross the network unencrypted.(You can choose not to configure a system control machine, in which case you must update the configuration files in each
server machine's /usr/afs/etc directory individually. The bos
commands used for this purpose also encrypt data before sending it across the network.)Distributing the contents of its /usr/afs/bin directory to other server machines of its
system type makes this machine a binary distribution machine. The other server machines of its system type
run the upclientbin process (an instance of the client portion of the Update Server) to
retrieve the binaries. If your platform has a package management system,
such as 'rpm' or 'apt', running the Update Server to distribute binaries
may interfere with this system.The binaries in the /usr/afs/bin directory are not sensitive, so it is not necessary to
encrypt them before transfer across the network. Include the -clear argument to the upserver initialization command to specify that the Update Server distributes the contents of the
/usr/afs/bin directory in unencrypted form unless an upclientbin process requests encrypted transfer.Note that the server and client portions of the Update Server always mutually authenticate with one another, regardless of
whether you use the -clear or -crypt arguments. This protects
their communications from eavesdropping to some degree.For more information on the upclient and upserver
processes, see their reference pages in the OpenAFS Administration Reference. The commands appear on
multiple lines here only for legibility. Issue the bos create command to start the upserver
process.
# ./bos create <machine name>upserver simple \
"/usr/afs/bin/upserver -crypt /usr/afs/etc \
-clear /usr/afs/bin" -cell <cell name> -noauthStarting the Controller for NTPDKeeping the clocks on all server and client machines in your cell synchronized is crucial to several functions, and in
particular to the correct operation of AFS's distributed database technology, Ubik. The chapter in the OpenAFS
Administration Guide about administering server machines explains how time skew can disturb Ubik's performance and
cause service outages in your cell.Historically, AFS used to distribute its own version of the Network
Time Protocol Daemon. Whilst this is still provided for existing sites, we
recommend that you configure and install your time service independently of
AFS. A reliable timeservice will also be required by your Kerberos realm,
and so may already be available at your site.overviewinstalling client functionality on first machinefirst AFS machineclient functionalityinstallinginstallingclient functionalityfirst AFS machineOverview: Installing Client FunctionalityThe machine you are installing is now an AFS file server machine,
database server machine, system control machine, and binary distribution
machine. Now make it a client machine by completing the following tasks:
Define the machine's cell membership for client processesCreate the client version of the CellServDB fileDefine cache location and sizeCreate the /afs directory and start the Cache ManagerDistributioncopying client files fromfirst AFS machinefirst AFS machinecopyingclient files to local diskcopyingclient files to local diskfirst AFS machineCopying Client Files to the Local DiskYou need only undertake the steps in this section, if you are using
a tar file distribution, or one built from scratch. Packaged distributions,
such as RPMs or DEBs, will already have installed the necessary files in
the correct locations.Before installing and configuring the AFS client, copy the necessary files from the tarball to the local /usr/vice/etc directory. If you have not already done so, unpack the distribution
tarball for this machine's system type into a suitable location on
the filesystem, such as /tmp/afsdist.
If you use a different location, substitute that in the examples that
follow.Copy files to the local /usr/vice/etc directory.This step places a copy of the AFS initialization script (and related files, if applicable) into the /usr/vice/etc directory. In the preceding instructions for incorporating AFS into the kernel, you
copied the script directly to the operating system's conventional location for initialization files. When you incorporate
AFS into the machine's startup sequence in a later step, you can choose to link the two files.On some system types that use a dynamic kernel loader program, you previously copied AFS library files into a
subdirectory of the /usr/vice/etc directory. On other system types, you copied the
appropriate AFS library file directly to the directory where the operating system accesses it. The following commands do
not copy or recopy the AFS library files into the /usr/vice/etc directory, because on
some system types the library files consume a large amount of space. If you want to copy them, add the -r flag to the first cp command and skip the second cp command.
# cd /tmp/afsdist/sysname/root.client/usr/vice/etc
# cp -p * /usr/vice/etc
# cp -rp C /usr/vice/etc
cell namesetting in client ThisCell filefirst AFS machinesettingcell name in client ThisCell filefirst AFS machinefirst AFS machineThisCell file (client)first AFS machinecell membership, definingfor client processesusr/vice/etc/ThisCellThisCell file (client)ThisCell file (client)first AFS machinefilesThisCell (client)Defining Cell Membership for Client ProcessesEvery AFS client machine has a copy of the /usr/vice/etc/ThisCell file on its local disk
to define the machine's cell membership for the AFS client programs that run on it. The ThisCell file you created in the /usr/afs/etc directory (in Defining Cell Name and Membership for Server Processes) is used only by server processes.Among other functions, the ThisCell file on a client machine determines the following:
The cell in which users gain tokens when they log onto the
machine, assuming it is using an AFS-modified login utilityThe cell in which users gain tokens by default when they issue
the aklog commandThe cell membership of the AFS server processes that the AFS
command interpreters on this machine contact by defaultChange to the /usr/vice/etc directory and remove the symbolic link created in Starting the BOS Server.
# cd /usr/vice/etc
# rm ThisCell
Create the ThisCell file as a copy of the /usr/afs/etc/ThisCell file. Defining the same local cell for both server and client processes leads
to the most consistent AFS performance.
# cp /usr/afs/etc/ThisCell ThisCell
database server machineentry in client CellServDB fileon first AFS machineusr/vice/etc/CellServDBCellServDB file (client)CellServDB file (client)creatingon first AFS machinecreatingCellServDB file (client)first AFS machineCellServDB file (client)required formatrequirementsCellServDB file format (client version)filesCellServDB (client)first AFS machineCellServDB file (client)Creating the Client CellServDB FileThe /usr/vice/etc/CellServDB file on a client machine's local disk lists the database
server machines for each cell that the local Cache Manager can contact. If there is no entry in the file for a cell, or if the
list of database server machines is wrong, then users working on this machine cannot access the cell. The chapter in the
OpenAFS Administration Guide about administering client machines explains how to maintain the file after
creating it.As the afsd program initializes the Cache Manager, it copies the contents of the
CellServDB file into kernel memory. The Cache Manager always consults the list in kernel memory
rather than the CellServDB file itself. Between reboots of the machine, you can use the
fs newcell command to update the list in kernel memory directly; see the chapter in the
OpenAFS Administration Guide about administering client machines.The AFS distribution includes the file
CellServDB.dist. It includes an entry for
all AFS cells that agreed to share their database server machine
information at the time the distribution was
created. The definitive copy of this file is maintained at
grand.central.org, and updates may be obtained from
/afs/grand.central.org/service/CellServDB or
http://grand.central.org/dl/cellservdb/CellServDBThe CellServDB.dist file can be a
good basis for the client CellServDB file,
because all of the entries in it use the correct format. You can add or
remove cell entries as you see fit. Depending on your cache manager
configuration, additional steps (as detailed in
Enabling Access to Foreign Cells) may be
required to enable the Cache Manager to actually reach the cells.In this section, you add an entry for the local cell to the local CellServDB file. The
current working directory is still /usr/vice/etc. Remove the symbolic link created in Starting the BOS Server and rename the CellServDB.sample file to CellServDB.
# rm CellServDB
# mv CellServDB.sample CellServDB
Add an entry for the local cell to the CellServDB file. One easy method is to use
the cat command to append the contents of the server /usr/afs/etc/CellServDB file to the client version.
# cat /usr/afs/etc/CellServDB >> CellServDBThen open the file in a text editor to verify that there are no blank lines, and that all entries have the required
format, which is described just following. The ordering of cells is not significant, but it can be convenient to have the
client machine's home cell at the top; move it there now if you wish. The first line of a cell's entry has the following format:
>cell_name #organizationwhere cell_name is the cell's complete Internet domain name (for example, example.com) and organization is an optional field that follows any
number of spaces and the number sign (#). By convention it names the organization
to which the cell corresponds (for example, the Example Corporation).After the first line comes a separate line for each database server machine. Each line has the following
format: IP_address #machine_name
where IP_address is the machine's IP address in dotted decimal format (for example,
192.12.105.3). Following any number of spaces and the number sign (#) is
machine_name, the machine's fully-qualified hostname (for example, db1.example.com). In this case, the number sign does not indicate a comment;
machine_name is a required field.If the file includes cells that you do not wish users of this machine to access, remove their entries.The following example shows entries for two cells, each of which has three database server machines:
>example.com #Example Corporation (home cell)
192.12.105.3 #db1.example.com
192.12.105.4 #db2.example.com
192.12.105.55 #db3.example.com
>stateu.edu #State University cell
138.255.68.93 #serverA.stateu.edu
138.255.68.72 #serverB.stateu.edu
138.255.33.154 #serverC.stateu.edu
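For reference, once the Cache Manager is running you can add or correct an entry in kernel memory without rebooting by using the fs newcell command mentioned earlier. A sketch using the example.com entry above:
# /usr/afs/bin/fs newcell example.com 192.12.105.3 192.12.105.4 192.12.105.55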
cacheconfiguringfirst AFS machineconfiguringcachefirst AFS machinesettingcache size and locationfirst AFS machinefirst AFS machinecache size and locationConfiguring the CacheThe Cache Manager uses a cache on the local disk or in machine memory to store local copies of files fetched from file
server machines. As the afsd program initializes the Cache Manager, it sets basic cache
configuration parameters according to definitions in the local /usr/vice/etc/cacheinfo file.
The file has three fields: The first field names the local directory on which to mount the AFS filespace. The conventional location is the
/afs directory.The second field defines the local disk directory to use for the disk cache. The conventional location is the
/usr/vice/cache directory, but you can specify an alternate directory if another
partition has more space available. There must always be a value in this field, but the Cache Manager ignores it if the
machine uses a memory cache.The third field specifies the number of kilobyte (1024 byte) blocks to allocate for the cache.The values you define must meet the following requirements. On a machine using a disk cache, the Cache Manager expects always to be able to use the amount of space specified in
the third field. Failure to meet this requirement can cause serious problems, some of which can be repaired only by
rebooting. You must prevent non-AFS processes from filling up the cache partition. The simplest way is to devote a
partition to the cache exclusively.The amount of space available in memory or on the partition housing the disk cache directory imposes an absolute
limit on cache size.The maximum supported cache size can vary in each AFS release; see the OpenAFS Release Notes
for the current version.For a disk cache, you cannot specify a value in the third field that exceeds 95% of the space available on the
partition mounted at the directory named in the second field. If you violate this restriction, the afsd program exits without starting the Cache Manager and prints an appropriate message on the
standard output stream. A value of 90% is more appropriate on most machines. Some operating systems (such as AIX) do not
automatically reserve some space to prevent the partition from filling completely; for them, a smaller value (say, 80% to
85% of the space available) is more appropriate.For a memory cache, you must leave enough memory for other processes and applications to run. If you try to allocate
more memory than is actually available, the afsd program exits without initializing the
Cache Manager and produces the following message on the standard output stream.
afsd: memCache allocation failure at number KB
The number value is how many kilobytes were allocated just before the failure, and so
indicates the approximate amount of memory available.Within these hard limits, the factors that determine appropriate cache size include the number of users working on the
machine, the size of the files with which they work, and (for a memory cache) the number of processes that run on the machine.
The higher the demand from these factors, the larger the cache needs to be to maintain good performance.Disk caches smaller than 10 MB do not generally perform well. Machines serving multiple users usually perform better with
a cache of at least 60 to 70 MB. The point at which enlarging the cache further does not really improve performance depends on
the factors mentioned previously and is difficult to predict.Memory caches smaller than 1 MB are nonfunctional, and the performance of caches smaller than 5 MB is usually
unsatisfactory. Suitable upper limits are similar to those for disk caches but are probably determined more by the demands on
memory from other sources on the machine (number of users and processes). Machines running only a few processes possibly can use
a smaller memory cache.Configuring a Disk CacheNot all file system types that an operating system supports are necessarily supported for use as the cache partition.
For possible restrictions, see the OpenAFS Release Notes.To configure the disk cache, perform the following procedures: Create the local directory to use for caching. The following instruction shows the conventional location,
/usr/vice/cache. If you are devoting a partition exclusively to caching, as
recommended, you must also configure it, make a file system on it, and mount it at the directory created in this step.
# mkdir /usr/vice/cache
Create the cacheinfo file to define the configuration parameters discussed
previously. The following instruction shows the standard mount location, /afs, and the
standard cache location, /usr/vice/cache.
# echo "/afs:/usr/vice/cache:#blocks" > /usr/vice/etc/cacheinfoThe following example defines the disk cache size as 50,000 KB:
# echo "/afs:/usr/vice/cache:50000" > /usr/vice/etc/cacheinfoConfiguring a Memory CacheTo configure a memory cache, create the cacheinfo file to define the configuration
parameters discussed previously. The following instruction shows the standard mount location, /afs, and the standard cache location, /usr/vice/cache (though the
exact value of the latter is irrelevant for a memory cache).
# echo "/afs:/usr/vice/cache:#blocks" > /usr/vice/etc/cacheinfoThe following example allocates 25,000 KB of memory for the cache.
# echo "/afs:/usr/vice/cache:25000" > /usr/vice/etc/cacheinfoCache Managerfirst AFS machineconfiguringCache Managerfirst AFS machinefirst AFS machineCache Managerafs (/afs) directorycreatingfirst AFS machineAFS initialization scriptsetting afsd parametersfirst AFS machinefirst AFS machineafsd command parametersConfiguring the Cache ManagerBy convention, the Cache Manager mounts the AFS filespace on the local /afs directory. In
this section you create that directory.The afsd program sets several cache configuration parameters as it initializes the Cache
Manager, and starts daemons that improve performance. You can use the afsd command's arguments
to override the parameters' default values and to change the number of some of the daemons. Depending on the machine's cache
size, its amount of RAM, and how many people work on it, you can sometimes improve Cache Manager performance by overriding the
default values. For a discussion of all of the afsd command's arguments, see its reference page
in the OpenAFS Administration Reference.On platforms using the standard 'afs' initialization script (this does not apply to Fedora or RHEL based distributions), the afsd command line in the AFS initialization script on each system type includes an
OPTIONS variable. You can use it to set nondefault values for the command's arguments, in one
of the following ways: You can create an afsd options file that sets values for
arguments to the afsd command. If the file exists, its contents are automatically
substituted for the OPTIONS variable in the AFS initialization script. The AFS
distribution for some system types includes an options file; on other system types, you must create it.You use two variables in the AFS initialization script to specify the path to the options file:
CONFIG and AFSDOPT. On system types that define a
conventional directory for configuration files, the CONFIG variable indicates it by
default; otherwise, the variable indicates an appropriate location.List the desired afsd options on a single line in the options file, separating each
option with one or more spaces. The following example sets the -stat argument to 2500,
the -daemons argument to 4, and the -volumes argument to
100.
-stat 2500 -daemons 4 -volumes 100
On a machine that uses a disk cache, you can set the OPTIONS variable in the AFS
initialization script to one of $SMALL, $MEDIUM, or
$LARGE. The AFS initialization script uses one of these settings if the afsd options file named by the AFSDOPT variable does not exist. In
the script as distributed, the OPTIONS variable is set to the value
$MEDIUM.Do not set the OPTIONS variable to $SMALL,
$MEDIUM, or $LARGE on a machine that uses a memory
cache. The arguments it sets are appropriate only on a machine that uses a disk cache.The script (or on some system types the afsd options file named by the
AFSDOPT variable) defines a value for each of SMALL,
MEDIUM, and LARGE that sets afsd command arguments appropriately for client machines of different sizes: SMALL is suitable for a small machine that serves one or two users and has
approximately 8 MB of RAM and a 20-MB cacheMEDIUM is suitable for a medium-sized machine that serves two to six users
and has 16 MB of RAM and a 40-MB cacheLARGE is suitable for a large machine that serves five to ten users and has
32 MB of RAM and a 100-MB cacheYou can choose not to create an afsd options file and to set the
OPTIONS variable in the initialization script to a null value rather than to the default
$MEDIUM value. You can then either set arguments directly on the afsd command line in the script, or set no arguments (and so accept default values for all Cache
Manager parameters).If you are running on a Fedora or RHEL based system, the
openafs-client initialization script behaves differently from that
described above. It sources /etc/sysconfig/openafs, in which the
AFSD_ARGS variable may be set to contain any, or all, of the afsd options
detailed. Note that this script does not support setting an OPTIONS
variable, or the SMALL, MEDIUM and LARGE methods of defining cache size.
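For example, to set the same arguments shown in the options-file example earlier on a Fedora or RHEL system, you might place a line like the following in /etc/sysconfig/openafs:
AFSD_ARGS="-stat 2500 -daemons 4 -volumes 100"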
Create the local directory on which to mount the AFS filespace, by convention /afs.
If the directory already exists, verify that it is empty.
# mkdir /afs
On AIX systems, add the following line to the /etc/vfs file. It enables AIX to
unmount AFS correctly during shutdown.
afs 4 none none
On non-package based Linux systems, copy the afsd options file from the /usr/vice/etc directory to the /etc/sysconfig directory, removing
the .conf extension as you do so.
# cp /usr/vice/etc/afs.conf /etc/sysconfig/afs
Edit the machine's AFS initialization script or afsd options file to set
appropriate values for afsd command parameters. The script resides in the indicated
location on each system type: On AIX systems, /etc/rc.afsOn HP-UX systems, /sbin/init.d/afsOn IRIX systems, /etc/init.d/afsOn Fedora and RHEL systems, /etc/sysconfig/openafsOn non-package based Linux systems, /etc/sysconfig/afs (the afsd options file)On Solaris systems, /etc/init.d/afsUse one of the methods described in the introduction to this section to add the following flags to the afsd command line. If you intend for the machine to remain an AFS client, also set any
performance-related arguments you wish. Add the -memcache flag if the machine is to use a memory cache.Add the -verbose flag to display a trace of the Cache Manager's
initialization on the standard output stream.In order to successfully complete the instructions in the
remainder of this guide, it is important that the machine does not have
a synthetic root (as discussed in Enabling Access
to Foreign Cells). As some distributions ship with this enabled, it
may be necessary to remove any occurrences of the
-dynroot and
-afsdb options from both the AFS
initialization script and options file. If this functionality is
required it may be re-enabled as detailed in
Enabling Access to Foreign Cells.
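One way to check for these options is to search the initialization script and options file for your system type; a sketch for a non-package based Linux machine (adjust the paths listed earlier for other platforms):
# grep -E -- '-dynroot|-afsdb' /etc/sysconfig/afs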
overviewcompleting installation of first machinefirst AFS machinecompletion of installationOverview: Completing the Installation of the First AFS MachineThe machine is now configured as an AFS file server and client machine. In this final phase of the installation, you
initialize the Cache Manager and then create the upper levels of your AFS filespace, among other procedures. The procedures are:
Verify that the initialization script works correctly, and incorporate it into the operating system's startup and
shutdown sequenceCreate and mount top-level volumesCreate and mount volumes to store system binaries in AFSEnable access to foreign cellsInstitute additional security measuresRemove client functionality if desiredAFS initialization scriptverifying on first AFS machineAFS initialization scriptrunningfirst AFS machinefirst AFS machineAFS initialization scriptrunning/verifyingrunning AFS init. scriptfirst AFS machineinvoking AFS init. scriptrunningVerifying the AFS Initialization ScriptAt this point you run the AFS initialization script to verify that it correctly invokes all of the necessary programs and
AFS processes, and that they start correctly. The following are the relevant commands: The command that dynamically loads AFS modifications into the kernel, on some system types (not applicable if the
kernel has AFS modifications built in)The bosserver command, which starts the BOS Server; it in turn starts the server
processes for which you created entries in the /usr/afs/local/BosConfig fileThe afsd command, which initializes the Cache ManagerOn system types that use a dynamic loader program, you must reboot the machine before running the initialization script,
so that it can freshly load AFS modifications into the kernel.If there are problems during the initialization, attempt to resolve them. The OpenAFS mailing lists can provide assistance if necessary.
commandsbos shutdownbos commandsshutdownIssue the bos shutdown command to shut down the AFS server processes other than the
BOS Server. Include the -wait flag to delay return of the command shell prompt until all
processes shut down completely.
# /usr/afs/bin/bos shutdown <machine name> -wait
Issue the ps command to learn the bosserver
process's process ID number (PID), and then the kill command to stop it.
# ps appropriate_ps_options | grep bosserver
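For example, on a system with a System V style ps (option syntax varies by platform, and the PID shown is illustrative):
# ps -ef | grep bosserver
root  1234     1  0 10:02:03 ?  0:00 /usr/afs/bin/bosserver -noauth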
# kill bosserver_PID
Issue the appropriate commands to run the AFS initialization script for this system type.AIXAFS initialization scripton first AFS machineOn AIX systems:Reboot the machine and log in again as the local superuser root.
# cd /
# shutdown -r now
login: root
Password: root_password
Run the AFS initialization script.
# /etc/rc.afs
HP-UXAFS initialization scripton first AFS machineOn HP-UX systems:Run the AFS initialization script.
# /sbin/init.d/afs start
IRIXAFS initialization scripton first AFS machineafsclient variable (IRIX)first AFS machinevariablesafsclient (IRIX)first AFS machineIRIXafsclient variablefirst AFS machineafsserver variable (IRIX)first AFS machinevariablesafsserver (IRIX)first AFS machineIRIXafsserver variablefirst AFS machineOn IRIX systems:If you have configured the machine to use the ml dynamic loader program,
reboot the machine and log in again as the local superuser root.
# cd /
# shutdown -i6 -g0 -y
login: root
Password: root_password
Issue the chkconfig command to activate the afsserver and afsclient configuration variables.
# /etc/chkconfig -f afsserver on
# /etc/chkconfig -f afsclient on
Run the AFS initialization script.
# /etc/init.d/afs start
LinuxAFS initialization scripton first AFS machineOn Linux systems:Reboot the machine and log in again as the local superuser root.
# cd /
# shutdown -r now
login: root
Password: root_password
Run the AFS initialization scripts.
# /etc/rc.d/init.d/openafs-client start
# /etc/rc.d/init.d/openafs-server start
SolarisAFS initialization scripton first AFS machineOn Solaris systems:Reboot the machine and log in again as the local superuser root.
# cd /
# shutdown -i6 -g0 -y
login: root
Password: root_password
Run the AFS initialization script.
# /etc/init.d/afs start
Wait for the message that confirms that Cache Manager initialization is complete.On machines that use a disk cache, it can take a while to initialize the Cache Manager for the first time, because
the afsd program must create all of the Vn files in the cache directory. Subsequent Cache Manager
initializations do not take nearly as long, because the Vn
files already exist.commandsaklogaklog commandIf you are working with an existing cell which uses
kaserver for authentication,
please recall the note in
Using this Appendix detailing the
substitution of kinit and
aklog with
klog.As a basic test of correct AFS functioning, issue the
kinit and
aklog commands to authenticate
as the admin user.
Provide the password (admin_passwd) you
defined in Initializing Cell Security.
# kinit admin
Password: admin_passwd
# aklog
commandstokenstokens commandIssue the tokens command to
verify that the aklog
command worked correctly. If it did, the output looks similar to the following example for the example.com cell, where admin's AFS UID is 1. If the output does not
seem correct, resolve the problem; changes to the AFS initialization script may be necessary. The OpenAFS mailing lists can provide assistance.
# tokens
Tokens held by the Cache Manager:
User's (AFS ID 1) tokens for afs@example.com [Expires May 22 11:52]
--End of list--
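If the tokens output is empty or otherwise incorrect, first confirm that the kinit step succeeded by listing your Kerberos tickets; a sketch using MIT's klist, with illustrative output:
# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: admin@EXAMPLE.COM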
Issue the bos status command to verify that the output for each process reads
Currently running normally.
# /usr/afs/bin/bos status <machine name>
fs commandscheckvolumescommandsfs checkvolumesChange directory to the local file system root (/) and issue the fs checkvolumes command.
# cd /
# /usr/afs/bin/fs checkvolumes
AFS initialization scriptadding to machine startup sequencefirst AFS machineinstallingAFS initialization scriptfirst AFS machinefirst AFS machineAFS initialization scriptactivatingactivating AFS init. scriptinstallingActivating the AFS Initialization ScriptNow that you have confirmed that the AFS initialization script works correctly, take the action necessary to have it run
automatically at each reboot. Proceed to the instructions for your system type: Activating the Script on AIX SystemsActivating the Script on HP-UX SystemsActivating the Script on IRIX SystemsActivating the Script on Linux SystemsActivating the Script on Solaris SystemsAIXAFS initialization scripton first AFS machineActivating the Script on AIX SystemsEdit the AIX initialization file, /etc/inittab, adding the following line to invoke
the AFS initialization script. Place it just after the line that starts NFS daemons.
rcafs:2:wait:/etc/rc.afs > /dev/console 2>&1 # Start AFS services
(Optional) There are now copies of the AFS initialization file in both the
/usr/vice/etc and /etc directories. If you want to avoid
potential confusion by guaranteeing that they are always the same, create a link between them. You can always retrieve the
original script from the AFS CD-ROM if necessary.
# cd /usr/vice/etc
# rm rc.afs
# ln -s /etc/rc.afs
Proceed to Configuring the Top Levels of the AFS Filespace.HP-UXAFS initialization scripton first AFS machineActivating the Script on HP-UX SystemsChange to the /sbin/init.d directory and issue the ln
-s command to create symbolic links that incorporate the AFS initialization script into the HP-UX startup and
shutdown sequence.
# cd /sbin/init.d
# ln -s ../init.d/afs /sbin/rc2.d/S460afs
# ln -s ../init.d/afs /sbin/rc2.d/K800afs(Optional) There are now copies of the AFS initialization file in both the
/usr/vice/etc and /sbin/init.d directories. If you want
to avoid potential confusion by guaranteeing that they are always the same, create a link between them. You can always
retrieve the original script from the AFS CD-ROM if necessary.
# cd /usr/vice/etc
# rm afs.rc
# ln -s /sbin/init.d/afs afs.rc
Proceed to Configuring the Top Levels of the AFS Filespace.IRIXAFS initialization scripton first AFS machineActivating the Script on IRIX SystemsChange to the /etc/init.d directory and issue the ln
-s command to create symbolic links that incorporate the AFS initialization script into the IRIX startup and
shutdown sequence.
# cd /etc/init.d
# ln -s ../init.d/afs /etc/rc2.d/S35afs
# ln -s ../init.d/afs /etc/rc0.d/K35afs
(Optional) There are now copies of the AFS initialization file in both the
/usr/vice/etc and /etc/init.d directories. If you want
to avoid potential confusion by guaranteeing that they are always the same, create a link between them. You can always
retrieve the original script from the AFS CD-ROM if necessary.
# cd /usr/vice/etc
# rm afs.rc
# ln -s /etc/init.d/afs afs.rc
Proceed to Configuring the Top Levels of the AFS Filespace.LinuxAFS initialization scripton first AFS machineActivating the Script on Linux SystemsIssue the chkconfig command to activate the openafs-client and openafs-server
configuration variables. Based on the instruction in the AFS initialization file that begins with the string
#chkconfig, the command automatically creates the symbolic links that incorporate the
script into the Linux startup and shutdown sequence.
# /sbin/chkconfig --add openafs-client
# /sbin/chkconfig --add openafs-server
(Optional) There are now copies of the AFS initialization file in both the
/usr/vice/etc and /etc/rc.d/init.d directories, and
copies of the afsd options file in both the /usr/vice/etc and /etc/sysconfig directories. If you want to avoid
potential confusion by guaranteeing that the two copies of each file are always the same, create a link between them. You
can always retrieve the original script or options file from the AFS CD-ROM if necessary.
# cd /usr/vice/etc
# rm afs.rc afs.conf
# ln -s /etc/rc.d/init.d/afs afs.rc
# ln -s /etc/sysconfig/afs afs.conf
Proceed to Configuring the Top Levels of the AFS Filespace.SolarisAFS initialization scripton first AFS machineActivating the Script on Solaris SystemsChange to the /etc/init.d directory and issue the ln
-s command to create symbolic links that incorporate the AFS initialization script into the Solaris startup and
shutdown sequence.
# cd /etc/init.d
# ln -s ../init.d/afs /etc/rc3.d/S99afs
# ln -s ../init.d/afs /etc/rc0.d/K66afs
(Optional) There are now copies of the AFS initialization file in both the
/usr/vice/etc and /etc/init.d directories. If you want
to avoid potential confusion by guaranteeing that they are always the same, create a link between them. You can always
retrieve the original script from the AFS CD-ROM if necessary.
# cd /usr/vice/etc
# rm afs.rc
# ln -s /etc/init.d/afs afs.rc
AFS filespaceconfiguring top levelsconfiguringAFS filespace (top levels)Configuring the Top Levels of the AFS FilespaceIf you have not previously run AFS in your cell, you now configure the top levels of your cell's AFS filespace. If you
have run a previous version of AFS, the filespace is already configured. Proceed to Storing AFS Binaries
in AFS. root.cell volumecreating and replicatingvolumecreatingroot.cellcreatingroot.cell volumeYou created the root.afs volume in Starting the File Server,
Volume Server, and Salvager, and the Cache Manager mounted it automatically on the local /afs directory when you ran the AFS initialization script in Verifying the AFS
Initialization Script. You now set the access control list (ACL) on the /afs directory;
creating, mounting, and setting the ACL are the three steps required when creating any volume.After setting the ACL on the root.afs volume, you create your cell's root.cell volume, mount it as a subdirectory of the /afs directory, and
set the ACL. Create both a read/write and a regular mount point for the root.cell volume. The
read/write mount point enables you to access the read/write version of replicated volumes when necessary. Creating both mount
points essentially creates separate read-only and read-write copies of your filespace, and enables the Cache Manager to traverse
the filespace on a read-only path or read/write path as appropriate. For further discussion of these concepts, see the chapter
in the OpenAFS Administration Guide about administering volumes. root.afs volumereplicatingvolumereplicating root.afs and root.cellreplicating volumesThen replicate both the root.afs and root.cell volumes.
This is required if you want to replicate any other volumes in your cell, because all volumes mounted above a replicated volume
must themselves be replicated in order for the Cache Manager to access the replica.When the root.afs volume is replicated, the Cache Manager is programmed to access its
read-only version (root.afs.readonly) whenever possible. To make changes to the contents of the
root.afs volume (when, for example, you mount another cell's root.cell volume at the second level in your filespace), you must mount the root.afs volume temporarily, make the changes, release the volume and remove the temporary mount point.
For instructions, see Enabling Access to Foreign Cells. fs commandssetaclcommandsfs setaclaccess control list (ACL), settingsettingACLIssue the fs setacl command to edit the ACL on the /afs directory. Add an entry that grants the l (lookup) and r (read) permissions
to the system:anyuser group, to enable all AFS users who can reach your cell to traverse
through the directory. If you prefer to enable access only to locally authenticated users, substitute the system:authuser group.Note that there is already an ACL entry that grants all seven access rights to the system:administrators group. It is a default entry that AFS places on every new volume's root
directory.
# /usr/afs/bin/fs setacl /afs system:anyuser rl
commandsvos createroot.cell volumevos commandscreateroot.cell volumefs commandsmkmountcommandsfs mkmountmount pointcreatingmount pointvolumemountingIssue the vos create command to create the root.cell volume. Then issue the fs mkmount command to mount it as
a subdirectory of the /afs directory, where it serves as the root of your cell's local
AFS filespace. Finally, issue the fs setacl command to create an ACL entry for the
system:anyuser group (or system:authuser group).For the partition name argument, substitute the name of one of the machine's AFS server
partitions (such as /vicepa). For the cellname argument,
substitute your cell's fully-qualified Internet domain name (such as example.com).
# /usr/afs/bin/vos create <machine name> <partition name> root.cell
# /usr/afs/bin/fs mkmount /afs/cellname root.cell
# /usr/afs/bin/fs setacl /afs/cellname system:anyuser rl
creatingsymbolic linkfor abbreviated cell namesymbolic linkfor abbreviated cell namecell namesymbolic link for abbreviated(Optional) Create a symbolic link to a shortened cell name, to reduce the length of
pathnames for users in the local cell. For example, in the example.com cell, /afs/example is a link to /afs/example.com.
# cd /afs
# ln -s full_cellname short_cellname
read/write mount point for root.afs volumeroot.afs volumeread/write mount pointcreatingread/write mount pointIssue the fs mkmount command to create a read/write mount point for the root.cell volume (you created a regular mount point in Step 2).By convention, the name of a read/write mount point begins with a period, both to distinguish it from the regular
Issue the fs mkmount command to create a read/write mount point for the root.cell volume (you created a regular mount point in Step 2).

By convention, the name of a read/write mount point begins with a period, both to distinguish it from the regular mount point and to make it visible only when the -a flag is used on the ls command.

Change directory to /usr/afs/bin to make it easier to access the command binaries.

# cd /usr/afs/bin
# ./fs mkmount /afs/.cellname root.cell -rw
Issue the vos addsite command to define a replication site for both the root.afs and root.cell volumes. In each case, substitute for the partition name argument the partition where the volume's read/write version resides. When you install additional file server machines, it is a good idea to create replication sites on them as well.

# ./vos addsite <machine name> <partition name> root.afs
# ./vos addsite <machine name> <partition name> root.cell
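As an illustrative sketch, if the first machine were named fs1.example.com (a hypothetical name) and both read/write volumes resided on its /vicepa partition, the commands would read:

# ./vos addsite fs1.example.com /vicepa root.afs
# ./vos addsite fs1.example.com /vicepa root.cell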
Issue the fs examine command to verify that the Cache Manager can access both the root.afs and root.cell volumes, before you attempt to replicate them. The output lists each volume's name, volume ID number, quota, size, and the size of the partition that houses them. If you get an error message instead, do not continue before taking corrective action.

# ./fs examine /afs
# ./fs examine /afs/cellname
Issue the vos release command to release a replica of the root.afs and root.cell volumes to the sites you defined in Step 5.

# ./vos release root.afs
# ./vos release root.cell
Issue the fs checkvolumes command to force the Cache Manager to notice that you have released read-only versions of the volumes, then issue the fs examine command again. This time its output mentions the read-only version of the volumes (root.afs.readonly and root.cell.readonly) instead of the read/write versions, because of the Cache Manager's bias to access the read-only version of the root.afs volume if it exists.

# ./fs checkvolumes
# ./fs examine /afs
# ./fs examine /afs/cellname
Storing AFS Binaries in AFS

Sites with existing binary distribution mechanisms, including those which use packaging systems such as RPM, may wish to skip this step, and use tools native to their operating system to manage AFS configuration information.

In the conventional configuration, you make AFS client binaries and configuration files available in the subdirectories of the /usr/afsws directory on client machines (afsws is an acronym for AFS workstation). You can conserve local disk space by creating /usr/afsws as a link to an AFS volume that houses the AFS client binaries and configuration files for this system type.

In this section you create the necessary volumes. The conventional location to which to link /usr/afsws is /afs/cellname/sysname/usr/afsws, where sysname is the appropriate system type name as specified in the OpenAFS Release Notes. The instructions in Installing Additional Client Machines assume that you have followed the instructions in this section.

If you have previously run AFS in the cell, the volumes possibly already exist. If so, you need to perform Step 8 only.

The current working directory is still /usr/afs/bin, which houses the fs and vos command suite binaries. In the following commands, it is possible you still need to specify the pathname to the commands, depending on how your PATH environment variable is set.
Issue the vos create command to create volumes for storing the AFS client binaries for this system type. The following example instruction creates volumes called sysname, sysname.usr, and sysname.usr.afsws. Refer to the OpenAFS Release Notes to learn the proper value of sysname for this system type.

# vos create <machine name> <partition name> sysname
# vos create <machine name> <partition name> sysname.usr
# vos create <machine name> <partition name> sysname.usr.afsws
Issue the fs mkmount command to mount the newly created volumes. Because the root.cell volume is replicated, you must precede the cellname part of the pathname with a period to specify the read/write mount point, as shown. Then issue the vos release command to release a new replica of the root.cell volume, and the fs checkvolumes command to force the local Cache Manager to access them.

# fs mkmount -dir /afs/.cellname/sysname -vol sysname
# fs mkmount -dir /afs/.cellname/sysname/usr -vol sysname.usr
# fs mkmount -dir /afs/.cellname/sysname/usr/afsws -vol sysname.usr.afsws
# vos release root.cell
# fs checkvolumes
Issue the fs setacl command to grant the l (lookup) and r (read) permissions to the system:anyuser group on each new directory's ACL.

# cd /afs/.cellname/sysname
# fs setacl -dir . usr usr/afsws -acl system:anyuser rl
Issue the fs setquota command to set an unlimited quota on the volume mounted at the /afs/cellname/sysname/usr/afsws directory. This enables you to copy all of the appropriate files from the distribution into the volume without exceeding the volume's quota.

If you wish, you can set the volume's quota to a finite value after you complete the copying operation. At that point, use the vos examine command to determine how much space the volume is occupying. Then issue the fs setquota command to set a quota that is slightly larger; a sketch follows the command below.

# fs setquota /afs/.cellname/sysname/usr/afsws 0
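As a sketch of the later adjustment, suppose vos examine eventually reports that the volume occupies about 20000 kilobyte blocks (a hypothetical figure); you can then set a slightly larger quota:

# vos examine sysname.usr.afsws
# fs setquota /afs/.cellname/sysname/usr/afsws 25000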
Unpack the distribution tarball into the /tmp/afsdist directory, if you have not already done so.

Copy the contents of the indicated directories from the distribution into the /afs/cellname/sysname/usr/afsws directory.

# cd /afs/.cellname/sysname/usr/afsws
# cp -rp /tmp/afsdist/sysname/bin .
# cp -rp /tmp/afsdist/sysname/etc .
# cp -rp /tmp/afsdist/sysname/include .
# cp -rp /tmp/afsdist/sysname/lib .
Create /usr/afsws on the local disk as a symbolic link to the directory /afs/cellname/@sys/usr/afsws. You can specify the actual system name instead of @sys if you wish, but the advantage of using @sys is that it remains valid if you upgrade this machine to a different system type.

# ln -s /afs/cellname/@sys/usr/afsws /usr/afsws
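If you are unsure which value @sys resolves to on this machine, the fs sysname command reports the string the Cache Manager currently substitutes for it:

# /usr/afs/bin/fs sysname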
(Optional) To enable users to issue commands from the AFS suites (such as fs) without having to specify a pathname to their binaries, include the /usr/afsws/bin and /usr/afsws/etc directories in the PATH environment variable you define in each user's shell initialization file (such as .cshrc).
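A minimal sketch for a .cshrc file, assuming the conventional /usr/afsws link created above:

setenv PATH ${PATH}:/usr/afsws/bin:/usr/afsws/etc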
Storing AFS Documents in AFS

The AFS distribution includes the following documents:

OpenAFS Release Notes
OpenAFS Quick Beginnings
OpenAFS User Guide
OpenAFS Administration Reference
OpenAFS Administration Guide

OpenAFS Documentation is not currently provided with all distributions, but may be downloaded separately from the OpenAFS website.

The OpenAFS Documentation Distribution has a directory for each document format provided. The different formats are suitable for online viewing, printing, or both.
This section explains how to create and mount a volume to house the documents, making them available to your users. The recommended mount point for the volume is /afs/cellname/afsdoc. If you wish, you can create a link to the mount point on each client machine's local disk, called /usr/afsdoc. Alternatively, you can create a link to the mount point in each user's home directory. You can also choose to permit users to access only certain documents (most probably, the OpenAFS User Guide) by creating different mount points or setting different ACLs on different document directories.

The current working directory is still /usr/afs/bin, which houses the fs and vos command suite binaries you use to create and mount volumes. In the following commands, it is possible you still need to specify the pathname to the commands, depending on how your PATH environment variable is set.
Issue the vos create command to create a volume for storing the AFS documentation. Include the -maxquota argument to set an unlimited quota on the volume. This enables you to copy all of the appropriate files from the distribution into the volume without exceeding the volume's quota.

If you wish, you can set the volume's quota to a finite value after you complete the copying operations. At that point, use the vos examine command to determine how much space the volume is occupying. Then issue the fs setquota command to set a quota that is slightly larger.

# vos create <machine name> <partition name> afsdoc -maxquota 0
Issue the fs mkmount command to mount the new volume. Because the root.cell volume is replicated, you must precede the cellname with a period to specify the read/write mount point, as shown. Then issue the vos release command to release a new replica of the root.cell volume, and the fs checkvolumes command to force the local Cache Manager to access them.

# fs mkmount -dir /afs/.cellname/afsdoc -vol afsdoc
# vos release root.cell
# fs checkvolumes
Issue the fs setacl command to grant the rl permissions to the system:anyuser group on the new directory's ACL.

# cd /afs/.cellname/afsdoc
# fs setacl . system:anyuser rl
Unpack the OpenAFS documentation distribution into the /tmp/afsdocs directory. You may use a different directory, in which case the location you use should be substituted in the following examples. For instructions on unpacking the distribution, consult the documentation for your operating system's tar command.
Copy the AFS documents in one or more formats from the unpacked distribution into subdirectories of the /afs/cellname/afsdoc directory. Repeat the commands for each format.

# mkdir format_name
# cd format_name
# cp -rp /tmp/afsdocs/format_name .
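For instance, assuming the unpacked documentation includes an html directory, the sequence for that format is:

# mkdir html
# cd html
# cp -rp /tmp/afsdocs/html .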
If you choose to store the HTML version of the documents in AFS, note that in addition to a subdirectory for each document there are several files with a .gif extension, which enable readers to move easily between sections of a document. The file called index.htm is an introductory HTML page that contains a hyperlink to each of the documents. For online viewing to work properly, these files must remain in the top-level HTML directory (the one named, for example, /afs/cellname/afsdoc/html).
(Optional) If you believe it is helpful to your users to access the AFS documents in a certain format via a local disk directory, create /usr/afsdoc on the local disk as a symbolic link to the documentation directory in AFS (/afs/cellname/afsdoc/format_name).

# ln -s /afs/cellname/afsdoc/format_name /usr/afsdoc

An alternative is to create a link in each user's home directory to the /afs/cellname/afsdoc/format_name directory.
Storing System Binaries in AFS

You can also choose to store other system binaries in AFS volumes, such as the standard UNIX programs conventionally located in local disk directories such as /etc, /bin, and /lib. Storing such binaries in an AFS volume not only frees local disk space, but makes it easier to update binaries on all client machines.

The following is a suggested scheme for storing system binaries in AFS. It does not include instructions, but you can use the instructions in Storing AFS Binaries in AFS (which are for AFS-specific binaries) as a template.

Some files must remain on the local disk for use when AFS is inaccessible (during bootup and file server or network outages). The required binaries include the following:

A text editor, network commands, and so on
Files used during the boot sequence before the afsd program runs, such as initialization and configuration files, and binaries for commands that mount file systems
Files used by dynamic kernel loader programs
In most cases, it is more secure to enable only locally authenticated users to access system binaries, by granting the l (lookup) and r (read) permissions to the system:authuser group on the ACLs of directories that contain the binaries. If users need to access a binary while unauthenticated, however, the ACL on its directory must grant those permissions to the system:anyuser group.

The following chart summarizes the suggested volume and mount point names for storing system binaries. It uses a separate volume for each directory. You already created a volume called sysname for this machine's system type when you followed the instructions in Storing AFS Binaries in AFS.

You can name volumes in any way you wish, and mount them at other locations than those suggested here. However, this scheme has several advantages:

Volume names clearly identify volume contents
Using the sysname prefix on every volume makes it easy to back up all of the volumes together, because the AFS Backup System enables you to define sets of volumes based on a string included in all of their names
It makes it easy to track related volumes, keeping them together on the same file server machine if desired
There is a clear relationship between volume name and mount point name

Volume Name          Mount Point
sysname              /afs/cellname/sysname
sysname.bin          /afs/cellname/sysname/bin
sysname.etc          /afs/cellname/sysname/etc
sysname.usr          /afs/cellname/sysname/usr
sysname.usr.afsws    /afs/cellname/sysname/usr/afsws
sysname.usr.bin      /afs/cellname/sysname/usr/bin
sysname.usr.etc      /afs/cellname/sysname/usr/etc
sysname.usr.inc      /afs/cellname/sysname/usr/include
sysname.usr.lib      /afs/cellname/sysname/usr/lib
sysname.usr.loc      /afs/cellname/sysname/usr/local
sysname.usr.man      /afs/cellname/sysname/usr/man
sysname.usr.sys      /afs/cellname/sysname/usr/sys
Enabling Access to Foreign Cells

With current OpenAFS releases, there exist a number of mechanisms for providing access to foreign cells. You may add mount points in your AFS filespace for each foreign cell you wish users to access, or you can enable a 'synthetic' AFS root, which contains mountpoints for either all AFS cells defined in the client machine's local /usr/vice/etc/CellServDB, or for all cells providing location information in the DNS.
Enabling a Synthetic AFS Root

When a synthetic root is enabled, the client machine creates its own root.afs volume, rather than using the one provided with your cell. This allows clients to access all cells in the CellServDB file and, optionally, all cells registered in the DNS, without requiring system administrator action to enable this access. Using a synthetic root has the additional advantage that it allows a client to start its AFS service without a network available, as it is no longer necessary to contact a fileserver to obtain the root volume.

OpenAFS supports two complementary mechanisms for creating the synthetic root. Starting the cache manager with the -dynroot option adds all cells listed in /usr/vice/etc/CellServDB to the client's AFS root. Adding the -afsdb option in addition to this enables DNS lookups for any cells that are not found in the client's CellServDB file. Both of these options are added to the AFS initialization script, or options file, as detailed in Configuring the Cache Manager.
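As a sketch only: on Linux systems whose packages read cache manager options from an /etc/sysconfig/openafs file (the file name and variable name vary by platform and are assumptions here), enabling both mechanisms can look like the following:

AFSD_ARGS="-dynroot -afsdb"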
Adding foreign cells to a conventional root volume

In this section you create a mount point in your AFS filespace for the root.cell volume of each foreign cell that you want to enable your users to access. For users working on a client machine to access the cell, there must in addition be an entry for it in the client machine's local /usr/vice/etc/CellServDB file. (The instructions in Creating the Client CellServDB File suggest that you use the CellServDB.sample file included in the AFS distribution as the basis for your cell's client CellServDB file. The sample file lists all of the cells that had agreed to participate in the AFS global namespace at the time your AFS CD-ROM was created. As mentioned in that section, the AFS Product Support group also maintains a copy of the file, updating it as necessary.)
The chapter in the OpenAFS Administration Guide about cell administration and configuration issues discusses the implications of participating in the global AFS namespace. The chapter about administering client machines explains how to maintain knowledge of foreign cells on client machines, and includes suggestions for maintaining a central version of the file in AFS.

Issue the fs mkmount command to mount each foreign cell's root.cell volume on a directory called /afs/foreign_cell. Because the root.afs volume is replicated, you must create a temporary mount point for its read/write version in a directory to which you have write access (such as your cell's /afs/.cellname directory). Create the mount points, issue the vos release command to release new replicas to the read-only sites for the root.afs volume, and issue the fs checkvolumes command to force the local Cache Manager to access the new replica.

You need to issue the fs mkmount command only once for each foreign cell's root.cell volume. You do not need to repeat the command on each client machine.

Substitute your cell's name for cellname.

# cd /afs/.cellname
# /usr/afs/bin/fs mkmount temp root.afs
Repeat the fs mkmount command for each foreign cell you wish to mount at this time.

# /usr/afs/bin/fs mkmount temp/foreign_cell root.cell -c foreign_cell
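For example, to mount the root.cell volume of the grand.central.org cell (the cell mentioned below in connection with registering your own cell):

# /usr/afs/bin/fs mkmount temp/grand.central.org root.cell -c grand.central.org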
Issue the following commands only once.

# /usr/afs/bin/fs rmmount temp
# /usr/afs/bin/vos release root.afs
# /usr/afs/bin/fs checkvolumes
If this machine is going to remain an AFS client after you complete the installation, verify that the local /usr/vice/etc/CellServDB file includes an entry for each foreign cell.

For each cell that does not already have an entry, complete the following instructions:

Create an entry in the CellServDB file. Be sure to comply with the formatting instructions in Creating the Client CellServDB File.

Issue the fs newcell command to add an entry for the cell directly to the list that the Cache Manager maintains in kernel memory. Provide each database server machine's fully qualified hostname.

# /usr/afs/bin/fs newcell <foreign_cell> <dbserver1> \
  [<dbserver2>] [<dbserver3>]
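For instance, for a hypothetical foreign cell example.org served by two database server machines:

# /usr/afs/bin/fs newcell example.org db1.example.org db2.example.org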
If you plan to maintain a central version of the CellServDB file (the conventional location is /afs/cellname/common/etc/CellServDB), create it now as a copy of the local /usr/vice/etc/CellServDB file. Verify that it includes an entry for each foreign cell you want your users to be able to access.

# mkdir common
# mkdir common/etc
# cp /usr/vice/etc/CellServDB common/etc
# /usr/afs/bin/vos release root.cell
Issue the ls command to verify that the new cell's mount point is visible in your filespace. The output lists the directories at the top level of the new cell's AFS filespace.

# ls /afs/foreign_cell
If you wish to participate in the global AFS namespace, and only intend to run one database server, please register your cell with grand.central.org at this time. To do so, email the CellServDB fragment describing your cell, together with a contact name and email address for any queries, to cellservdb@grand.central.org. If you intend to deploy multiple database servers, please wait until you have installed all of them before registering your cell.

If you wish to allow your cell to be located through DNS lookups, you should also add the necessary configuration to your DNS at this time. AFS database servers may be located by creating AFSDB records in the DNS for the domain name corresponding to the name of your cell. An in-depth description of managing or configuring your site's DNS is outside the scope of this guide; consult the documentation for your DNS server for further details on AFSDB records.
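As a sketch, an AFSDB record in a BIND-style zone file for a hypothetical cell example.org whose database server is db1.example.org looks like the following (the subtype value 1 designates an AFS version 3 cell database server):

example.org.    3600    IN    AFSDB    1    db1.example.org.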
Improving Cell Security

This section discusses ways to improve the security of AFS data in your cell. Also see the chapter in the OpenAFS Administration Guide about configuration and administration issues.

Controlling root Access

As on any machine, it is important to prevent unauthorized users from logging onto an AFS server or client machine as the local superuser root. Take care to keep the root password secret.

The local root superuser does not have special access to AFS data through the Cache Manager (as members of the system:administrators group do), but it does have the following privileges:

On client machines, the ability to issue commands from the fs suite that affect AFS performance
On server machines, the ability to disable authorization checking, or to install rogue process binaries

Controlling System Administrator Access

Following are suggestions for managing AFS administrative privilege:
Create an administrative account for each administrator, named something like username.admin. Administrators authenticate under these identities only when performing administrative tasks, and destroy the administrative tokens immediately after finishing the task (either by issuing the unlog command, or the kinit and aklog commands to adopt their regular identity). See the sketch after this list.

Set a short ticket lifetime for administrator accounts (for example, 20 minutes) by using the facilities of your KDC. For instance, with an MIT Kerberos KDC, this can be done with the -maxlife argument to the kadmin modify_principal command. Do not, however, use a short lifetime for users who issue long-running backup commands.

Limit the number of system administrators in your cell, especially those who belong to the system:administrators group. By default they have all ACL rights on all directories in the local AFS filespace, and therefore must be trusted not to examine private files.

Limit the use of system administrator accounts on machines in public areas. It is especially important not to leave such machines unattended without first destroying the administrative tokens.

Limit the use by administrators of standard UNIX commands that make connections to remote machines (such as the telnet utility). Many of these programs send passwords across the network without encrypting them.
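The following sketch illustrates the recommended token handling for a hypothetical administrator smith, whose administrative principal is smith.admin:

# kinit smith.admin
# aklog
   (perform the administrative task)
# unlog
# kinit smith
# aklog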
Protecting Sensitive AFS Directories

Some subdirectories of the /usr/afs directory contain files crucial to cell security. Unauthorized users must not read or write to these files because of the potential for misuse of the information they contain.

As the BOS Server initializes for the first time on a server machine, it creates several files and directories (as mentioned in Starting the BOS Server). It sets their owner to the local superuser root and sets their mode bits to enable writing by the owner only; in some cases, it also restricts reading.

At each subsequent restart, the BOS Server checks that the owner and mode bits on these files are still set appropriately. If they are not, it writes the following message to the /usr/afs/logs/BosLog file:

Bosserver reports inappropriate access on server directories

The BOS Server does not reset the mode bits, which enables you to set alternate values if you wish.

The following chart lists the expected mode bit settings. A question mark indicates that the BOS Server does not check that mode bit.

/usr/afs                  drwxr?xr-x
/usr/afs/backup           drwx???---
/usr/afs/bin              drwxr?xr-x
/usr/afs/db               drwx???---
/usr/afs/etc              drwxr?xr-x
/usr/afs/etc/KeyFile      -rw????---
/usr/afs/etc/UserList     -rw?????--
/usr/afs/local            drwx???---
/usr/afs/logs             drwxr?xr-x
Removing Client Functionality

Follow the instructions in this section only if you do not wish this machine to remain an AFS client. Removing client functionality means that you cannot use this machine to access AFS files.

Remove the files from the /usr/vice/etc directory. The command does not remove the directory for files used by the dynamic kernel loader program, if it exists on this system type. Those files are still needed on a server-only machine.

# cd /usr/vice/etc
# rm *
# rm -rf C
Create symbolic links to the ThisCell and CellServDB files in the /usr/afs/etc directory. This makes it possible to issue commands from the AFS command suites (such as bos and fs) on this machine.

# ln -s /usr/afs/etc/ThisCell ThisCell
# ln -s /usr/afs/etc/CellServDB CellServDB
On IRIX systems, issue the chkconfig command to deactivate the afsclient configuration variable.

# /etc/chkconfig -f afsclient off

Reboot the machine. Most system types use the shutdown command, but the appropriate options vary.

# cd /
# shutdown appropriate_options