Installing Additional Client Machines

This chapter describes how to install AFS client machines after you have installed the first AFS machine. Some parts of the installation differ depending on whether or not the new client is of the same AFS system type (uses the same AFS binaries) as a previously installed client machine.

Summary of Procedures

- Incorporate AFS into the machine's kernel
- Define the machine's cell membership
- Define cache location and size
- Create the /usr/vice/etc/CellServDB file, which determines which foreign cells the client can access in addition to the local cell
- Create the /afs directory and start the Cache Manager
- Create and mount volumes for housing AFS client binaries (necessary only for clients of a new system type)
- Create a link from the local /usr/afsws directory to the AFS directory housing the AFS client binaries
- Modify the machine's authentication system to enable AFS users to obtain tokens at login

Creating AFS Directories on the Local Disk

If you are not installing from a packaged distribution, create the /usr/vice/etc directory on the local disk to house client binaries and configuration files. Subsequent instructions copy files from the OpenAFS binary distribution into it. Create the /tmp/afsdist directory as a location to uncompress this distribution, if it does not already exist.
# mkdir /usr/vice
# mkdir /usr/vice/etc
# mkdir /tmp/afsdist

Performing Platform-Specific Procedures

Every AFS client machine's kernel must incorporate AFS modifications. Some system types use a dynamic kernel loader program, whereas on other system types you build AFS modifications into a static kernel. Some system types support both methods.

Also modify the machine's authentication system so that users obtain an AFS token as they log into the local file system.
Using AFS is simpler and more convenient for your users if you make the modifications on all client machines. Otherwise, users
must perform a two- or three-step login procedure (log in to the local system, obtain Kerberos credentials, and then issue the aklog
command). For further discussion of AFS authentication, see the chapter in the OpenAFS Administration Guide
about cell configuration and administration issues.

For convenience, the following sections group the two procedures by system type. Proceed to the appropriate section:

- Getting Started on AIX Systems
- Getting Started on HP-UX Systems
- Getting Started on IRIX Systems
- Getting Started on Linux Systems
- Getting Started on Solaris Systems

Getting Started on AIX Systems

In this section you load AFS into the AIX kernel, then incorporate AFS modifications into the machine's secondary authentication system if you wish to enable AFS login.

Loading AFS into the AIX Kernel

The AIX kernel extension facility is the dynamic kernel loader provided by IBM Corporation. AIX does not support
incorporation of AFS modifications during a kernel build.For AFS to function correctly, the kernel extension facility must run each time the machine reboots, so the AFS
initialization script (included in the AFS distribution) invokes it automatically. In this section you copy the script to the
conventional location and edit it to select the appropriate options depending on whether NFS is also to run.After editing the script, you run it to incorporate AFS into the kernel. In a later section you verify that the script
correctly initializes the Cache Manager, then configure the AIX inittab file so that the script runs automatically at reboot.

Unpack the distribution tarball. The examples below assume
that you have unpacked the files into the
/tmp/afsdist directory. If you
pick a different location, substitute this in all of the following
examples. Once you have unpacked the distribution,
change directory as indicated.
# cd /tmp/afsdist/rs_aix42/root.client/usr/vice/etc

Copy the AFS kernel library files to the local /usr/vice/etc/dkload directory, and the AFS initialization script to the /etc directory.
# cp -rp dkload /usr/vice/etc
# cp -p rc.afs /etc/rc.afs

Edit the /etc/rc.afs script, setting the NFS variable as indicated.

If the machine is not to function as an NFS/AFS Translator, set the NFS variable as follows.
NFS=$NFS_NONE
If the machine is to function as an NFS/AFS Translator and is running AIX 4.2.1 or higher, set the
NFS variable as follows. Note that NFS must already be loaded into the kernel, which
happens automatically on systems running AIX 4.1.1 and later, as long as the file /etc/exports exists.
NFS=$NFS_IAUTH
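If you script client setup, this edit can be automated. The following sketch works on a scratch copy rather than the real /etc/rc.afs, so it is safe to try anywhere:

```shell
# Sketch: switch the NFS setting in a copy of rc.afs.
# A temporary file stands in for /etc/rc.afs.
rc=$(mktemp)
printf 'NFS=$NFS_NONE\n' > "$rc"
# Rewrite the NFS= line; redirect to the real file once verified.
sed 's/^NFS=.*/NFS=$NFS_IAUTH/' "$rc"
rm -f "$rc"
```

The sed command prints the edited script on standard output; inspect the result before overwriting /etc/rc.afs itself.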
Invoke the /etc/rc.afs script to load AFS modifications into the kernel. You can
ignore any error messages about the inability to start the BOS Server or the Cache Manager or AFS client.
# /etc/rc.afs

Enabling AFS Login on AIX Systems

In modern AFS installations, you should be using Kerberos v5
for user login, and obtaining AFS tokens following this authentication
step.

There are currently no instructions available on configuring AIX to obtain AFS tokens automatically at login. Following login, users can obtain tokens by running the aklog command.

Sites which still require kaserver or external Kerberos v4 authentication should consult Enabling kaserver based AFS Login on AIX Systems for details of how to enable AIX login.

Proceed to Loading and Creating Client Files.

Getting Started on HP-UX Systems

In this section you build AFS into the HP-UX kernel, then incorporate AFS modifications into the machine's Pluggable
Authentication Module (PAM) system, if you wish to enable AFS login.

Building AFS into the HP-UX Kernel

On HP-UX systems, you must build AFS modifications into a new static kernel; HP-UX does not support dynamic loading. If
the machine's hardware and software configuration exactly matches another HP-UX machine on which AFS is already built into the
kernel, you can choose to copy the kernel from that machine to this one. In general, however, it is better to build AFS
modifications into the kernel on each machine according to the following instructions.

Move the existing kernel-related files to a safe location.
# cp /stand/vmunix /stand/vmunix.noafs
# cp /stand/system /stand/system.noafs

Unpack the OpenAFS HP-UX distribution tarball. The examples
below assume that you have unpacked the files into the
/tmp/afsdist directory. If you
pick a different location, substitute this in all of the following
examples. Once you have unpacked the distribution, change directory
as indicated.
# cd /tmp/afsdist/hp_ux110/root.client

Copy the AFS initialization file to the local directory for initialization files (by convention, /sbin/init.d on HP-UX machines). Note the removal of the .rc extension as you copy the file.

# cp usr/vice/etc/afs.rc /sbin/init.d/afs

Copy the file afs.driver to the local /usr/conf/master.d directory, changing its name to afs as you do.

# cp usr/vice/etc/afs.driver /usr/conf/master.d/afs

Copy the AFS kernel module to the local /usr/conf/lib directory.

If the machine's kernel supports NFS server functionality:
# cp bin/libafs.a /usr/conf/lib

If the machine's kernel does not support NFS server functionality, change the file's name as you copy it:

# cp bin/libafs.nonfs.a /usr/conf/lib/libafs.a

Incorporate the AFS driver into the kernel, either using the SAM program or a series of individual commands.

To use the SAM program:

- Invoke the SAM program, specifying the hostname of the local machine as local_hostname. The SAM graphical user interface pops up.

# sam -display local_hostname:0

- Choose the Kernel Configuration icon, then the Drivers icon. From the list of drivers, select afs.
- Open the pull-down Actions menu and choose the Add Driver to Kernel option.
- Open the Actions menu again and choose the Create a New Kernel option.
- Confirm your choices by choosing Yes and OK when prompted by subsequent pop-up windows. The SAM program builds the kernel and reboots the system.
- Log in again as the superuser root.

login: root
Password: root_password

To use individual commands:

- Edit the file /stand/system, adding an entry for afs to the Subsystems section.
- Change to the /stand/build directory and issue the mk_kernel command to build the kernel.
# cd /stand/build
# mk_kernel

Move the new kernel to the standard location (/stand/vmunix), reboot
the machine to start using it, and login again as the superuser root.
# mv /stand/build/vmunix_test /stand/vmunix
# cd /
# shutdown -r now
login: root
Password: root_password

Enabling AFS Login on HP-UX Systems

At this point you incorporate AFS into the operating system's Pluggable Authentication Module (PAM) scheme. PAM
integrates all authentication mechanisms on the machine, including login, to provide the security infrastructure for
authenticated access to and from the machine.

In modern AFS installations, you should be using Kerberos v5 for user login, and obtaining AFS tokens subsequent to this authentication step. OpenAFS does not currently distribute a PAM module allowing AFS tokens to be obtained automatically at login. While a number of third-party modules provide this functionality, it is not known whether they have been tested with HP-UX.

Following login, users can obtain tokens by running the aklog command.

If you are at a site which still requires kaserver or external Kerberos v4 based authentication, please consult Enabling kaserver based AFS Login on HP-UX Systems for further installation instructions.

Proceed to Loading and Creating Client Files.

Getting Started on IRIX Systems

In this section you incorporate AFS into the IRIX kernel, choosing one of two methods:

- Dynamic loading using the ml program distributed by Silicon Graphics, Incorporated (SGI).
- Building a new static kernel.

Then see Enabling AFS Login on IRIX Systems to read about integrated AFS login on IRIX systems.

In preparation for either dynamic loading or kernel building, perform the following procedures.

Unpack the OpenAFS IRIX distribution tarball. The examples
below assume that you have unpacked the files into the
/tmp/afsdist directory. If you
pick a different location, substitute this in all of the following
examples. Once you have unpacked the distribution, change directory
as indicated.
# cd /tmp/afsdist/sgi_65/root.client

Copy the AFS initialization script to the local directory for initialization files (by convention, /etc/init.d on IRIX machines). Note the removal of the .rc extension as you copy the script.

# cp -p usr/vice/etc/afs.rc /etc/init.d/afs

Issue the uname -m command to determine the machine's CPU board type. The IPxx value in the output must match one of the supported CPU board types listed in the OpenAFS Release Notes for the current version of AFS.

# uname -m

Proceed to either Loading AFS into the IRIX Kernel or Building AFS into the IRIX Kernel.

Loading AFS into the IRIX Kernel

The ml program is the dynamic kernel loader provided by SGI for IRIX systems. If you
use it rather than building AFS modifications into a static kernel, then for AFS to function correctly the ml program must run each time the machine reboots. Therefore, the AFS initialization script (included
in the OpenAFS Binary Distribution) invokes it automatically when the afsml configuration variable is
activated. In this section you activate the variable and run the script.

In a later section you verify that the script correctly initializes the Cache Manager, then create the links that incorporate AFS into the IRIX startup and shutdown sequence.

Create the local /usr/vice/etc/sgiload directory to house the AFS kernel library file.

# mkdir /usr/vice/etc/sgiload

Copy the appropriate AFS kernel library file to the /usr/vice/etc/sgiload
directory. The IPxx portion of the library file name must
match the value previously returned by the uname -m command. Also choose the file
appropriate to whether the machine's kernel supports NFS server functionality (NFS must be supported for the machine to
act as an NFS/AFS Translator). Single- and multiprocessor machines use the same library file. (You can choose to copy all of the kernel library files into the /usr/vice/etc/sgiload directory, but they require a significant amount of space.)

If the machine's kernel supports NFS server functionality:

# cp -p usr/vice/etc/sgiload/libafs.IPxx.o /usr/vice/etc/sgiload

If the machine's kernel does not support NFS server functionality:
# cp -p usr/vice/etc/sgiload/libafs.IPxx.nonfs.o \
/usr/vice/etc/sgiload

Issue the chkconfig command to activate the afsml configuration variable.

# /etc/chkconfig -f afsml on

If the machine is to function as an NFS/AFS Translator and the kernel supports NFS server functionality, activate
the afsxnfs variable.
# /etc/chkconfig -f afsxnfs on

Run the /etc/init.d/afs script to load AFS extensions into the kernel. The script
invokes the ml command, automatically determining which kernel library file to use
based on this machine's CPU type and the activation state of the afsxnfs
variable.

You can ignore any error messages about the inability to start the BOS Server or the Cache Manager or AFS
client.
# /etc/init.d/afs start

Proceed to Enabling AFS Login on IRIX Systems.

Building AFS into the IRIX Kernel

If you prefer to build a kernel, and the machine's hardware and software configuration exactly matches another IRIX
machine on which AFS is already built into the kernel, you can choose to copy the kernel from that machine to this one. In
general, however, it is better to build AFS modifications into the kernel on each machine according to the following
instructions.

Copy the kernel initialization file afs.sm to the local /var/sysgen/system directory, and the kernel master file afs to
the local /var/sysgen/master.d directory.
# cp -p bin/afs.sm /var/sysgen/system
# cp -p bin/afs /var/sysgen/master.d

Copy the appropriate AFS kernel library file to the local file /var/sysgen/boot/afs.a; the IPxx
portion of the library file name must match the value previously returned by the uname
-m command. Also choose the file appropriate to whether the machine's kernel supports NFS server
functionality (NFS must be supported for the machine to act as an NFS/AFS Translator). Single- and multiprocessor
machines use the same library file.

If the machine's kernel supports NFS server functionality:

# cp -p bin/libafs.IPxx.a /var/sysgen/boot/afs.a

If the machine's kernel does not support NFS server functionality:

# cp -p bin/libafs.IPxx.nonfs.a /var/sysgen/boot/afs.a

Issue the chkconfig command to deactivate the afsml configuration variable.

# /etc/chkconfig -f afsml off

If the machine is to function as an NFS/AFS Translator and the kernel supports NFS server functionality, activate
the afsxnfs variable.
# /etc/chkconfig -f afsxnfs on

Copy the existing kernel file, /unix, to a safe location. Compile the new kernel,
which is created in the file /unix.install. It overwrites the existing /unix file when the machine reboots in the next step.
# cp /unix /unix_noafs
# autoconfig

Reboot the machine to start using the new kernel, and log in again as the superuser root.
# cd /
# shutdown -i6 -g0 -y
login: root
Password: root_password

Proceed to Enabling AFS Login on IRIX Systems.

Enabling AFS Login on IRIX Systems

While the standard IRIX command-line login program and the graphical xdm login program both have the ability to grant AFS tokens, this ability relies upon the deprecated kaserver authentication system. As that system is not recommended for new installations, it is not documented here.

Users who have been successfully authenticated via Kerberos 5 authentication may obtain AFS tokens following login by running the aklog command.

If you are at a site which still requires kaserver or external Kerberos v4 based authentication, please consult Enabling kaserver based AFS Login on IRIX Systems for further installation instructions.

Proceed to Loading and Creating Client Files.

Getting Started on Linux Systems

In this section you load AFS into the Linux kernel. Then incorporate AFS modifications into the machine's Pluggable
Authentication Module (PAM) system, if you wish to enable AFS login.

Loading AFS into the Linux Kernel

The modprobe program is the dynamic kernel loader for Linux. Linux does not support
incorporation of AFS modifications during a kernel build.

For AFS to function correctly, the modprobe program must run each time the machine reboots, so your distribution's AFS initialization script invokes it automatically. The script also includes commands that select the appropriate AFS library file automatically. In this section you run the script.

In a later section you also verify that the script correctly initializes the Cache Manager, then activate a configuration variable, which results in the script being incorporated into the Linux startup and shutdown sequence.

The procedure for starting up OpenAFS depends upon your distribution.

Fedora and Red Hat Enterprise Linux

OpenAFS ships RPMs for all current Fedora and RHEL releases.
Download and install the RPM set for your operating system.
RPMs are available from the OpenAFS web site. You will need the openafs and openafs-client packages, along with an openafs-kernel package matching your current, running kernel.

You can find the version of your current kernel by running

# uname -r
2.6.20-1.2933.fc6

Once downloaded, the packages may be installed with the rpm command
# rpm -U openafs-* openafs-client-* openafs-server-* openafs-kernel-*
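A common pitfall is installing an openafs-kernel package built for a different kernel than the one running. The sketch below compares the two version strings; the package file name shown is hypothetical, and package naming conventions vary between releases (the kernel portion of the package name typically replaces - with _):

```shell
# Sketch: check that the running kernel release appears in the
# openafs-kernel package file name (hypothetical names throughout).
kernel="2.6.20-1.2933.fc6"                            # stand-in for: $(uname -r)
pkg="openafs-kernel-1.4.4-2.6.20_1.2933.fc6.i686.rpm" # hypothetical file name
want=$(printf '%s\n' "$kernel" | sed 's/-/_/g')       # 2.6.20_1.2933.fc6
case "$pkg" in
  *"$want"*) echo "kernel module matches" ;;
  *)         echo "kernel module MISMATCH" ;;
esac
```

If the strings do not match, the module will refuse to load at boot; download the package built for your exact kernel instead.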
Systems packaged as tar files

If you are running a system where the OpenAFS Binary Distribution is provided as a tar file, or where you have built the system from source yourself, you need to install the relevant components by hand.

Unpack the distribution tarball. The examples below assume that you have unpacked the files into the /tmp/afsdist directory. If you pick a different location, substitute this in all of the following examples. Once you have unpacked the distribution, change directory as indicated.

# cd /tmp/afsdist/linux/root.client/usr/vice/etc

Copy the AFS kernel library files to the local /usr/vice/etc/modload directory.
The filenames for the libraries have the format libafs-version.o, where
version indicates the kernel build level. The string .mp in
the version indicates that the file is appropriate for machines running a multiprocessor
kernel.
# cp -rp modload /usr/vice/etc

Copy the AFS initialization script to the local directory for initialization files (by convention, /etc/rc.d/init.d on Linux machines). Note the removal of the .rc
extension as you copy the script.
# cp -p afs.rc /etc/rc.d/init.d/afs

Enabling AFS Login on Linux Systems

At this point you incorporate AFS into the operating system's Pluggable Authentication Module (PAM) scheme. PAM
integrates all authentication mechanisms on the machine, including login, to provide the security infrastructure for
authenticated access to and from the machine.

At this time, we recommend that new sites requiring AFS credentials to be obtained as part of PAM authentication use Russ Allbery's pam_afs_session, rather than utilising the bundled pam_afs2 module.
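For example, a PAM configuration using these modules might look like the following fragment; the module names and their placement are illustrative only, and file locations vary by distribution:

```
auth     sufficient   pam_krb5.so
auth     required     pam_unix.so
session  optional     pam_afs_session.so
```

Consult your distribution's PAM documentation before editing the stack, since a mistake can lock all users out of the machine.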
A typical PAM stack should authenticate the user using an external Kerberos v5 service, and then use the AFS PAM module to obtain AFS credentials in the session section.

If you are at a site which still requires kaserver or external Kerberos v4 based authentication, please consult Enabling kaserver based AFS Login on Linux Systems for further installation instructions.

Proceed to Loading and Creating Client Files.

Getting Started on Solaris Systems

In this section you load AFS into the Solaris kernel. Then incorporate AFS modifications into the machine's Pluggable
Authentication Module (PAM) system, if you wish to enable AFS login.

Loading AFS into the Solaris Kernel

The modload program is the dynamic kernel loader provided by Sun Microsystems for Solaris systems. Solaris does not support incorporation of AFS modifications during a kernel build.

For AFS to function correctly, the modload program must run each time the machine reboots, so the AFS initialization script (included in the AFS distribution) invokes it automatically. In this section you copy the appropriate AFS library file to the location where the modload program accesses it and then run the script.

In a later section you verify that the script correctly initializes the Cache Manager, then create the links that incorporate AFS into the Solaris startup and shutdown sequence.

Unpack the OpenAFS Solaris distribution tarball. The examples
below assume that you have unpacked the files into the
/tmp/afsdist directory. If you
pick a different location, substitute this in all of the following
examples. Once you have unpacked the distribution, change directory
as indicated.
# cd /tmp/afsdist/sun4x_56/root.client/usr/vice/etc

Copy the AFS initialization script to the local directory for initialization files (by convention, /etc/init.d on Solaris machines). Note the removal of the .rc
extension as you copy the script.
# cp -p afs.rc /etc/init.d/afs

Copy the appropriate AFS kernel library file to the local file /kernel/fs/afs.

If the machine is running Solaris 2.6 or the 32-bit version of Solaris 7, its kernel supports NFS server
functionality, and the nfsd process is running:
# cp -p modload/libafs.o /kernel/fs/afs

If the machine is running Solaris 2.6 or the 32-bit version of Solaris 7, and its kernel does not support NFS
server functionality or the nfsd process is not running:
# cp -p modload/libafs.nonfs.o /kernel/fs/afs

If the machine is running the 64-bit version of Solaris 7, its kernel supports NFS server functionality, and the
nfsd process is running:
# cp -p modload/libafs64.o /kernel/fs/sparcv9/afs

If the machine is running the 64-bit version of Solaris 7, and its kernel does not support NFS server
functionality or the nfsd process is not running:
# cp -p modload/libafs64.nonfs.o /kernel/fs/sparcv9/afs

Run the AFS initialization script to load AFS modifications into the kernel. You can ignore any error messages
about the inability to start the BOS Server or the Cache Manager or AFS client.
# /etc/init.d/afs start

When an entry called afs does not already exist in the local /etc/name_to_sysnum file, the script automatically creates it and reboots the machine to start
using the new version of the file. If this happens, log in again as the superuser root
after the reboot and run the initialization script again. This time the required entry exists in the /etc/name_to_sysnum file, and the modload program runs.
login: root
Password: root_password
# /etc/init.d/afs start

Enabling AFS Login on Solaris Systems

At this point you incorporate AFS into the operating system's Pluggable Authentication Module (PAM) scheme. PAM
integrates all authentication mechanisms on the machine, including login, to provide the security infrastructure for
authenticated access to and from the machine.

In modern AFS installations, you should be using Kerberos v5 for user login, and obtaining AFS tokens subsequent to this authentication step. OpenAFS does not currently distribute a PAM module allowing AFS tokens to be obtained automatically at login, but several third-party modules, such as pam-krb5 and pam-afs-session from http://www.eyrie.org/~eagle/software/ or pam_afs2 from ftp://achilles.ctd.anl.gov/pub/DEE/pam_afs2-0.1.tar, have been tested with Solaris.

If you are at a site which still requires
kaserver or external Kerberos v4 based
authentication, please consult
Enabling kaserver based AFS Login on Solaris Systems
for further installation instructions.

Editing the File Systems Clean-up Script on Solaris Systems

Some Solaris distributions include a script that locates
and removes unneeded files from various file systems. Its
conventional location is
/usr/lib/fs/nfs/nfsfind. The
script generally uses an argument to the
find command to define which file
systems to search. In this step you modify the
command to exclude the /afs
directory. Otherwise, the command traverses the AFS
filespace of every cell that is accessible from the machine, which can take many hours. The following alterations are
possibilities, but you must verify that they are appropriate for your cell.

The first possible alteration is to add the -local flag to the existing command,
so that it looks like the following:
find $dir -local -name .nfs\* -mtime +7 -mount -exec rm -f {} \;
Another alternative is to exclude any directories whose names begin with the lowercase letter a or a non-alphabetic character.
find /[A-Zb-z]* remainder of existing command

Do not use the following command, which still searches under the /afs directory,
looking for a subdirectory of type 4.2.
find / -fstype 4.2 /* do not use */
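The bracket-pattern alternative can be checked quickly with shell globbing before you edit nfsfind; this sketch uses a hypothetical list of top-level directory names:

```shell
# Sketch: /[A-Zb-z]* matches names whose first character is any letter
# except lowercase "a", so /afs is skipped while /bin, /usr, etc. are
# still searched. Run in the C/POSIX locale so [A-Z] behaves as expected.
for d in /bin /afs /usr /Applications; do
  case "$d" in
    /[A-Zb-z]*) echo "searched $d" ;;
    *)          echo "skipped  $d" ;;
  esac
done
```

Note that the pattern also skips top-level names beginning with digits or punctuation, which is usually what you want for a cleanup script but is worth confirming for your layout.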
Proceed to Loading and Creating Client Files.

Loading and Creating Client Files

If you are using a non-packaged distribution (that is, one provided as
a tarball) you should now copy files from the distribution to the /usr/vice/etc directory. On some platforms that use a dynamic loader program to incorporate AFS modifications into the kernel, you have already copied over some of the files. Copying them again does no harm.

Every AFS client machine has a copy of the /usr/vice/etc/ThisCell file on its local disk
to define the machine's cell membership for the AFS client programs that run on it. Among other functions, this file determines
the following:

- The cell in which users authenticate when they log onto the machine, assuming it is using an AFS-modified login utility
- The cell in which users authenticate by default when they issue the aklog command
- The cell membership of the AFS server processes that the AFS command interpreters on this machine contact by default

Similarly, the /usr/vice/etc/CellServDB file on a client machine's local disk lists the
database server machines in each cell that the local Cache Manager can contact. If there is no entry in the file for a cell, or
the list of database server machines is wrong, then users working on this machine cannot access the cell. The chapter in the
OpenAFS Administration Guide about administering client machines explains how to maintain the file after
creating it. A version of the client CellServDB file was created during the installation of
your cell's first machine (in Creating the Client CellServDB File). It is probably also
appropriate for use on this machine.Remember that the Cache Manager consults the /usr/vice/etc/CellServDB file only at
reboot, when it copies the information into the kernel. For the Cache Manager to perform properly, the CellServDB file must be accurate at all times. Refer to the chapter in the OpenAFS
Administration Guide about administering client machines for instructions on updating this file, with or without
rebooting.

If you have not already done so, unpack the distribution
tarball for this machine's system type into a suitable location on
the filesystem, such as /tmp/afsdist.
If you use a different location, substitute that in the examples that follow.

Copy files to the local /usr/vice/etc directory. This step places a copy of the AFS initialization script (and related files, if applicable) into the /usr/vice/etc directory. In the preceding instructions for incorporating AFS into the kernel, you
copied the script directly to the operating system's conventional location for initialization files. When you incorporate
AFS into the machine's startup sequence in a later step, you can choose to link the two files.On some system types that use a dynamic kernel loader program, you previously copied AFS library files into a
subdirectory of the /usr/vice/etc directory. On other system types, you copied the
appropriate AFS library file directly to the directory where the operating system accesses it. The following commands do
not copy or recopy the AFS library files into the /usr/vice/etc directory, because on
some system types the library files consume a large amount of space. If you want to copy them, add the -r flag to the first cp command and skip the second cp command.
# cd /tmp/afsdist/sysname/root.client/usr/vice/etc
# cp -p * /usr/vice/etc
# cp -rp C /usr/vice/etc

Create the /usr/vice/etc/ThisCell file.

# echo "cellname" > /usr/vice/etc/ThisCell

Create the
/usr/vice/etc/CellServDB file. Use a
network file transfer program such as
sftp or
scp to copy it from one of the
following sources, which are listed in decreasing order of
preference:

- Your cell's central CellServDB source file (the conventional location is /afs/cellname/common/etc/CellServDB)
- The global CellServDB file maintained at grand.central.org
- An existing client machine in your cell
- The CellServDB.sample file included in the sysname/root.client/usr/vice/etc directory of each OpenAFS distribution; add an entry for the local cell by following the instructions in Creating the Client CellServDB File
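Whatever the source, each CellServDB entry has the same shape: a line naming the cell, preceded by the > character, followed by one line per database server machine giving its IP address. The cell name and addresses below are hypothetical:

```
>example.com            # Example Corporation cell
192.0.2.10              # db1.example.com
192.0.2.11              # db2.example.com
```

The text after # on the cell line is a comment describing the cell; on a server line it conventionally records the machine's hostname.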
Configuring the Cache

The Cache Manager uses a cache on the local disk or in machine memory to store local copies of files fetched from file
server machines. As the afsd program initializes the Cache Manager, it sets basic cache
configuration parameters according to definitions in the local /usr/vice/etc/cacheinfo file.
The file has three fields:

1. The first field names the local directory on which to mount the AFS filespace. The conventional location is the /afs directory.
2. The second field defines the local disk directory to use for the disk cache. The conventional location is the /usr/vice/cache directory, but you can specify an alternate directory if another partition has more space available. There must always be a value in this field, but the Cache Manager ignores it if the machine uses a memory cache.
3. The third field specifies the number of kilobyte (1024-byte) blocks to allocate for the cache.

The values you define must meet the following requirements.

On a machine using a disk cache, the Cache Manager expects always to be able to use the amount of space specified in
the third field. Failure to meet this requirement can cause serious problems, some of which can be repaired only by
rebooting. You must prevent non-AFS processes from filling up the cache partition. The simplest way is to devote a
partition to the cache exclusively.

The amount of space available in memory or on the partition housing the disk cache directory imposes an absolute limit on cache size. The maximum supported cache size can vary in each AFS release; see the OpenAFS Release Notes for the current version.

For a disk cache, you cannot specify a value in the third field that exceeds 95% of the space available on the
partition mounted at the directory named in the second field. If you violate this restriction, the afsd program exits without starting the Cache Manager and prints an appropriate message on the
standard output stream. A value of 90% is more appropriate on most machines. Some operating systems (such as AIX) do not
automatically reserve some space to prevent the partition from filling completely; for them, a smaller value (say, 80% to
85% of the space available) is more appropriate.

For a memory cache, you must leave enough memory for other processes and applications to run. If you try to allocate
more memory than is actually available, the afsd program exits without initializing the
Cache Manager and produces the following message on the standard output stream.
afsd: memCache allocation failure at number KB
The number value is how many kilobytes were allocated just before the failure, and so
indicates the approximate amount of memory available.

Within these hard limits, the factors that determine appropriate cache size include the number of users working on the
machine, the size of the files with which they work, and (for a memory cache) the number of processes that run on the machine.
The higher the demand from these factors, the larger the cache needs to be to maintain good performance.

Disk caches smaller than 10 MB do not generally perform well. Machines serving multiple users usually perform better with
a cache of at least 60 to 70 MB. The point at which enlarging the cache further does not really improve performance depends on
the factors mentioned previously and is difficult to predict.

Memory caches smaller than 1 MB are nonfunctional, and the performance of caches smaller than 5 MB is usually
unsatisfactory. Suitable upper limits are similar to those for disk caches but are probably determined more by the demands on
memory from other sources on the machine (number of users and processes). Machines running only a few processes possibly can use
a smaller memory cache.

Configuring a Disk Cache

Not all file system types that an operating system supports are necessarily supported for use as the cache partition.
For possible restrictions, see the OpenAFS Release Notes.

To configure the disk cache, perform the following procedures. Create the local directory to use for caching. The following instruction shows the conventional location,
/usr/vice/cache. If you are devoting a partition exclusively to caching, as
recommended, you must also configure it, make a file system on it, and mount it at the directory created in this step.
# mkdir /usr/vice/cache

Create the cacheinfo file to define the configuration parameters discussed
previously. The following instruction shows the standard mount location, /afs, and the
standard cache location, /usr/vice/cache.
# echo "/afs:/usr/vice/cache:#blocks" > /usr/vice/etc/cacheinfo

The following example defines the disk cache size as 50,000 KB:

# echo "/afs:/usr/vice/cache:50000" > /usr/vice/etc/cacheinfo

Configuring a Memory Cache

To configure a memory cache, create the cacheinfo file to define the configuration
parameters discussed previously. The following instruction shows the standard mount location, /afs, and the standard cache location, /usr/vice/cache (though the
exact value of the latter is irrelevant for a memory cache).
# echo "/afs:/usr/vice/cache:#blocks" > /usr/vice/etc/cacheinfo

The following example allocates 25,000 KB of memory for the cache.

# echo "/afs:/usr/vice/cache:25000" > /usr/vice/etc/cacheinfo

Configuring the Cache Manager

By convention, the Cache Manager mounts the AFS filespace on the local /afs directory. In
this section you create that directory.

The afsd program sets several cache configuration parameters as it initializes the Cache
Manager, and starts daemons that improve performance. You can use the afsd command's arguments
to override the parameters' default values and to change the number of some of the daemons. Depending on the machine's cache
size, its amount of RAM, and how many people work on it, you can sometimes improve Cache Manager performance by overriding the
default values. For a discussion of all of the afsd command's arguments, see its reference page
in the OpenAFS Administration Reference.

On platforms using the standard 'afs' initialization script (this does
not apply to Fedora or RHEL based distributions), the
afsd command line in the AFS
initialization script on each system type includes an
OPTIONS variable. You can use it to set
nondefault values for the command's arguments, in one
of the following ways.

You can create an afsd options file that sets values for
arguments to the afsd command. If the file exists, its contents are automatically
substituted for the OPTIONS variable in the AFS initialization script. The AFS
distribution for some system types includes an options file; on other system types, you must create it.

You use two variables in the AFS initialization script to specify the path to the options file:
CONFIG and AFSDOPT. On system types that define a
conventional directory for configuration files, the CONFIG variable indicates it by
default; otherwise, the variable indicates an appropriate location.

List the desired afsd options on a single line in the options file, separating each
option with one or more spaces. The following example sets the -stat argument to 2500,
the -daemons argument to 4, and the -volumes argument to
100.
-stat 2500 -daemons 4 -volumes 100
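The step above can be scripted. This is a minimal sketch; the OPTFILE path here is illustrative, so substitute the path that your AFS initialization script's CONFIG or AFSDOPT variable actually names (for example, /etc/sysconfig/afs on many Linux systems):

```shell
# Sketch: create an afsd options file on a single line.
# OPTFILE is a hypothetical location for this demonstration; use the path
# named by your initialization script's CONFIG or AFSDOPT variable instead.
OPTFILE=/tmp/afsd.options
echo "-stat 2500 -daemons 4 -volumes 100" > "$OPTFILE"
cat "$OPTFILE"
```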
On a machine that uses a disk cache, you can set the OPTIONS variable in the AFS
initialization script to one of $SMALL, $MEDIUM, or
$LARGE. The AFS initialization script uses one of these settings if the afsd options file named by the AFSDOPT variable does not exist. In
the script as distributed, the OPTIONS variable is set to the value
$MEDIUM.

Do not set the OPTIONS variable to $SMALL,
$MEDIUM, or $LARGE on a machine that uses a memory
cache. The arguments it sets are appropriate only on a machine that uses a disk cache.

The script (or on some system types the afsd options file named by the
AFSDOPT variable) defines a value for each of SMALL,
MEDIUM, and LARGE that sets afsd command arguments appropriately for client machines of different sizes:

- SMALL is suitable for a small machine that serves one or two users and has approximately 8 MB of RAM and a 20-MB cache
- MEDIUM is suitable for a medium-sized machine that serves two to six users and has 16 MB of RAM and a 40-MB cache
- LARGE is suitable for a large machine that serves five to ten users and has 32 MB of RAM and a 100-MB cache

You can choose not to create an afsd options file and to set the
OPTIONS variable in the initialization script to a null value rather than to the default
$MEDIUM value. You can then either set arguments directly on the afsd command line in the script, or set no arguments (and so accept default values for all Cache
Manager parameters).

If you are running on a Fedora or RHEL based system, the
openafs-client initialization script behaves differently from that
described above. It sources
/etc/sysconfig/openafs, in which the
AFSD_ARGS variable may be set to contain any, or all, of the afsd
options detailed above. Note that this script does not support setting
an OPTIONS variable, or the
SMALL,
MEDIUM and
LARGE methods of defining cache size.
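On such systems, the corresponding line in /etc/sysconfig/openafs might look like the following (the argument values shown are illustrative, not recommendations):

```
AFSD_ARGS="-stat 2500 -daemons 4 -volumes 100"
```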
Create the local directory on which to mount the AFS filespace, by convention /afs.
If the directory already exists, verify that it is empty.
# mkdir /afs

On AIX systems, add the following line to the /etc/vfs file. It enables AIX to
unmount AFS correctly during shutdown.
afs 4 none none
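If you script this step, a guard keeps the entry from being added twice on repeated runs. This sketch operates on a scratch copy of the file; on a real AIX machine you would point VFSFILE at /etc/vfs:

```shell
# Illustrative: append the AFS line to the vfs file only if it is absent.
VFSFILE=/tmp/vfs.copy                      # scratch copy; /etc/vfs on real AIX
printf 'nfs 2 none none\n' > "$VFSFILE"    # stand-in for existing contents
grep -q '^afs ' "$VFSFILE" || echo 'afs 4 none none' >> "$VFSFILE"
grep '^afs ' "$VFSFILE"
```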
On non-package based Linux systems, copy the afsd options file from the /usr/vice/etc directory to the /etc/sysconfig directory, removing
the .conf extension as you do so.
# cp /usr/vice/etc/afs.conf /etc/sysconfig/afs

Edit the machine's AFS initialization script or afsd options file to set
appropriate values for afsd command parameters. The appropriate file for each system type
is as follows:

- On AIX systems, /etc/rc.afs
- On HP-UX systems, /sbin/init.d/afs
- On IRIX systems, /etc/init.d/afs
- On Fedora and RHEL systems, /etc/sysconfig/openafs
- On Linux systems, /etc/sysconfig/afs (the afsd options file)
- On Solaris systems, /etc/init.d/afs

Use one of the methods described in the introduction to this section to add the following flags to the afsd command line. Also set any performance-related arguments you wish.

- Add the -memcache flag if the machine is to use a memory cache.
- Add the -verbose flag to display a trace of the Cache Manager's initialization on the standard output stream.

Starting the Cache Manager and Installing the AFS Initialization Script

In this section you run the AFS initialization script to start the Cache Manager. If the script works correctly, perform
the steps that incorporate it into the machine's startup and shutdown sequence. If there are problems during the initialization,
attempt to resolve them. The AFS Product Support group can provide assistance if necessary.

On machines that use a disk cache, it can take a while for the afsd program to run the
first time on a machine, because it must create all of the Vn files
in the cache directory. Subsequent Cache Manager initializations do not take nearly as long, because the Vn files already exist.

On system types that use a dynamic loader program, you must reboot the machine before running the initialization script,
so that it can freshly load AFS modifications into the kernel.

Proceed to the instructions for your system type:

- Running the Script on AIX Systems
- Running the Script on HP-UX Systems
- Running the Script on IRIX Systems
- Running the Script on Linux Systems
- Running the Script on Solaris Systems

Running the Script on AIX Systems

Reboot the machine and log in again as the local superuser root.
# cd /
# shutdown -r now
login: root
Password: root_password

Run the AFS initialization script.

# /etc/rc.afs

Edit the AIX initialization file, /etc/inittab, adding the following line to invoke
the AFS initialization script. Place it just after the line that starts NFS daemons.
rcafs:2:wait:/etc/rc.afs > /dev/console 2>&1 # Start AFS services
(Optional) There are now copies of the AFS initialization file in both the
/usr/vice/etc and /etc directories. If you want to avoid
potential confusion by guaranteeing that they are always the same, create a link between them. You can always retrieve the
original script from the AFS CD-ROM if necessary.
# cd /usr/vice/etc
# rm rc.afs
# ln -s /etc/rc.afs

If a volume for housing AFS binaries for this machine's system type does not already exist, proceed to Setting Up Volumes and Loading Binaries into AFS. Otherwise, the installation is complete.

Running the Script on HP-UX Systems

Run the AFS initialization script.
# /sbin/init.d/afs start

Change to the /sbin/init.d directory and issue the ln
-s command to create symbolic links that incorporate the AFS initialization script into the HP-UX startup and
shutdown sequence.
# cd /sbin/init.d
# ln -s ../init.d/afs /sbin/rc2.d/S460afs
# ln -s ../init.d/afs /sbin/rc2.d/K800afs

(Optional) There are now copies of the AFS initialization file in both the
/usr/vice/etc and /sbin/init.d directories. If you want
to avoid potential confusion by guaranteeing that they are always the same, create a link between them. You can always
retrieve the original script from the AFS CD-ROM if necessary.
# cd /usr/vice/etc
# rm afs.rc
# ln -s /sbin/init.d/afs afs.rc

If a volume for housing AFS binaries for this machine's system type does not already exist, proceed to Setting Up Volumes and Loading Binaries into AFS. Otherwise, the installation is complete.

Running the Script on IRIX Systems

If you have configured the machine to use the ml dynamic loader program, reboot the
machine and log in again as the local superuser root.
# cd /
# shutdown -i6 -g0 -y
login: root
Password: root_password

Issue the chkconfig command to activate the afsclient configuration variable.

# /etc/chkconfig -f afsclient on

Run the AFS initialization script.

# /etc/init.d/afs start

Change to the /etc/init.d directory and issue the ln
-s command to create symbolic links that incorporate the AFS initialization script into the IRIX startup and
shutdown sequence.
# cd /etc/init.d
# ln -s ../init.d/afs /etc/rc2.d/S35afs
# ln -s ../init.d/afs /etc/rc0.d/K35afs

(Optional) There are now copies of the AFS initialization file in both the
/usr/vice/etc and /etc/init.d directories. If you want
to avoid potential confusion by guaranteeing that they are always the same, create a link between them. You can always
retrieve the original script from the AFS CD-ROM if necessary.
# cd /usr/vice/etc
# rm afs.rc
# ln -s /etc/init.d/afs afs.rc

If a volume for housing AFS binaries for this machine's system type does not already exist, proceed to Setting Up Volumes and Loading Binaries into AFS. Otherwise, the installation is complete.

Running the Script on Fedora / RHEL Systems

Reboot the machine and log in again as the local superuser root.
# cd /
# shutdown -r now
login: root
Password: root_password

Run the AFS initialization script.

# /etc/rc.d/init.d/openafs-client start

Issue the chkconfig command to activate the openafs-client
configuration variable. Based on the instruction in the AFS initialization file that begins with the string
#chkconfig, the command automatically creates the symbolic links that incorporate the
script into the Linux startup and shutdown sequence.
# /sbin/chkconfig --add openafs-client

Running the Script on other Linux Systems

Reboot the machine and log in again as the local superuser root.
# cd /
# shutdown -r now
login: root
Password: root_password

Run the AFS initialization script.

# /etc/rc.d/init.d/afs start

Issue the chkconfig command to activate the afs
configuration variable. Based on the instruction in the AFS initialization file that begins with the string
#chkconfig, the command automatically creates the symbolic links that incorporate the
script into the Linux startup and shutdown sequence.
# /sbin/chkconfig --add afs

(Optional) There are now copies of the AFS initialization file in both the
/usr/vice/etc and /etc/rc.d/init.d directories, and
copies of the afsd options file in both the /usr/vice/etc and /etc/sysconfig directories. If you want to avoid
potential confusion by guaranteeing that the two copies of each file are always the same, create a link between them. You
can always retrieve the original script or options file from the AFS CD-ROM if necessary.
# cd /usr/vice/etc
# rm afs.rc afs.conf
# ln -s /etc/rc.d/init.d/afs afs.rc
# ln -s /etc/sysconfig/afs afs.conf

If a volume for housing AFS binaries for this machine's system type does not already exist, proceed to Setting Up Volumes and Loading Binaries into AFS. Otherwise, the installation is complete.

Running the Script on Solaris Systems

Reboot the machine and log in again as the local superuser root.
# cd /
# shutdown -i6 -g0 -y
login: root
Password: root_password

Run the AFS initialization script.

# /etc/init.d/afs start

Change to the /etc/init.d directory and issue the ln
-s command to create symbolic links that incorporate the AFS initialization script into the Solaris startup and
shutdown sequence.
# cd /etc/init.d
# ln -s ../init.d/afs /etc/rc3.d/S99afs
# ln -s ../init.d/afs /etc/rc0.d/K66afs

(Optional) There are now copies of the AFS initialization file in both the
/usr/vice/etc and /etc/init.d directories. If you want
to avoid potential confusion by guaranteeing that they are always the same, create a link between them. You can always
retrieve the original script from the OpenAFS Binary Distribution if necessary.
# cd /usr/vice/etc
# rm afs.rc
# ln -s /etc/init.d/afs afs.rc

If a volume for housing AFS binaries for this machine's system type does not already exist, proceed to Setting Up Volumes and Loading Binaries into AFS. Otherwise, the installation is complete.

Setting Up Volumes and Loading Binaries into AFS

If you are using an operating system which uses packaged
binaries, such as .rpms or .debs, you should allow these package management
systems to maintain your AFS binaries, rather than following the
instructions in this section.

In this section, you link /usr/afsws on the local disk to the directory in AFS that houses AFS binaries for this system type. The conventional name for the AFS directory is /afs/cellname/sysname/usr/afsws.

If this machine is an existing system type, the AFS directory presumably already exists. You can simply create a link from
the local /usr/afsws directory to it. Follow the instructions in Linking /usr/afsws on an Existing System Type.If this machine is a new system type (there are no AFS machines of this type in your cell), you must first create and
mount volumes to store its AFS binaries, and then create the link from /usr/afsws to the new
directory. See Creating Binary Volumes for a New System Type.

You can also store UNIX system binaries (the files normally stored in local disk directories such as /bin, /etc, and /lib) in volumes
mounted under /afs/cellname/sysname. See Storing System Binaries in AFS.

Linking /usr/afsws on an Existing System Type

If this client machine is an existing system type, there is already a volume mounted in the AFS filespace that houses
AFS client binaries for it. Create /usr/afsws on the local disk as a symbolic link to the directory /afs/cellname/@sys/usr/afsws. You can
specify the actual system name instead of @sys if you wish, but the advantage of using
@sys is that it remains valid if you upgrade this machine to a different system type.
# ln -s /afs/cellname/@sys/usr/afsws /usr/afsws

(Optional) If you believe it is helpful to your users to access the AFS documents
in a certain format via a local disk directory, create /usr/afsdoc on the local disk as
a symbolic link to the documentation directory in AFS (/afs/cellname/afsdoc/format_name).
# ln -s /afs/cellname/afsdoc/format_name /usr/afsdoc

An alternative is to create a link in each user's home directory to the /afs/cellname/afsdoc/format_name directory.

Creating Binary Volumes for a New System Type

If this client machine is a new system type, you must create and mount volumes for its binaries before you can link the
local /usr/afsws directory to an AFS directory.

To create and mount the volumes, you use the
kinit command to authenticate as an
administrator, followed by the aklog
command to gain tokens, and then issue commands from the
vos and
fs command suites. However, the
command binaries are not yet available on this machine (by convention,
they are accessible via the /usr/afsws
link that you are about to create). You have two choices:
Perform all steps except the last one (Step 10) on an existing AFS machine. On a
file server machine, the aklog, fs and vos binaries reside in the /usr/afs/bin directory. On client
machines, the aklog and fs binaries reside in the
/usr/afsws/bin directory and the vos binary in the
/usr/afsws/etc directory. Depending on how your PATH environment variable is set, you
may need to precede the command names with a pathname.

If you work on another AFS machine, be sure to substitute the new system type name for the
sysname argument in the following commands, not the system type of the machine on which you
are issuing the commands.

Copy the necessary command binaries to a temporary location on the local disk, which enables you to perform the
steps on the local machine. The following procedure installs them in the /tmp directory
and removes them at the end. Depending on how your PATH environment variable is set, you may need to precede the command names with a pathname.

Perform the following steps to create a volume for housing AFS binaries. Working either on the local machine or another AFS machine, extract the OpenAFS distribution tarball onto a directory on that machine. The following instructions assume that you are using the /tmp/afsdist directory.

If working on the local machine, copy the necessary binaries to a temporary location on the local disk. Substitute
a different directory name for /tmp if you wish.
# cd /tmp/afsdist/new_sysname/root.server/usr/afs/bin
# cp -p aklog /tmp
# cp -p fs /tmp
# cp -p vos /tmp

Authenticate as the user admin.
# kinit admin
Password: admin_password
# aklog

Issue the vos create command to create volumes for storing
the AFS client binaries for this system type. The following example instruction creates volumes called
sysname, sysname.usr, and
sysname.usr.afsws. Refer to the OpenAFS Release
Notes to learn the proper value of sysname for this system type.
# vos create <machine name> <partition name> sysname
# vos create <machine name> <partition name> sysname.usr
# vos create <machine name> <partition name> sysname.usr.afsws

Issue the fs mkmount command to mount the newly created volumes. Because the
root.cell volume is replicated, you must precede the cellname part
of the pathname with a period to specify the read/write mount point, as shown. Then issue the vos
release command to release a new replica of the root.cell volume, and the
fs checkvolumes command to force the local Cache Manager to access them.
# fs mkmount -dir /afs/.cellname/sysname -vol sysname
# fs mkmount -dir /afs/.cellname/sysname/usr -vol sysname.usr
# fs mkmount -dir /afs/.cellname/sysname/usr/afsws -vol sysname.usr.afsws
# vos release root.cell
# fs checkvolumes

Issue the fs setacl command to grant the l
(lookup) and r (read)
permissions to the system:anyuser group on each new directory's ACL.
# cd /afs/.cellname/sysname
# fs setacl -dir . usr usr/afsws -acl system:anyuser rl

Issue the fs setquota command to set an unlimited quota on the volume mounted at
the /afs/cellname/sysname/usr/afsws directory. This
enables you to copy all of the appropriate files from the CD-ROM into the volume without exceeding the volume's
quota.

If you wish, you can set the volume's quota to a finite value after you complete the copying operation. At that
point, use the vos examine command to determine how much space the volume is occupying.
Then issue the fs setquota command to set a quota that is slightly larger.
# fs setquota /afs/.cellname/sysname/usr/afsws 0

Copy the contents of the indicated
directories from the OpenAFS binary distribution into the
/afs/cellname/sysname/usr/afsws directory.
# cd /afs/.cellname/sysname/usr/afsws
# cp -rp /cdrom/sysname/bin .
# cp -rp /cdrom/sysname/etc .
# cp -rp /cdrom/sysname/include .
# cp -rp /cdrom/sysname/lib .

Issue the fs setacl command
to set the ACL on each directory appropriately. If you wish to
enable access to the software for locally authenticated users only,
set the ACL on the etc,
include, and
lib subdirectories to grant the
l and
r permissions to the
system:authuser group rather than
the system:anyuser group. The
system:anyuser group must retain
the l and
r permissions on the
bin subdirectory to enable
unauthenticated users to access the
aklog binary.
# cd /afs/.cellname/sysname/usr/afsws
# fs setacl -dir etc include lib -acl system:authuser rl \
system:anyuser none

Perform this step on the new client machine even if you have performed the previous steps
on another machine. Create /usr/afsws on the local disk as a symbolic link to the
directory /afs/cellname/@sys/usr/afsws. You can specify the actual system name instead of @sys if you wish, but the advantage of using @sys is that it
remains valid if you upgrade this machine to a different system type.
# ln -s /afs/cellname/@sys/usr/afsws /usr/afsws

(Optional) To enable users to issue commands from the AFS suites (such as
fs) without having to specify a pathname to their binaries, include the /usr/afsws/bin and /usr/afsws/etc directories in the PATH
environment variable you define in each user's shell initialization file (such as .cshrc).

(Optional) If you believe it is helpful to your users to access the AFS documents
in a certain format via a local disk directory, create /usr/afsdoc on the local disk as
a symbolic link to the documentation directory in AFS (/afs/cellname/afsdoc/format_name).
# ln -s /afs/cellname/afsdoc/format_name /usr/afsdoc

An alternative is to create a link in each user's home directory to the /afs/cellname/afsdoc/format_name directory.

(Optional) If working on the local machine, remove the AFS binaries from the
temporary location. They are now accessible in the /usr/afsws directory.
# cd /tmp
# rm aklog fs vos
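As a quick illustration of the @sys link pattern used above, the following sketch recreates the link in a scratch directory; all paths here are hypothetical, and on a real client the link is /usr/afsws itself:

```shell
# Hypothetical scratch demo of the /usr/afsws -> /afs/<cell>/@sys/usr/afsws pattern.
# On a real client: ln -s /afs/cellname/@sys/usr/afsws /usr/afsws
mkdir -p /tmp/afsdemo
ln -sfn /afs/example.com/@sys/usr/afsws /tmp/afsdemo/usr_afsws_link
readlink /tmp/afsdemo/usr_afsws_link    # shows the @sys target the link stores
```

Because the link stores the literal @sys string, it keeps working if the machine is later upgraded to a different system type.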