Installing Additional Server Machines
Instructions for the following procedures appear in the indicated sections of this chapter.

Installing an Additional File Server Machine
Installing Database Server Functionality
Removing Database Server Functionality

The instructions make the following assumptions.

You have already installed your cell's first file server machine by following the instructions in Installing the First AFS Machine
You are logged in as the local superuser root
You are working at the console
A standard version of one of the operating systems supported by the current version of AFS is running on the machine
You can access the data on the OpenAFS Binary Distribution for your operating system, either on the local filesystem or via an NFS mount of the distribution's contents.

Installing an Additional File Server Machine

The procedure for installing a new file server machine is similar to installing the first file server machine in your
cell. There are a few parts of the installation that differ depending on whether the machine is the same AFS system type as an
existing file server machine or is the first file server machine of its system type in your cell. The differences mostly concern
the source for the needed binaries and files, and what portions of the Update Server you install: On a new system type, you must load files and binaries from the
OpenAFS distribution. You may install the server portion of the
Update Server to make this machine the binary distribution machine
for its system type.

On an existing system type, you can copy files and binaries
from a previously installed file server machine, rather
than from the OpenAFS distribution. You may install the client
portion of the Update Server to accept updates of binaries, because a
previously installed machine of this type was installed as the binary
distribution machine.

On some system types, distribution of the appropriate binaries may be achieved using the system's own package management system. In these cases, it is recommended that you use that system rather than installing the binaries by hand.

These instructions are brief; for more detailed information, refer to the corresponding steps in Installing the First AFS Machine.

To install a new file server machine, perform the following procedures:

Copy needed binaries and files onto this machine's local disk, as required
Incorporate AFS modifications into the kernel
Configure partitions for storing volumes
Replace the standard fsck utility with the AFS-modified version on some system types
Start the Basic OverSeer (BOS) Server
Start the appropriate portion of the Update Server, if required
Start the fs process, which incorporates three component processes: the File Server, Volume Server, and Salvager

After completing the instructions in this section, you can install database server functionality on the machine according
to the instructions in Installing Database Server Functionality.

Creating AFS Directories and Performing Platform-Specific Procedures

If your operating system's AFS distribution is supplied as packages, such as .rpms or .debs, you should just install those packages as detailed in the previous chapter.

Create the /usr/afs and /usr/vice/etc directories on the local disk. Subsequent instructions copy files from the AFS distribution into them, at the appropriate point for each system type.
# mkdir /usr/afs
# mkdir /usr/afs/bin
# mkdir /usr/vice
# mkdir /usr/vice/etc
# mkdir /tmp/afsdist

As on the first file server machine, the initial procedures in installing an additional file server machine vary a good deal from platform to platform. For convenience, the following sections group together all of the procedures for a system type. Most of the remaining procedures are the same on every system type, but differences are noted as appropriate. The initial procedures are the following.

Incorporate AFS modifications into the kernel, either by using a dynamic kernel loader program or by building a new static kernel
Configure server partitions to house AFS volumes
Replace the operating system vendor's fsck program with a version that recognizes AFS data
If the machine is to remain an AFS client machine, modify the machine's authentication system so that users obtain an AFS token as they log into the local file system. (For this procedure only, the instructions direct you to the platform-specific section in Installing the First AFS Machine.)

To continue, proceed to the section for this system type:

Getting Started on AIX Systems
Getting Started on HP-UX Systems
Getting Started on IRIX Systems
Getting Started on Linux Systems
Getting Started on Solaris Systems

Getting Started on AIX Systems

Begin by running the AFS initialization script to call the AIX kernel extension facility, which dynamically loads AFS
modifications into the kernel. Then configure partitions and replace the AIX fsck program
with a version that correctly handles AFS volumes.

Unpack the distribution tarball. The examples below assume
that you have unpacked the files into the
/tmp/afsdist directory. If you
pick a different location, substitute this in all of the following
examples. Once you have unpacked the distribution,
change directory as indicated.
# cd /tmp/afsdist/rs_aix42/root.client/usr/vice/etc

Copy the AFS kernel library files to the local /usr/vice/etc/dkload directory,
and the AFS initialization script to the /etc directory.
# cp -rp dkload /usr/vice/etc
# cp -p rc.afs /etc/rc.afs

Edit the /etc/rc.afs script, setting the NFS
variable as indicated.If the machine is not to function as an NFS/AFS Translator, set the NFS
variable as follows.
NFS=$NFS_NONE
If the machine is to function as an NFS/AFS Translator and is running AIX 4.2.1 or higher, set the
NFS variable as follows. Note that NFS must already be loaded into the kernel, which
happens automatically on systems running AIX 4.1.1 and later, as long as the file /etc/exports exists.
NFS=$NFS_IAUTH
Invoke the /etc/rc.afs script to load AFS modifications into the kernel. You
can ignore any error messages about the inability to start the BOS Server or the Cache Manager or AFS client.
# /etc/rc.afs

Create a directory called /vicepxx for each AFS
server partition you are configuring (there must be at least one). Repeat the command for each partition.
# mkdir /vicepxx

Use the SMIT program to create a journaling file system on each partition to be configured as an AFS server partition.

Mount each partition at one of the /vicepxx directories. Choose one of the following three methods:

Use the SMIT program
Use the mount -a command to mount all partitions at once
Use the mount command on each partition in turn

Also configure the partitions so that they are mounted automatically at each reboot. For more information, refer to the AIX documentation.

On systems prior to AIX 5.1, move the AIX
fsck program helper to a safe
location and install the version from the AFS distribution in
its place. Note that on AIX 5.1 and later systems this step is
not required, and the v3fshelper
program is not shipped for these systems.

The AFS binary distribution must still be available in the
/tmp/afsdist directory.
# cd /sbin/helpers
# mv v3fshelper v3fshelper.noafs
# cp -p /tmp/afsdist/rs_aix42/root.server/etc/v3fshelper v3fshelper

If the machine is to remain an AFS client, incorporate AFS into its authentication system, following the instructions in Enabling AFS Login on AIX Systems.

Proceed to Starting Server Programs.

Getting Started on HP-UX Systems

Begin by building AFS modifications into the kernel, then configure server partitions and replace the HP-UX fsck program with a version that correctly handles AFS volumes.

If the machine's hardware and software configuration exactly matches another HP-UX machine on which AFS is already
built into the kernel, you can copy the kernel from that machine to this one. In general, however, it is better to build AFS
modifications into the kernel on each machine according to the following instructions.
Move the existing kernel-related files to a safe location.
# cp /stand/vmunix /stand/vmunix.noafs
# cp /stand/system /stand/system.noafs

Unpack the OpenAFS HP-UX distribution tarball. The examples
below assume that you have unpacked the files into the
/tmp/afsdist directory. If you
pick a different location, substitute this in all of the following
examples. Once you have unpacked the distribution, change
directory as indicated.
# cd /tmp/afsdist/hp_ux110/root.client

Copy the AFS initialization file to the local directory for initialization files (by convention, /sbin/init.d on HP-UX machines). Note the removal of the .rc extension as you copy the file.

# cp usr/vice/etc/afs.rc /sbin/init.d/afs

Copy the file afs.driver to the local /usr/conf/master.d directory, changing its name to afs as you do.

# cp usr/vice/etc/afs.driver /usr/conf/master.d/afs

Copy the AFS kernel module to the local /usr/conf/lib directory.

If the machine's kernel supports NFS server functionality:

# cp bin/libafs.a /usr/conf/lib

If the machine's kernel does not support NFS server functionality, change the file's name as you copy it:

# cp bin/libafs.nonfs.a /usr/conf/lib/libafs.a

Incorporate the AFS driver into the kernel, either using the SAM program or a
series of individual commands.

To use the SAM program:

Invoke the SAM program, specifying the hostname of the local machine as local_hostname. The SAM graphical user interface pops up.
# sam -display local_hostname:0
Choose the Kernel Configuration icon, then the Drivers icon. From the list of drivers, select afs.
Open the pull-down Actions menu and choose the Add Driver to Kernel option.
Open the Actions menu again and choose the Create a New Kernel option.
Confirm your choices by choosing Yes and OK when prompted by subsequent pop-up windows. The SAM program builds the kernel and reboots the system.
Login again as the superuser root.
login: root
Password: root_password

To use individual commands:

Edit the file /stand/system, adding an entry for afs to the Subsystems section.
Change to the /stand/build directory and issue the mk_kernel command to build the kernel.
# cd /stand/build
# mk_kernel

Move the new kernel to the standard location (/stand/vmunix),
reboot the machine to start using it, and login again as the superuser root.
# mv /stand/build/vmunix_test /stand/vmunix
# cd /
# shutdown -r now
login: root
Password: root_password

Create a directory called /vicepxx for each AFS
server partition you are configuring (there must be at least one). Repeat the command for each partition.
# mkdir /vicepxx

Use the SAM program to create a file system on each partition. For
instructions, consult the HP-UX documentation.On some HP-UX systems that use logical volumes, the SAM program automatically
mounts the partitions. If it has not, mount each partition by issuing either the mount
-a command to mount all partitions at once or the mount command to mount
each partition in turn.

Create the command configuration file /sbin/lib/mfsconfig.d/afs. Use a text
editor to place the indicated two lines in it:
format_revision 1
fsck 0 m,P,p,d,f,b:c:y,n,Y,N,q,
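If you prefer not to open a text editor, a here-document produces the same file; this is simply a sketch of one way to write the two lines shown above.

# cat > /sbin/lib/mfsconfig.d/afs << 'EOF'
format_revision 1
fsck 0 m,P,p,d,f,b:c:y,n,Y,N,q,
EOF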
Create and change directory to an AFS-specific command directory called /sbin/fs/afs.
# mkdir /sbin/fs/afs
# cd /sbin/fs/afs

Copy the AFS-modified version of the fsck program (the vfsck binary) and related files from the distribution directory to the new AFS-specific command directory.

# cp -p /tmp/afsdist/hp_ux110/root.server/etc/* .

Change the vfsck binary's name to fsck and set the mode bits appropriately on all of the files in the /sbin/fs/afs directory.

# mv vfsck fsck
# chmod 755 *

Edit the /etc/fstab file, changing the file system type for each AFS server
partition from hfs to afs. This ensures that the
AFS-modified fsck program runs on the appropriate partitions.The sixth line in the following example of an edited file shows an AFS server partition, /vicepa.
/dev/vg00/lvol1 / hfs defaults 0 1
/dev/vg00/lvol4 /opt hfs defaults 0 2
/dev/vg00/lvol5 /tmp hfs defaults 0 2
/dev/vg00/lvol6 /usr hfs defaults 0 2
/dev/vg00/lvol8 /var hfs defaults 0 2
/dev/vg00/lvol9 /vicepa afs defaults 0 2
/dev/vg00/lvol7 /usr/vice/cache hfs defaults 0 2
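To double-check the edit, you can list the entries for your server partitions; each /vicepxx line should now show afs in the third field. The grep pattern below simply matches the /vicep naming convention used in the example.

# grep vicep /etc/fstab
/dev/vg00/lvol9 /vicepa afs defaults 0 2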
If the machine is to remain an AFS client, incorporate AFS into its authentication system, following the
instructions in Enabling AFS Login on HP-UX Systems.

Proceed to Starting Server Programs.

Getting Started on IRIX Systems

Begin by incorporating AFS modifications into the kernel. Either use the ml dynamic loader program, or build a static kernel. Then configure partitions to house AFS volumes. AFS supports use of both EFS and XFS partitions for housing AFS volumes. SGI encourages use of XFS partitions.

You do not need to replace the IRIX fsck program, because the version that SGI distributes handles AFS volumes properly.

Prepare for incorporating AFS into the kernel by performing the following procedures.

Unpack the OpenAFS IRIX distribution tarball. The
examples below assume that you have unpacked the files into
the /tmp/afsdist
directory. If you pick a different location, substitute this
in all of the following examples. Once you have unpacked
the distribution, change directory as indicated.
# cd /tmp/afsdist/sgi_65/root.client

Copy the AFS initialization script to the local directory for initialization files (by convention,
/etc/init.d on IRIX machines). Note the removal of the .rc extension as you copy the script.
# cp -p usr/vice/etc/afs.rc /etc/init.d/afs

Issue the uname -m command to determine the machine's CPU board type. The
IPxx value in the output must match one of the
supported CPU board types listed in the OpenAFS Release Notes for the current version of
AFS.
# uname -m

Incorporate AFS into the kernel, either using the ml program or by building AFS modifications into a static kernel.

To use the ml program:

Create the local /usr/vice/etc/sgiload directory to house the AFS
kernel library file.
# mkdir /usr/vice/etc/sgiload

Copy the appropriate AFS kernel library file to the /usr/vice/etc/sgiload directory. The IPxx portion of the library file name must match the
value previously returned by the uname -m command. Also choose the file
appropriate to whether the machine's kernel supports NFS server functionality (NFS must be supported for
the machine to act as an NFS/AFS Translator). Single- and multiprocessor machines use the same library
file.(You can choose to copy all of the kernel library files into the /usr/vice/etc/sgiload directory, but they require a significant amount of
space.)If the machine's kernel supports NFS server functionality:
# cp -p usr/vice/etc/sgiload/libafs.IPxx.o /usr/vice/etc/sgiload

If the machine's kernel does not support NFS server functionality:

# cp -p usr/vice/etc/sgiload/libafs.IPxx.nonfs.o \
   /usr/vice/etc/sgiload

Issue the chkconfig command to activate the afsml configuration variable.

# /etc/chkconfig -f afsml on

If the machine is to function as an NFS/AFS Translator and the kernel supports NFS server functionality, activate the afsxnfs variable.

# /etc/chkconfig -f afsxnfs on

Run the /etc/init.d/afs script to load AFS extensions into the
kernel. The script invokes the ml command, automatically determining
which kernel library file to use based on this machine's CPU type and the activation state of the
afsxnfs variable.You can ignore any error messages about the inability to start the BOS Server or the Cache Manager
or AFS client.
# /etc/init.d/afs start

Proceed to Step 3.

If you prefer to build a kernel, and the machine's hardware and software configuration exactly matches
another IRIX machine on which AFS is already built into the kernel, you can copy the kernel from that machine to
this one. In general, however, it is better to build AFS modifications into the kernel on each machine according
to the following instructions. Copy the kernel initialization file afs.sm to the local /var/sysgen/system directory, and the kernel master file afs to the local /var/sysgen/master.d directory.
# cp -p bin/afs.sm /var/sysgen/system
# cp -p bin/afs /var/sysgen/master.d

Copy the appropriate AFS kernel library file to the local file /var/sysgen/boot/afs.a; the IPxx portion of the library file name must match the
value previously returned by the uname -m command. Also choose the file
appropriate to whether the machine's kernel supports NFS server functionality (NFS must be supported for
the machine to act as an NFS/AFS Translator). Single- and multiprocessor machines use the same library
file.If the machine's kernel supports NFS server functionality:
# cp -p bin/libafs.IPxx.a /var/sysgen/boot/afs.a

If the machine's kernel does not support NFS server functionality:

# cp -p bin/libafs.IPxx.nonfs.a /var/sysgen/boot/afs.a

Issue the chkconfig command to deactivate the afsml configuration variable.

# /etc/chkconfig -f afsml off

If the machine is to function as an NFS/AFS Translator and the kernel supports NFS server functionality, activate the afsxnfs variable.

# /etc/chkconfig -f afsxnfs on

Copy the existing kernel file, /unix, to a safe location. Compile
the new kernel, which is created in the file /unix.install. It overwrites
the existing /unix file when the machine reboots in the next step.
# cp /unix /unix_noafs
# autoconfig

Reboot the machine to start using the new kernel, and login again as the superuser root.
# cd /
# shutdown -i6 -g0 -y
login: root
Password: root_password

Create a directory called /vicepxx for each AFS server partition you are configuring (there
must be at least one). Repeat the command for each partition.
# mkdir /vicepxx

Add a line with the following format to the file systems registry file, /etc/fstab, for each partition (or logical volume created with the XLV volume manager) to be
mounted on one of the directories created in the previous step.For an XFS partition or logical volume:
/dev/dsk/disk /vicepxx xfs rw,raw=/dev/rdsk/disk 0 0
For an EFS partition:
/dev/dsk/disk /vicepxx efs rw,raw=/dev/rdsk/disk 0 0
The following are examples of an entry for each file system type:
/dev/dsk/dks0d2s6 /vicepa xfs rw,raw=/dev/rdsk/dks0d2s6 0 0
/dev/dsk/dks0d3s1 /vicepb efs rw,raw=/dev/rdsk/dks0d3s1 0 0
Create a file system on each partition that is to be mounted on a /vicepxx directory. The following commands are probably appropriate,
but consult the IRIX documentation for more information. In both cases, raw_device is a raw
device name like /dev/rdsk/dks0d0s0 for a single disk partition or /dev/rxlv/xlv0 for a logical volume.For XFS file systems, include the indicated options to configure the partition or logical volume with inodes
large enough to accommodate AFS-specific information:
# mkfs -t xfs -i size=512 -l size=4000b raw_device

For EFS file systems:

# mkfs -t efs raw_device

Mount each partition by issuing either the mount -a command to mount all
partitions at once or the mount command to mount each partition in turn.(Optional) If you have configured partitions or logical volumes to use XFS,
issue the following command to verify that the inodes are configured properly (are large enough to accommodate
AFS-specific information). If the configuration is correct, the command returns no output. Otherwise, it specifies the
command to run in order to configure each partition or logical volume properly.
# /usr/afs/bin/xfs_size_check

If the machine is to remain an AFS client, incorporate AFS into its authentication system, following the instructions in Enabling AFS Login on IRIX Systems.

Proceed to Starting Server Programs.

Getting Started on Linux Systems

Begin by running the AFS initialization script to call the insmod program, which dynamically loads AFS modifications into the kernel. Then create partitions for storing AFS volumes. You do not need to replace the Linux fsck program. The procedure for starting up OpenAFS depends upon your distribution.

For Fedora and RedHat Enterprise Linux systems (or their derivatives), download and install the RPM set for your operating system
from the OpenAFS distribution site. You will need the
openafs and
openafs-server packages, along
with an openafs-kernel package
matching your current, running, kernel. If you wish to install
client functionality, you will also require the
openafs-client package.

You can find the version of your current kernel by running
# uname -r
2.6.20-1.2933.fc6

Once downloaded, the packages may be installed with the
rpm command
# rpm -U openafs-* openafs-client-* openafs-server-* openafs-kernel-*
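Before continuing, you may want to confirm that the installed packages include a kernel module package matching the running kernel; a quick check on an RPM-based system looks like this.

# rpm -qa | grep openafs
# uname -r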
For systems which are provided as a tarball, or built from
source, unpack the distribution tarball. The examples below assume
that you have unpacked the files into the
/tmp/afsdist directory. If you
pick a different location, substitute this in all of the following
examples. Once you have unpacked the distribution,
change directory as indicated.
# cd /tmp/afsdist/linux/root.client/usr/vice/etc

Copy the AFS kernel library files to the local /usr/vice/etc/modload directory.
The filenames for the libraries have the format libafs-version.o, where
version indicates the kernel build level. The string .mp
in the version indicates that the file is appropriate for machines running a multiprocessor
kernel.
# cp -rp modload /usr/vice/etc

Copy the AFS initialization script to the local directory for initialization files (by convention, /etc/rc.d/init.d on Linux machines). Note the removal of the .rc extension as you copy the script.

# cp -p afs.rc /etc/rc.d/init.d/afs

Create a directory called /vicepxx for each AFS server partition you are configuring (there must be at least one). Repeat the command for each partition.

# mkdir /vicepxx

Add a line with the following format to the file systems registry file, /etc/fstab, for each directory just created. The entry maps the directory name to the disk
partition to be mounted on it.
/dev/disk /vicepxx ext2 defaults 0 2
The following is an example for the first partition being configured.
/dev/sda8 /vicepa ext2 defaults 0 2
Create a file system on each partition that is to be mounted at a /vicepxx directory. The following command is probably appropriate,
but consult the Linux documentation for more information.
# mkfs -v /dev/disk

Mount each partition by issuing either the mount -a command to mount all partitions at once or the mount command to mount each partition in turn.

If the machine is to remain an AFS client, incorporate AFS into its authentication system, following the instructions in Enabling AFS Login on Linux Systems.

Proceed to Starting Server Programs.

Getting Started on Solaris Systems

Begin by running the AFS initialization script to call the modload program, which dynamically loads AFS modifications into the kernel. Then configure partitions and replace the Solaris fsck program with a version that correctly handles AFS volumes.

Unpack the OpenAFS Solaris distribution tarball. The examples
below assume that you have unpacked the files into the
/tmp/afsdist directory. If you
pick a different location, substitute this in all of the following
examples. Once you have unpacked the distribution, change directory
as indicated.
# cd /tmp/afsdist/sun4x_56/root.client/usr/vice/etc

Copy the AFS initialization script to the local directory for initialization files (by convention, /etc/init.d on Solaris machines). Note the removal of the .rc extension as you copy the script.

# cp -p afs.rc /etc/init.d/afs

Copy the appropriate AFS kernel library file to the local file /kernel/fs/afs.

If the machine is running Solaris 2.6 or the 32-bit version of Solaris 7, its kernel supports NFS server functionality, and the nfsd process is running:

# cp -p modload/libafs.o /kernel/fs/afs

If the machine is running Solaris 2.6 or the 32-bit version of Solaris 7, and its kernel does not support NFS server functionality or the nfsd process is not running:

# cp -p modload/libafs.nonfs.o /kernel/fs/afs

If the machine is running the 64-bit version of Solaris 7, its kernel supports NFS server functionality, and the nfsd process is running:

# cp -p modload/libafs64.o /kernel/fs/sparcv9/afs

If the machine is running the 64-bit version of Solaris 7, and its kernel does not support NFS server functionality or the nfsd process is not running:

# cp -p modload/libafs64.nonfs.o /kernel/fs/sparcv9/afs

Run the AFS initialization script to load AFS modifications into the kernel. You can ignore any error messages
about the inability to start the BOS Server or the Cache Manager or AFS client.
# /etc/init.d/afs start

When an entry called afs does not already exist in the local /etc/name_to_sysnum file, the script automatically creates it and reboots the machine to start
using the new version of the file. If this happens, log in again as the superuser root after the reboot and run the initialization script again. This time the required entry
exists in the /etc/name_to_sysnum file, and the modload program runs.
login: root
Password: root_password
# /etc/init.d/afs start

Create the /usr/lib/fs/afs directory to house the AFS-modified fsck program and related files.

# mkdir /usr/lib/fs/afs
# cd /usr/lib/fs/afs

Copy the vfsck binary to the newly created directory, changing the name as you do so.

# cp /tmp/afsdist/sun4x_56/root.server/etc/vfsck fsck

Working in the /usr/lib/fs/afs directory, create the following links to Solaris
libraries:
# ln -s /usr/lib/fs/ufs/clri
# ln -s /usr/lib/fs/ufs/df
# ln -s /usr/lib/fs/ufs/edquota
# ln -s /usr/lib/fs/ufs/ff
# ln -s /usr/lib/fs/ufs/fsdb
# ln -s /usr/lib/fs/ufs/fsirand
# ln -s /usr/lib/fs/ufs/fstyp
# ln -s /usr/lib/fs/ufs/labelit
# ln -s /usr/lib/fs/ufs/lockfs
# ln -s /usr/lib/fs/ufs/mkfs
# ln -s /usr/lib/fs/ufs/mount
# ln -s /usr/lib/fs/ufs/ncheck
# ln -s /usr/lib/fs/ufs/newfs
# ln -s /usr/lib/fs/ufs/quot
# ln -s /usr/lib/fs/ufs/quota
# ln -s /usr/lib/fs/ufs/quotaoff
# ln -s /usr/lib/fs/ufs/quotaon
# ln -s /usr/lib/fs/ufs/repquota
# ln -s /usr/lib/fs/ufs/tunefs
# ln -s /usr/lib/fs/ufs/ufsdump
# ln -s /usr/lib/fs/ufs/ufsrestore
# ln -s /usr/lib/fs/ufs/volcopy

Append the following line to the end of the file /etc/dfs/fstypes.
afs AFS Utilities
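Appending the entry from the shell has the same effect as editing the file by hand; note the use of >> so that the existing entries in /etc/dfs/fstypes are preserved.

# echo "afs AFS Utilities" >> /etc/dfs/fstypes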
Edit the /sbin/mountall file, making two changes. Add an entry for AFS to the case statement for option 2, so that it reads
as follows:
case "$2" in
ufs) foptions="-o p"
;;
afs) foptions="-o p"
;;
s5) foptions="-y -t /var/tmp/tmp$$ -D"
;;
*) foptions="-y"
;;
Edit the file so that all AFS and UFS partitions are checked in parallel. Replace the following section of
code:
# For fsck purposes, we make a distinction between ufs and
# other file systems
#
if [ "$fstype" = "ufs" ]; then
ufs_fscklist="$ufs_fscklist $fsckdev"
saveentry $fstype "$OPTIONS" $special $mountp
continue
fi
with the following section of code:
# For fsck purposes, we make a distinction between ufs/afs
# and other file systems.
#
if [ "$fstype" = "ufs" -o "$fstype" = "afs" ]; then
ufs_fscklist="$ufs_fscklist $fsckdev"
saveentry $fstype "$OPTIONS" $special $mountp
continue
fi
Create a directory called /vicepxx for each AFS
server partition you are configuring (there must be at least one). Repeat the command for each partition.
# mkdir /vicepxx

Add a line with the following format to the file systems registry file, /etc/vfstab, for each partition to be mounted on a directory created in the previous step. Note
the value afs in the fourth field, which tells Solaris to use the AFS-modified
fsck program on this partition.
/dev/dsk/disk /dev/rdsk/disk /vicepxx afs boot_order yes
The following is an example for the first partition being configured.
/dev/dsk/c0t6d0s1 /dev/rdsk/c0t6d0s1 /vicepa afs 3 yes
Create a file system on each partition that is to be mounted at a /vicepxx directory. The following command is probably appropriate,
but consult the Solaris documentation for more information.
# newfs -v /dev/rdsk/disk

Issue the mountall command to mount all partitions at once.

If the machine is to remain an AFS client, incorporate AFS into its authentication system, following the instructions in Enabling AFS Login and Editing the File Systems Clean-up Script on Solaris Systems.

Proceed to Starting Server Programs.

Starting Server Programs

In this section you initialize the BOS Server, the Update Server, the controller process for NTPD, and the fs process. You begin by copying the necessary server files to the local disk.

Copy file server binaries to the local /usr/afs/bin directory. On a machine of an existing system type, you can either
copy files from the OpenAFS binary distribution or use a
remote file transfer protocol to copy files from an existing
server machine of the same system type. To load from the
binary distribution, see the instructions just following for
a machine of a new system type. If using a remote file
transfer protocol, copy the complete contents of the
existing server machine's
/usr/afs/bin
directory.

If you are working from a tarball distribution, rather
than one distributed in a packaged format, you must use the
following instructions to copy files from
the OpenAFS Binary Distribution.
Unpack the distribution tarball. The examples
below assume that you have unpacked the files into the
/tmp/afsdist
directory. If you pick a different location, substitute
this in all of the following examples.

Copy files from the distribution to the local /usr/afs directory.
# cd /tmp/afsdist/sysname/root.server/usr/afs
# cp -rp * /usr/afs

Copy the contents of the
/usr/afs/etc directory from an
existing file server machine, using a remote file transfer protocol
such as sftp or
scp. If you use a system
control machine, it is best to copy the contents of its
/usr/afs/etc directory. If you
choose not to run a system control machine, copy the directory's
contents from any existing file server machine.
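For example, assuming fs1.example.com is your system control machine (or another existing file server) and root ssh access between the two machines is permitted, the copy might look like the following; the hostname is purely illustrative.

# scp -rp root@fs1.example.com:/usr/afs/etc /usr/afs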
Change to the /usr/afs/bin directory and start the BOS Server (bosserver process). Include the -noauth flag to prevent the AFS
processes from performing authorization checking. This is a grave compromise of security; finish the remaining
instructions in this section in an uninterrupted pass.
# cd /usr/afs/bin
# ./bosserver -noauth &

If you run a system control machine, create the upclientetc process as an instance of the client portion of the Update Server. It accepts updates
of the common configuration files stored in the system control machine's /usr/afs/etc
directory from the upserver process (server portion of the Update Server) running on
that machine. The cell's first file server machine was installed as the system control machine in Starting the Server Portion of the Update Server. (If you do not run a system control machine,
you must update the contents of the /usr/afs/etc directory on each file server machine,
using the appropriate bos commands.)By default, the Update Server performs updates every 300 seconds (five minutes). Use the -t argument to specify a different number of seconds. For the
machine name argument, substitute the name of the machine you are installing. The
command appears on multiple lines here only for legibility reasons.
# ./bos create <machine name> upclientetc simple \
"/usr/afs/bin/upclient <system control machine> \
[-t <time>] /usr/afs/etc" -cell <cell name> -noauthUpdate Serverstarting server portionserver machine after firststartingUpdate Server server portionserver machine after firstfile server machine, additionalUpdate Server server portionCreate an instance of the Update
Server to handle distribution of the file server binaries
stored in the /usr/afs/bin
directory. If your architecture uses a package management system
such as 'rpm' or 'apt' to maintain its binaries, note that
distributing binaries via this system may interfere with your local
package management tools.
If this is the first file server machine of its AFS system type, create the upserver process as an instance of the server portion of the Update Server. It distributes
its copy of the file server process binaries to the other file server machines of this system type that you
install in future. Creating this process makes this machine the binary distribution machine for its type.
# ./bos create <machine name> upserver simple \
"/usr/afs/bin/upserver -clear /usr/afs/bin" \
-cell <cell name> -noauth

If this machine is of an existing system type, create the upclientbin process
as an instance of the client portion of the Update Server. It accepts updates of the AFS binaries from the
upserver process running on the binary distribution machine for its system type.
For distribution to work properly, the upserver process must already be running
on that machine.Use the -clear argument to specify that the upclientbin process requests unencrypted transfer of the binaries in the /usr/afs/bin directory. Binaries are not sensitive and encrypting them is
time-consuming.

By default, the Update Server performs updates every 300 seconds (five minutes). Use the -t argument to specify a different number of seconds.
# ./bos create <machine name> upclientbin simple \
"/usr/afs/bin/upclient <binary distribution machine> \
[-t <time>] -clear /usr/afs/bin" -cell <cell name> -noauthrunntp processserver machine after firststartingrunntp processserver machine after firstfile server machine, additionalrunntp processNTPDserver machine after firstHistorically, AFS provided its own version of the
Network Time Protocol Daemon. Whilst this is still provided for
existing sites, we recommend that you configure and run your
own timeservice independently of AFS. The instructions below are
provided for those sites still reliant upon OpenAFS's ntp system.
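Before creating the runntp instance, it is worth checking whether a time daemon is already running on the machine (see the warning below); a simple check on most systems:

# ps -ef | grep -v grep | grep ntp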
Start the runntp process, which configures the Network Time Protocol Daemon
(NTPD) to use a database server machine chosen randomly from the local /usr/afs/etc/CellServDB file as its time source. In the standard configuration, the first
database server machine installed in your cell refers to a time source outside the cell, and serves as the basis for
clock synchronization on all server machines.
# ./bos create <machine name> runntp simple \
/usr/afs/bin/runntp -cell <cell name> -noauth

Do not run the runntp process if NTPD or another time synchronization protocol
is already running on the machine. Some versions of some operating systems run a time synchronization program by
default, as detailed in the OpenAFS Release Notes.Attempting to run multiple instances of the NTPD causes an error. Running NTPD together with another time
synchronization protocol is unnecessary and can cause instability in the clock setting.

Issue the bos create command
to start the fs process or the
dafs process, depending on whether you
want to run the Demand-Attach File Server or not. See Appendix C, The Demand-Attach File Server for
more information on whether you want to run it or not.
If you do not want to run the Demand-Attach File Server, start the fs process, which binds together the File Server, Volume Server, and
Salvager.
# ./bos create <machine name> fs fs \
/usr/afs/bin/fileserver /usr/afs/bin/volserver \
/usr/afs/bin/salvager -cell <cell name> -noauth

If you want to run the Demand-Attach File Server, start the
dafs process, which binds together
the File Server, Volume Server, Salvage Server, and Salvager.
# ./bos create <machine name> dafs dafs \
/usr/afs/bin/dafileserver /usr/afs/bin/davolserver \
/usr/afs/bin/salvageserver \
/usr/afs/bin/dasalvager -cell <cell name> -noauth

Installing Client Functionality

If you want this machine to be a client as well as a server, follow the instructions in this section. Otherwise, skip to Completing the Installation.

Begin by loading the necessary client files to the local disk. Then create the necessary configuration files and start
the Cache Manager. For more detailed explanation of the procedures involved, see the corresponding instructions in Installing the First AFS Machine (in the sections following Overview:
Installing Client Functionality).If another AFS machine of this machine's system type exists, the AFS binaries are probably already accessible in your
AFS filespace (the conventional location is /afs/cellname/sysname/usr/afsws). If not, or if this is
the first AFS machine of its type, copy the AFS binaries for this system type into an AFS volume by following the instructions
in Storing AFS Binaries in AFS. Because this machine is not yet an AFS client, you must perform
the procedure on an existing AFS machine. However, remember to perform the final step (linking the local directory /usr/afsws to the appropriate location in the AFS file tree) on this machine itself. If you also want
to create AFS volumes to house UNIX system binaries for the new system type, see Storing System
Binaries in AFS.

Copy client binaries and files to the local disk. On a machine of an existing system type, you can either
load files from the OpenAFS Binary Distribution or use a
remote file transfer protocol to copy files from an existing
server machine of the same system type. To load from the
binary distribution, see the instructions just following
for a machine of a new system type. If using a remote file
transfer protocol, copy the complete contents of the existing
client machine's
/usr/vice/etc
directory.On a machine of a new system type, you must use the
following instructions to copy files from the OpenAFS
Binary Distribution. If your distribution is provided in
a packaged format, then simply installing the packages will
perform the necessary actions.
Unpack the distribution tarball. The examples
below assume that you have unpacked the files into the
/tmp/afsdist
directory. If you pick a different location, substitute
this in all of the following examples.

Copy files to the local /usr/vice/etc directory.

This step places a copy of the AFS initialization script (and related files, if applicable) into the
/usr/vice/etc directory. In the preceding instructions for incorporating
AFS into the kernel, you copied the script directly to the operating system's conventional location for
initialization files. When you incorporate AFS into the machine's startup sequence in a later step, you can
choose to link the two files.On some system types that use a dynamic kernel loader program, you previously copied AFS library files
into a subdirectory of the /usr/vice/etc directory. On other system types,
you copied the appropriate AFS library file directly to the directory where the operating system accesses
it. The following commands do not copy or recopy the AFS library files into the /usr/vice/etc directory, because on some system types the library files consume a
large amount of space. If you want to copy them, add the -r flag to the
first cp command and skip the second cp
command.
# cd /tmp/afsdist/sysname/root.client/usr/vice/etc
# cp -p * /usr/vice/etc
# cp -rp C /usr/vice/etc

Change to the /usr/vice/etc directory and create the ThisCell file as a copy of the /usr/afs/etc/ThisCell file. You
must first remove the symbolic link to the /usr/afs/etc/ThisCell file that the BOS
Server created automatically in Starting Server Programs.
# cd /usr/vice/etc
# rm ThisCell
# cp /usr/afs/etc/ThisCell ThisCell

Remove the symbolic link to the /usr/afs/etc/CellServDB file.

# rm CellServDB

Create the
/usr/vice/etc/CellServDB file.
Use a network file transfer program such as
sftp or
scp to copy it from
one of the following sources, which are listed in
decreasing order of preference:
Your cell's central CellServDB source file (the conventional location is
/afs/cellname/common/etc/CellServDB)
The global CellServDB file maintained at grand.central.org
An existing client machine in your cell
The CellServDB.sample
file included in the
sysname/root.client/usr/vice/etc
directory of each OpenAFS distribution; add an entry for the
local cell by following the instructions in
Creating the Client CellServDB File
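For example, to copy the file from an existing client machine in your cell using scp (the hostname client1.example.com is purely illustrative):

# cd /usr/vice/etc
# scp root@client1.example.com:/usr/vice/etc/CellServDB CellServDB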
Create the cacheinfo file for either a disk cache or a memory cache. For a
discussion of the appropriate values to record in the file, see Configuring the
Cache.To configure a disk cache, issue the following commands. If you are devoting a partition exclusively to caching,
as recommended, you must also configure it, make a file system on it, and mount it at the directory created in this
step.
# mkdir /usr/vice/cache
# echo "/afs:/usr/vice/cache:#blocks" > cacheinfoTo configure a memory cache:
# echo "/afs:/usr/vice/cache:#blocks" > cacheinfoCache Managerserver machine after firstconfiguringCache Managerserver machine after firstfile server machine, additionalCache Managerafs (/afs) directorycreatingserver machine after firstAFS initialization scriptsetting afsd parametersserver machine after firstfile server machine, additionalafsd command parametersCreate the local directory on which to mount the AFS filespace, by convention /afs. If the directory already exists, verify that it is empty.
# mkdir /afsOn AIX systems, add the following line to the /etc/vfs file. It enables AIX to
unmount AFS correctly during shutdown.
afs 4 none none
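As with the other registry files in this chapter, you can append the line from the shell instead of opening an editor; use >> so that the existing /etc/vfs entries are preserved.

# echo "afs 4 none none" >> /etc/vfs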
On non-packaged Linux systems, copy the afsd options file from the /usr/vice/etc directory to the /etc/sysconfig directory,
removing the .conf extension as you do so.
# cp /usr/vice/etc/afs.conf /etc/sysconfig/afs

Edit the machine's AFS initialization script or afsd options file to set
appropriate values for afsd command parameters. The script resides in the indicated
location on each system type:

On AIX systems, /etc/rc.afs
On HP-UX systems, /sbin/init.d/afs
On IRIX systems, /etc/init.d/afs
On Fedora and RHEL systems, /etc/sysconfig/openafs. Note that this file has a different format from a standard afsd options file.
On non-packaged Linux systems, /etc/sysconfig/afs (the afsd options file)
On Solaris systems, /etc/init.d/afs

Use one of the methods described in Configuring the Cache Manager to add the
following flags to the afsd command line. If you intend for the machine to remain an
AFS client, also set any performance-related arguments you wish.

Add the -memcache flag if the machine is to use a memory cache.
Add the -verbose flag to display a trace of the Cache Manager's initialization on the standard output stream.
Add the -dynroot or -afsdb options if you wish to have a synthetic AFS root, as discussed in Enabling Access to Foreign Cells.
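As an illustration only: on Fedora or RHEL systems the packaged options file is /etc/sysconfig/openafs, and recent packages pass afsd its arguments through a variable such as AFSD_ARGS. Check the comments in the file itself, because the variable name and format vary between package versions; with that assumption, enabling a synthetic root might look like this.

AFSD_ARGS="-dynroot -afsdb"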
If appropriate, follow the instructions in Storing AFS Binaries in AFS to copy the
AFS binaries for this system type into an AFS volume. See the introduction to this section for further
discussion.

Completing the Installation

At this point you run the machine's AFS initialization script to verify that it correctly loads AFS modifications into
the kernel and starts the BOS Server, which starts the other server processes. If you have installed client files, the script
also starts the Cache Manager. If the script works correctly, perform the steps that incorporate it into the machine's startup
and shutdown sequence. If there are problems during the initialization, attempt to resolve them. The AFS Product Support group
can provide assistance if necessary.

If the machine is configured as a client using a disk cache, it can take a while for the afsd program to create all of the Vn files in the cache directory. Messages on the console trace the initialization process.

Issue the bos shutdown command to shut down the AFS server processes other than
the BOS Server. Include the -wait flag to delay return of the command shell prompt
until all processes shut down completely.
# /usr/afs/bin/bos shutdown <machine name> -wait

Issue the ps command to learn the BOS Server's process ID number (PID), and then the kill command to stop the bosserver process.

# ps appropriate_ps_options | grep bosserver
# kill bosserver_PID

Run the AFS initialization script by issuing the appropriate commands for this system type.

On AIX systems:

Reboot the machine and log in again as the local superuser root.
# cd /
# shutdown -r now
login: root
Password: root_password

Run the AFS initialization script.

# /etc/rc.afs

Edit the AIX initialization file, /etc/inittab, adding the following line
to invoke the AFS initialization script. Place it just after the line that starts NFS daemons.
rcafs:2:wait:/etc/rc.afs > /dev/console 2>&1 # Start AFS services
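Instead of editing /etc/inittab by hand, you can add the record with the AIX mkitab command. The sketch below assumes the NFS daemons' record is identified as rcnfs, which is typical; the -i flag places the new record immediately after it.

# mkitab -i rcnfs "rcafs:2:wait:/etc/rc.afs > /dev/console 2>&1"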
(Optional) There are now copies of the AFS initialization file in both the
/usr/vice/etc and /etc directories. If you want
to avoid potential confusion by guaranteeing that they are always the same, create a link between them. You can
always retrieve the original script from the AFS CD-ROM if necessary.
# cd /usr/vice/etc
# rm rc.afs
# ln -s /etc/rc.afs

Proceed to Step 4.

On HP-UX systems:

Run the AFS initialization script.

# /sbin/init.d/afs start

Change to the /sbin/init.d directory and issue the ln
-s command to create symbolic links that incorporate the AFS initialization script into the HP-UX
startup and shutdown sequence.
# cd /sbin/init.d
# ln -s ../init.d/afs /sbin/rc2.d/S460afs
# ln -s ../init.d/afs /sbin/rc2.d/K800afs

(Optional) There are now copies of the AFS initialization file in both the
/usr/vice/etc and /sbin/init.d directories. If
you want to avoid potential confusion by guaranteeing that they are always the same, create a link between them.
You can always retrieve the original script from the AFS CD-ROM if necessary.
# cd /usr/vice/etc
# rm afs.rc
# ln -s /sbin/init.d/afs afs.rc

Proceed to Step 4.

On IRIX systems:

If you have configured the machine to use the ml dynamic loader program,
reboot the machine and log in again as the local superuser root.
# cd /
# shutdown -i6 -g0 -y
login: root
Password: root_password

Issue the chkconfig command to activate the afsserver configuration variable.

# /etc/chkconfig -f afsserver on

If you have configured this machine as an AFS client and want it to remain one, also issue the chkconfig command to activate the afsclient configuration variable.

# /etc/chkconfig -f afsclient on

Run the AFS initialization script.

# /etc/init.d/afs start

Change to the /etc/init.d directory and issue the ln
-s command to create symbolic links that incorporate the AFS initialization script into the IRIX
startup and shutdown sequence.
# cd /etc/init.d
# ln -s ../init.d/afs /etc/rc2.d/S35afs
# ln -s ../init.d/afs /etc/rc0.d/K35afs

(Optional) There are now copies of the AFS initialization file in both the
/usr/vice/etc and /etc/init.d directories. If
you want to avoid potential confusion by guaranteeing that they are always the same, create a link between them.
You can always retrieve the original script from the AFS CD-ROM if necessary.
# cd /usr/vice/etc
# rm afs.rc
# ln -s /etc/init.d/afs afs.rc

Proceed to Step 4.

On Fedora or RHEL Linux systems:

Reboot the machine and log in again as the local superuser root.
# cd /
# shutdown -r now
login: root
Password: root_password

Run the OpenAFS initialization scripts.
# /etc/rc.d/init.d/openafs-client start
# /etc/rc.d/init.d/openafs-server start

Issue the chkconfig
command to activate the
openafs-client and
openafs-server configuration
variables. Based on the instruction in the AFS initialization
files that begins with the string
#chkconfig, the command
automatically creates the symbolic links that incorporate the
script into the Linux startup and shutdown sequence.
# /sbin/chkconfig --add openafs-client
# /sbin/chkconfig --add openafs-server

On Linux systems:

Reboot the machine and log in again as the local superuser root.
# cd /
# shutdown -r now
login: root
Password: root_password

Run the OpenAFS initialization script.

# /etc/rc.d/init.d/afs start

Issue the chkconfig command to activate the afs configuration variable. Based on the instruction in the AFS initialization file that
begins with the string #chkconfig, the command automatically creates the symbolic
links that incorporate the script into the Linux startup and shutdown sequence.
# /sbin/chkconfig --add afs

(Optional) There are now copies of the AFS initialization file in both the
/usr/vice/etc and /etc/rc.d/init.d directories,
and copies of the afsd options file in both the /usr/vice/etc and /etc/sysconfig directories. If you want
to avoid potential confusion by guaranteeing that the two copies of each file are always the same, create a link
between them. You can always retrieve the original script or options file from the AFS CD-ROM if necessary.
# cd /usr/vice/etc
# rm afs.rc afs.conf
# ln -s /etc/rc.d/init.d/afs afs.rc
# ln -s /etc/sysconfig/afs afs.conf

Proceed to Step 4.

On Solaris systems:

Reboot the machine and log in again as the local superuser root.
# cd /
# shutdown -i6 -g0 -y
login: root
Password: root_password

Run the AFS initialization script.

# /etc/init.d/afs start

Change to the /etc/init.d directory and issue the ln
-s command to create symbolic links that incorporate the AFS initialization script into the Solaris
startup and shutdown sequence.
# cd /etc/init.d
# ln -s ../init.d/afs /etc/rc3.d/S99afs
# ln -s ../init.d/afs /etc/rc0.d/K66afs

(Optional) There are now copies of the AFS initialization file in both the
/usr/vice/etc and /etc/init.d directories. If
you want to avoid potential confusion by guaranteeing that they are always the same, create a link between them.
You can always retrieve the original script from the OpenAFS Binary Distribution if necessary.
# cd /usr/vice/etc
# rm afs.rc
# ln -s /etc/init.d/afs afs.rcVerify that /usr/afs and its subdirectories on the new
file server machine meet the ownership and mode bit requirements outlined in Protecting
Sensitive AFS Directories. If necessary, use the chmod command to correct the
mode bits, as illustrated in the sketch below.
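The following is one way to perform the check; the directories listed and the 700 mode shown are illustrative only, and the authoritative ownership and mode bit requirements are those given in Protecting Sensitive AFS Directories.
# ls -ld /usr/afs /usr/afs/bin /usr/afs/etc /usr/afs/db /usr/afs/local
# chmod 700 /usr/afs/local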
To configure this machine as a database server machine, proceed to Installing Database
Server Functionality.Installing Database Server FunctionalityThis section explains how to install database server functionality. Database server machines have two defining
characteristics. First, they run the Protection Server and Volume Location (VL) Server processes. They
also run the Backup Server if the cell uses the AFS Backup System, as is assumed in these instructions. Second, they appear in
the CellServDB file of every machine in the cell (and of client machines in foreign cells, if
they are to access files in this cell).Note the following requirements for database server machines. In the conventional configuration, database server machines also serve as file server machines (run the File Server,
Volume Server and Salvager processes). If you choose not to run file server functionality on a database server machine,
then the kernel does not have to incorporate AFS modifications, but the local /usr/afs
directory must house most of the standard files and subdirectories. In particular, the /usr/afs/etc/KeyFile file must contain the same keys as all other server machines in the cell. If
you run a system control machine, run the upclientetc process on every database server
machine other than the system control machine; if you do not run a system control machine, use the bos addkey command as instructed in the chapter in the OpenAFS Administration
Guide about maintaining server encryption keys.The instructions in this section assume that the machine on which you are installing database server functionality
is already a file server machine. Contact the OpenAFS mailing list to learn how to install database server
functionality on a non-file server machine.During the installation of database server functionality, you must restart all of the database server machines to
force the election of a new Ubik coordinator (synchronization site) for each database server process. This can cause a
system outage, which usually lasts less than 5 minutes.Updating the kernel memory list of database server machines on each client machine is generally the most
time-consuming part of installing a new database server machine. It is, however, crucial for correct functioning in your
cell. Incorrect knowledge of your cell's database server machines can prevent your users from authenticating, accessing
files, and issuing AFS commands.You update a client's kernel memory list by changing the /usr/vice/etc/CellServDB
file and then either rebooting or issuing the fs newcell command. For instructions, see
the chapter in the OpenAFS Administration Guide about administering client machines.The point at which you update your clients' knowledge of database server machines depends on which of the database
server machines has the lowest IP address. The following instructions indicate the appropriate place to update your client
machines in either case. If the new database server machine has a lower IP address than any existing database server machine, update
the CellServDB file on every client machine before restarting the database server
processes. If you do not, users can become unable to update (write to) any of the AFS databases. This is because the
machine with the lowest IP address is usually elected as the Ubik coordinator, and only the coordinator accepts
database writes. On client machines that do not have the new list of database server machines, the Cache Manager
cannot locate the new coordinator. (Be aware that if clients contact the new coordinator before it is actually in
service, they experience a timeout before contacting another database server machine. This is a minor, and
temporary, problem compared to being unable to write to the database.)If the new database server machine does not have the lowest IP address of any database server machine, then it
is better to update clients after restarting the database server processes. Client machines do not start using the
new database server machine until you update their kernel memory list, but that does not usually cause timeouts or
update problems (because the new machine is not likely to become the coordinator).Summary of ProceduresTo install a database server machine, perform the following procedures. Install the bos suite of commands locally, as a precautionAdd the new machine to the /usr/afs/etc/CellServDB file on existing file server
machinesUpdate your cell's central CellServDB source file and the file you make available
to foreign cellsUpdate every client machine's /usr/vice/etc/CellServDB file and kernel memory
list of database server machinesStart the database server processes (Backup Server, Protection Server, and Volume Location
Server)Restart the database server processes on every database server machineIf required, request that grand.central.org add details of
your new database server machine to the global CellServDBIf required, add details of your new database server to the
AFS database location records in your site's DNSInstructionsIt is assumed that your PATH environment variable includes the directory that houses the AFS command binaries. If not,
you may need to precede the command names with the appropriate pathname.You can perform the following instructions on either a server or client machine. Log in as an AFS administrator who
is listed in the /usr/afs/etc/UserList file on all server machines.
% kinit admin_user
Password: admin_password
% aklogIf you are working on a client machine configured in the conventional manner, the bos command suite resides in the /usr/afsws/bin directory, a
symbolic link to an AFS directory. An error during installation can potentially block access to AFS, in which case it is
helpful to have a copy of the bos binary on the local disk. This step is not necessary if
you are working on a server machine, where the binary resides in the local /usr/afs/bin
directory.
% cp /usr/afsws/bin/bos /tmpIssue the bos addhost command to add the new database server
machine to the /usr/afs/etc/CellServDB file on existing server machines (as well as the
new database server machine itself).Substitute the new database server machine's fully-qualified hostname for the host name
argument. If you run a system control machine, substitute its fully-qualified hostname for the
machine name argument. If you do not run a system control machine, repeat the bos addhost command once for each server machine in your cell (including the new database server
machine itself), by substituting each one's fully-qualified hostname for the machine name
argument in turn.
% bos addhost <machine name> <host name>
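For example, in a cell with no system control machine, two existing database server machines, one additional file server machine, and a new database server machine being added, the sequence might look like the following (all hostnames are purely illustrative):
% bos addhost db1.example.com db3.example.com
% bos addhost db2.example.com db3.example.com
% bos addhost fs1.example.com db3.example.com
% bos addhost db3.example.com db3.example.com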
If you run a system control machine, wait for the Update Server to distribute the new CellServDB file, which takes up to five minutes by default. If you are issuing individual bos addhost commands, attempt to issue all of them within five minutes.It is best to maintain a one-to-one mapping between hostnames and IP addresses on a multihomed database server
machine (the conventional configuration for any AFS machine). The BOS Server uses the gethostbyname() routine to obtain the IP address associated with the host
name argument. If there is more than one address, the BOS Server records in the CellServDB entry the one that appears first in the list of addresses returned by the routine. The
routine may return addresses in a different order on different machines, which can create inconsistency.(Optional) Issue the bos listhosts command on each
server machine to verify that the new database server machine appears in its CellServDB
file.
% bos listhosts <machine name>
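The output lists the cell name followed by one line per database server machine; with the illustrative hostnames used above it might resemble the following (the exact wording can vary between OpenAFS releases):
Cell name is example.com
    Host 1 is db1.example.com
    Host 2 is db2.example.com
    Host 3 is db3.example.com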
Add the new database server machine to your cell's central CellServDB source file, if you use one. The standard location is /afs/cellname/common/etc/CellServDB.If you are willing to make your cell accessible to users in foreign cells, add the new database server machine to
the file that lists your cell's database server machines. The conventional location is /afs/cellname/service/etc/CellServDB.local. A sample entry appears below.
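In both files, each cell's entry begins with a line consisting of a greater-than sign and the cell name, followed by one line per database server machine giving its IP address and, after a number sign, its hostname. A hedged sample for the illustrative cell used above (the addresses and hostnames are hypothetical):
>example.com            #Example Corporation cell
192.0.2.11              #db1.example.com
192.0.2.12              #db2.example.com
192.0.2.13              #db3.example.com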
If this machine's IP address is lower than any existing database server machine's, update
every client machine's /usr/vice/etc/CellServDB file and kernel memory list to include
this machine. (If this machine's IP address is not the lowest, it is acceptable to wait until Step 12.)There are several ways to update the CellServDB file on client machines, as
detailed in the chapter of the OpenAFS Administration Guide about administering client machines. One
option is to copy over the central update source (which you updated in Step 5), with or
without using the package program. To update the machine's kernel memory list, you can
either reboot after changing the CellServDB file or issue the fs
newcell command.
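For instance, one possible sequence on a client machine (run as the local superuser, and assuming the central source file location and illustrative hostnames used above) is to copy the updated file into place and then inform the Cache Manager without rebooting:
# cp /afs/example.com/common/etc/CellServDB /usr/vice/etc/CellServDB
# fs newcell example.com db1.example.com db2.example.com db3.example.com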
If you are running a cell which still relies upon
kaserver, see
Starting the Authentication Service
for an additional installation step.Start the Backup Server (the buserver process). You must
perform other configuration procedures before actually using the AFS Backup System, as detailed in the OpenAFS
Administration Guide.
% bos create <machine name> buserver simple /usr/afs/bin/buserverStart the Protection Server (the ptserver process).
% bos create <machine name> ptserver simple /usr/afs/bin/ptserverStart the Volume Location (VL) Server (the vlserver
process).
% bos create <machine name> vlserver simple /usr/afs/bin/vlserverIssue the bos restart command on every database server
machine in the cell, including the new machine. The command restarts the Backup, Protection, and VL
Servers (and the Authentication Server, in cells that still run the legacy kaserver; otherwise omit kaserver from the command below), which forces an election of a new Ubik coordinator for each process. The new machine votes in the election and is
considered as a potential new coordinator.A cell-wide service outage is possible during the election of a new coordinator for the VL Server, but it normally
lasts less than five minutes. Such an outage is particularly likely if you are installing your cell's second database
server machine. Messages tracing the progress of the election appear on the console.Repeat this command on each of your cell's database server machines in quick succession. Begin with the machine with
the lowest IP address.
% bos restart <machine name> kaserver buserver ptserver vlserverIf an error occurs, restart all server processes on the database server machines again by using one of the following
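(Optional) After restarting each machine, you can confirm that its database server processes have come back up by querying the BOS Server; a minimal check, using the same placeholder machine name, is:
% bos status <machine name> buserver ptserver vlserver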
methods: Issue the bos restart command with the -bosserver flag for each database server machineReboot each database server machine, either using the bos exec command or at
its consoleIf you did not update the CellServDB file on client machines
in Step 6, do so now.If you wish to participate in the AFS
global name space, send the new database server machine's name and
IP address to grand.central.org. Do so by emailing an updated
CellServDB fragment for your cell
to cellservdb@grand.central.org.More details on the registration procedures for the
CellServDB maintained by grand.central.org are available from
http://grand.central.org/csdb.htmlRemoving Database Server FunctionalityRemoving database server machine functionality is nearly the reverse of installing it.Summary of ProceduresTo decommission a database server machine, perform the following procedures. Install the bos suite of commands locally, as a precautionIf you participate in the global AFS namespace, notify
grand.central.org that you are decommissioning a database server
machineUpdate your cell's central CellServDB source file and the file you make available
to foreign cellsUpdate every client machine's /usr/vice/etc/CellServDB file and kernel memory
list of database server machinesRemove the machine from the /usr/afs/etc/CellServDB file on file server
machinesStop the database server processes and remove them from the /usr/afs/local/BosConfig file if desiredRestart the database server processes on the remaining database server machinesInstructionsIt is assumed that your PATH environment variable includes the directory that houses the AFS command binaries. If not,
you may need to precede the command names with the appropriate pathname.You can perform the following instructions on either a server or client machine. Log in as an AFS administrator who
is listed in the /usr/afs/etc/UserList file on all server machines.
% kinit admin_user
Password: admin_password
% aklogIf you are working on a client machine configured in the conventional manner, the bos command suite resides in the /usr/afsws/bin directory, a
symbolic link to an AFS directory. An error during installation can potentially block access to AFS, in which case it is
helpful to have a copy of the bos binary on the local disk. This step is not necessary if
you are working on a server machine, where the binary resides in the local /usr/afs/bin
directory.
% cp /usr/afsws/bin/bos /tmpIf your cell is included in the global
CellServDB, send the revised list of
your cell's database server machines to grand.central.org.If the administrators in foreign cells do not learn about the change in your cell,
they cannot update the CellServDB file on their client machines. Users in foreign cells
continue to send database requests to the decommissioned machine, which creates needless network traffic and activity on
the machine. Also, the users experience time-out delays while their request is forwarded to a valid database server
machine.Remove the decommissioned machine from your cell's central CellServDB source file, if you use one. The conventional location is /afs/cellname/common/etc/CellServDB.If you maintain a file that users in foreign cells can access to learn about your cell's database server machines,
update it also. The conventional location is /afs/cellname/service/etc/CellServDB.local. Update every client machine's /usr/vice/etc/CellServDB file
and kernel memory list to exclude this machine. Altering the CellServDB file and kernel
memory list before stopping the actual database server processes avoids possible time-out delays that result when users
send requests to a decommissioned database server machine that is still listed in the file.There are several ways to update the CellServDB file on client machines, as
detailed in the chapter of the OpenAFS Administration Guide about administering client machines. One
option is to copy over the central update source (which you updated in Step 5), with or
without using the package program. To update the machine's kernel memory list, you can
either reboot after changing the CellServDB file or issue the fs
newcell command. Issue the bos removehost command to remove the
decommissioned database server machine from the /usr/afs/etc/CellServDB file on server
machines.Substitute the decommissioned database server machine's fully-qualified hostname for the host
name argument. If you run a system control machine, substitute its fully-qualified hostname for the
machine name argument. If you do not run a system control machine, repeat the bos removehost command once for each server machine in your cell (including the decommissioned
database server machine itself), by substituting each one's fully-qualified hostname for the
machine name argument in turn.
% bos removehost <machine name> <host name>
If you run a system control machine, wait for the Update Server to distribute the new CellServDB file, which takes up to five minutes by default. If issuing individual bos removehost commands, attempt to issue all of them within five minutes.(Optional) Issue the bos listhosts command on each
server machine to verify that the decommissioned database server machine no longer appears in its CellServDB file.
% bos listhosts <machine name>
Issue the bos stop command to stop the database server
processes on the machine, by substituting its fully-qualified hostname for the
machine name argument. The command changes each process's status in the /usr/afs/local/BosConfig file to NotRun, but does not remove its
entry from the file.
% bos stop <machine name> kaserver buserver ptserver vlserver(Optional) Issue the bos
delete command to remove the entries for database server processes from the BosConfig file. This step is unnecessary if you plan to restart the database server functionality
on this machine in the future.
% bos delete <machine name> buserver ptserver vlserverIssue the bos restart command on every database server
machine in the cell, to restart the Backup, Protection, and VL Servers. This forces the election of a Ubik
coordinator for each process, ensuring that the remaining database server processes recognize that the machine is no
longer a database server.A cell-wide service outage is possible during the election of a new coordinator for the VL Server, but it normally
lasts less than five minutes. Messages tracing the progress of the election appear on the console.Repeat this command on each of your cell's database server machines in quick succession. Begin with the machine with
the lowest IP address.
% bos restart <machine name> buserver ptserver vlserverIf an error occurs, restart all server processes on the database server machines again by using one of the following
methods: Issue the bos restart command with the -bosserver flag for each database server machineReboot each database server machine, either using the bos exec command or at
its console. Hedged examples of both methods appear below.
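For example, assuming the placeholder machine name used throughout this chapter, the two methods might look like the following (the shutdown command is illustrative and varies by operating system):
% bos restart <machine name> -bosserver
% bos exec <machine name> "/sbin/shutdown -r now"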