<?xml version="1.0" encoding="UTF-8"?>
<chapter id="HDRWQ17">
<title>Installing the First AFS Machine</title>
<indexterm>
<primary>file server machine</primary>
<seealso>first AFS machine</seealso>
<seealso>file server machine, additional</seealso>
</indexterm>
<indexterm>
<primary>instructions</primary>
<secondary>first AFS machine</secondary>
</indexterm>
<indexterm>
<primary>installing</primary>
<secondary>first AFS machine</secondary>
</indexterm>
<para>This chapter describes how to install the first AFS machine in your cell, configuring it as both a file server machine and a
client machine. After completing all procedures in this chapter, you can remove the client functionality if you wish, as described
in <link linkend="HDRWQ98">Removing Client Functionality</link>.</para>
<para>To install additional file server machines after completing this chapter, see <link linkend="HDRWQ99">Installing Additional
Server Machines</link>.</para>
<para>To install additional client machines after completing this chapter, see <link linkend="HDRWQ133">Installing Additional
Client Machines</link>. <indexterm>
<primary>requirements</primary>
<secondary>first AFS machine</secondary>
</indexterm></para>
<sect1 id="Header_29">
<title>Requirements and Configuration Decisions</title>
<para>The instructions in this chapter assume that you meet the following requirements.
<itemizedlist>
<listitem>
<para>You are logged onto the machine's console as the local superuser <emphasis role="bold">root</emphasis></para>
</listitem>
<listitem>
<para>A standard version of one of the operating systems supported by the current version of AFS is running on the
machine</para>
</listitem>
<listitem>
<para>You have either installed the provided OpenAFS packages for
your system, have access to a binary distribution tarball, or have
successfully built OpenAFS from source (a brief source-build sketch
follows this list)</para>
</listitem>
<listitem>
<para>You have a Kerberos v5 realm running for your site. If you are
working with an existing cell which uses
<emphasis role="bold">kaserver</emphasis> or Kerberos v4 for
authentication, please see
<link linkend="KAS001">kaserver and Legacy Kerberos v4 Authentication</link>
for the modifications required to this installation procedure.</para>
</listitem>
<listitem>
<para>You have NTP, or a similar time service, deployed to ensure
rough clock synchronisation between your clients and servers. If you
wish to use AFS's built-in time service (which is deprecated), please
see Appendix B for the necessary modifications to this installation
procedure.</para>
</listitem>
</itemizedlist></para>
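<para>If you are building from source, note that a build configured with the traditional AFS installation paths
produces a <emphasis role="bold">dest</emphasis> directory tree whose layout matches the binary distribution
tarballs referenced later in this chapter. The exact <emphasis role="bold">configure</emphasis> options required
vary by platform and release, so treat the following only as an illustrative sketch and consult the build
documentation included with the source distribution. <programlisting>
# <emphasis role="bold">cd</emphasis> <replaceable>openafs_source_directory</replaceable>
# <emphasis role="bold">./configure --enable-transarc-paths</emphasis>
# <emphasis role="bold">make &amp;&amp; make dest</emphasis>
</programlisting></para>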
<para>You must make the following configuration decisions while installing the first AFS machine. To speed the installation
itself, it is best to make the decisions before beginning. See the chapter in the <emphasis>OpenAFS Administration
Guide</emphasis> about issues in cell administration and configuration for detailed guidelines. <indexterm>
<primary>cell name</primary>
<secondary>choosing</secondary>
</indexterm> <indexterm>
<primary>AFS filespace</primary>
<secondary>deciding how to configure</secondary>
</indexterm> <indexterm>
<primary>filespace</primary>
<see>AFS filespace</see>
</indexterm> <itemizedlist>
<listitem>
<para>Select the first AFS machine</para>
</listitem>
<listitem>
<para>Select the cell name</para>
</listitem>
<listitem>
<para>Decide which partitions or logical volumes to configure as AFS server partitions, and choose the directory names on
which to mount them</para>
</listitem>
<listitem>
<para>Decide how big to make the client cache</para>
</listitem>
<listitem>
<para>Decide how to configure the top levels of your cell's AFS filespace</para>
</listitem>
</itemizedlist></para>
<para>This chapter is divided into three large sections corresponding to the three parts of installing the first AFS machine.
Perform all of the steps in the order they appear. Each functional section begins with a summary of the procedures to perform.
The sections are as follows: <itemizedlist>
<listitem>
<para>Installing server functionality (begins in <link linkend="HDRWQ18">Overview: Installing Server
Functionality</link>)</para>
</listitem>
<listitem>
<para>Installing client functionality (begins in <link linkend="HDRWQ63">Overview: Installing Client
Functionality</link>)</para>
</listitem>
<listitem>
<para>Configuring your cell's filespace, establishing further security mechanisms, and enabling access to foreign cells
(begins in <link linkend="HDRWQ71">Overview: Completing the Installation of the First AFS Machine</link>)</para>
</listitem>
</itemizedlist></para>
<indexterm>
<primary>overview</primary>
<secondary>installing server functionality on first AFS machine</secondary>
</indexterm>
<indexterm>
<primary>first AFS machine</primary>
<secondary>server functionality</secondary>
</indexterm>
<indexterm>
<primary>installing</primary>
<secondary>server functionality</secondary>
<tertiary>first AFS machine</tertiary>
</indexterm>
</sect1>
<sect1 id="HDRWQ18">
<title>Overview: Installing Server Functionality</title>
<para>In the first phase of installing your cell's first AFS machine, you install file server and database server functionality
by performing the following procedures:
<orderedlist>
<listitem>
<para>Choose which machine to install as the first AFS machine</para>
</listitem>
<listitem>
<para>Create AFS-related directories on the local disk</para>
</listitem>
<listitem>
<para>Incorporate AFS modifications into the machine's kernel</para>
</listitem>
<listitem>
<para>Configure partitions or logical volumes for storing AFS volumes</para>
</listitem>
<listitem>
<para>On some system types, install and configure an AFS-modified version of the <emphasis role="bold">fsck</emphasis>
program</para>
</listitem>
<listitem>
<para>If the machine is to remain a client machine, incorporate AFS into its authentication system</para>
</listitem>
<listitem>
<para>Start the Basic OverSeer (BOS) Server</para>
</listitem>
<listitem>
<para>Define the cell name and the machine's cell membership</para>
</listitem>
<listitem>
<para>Start the database server processes: Backup Server, Protection Server, and Volume Location
(VL) Server</para>
</listitem>
<listitem>
<para>Configure initial security mechanisms</para>
</listitem>
<listitem>
<para>Start the <emphasis role="bold">fs</emphasis> process, which incorporates three component processes: the File
Server, Volume Server, and Salvager</para>
</listitem>
<listitem>
<para>Start the server portion of the Update Server</para>
</listitem>
</orderedlist></para>
</sect1>
<sect1 id="HDRWQ19">
<title>Choosing the First AFS Machine</title>
<para>The first AFS machine you install must have sufficient disk space to store AFS volumes. To take best advantage of AFS's
capabilities, store client-side binaries as well as user files in volumes. When you later install additional file server
machines in your cell, you can distribute these volumes among the different machines as you see fit.</para>
<para>These instructions configure the first AFS machine as a <emphasis>database server machine</emphasis>, the <emphasis>binary
distribution machine</emphasis> for its system type, and the cell's <emphasis>system control machine</emphasis>. For a
description of these roles, see the <emphasis>OpenAFS Administration Guide</emphasis>.</para>
<para>Installation of additional machines is simplest if the first machine has the lowest IP address of any database server
machine you currently plan to install. If you later install database server functionality on a machine with a lower IP address,
you must first update the <emphasis role="bold">/usr/vice/etc/CellServDB</emphasis> file on all of your cell's client machines.
For more details, see <link linkend="HDRWQ114">Installing Database Server Functionality</link>.</para>
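<para>For reference, a client <emphasis role="bold">CellServDB</emphasis> entry lists the cell name on a line
beginning with <computeroutput>&gt;</computeroutput>, followed by one line per database server machine giving its
IP address and hostname. The cell name and address below are placeholders, shown purely as an illustration:
<programlisting>
&gt;example.com            #Example Corporation cell
192.0.2.10              #db1.example.com
</programlisting></para>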
</sect1>
<sect1 id="Header_32">
<title>Creating AFS Directories</title>
<indexterm>
<primary>usr/afs directory</primary>
<secondary>first AFS machine</secondary>
</indexterm>
<indexterm>
<primary>first AFS machine</primary>
<secondary>/usr/afs directory</secondary>
</indexterm>
<indexterm>
<primary>creating</primary>
<secondary>/usr/afs directory</secondary>
<tertiary>first AFS machine</tertiary>
</indexterm>
<indexterm>
<primary>usr/vice/etc directory</primary>
<secondary>first AFS machine</secondary>
</indexterm>
<indexterm>
<primary>first AFS machine</primary>
<secondary>/usr/vice/etc directory</secondary>
</indexterm>
<indexterm>
<primary>creating</primary>
<secondary>/usr/vice/etc directory</secondary>
<tertiary>first AFS machine</tertiary>
</indexterm>
<indexterm>
<primary>/ as start to file and directory names</primary>
<secondary>see alphabetized entries without initial slash</secondary>
</indexterm>
<para>If you are installing from packages (such as Debian .deb or
Fedora/SuSE .rpm files), you should now install all of the available
OpenAFS packages for your system type. Typically, these include
packages for client and server functionality, and a separate package
containing a suitable kernel module for your running kernel. Consult
the package lists on the OpenAFS website to determine the packages
appropriate for your system.</para>
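<para>Package names differ between distributions and releases, so the commands below are only an illustrative
sketch; check your distribution's package lists for the correct names. On an RPM-based system the installation
might look like the first example, and on a Debian-based system like the second. <programlisting>
# <emphasis role="bold">yum install openafs openafs-client openafs-server openafs-krb5 kmod-openafs</emphasis>
</programlisting> <programlisting>
# <emphasis role="bold">apt-get install openafs-client openafs-fileserver openafs-dbserver openafs-krb5 openafs-modules-dkms</emphasis>
</programlisting></para>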
<para>If you are installing from a tarfile, or from a locally compiled
source tree, you should create the <emphasis role="bold">/usr/afs</emphasis>
and <emphasis role="bold">/usr/vice/etc</emphasis> directories on the
local disk, to house server and client files respectively. Subsequent
instructions copy files from the distribution tarfile into them.</para>
<programlisting>
# <emphasis role="bold">mkdir /usr/afs</emphasis>
# <emphasis role="bold">mkdir /usr/vice</emphasis>
# <emphasis role="bold">mkdir /usr/vice/etc</emphasis>
</programlisting>
</sect1>
<sect1 id="HDRWQ20">
<title>Performing Platform-Specific Procedures</title>
<para>Several of the initial procedures for installing a file server machine differ for each system type. For convenience, the
following sections group them together for each system type: <itemizedlist>
<indexterm>
<primary>kernel extensions</primary>
<see>AFS kernel extensions</see>
</indexterm>
<indexterm>
<primary>loading AFS kernel extensions</primary>
<see>incorporating</see>
</indexterm>
<indexterm>
<primary>building</primary>
<secondary>AFS extensions into kernel</secondary>
<see>incorporating AFS kernel extensions</see>
</indexterm>
<listitem>
<para>Incorporate AFS modifications into the kernel.</para>
<para>The kernel on every AFS client machine, and on some systems
on the AFS file servers as well, must incorporate AFS extensions. On machines
that use a dynamic kernel module loader, it is conventional to
alter the machine's initialization script to load the AFS extensions
at each reboot. <indexterm>
<primary>AFS server partition</primary>
<secondary>mounted on /vicep directory</secondary>
</indexterm> <indexterm>
<primary>partition</primary>
<see>AFS server partition</see>
</indexterm> <indexterm>
<primary>logical volume</primary>
<see>AFS server partition</see>
</indexterm> <indexterm>
<primary>requirements</primary>
<secondary>AFS server partition name and location</secondary>
</indexterm> <indexterm>
<primary>naming conventions for AFS server partition</primary>
</indexterm> <indexterm>
<primary>vicep<emphasis>xx</emphasis> directory</primary>
<see>AFS server partition</see>
</indexterm> <indexterm>
<primary>directories</primary>
<secondary>/vicep<emphasis>xx</emphasis></secondary>
<see>AFS server partition</see>
</indexterm></para>
</listitem>
<listitem>
<para>Configure server partitions or logical volumes to house AFS volumes.</para>
<para>Every AFS file server machine must have at least one partition or logical volume dedicated to storing AFS volumes
(for convenience, the documentation hereafter refers to partitions only). Each server partition is mounted at a directory
named <emphasis role="bold">/vicep</emphasis><replaceable>xx</replaceable>, where <replaceable>xx</replaceable> is one or
two lowercase letters. By convention, the first 26 partitions are mounted on the directories called <emphasis
role="bold">/vicepa</emphasis> through <emphasis role="bold">/vicepz</emphasis>, the 27th one is mounted on the <emphasis
role="bold">/vicepaa</emphasis> directory, and so on through <emphasis role="bold">/vicepaz</emphasis> and <emphasis
role="bold">/vicepba</emphasis>, continuing up to the index corresponding to the maximum number of server partitions
supported in the current version of AFS (which is specified in the <emphasis>OpenAFS Release Notes</emphasis>).</para>
<para>The <emphasis role="bold">/vicep</emphasis><replaceable>xx</replaceable> directories must reside in the file server
machine's root directory, not in one of its subdirectories (for example, <emphasis role="bold">/usr/vicepa</emphasis> is
not an acceptable directory location).</para>
<para>You can also add or remove server partitions on an existing file server machine. For instructions, see the chapter
in the <emphasis>OpenAFS Administration Guide</emphasis> about maintaining server machines.</para>
<note>
<para>Not all file system types supported by an operating system are necessarily supported as AFS server partitions. For
possible restrictions, see the <emphasis>OpenAFS Release Notes</emphasis>.</para>
</note>
</listitem>
<listitem>
<para>On some system types, install and configure a modified <emphasis role="bold">fsck</emphasis> program which
recognizes the structures that the File Server uses to organize volume data on AFS server partitions. The <emphasis
role="bold">fsck</emphasis> program provided with the operating system does not understand the AFS data structures, and so
removes them to the <emphasis role="bold">lost+found</emphasis> directory.</para>
</listitem>
<listitem>
<para>If the machine is to remain an AFS client machine, modify the machine's authentication system so that users obtain
an AFS token as they log into the local file system. Using AFS is simpler and more convenient for your users if you make
the modifications on all client machines. Otherwise, users must perform a two- or three-step login procedure (log in to the
local system, then obtain Kerberos credentials, and then issue the <emphasis role="bold">aklog</emphasis> command); an
illustrative sequence follows this list. For further discussion of AFS
authentication, see the chapter in the <emphasis>OpenAFS Administration Guide</emphasis> about cell configuration and
administration issues.</para>
</listitem>
</itemizedlist></para>
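<para>For reference, the manual procedure on a machine without integrated AFS login looks roughly like the
following (the username is a placeholder; <emphasis role="bold">tokens</emphasis> simply lists the tokens that
<emphasis role="bold">aklog</emphasis> obtained): <programlisting>
$ <emphasis role="bold">kinit</emphasis> <replaceable>username</replaceable>
Password for <replaceable>username</replaceable>@<replaceable>REALM</replaceable>:
$ <emphasis role="bold">aklog</emphasis>
$ <emphasis role="bold">tokens</emphasis>
</programlisting></para>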
<para>To continue, proceed to the appropriate section: <itemizedlist>
<listitem>
<para><link linkend="HDRWQ21">Getting Started on AIX Systems</link></para>
</listitem>
<listitem>
<para><link linkend="HDRWQ31">Getting Started on HP-UX Systems</link></para>
</listitem>
<listitem>
<para><link linkend="HDRWQ36">Getting Started on IRIX Systems</link></para>
</listitem>
<listitem>
<para><link linkend="HDRWQ41">Getting Started on Linux Systems</link></para>
</listitem>
<listitem>
<para><link linkend="HDRWQ45">Getting Started on Solaris Systems</link></para>
</listitem>
</itemizedlist></para>
</sect1>
<sect1 id="HDRWQ21">
<title>Getting Started on AIX Systems</title>
<para>Begin by running the AFS initialization script to call the AIX kernel extension facility, which dynamically loads AFS
modifications into the kernel. Then use the <emphasis role="bold">SMIT</emphasis> program to configure partitions for storing
AFS volumes, and replace the AIX <emphasis role="bold">fsck</emphasis> program helper with a version that correctly handles AFS
volumes. If the machine is to remain an AFS client machine, incorporate AFS into the AIX secondary authentication system.
<indexterm>
<primary>incorporating AFS kernel extensions</primary>
<secondary>first AFS machine</secondary>
<tertiary>AIX</tertiary>
</indexterm> <indexterm>
<primary>AFS kernel extensions</primary>
<secondary>on first AFS machine</secondary>
<tertiary>AIX</tertiary>
</indexterm> <indexterm>
<primary>first AFS machine</primary>
<secondary>AFS kernel extensions</secondary>
<tertiary>on AIX</tertiary>
</indexterm> <indexterm>
<primary>AIX</primary>
<secondary>AFS kernel extensions</secondary>
<tertiary>on first AFS machine</tertiary>
</indexterm></para>
<sect2 id="HDRWQ22">
<title>Loading AFS into the AIX Kernel</title>
<para>The AIX kernel extension facility is the dynamic kernel loader
provided by IBM Corporation. AIX does not support incorporation of
AFS modifications during a kernel build.</para>
<para>For AFS to function correctly, the kernel extension facility must run each time the machine reboots, so the AFS
initialization script (included in the AFS distribution) invokes it automatically. In this section you copy the script to the
conventional location and edit it to select the appropriate options depending on whether NFS is also to run.</para>
<para>After editing the script, you run it to incorporate AFS into the kernel. In later sections you verify that the script
correctly initializes all AFS components, then configure the AIX <emphasis role="bold">inittab</emphasis> file so that the
script runs automatically at reboot; a brief sketch of the <emphasis role="bold">inittab</emphasis> entry follows these steps. <orderedlist>
<listitem>
<para>Unpack the distribution tarball. The examples below assume
that you have unpacked the files into the
<emphasis role="bold">/tmp/afsdist</emphasis> directory. If you
pick a different location, substitute this in all of the following
examples. Once you have unpacked the distribution,
change directory as indicated.
<programlisting>
# <emphasis role="bold">cd /tmp/afsdist/rs_aix42/root.client/usr/vice/etc</emphasis>
</programlisting></para>
</listitem>
<listitem>
<para>Copy the AFS kernel library files to the local <emphasis role="bold">/usr/vice/etc/dkload</emphasis> directory,
and the AFS initialization script to the <emphasis role="bold">/etc</emphasis> directory. <programlisting>
# <emphasis role="bold">cp -rp dkload /usr/vice/etc</emphasis>
# <emphasis role="bold">cp -p rc.afs /etc/rc.afs</emphasis>
</programlisting></para>
</listitem>
<listitem>
<para>Edit the <emphasis role="bold">/etc/rc.afs</emphasis> script, setting the <computeroutput>NFS</computeroutput>
variable as indicated.</para>
<para>If the machine is not to function as an NFS/AFS Translator, set the <computeroutput>NFS</computeroutput> variable
as follows.</para>
<programlisting>
NFS=$NFS_NONE
</programlisting>
<para>If the machine is to function as an NFS/AFS Translator and is running AIX 4.2.1 or higher, set the
<computeroutput>NFS</computeroutput> variable as follows. Note that NFS must already be loaded into the kernel, which
happens automatically on systems running AIX 4.1.1 and later, as long as the file <emphasis
role="bold">/etc/exports</emphasis> exists.</para>
<programlisting>
NFS=$NFS_IAUTH
</programlisting>
</listitem>
<listitem>
<para>Invoke the <emphasis role="bold">/etc/rc.afs</emphasis> script to load AFS modifications into the kernel. You can
ignore any error messages about the inability to start the BOS Server or the Cache Manager or AFS client.
<programlisting>
# <emphasis role="bold">/etc/rc.afs</emphasis>
</programlisting></para>
</listitem>
</orderedlist></para>
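<para>Configuring the <emphasis role="bold">inittab</emphasis> entry itself is covered in a later section. As a
preview, such an entry is typically added with the AIX <emphasis role="bold">mkitab</emphasis> command; the entry
below is only a sketch, and the identifier and run level may differ at your site. <programlisting>
# <emphasis role="bold">mkitab "rcafs:2:wait:/etc/rc.afs &gt; /dev/console 2&gt;&amp;1 # Start AFS services"</emphasis>
</programlisting></para>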
<indexterm>
<primary>configuring</primary>
<secondary>AFS server partition on first AFS machine</secondary>
<tertiary>AIX</tertiary>
</indexterm>
<indexterm>
<primary>AFS server partition</primary>
<secondary>configuring on first AFS machine</secondary>
<tertiary>AIX</tertiary>
</indexterm>
<indexterm>
<primary>first AFS machine</primary>
<secondary>AFS server partition</secondary>
<tertiary>on AIX</tertiary>
</indexterm>
<indexterm>
<primary>AIX</primary>
<secondary>AFS server partition</secondary>
<tertiary>on first AFS machine</tertiary>
</indexterm>
</sect2>
<sect2 id="HDRWQ23">
<title>Configuring Server Partitions on AIX Systems</title>
<para>Every AFS file server machine must have at least one partition or logical volume dedicated to storing AFS volumes. Each
server partition is mounted at a directory named <emphasis role="bold">/vicep</emphasis><replaceable>xx</replaceable>, where
<replaceable>xx</replaceable> is one or two lowercase letters. The <emphasis
role="bold">/vicep</emphasis><replaceable>xx</replaceable> directories must reside in the file server machine's root
directory, not in one of its subdirectories (for example, <emphasis role="bold">/usr/vicepa</emphasis> is not an acceptable
directory location). For additional information, see <link linkend="HDRWQ20">Performing Platform-Specific
Procedures</link>.</para>
<para>To configure server partitions on an AIX system, perform the following procedures: <orderedlist>
<listitem>
<para>Create a directory called <emphasis role="bold">/vicep</emphasis><replaceable>xx</replaceable> for each AFS server
partition you are configuring (there must be at least one). Repeat the command for each partition. <programlisting>
# <emphasis role="bold">mkdir /vicep</emphasis><replaceable>xx</replaceable>
</programlisting></para>
</listitem>
<listitem>
<para>Use the <emphasis role="bold">SMIT</emphasis> program to create a journaling file system on each partition to be
configured as an AFS server partition.</para>
</listitem>
<listitem>
<para>Mount each partition at one of the <emphasis role="bold">/vicep</emphasis><replaceable>xx</replaceable>
directories. Choose one of the following three methods: <itemizedlist>
<listitem>
<para>Use the <emphasis role="bold">SMIT</emphasis> program</para>
</listitem>
<listitem>
<para>Use the <emphasis role="bold">mount -a</emphasis> command to mount all partitions at once</para>
</listitem>
<listitem>
<para>Use the <emphasis role="bold">mount</emphasis> command on each partition in turn</para>
</listitem>
</itemizedlist></para>
<para>Also configure the partitions so that they are mounted automatically at each reboot. For more information, refer
to the AIX documentation.</para>
</listitem>
</orderedlist></para>
<indexterm>
<primary>replacing fsck program</primary>
<secondary>first AFS machine</secondary>
<tertiary>AIX</tertiary>
</indexterm>
<indexterm>
<primary>fsck program</primary>
<secondary>on first AFS machine</secondary>
<tertiary>AIX</tertiary>
</indexterm>
<indexterm>
<primary>first AFS machine</primary>
<secondary>fsck program</secondary>
<tertiary>on AIX</tertiary>
</indexterm>
<indexterm>
<primary>AIX</primary>
<secondary>fsck program</secondary>
<tertiary>on first AFS machine</tertiary>
</indexterm>
</sect2>
<sect2 id="HDRWQ24">
<title>Replacing the fsck Program Helper on AIX Systems</title>
<note><para>The AFS-modified fsck program is not required on AIX 5.1
systems, and the <emphasis role="bold">v3fshelper</emphasis> program
referred to below is not shipped for these systems.</para></note>
<para>In this section, you make modifications to guarantee that the appropriate <emphasis role="bold">fsck</emphasis> program
runs on AFS server partitions. The <emphasis role="bold">fsck</emphasis> program provided with the operating system must never
run on AFS server partitions. Because it does not recognize the structures that the File Server uses to organize volume data,
it removes all of the data. To repeat:</para>
<para><emphasis role="bold">Never run the standard fsck program on AFS server partitions. It discards AFS
volumes.</emphasis></para>
<para>On AIX systems, you do not replace the <emphasis role="bold">fsck</emphasis> binary itself, but rather the
<emphasis>program helper</emphasis> file included in the AIX distribution as <emphasis
role="bold">/sbin/helpers/v3fshelper</emphasis>. <orderedlist>
<listitem>
<para>Move the AIX <emphasis role="bold">fsck</emphasis> program helper to a safe location and install the version from
the AFS distribution in its place.
<programlisting>
# <emphasis role="bold">cd /sbin/helpers</emphasis>
# <emphasis role="bold">mv v3fshelper v3fshelper.noafs</emphasis>
# <emphasis role="bold">cp -p /tmp/afsdist/rs_aix42/root.server/etc/v3fshelper v3fshelper</emphasis>
</programlisting></para>
</listitem>
<listitem>
<para>If you plan to retain client functionality on this machine after completing the installation, proceed to <link
linkend="HDRWQ25">Enabling AFS Login on AIX Systems</link>. Otherwise, proceed to <link linkend="HDRWQ50">Starting the
BOS Server</link>.</para>
</listitem>
</orderedlist></para>
<indexterm>
<primary>enabling AFS login</primary>
<secondary>file server machine</secondary>
<tertiary>AIX</tertiary>
</indexterm>
<indexterm>
<primary>AFS login</primary>
<secondary>on file server machine</secondary>
<tertiary>AIX</tertiary>
</indexterm>
<indexterm>
<primary>first AFS machine</primary>
<secondary>AFS login</secondary>
<tertiary>on AIX</tertiary>
</indexterm>
<indexterm>
<primary>AIX</primary>
<secondary>AFS login</secondary>
<tertiary>on file server machine</tertiary>
</indexterm>
<indexterm>
<primary>secondary authentication system (AIX)</primary>
<secondary>server machine</secondary>
</indexterm>
</sect2>
<sect2 id="HDRWQ25">
<title>Enabling AFS Login on AIX Systems</title>
<note>
<para>If you plan to remove client functionality from this machine after completing the installation, skip this section and
proceed to <link linkend="HDRWQ50">Starting the BOS Server</link>.</para>
</note>
<para>In modern AFS installations, you should be using Kerberos v5
for user login, and obtaining AFS tokens following this authentication
step.</para>
<para>There are currently no instructions available on configuring AIX to
obtain AFS tokens automatically at login. Following login, users can
obtain tokens by running the <emphasis role="bold">aklog</emphasis>
command.</para>
<para>Sites which still require <emphasis role="bold">kaserver</emphasis>
or external Kerberos v4 authentication should consult
<link linkend="KAS012">Enabling kaserver based AFS login on AIX systems</link>
for details of how to enable AIX login.</para>
<para>Proceed to <link linkend="HDRWQ50">Starting the BOS Server</link>
(or if referring to these instructions while installing an additional
file server machine, return to <link linkend="HDRWQ108">Starting Server
Programs</link>).</para>
</sect2>
</sect1>
<sect1 id="HDRWQ31">
<title>Getting Started on HP-UX Systems</title>
<para>Begin by building AFS modifications into a new kernel; HP-UX
does not support dynamic loading. Then create partitions for storing
AFS volumes, and install and configure the AFS-modified <emphasis
role="bold">fsck</emphasis> program to run on AFS server
partitions. If the machine is to remain an AFS client machine,
incorporate AFS into the machine's Pluggable Authentication Module
(PAM) scheme. <indexterm>
<primary>incorporating AFS kernel extensions</primary>
<secondary>first AFS machine</secondary>
<tertiary>HP-UX</tertiary>
</indexterm> <indexterm>
<primary>AFS kernel extensions</primary>
<secondary>on first AFS machine</secondary>
<tertiary>HP-UX</tertiary>
</indexterm> <indexterm>
<primary>first AFS machine</primary>
<secondary>AFS kernel extensions</secondary>
<tertiary>on HP-UX</tertiary>
</indexterm> <indexterm>
<primary>HP-UX</primary>
<secondary>AFS-modified kernel</secondary>
<tertiary>on first AFS machine</tertiary>
</indexterm></para>
<sect2 id="HDRWQ32">
<title>Building AFS into the HP-UX Kernel</title>
<para>Use the following instructions to build AFS modifications into the kernel on an HP-UX system. <orderedlist>
<listitem>
<para>Move the existing kernel-related files to a safe location. <programlisting>
# <emphasis role="bold">cp /stand/vmunix /stand/vmunix.noafs</emphasis>
# <emphasis role="bold">cp /stand/system /stand/system.noafs</emphasis>
</programlisting></para>
</listitem>
<listitem>
<para>Unpack the OpenAFS HP-UX distribution tarball. The examples
below assume that you have unpacked the files into the
<emphasis role="bold">/tmp/afsdist</emphasis> directory. If you
pick a different location, substitute this in all of the following
examples. Once you have unpacked the distribution, change directory
as indicated.
<programlisting>
# <emphasis role="bold">cd /tmp/afsdist/hp_ux110/root.client</emphasis>
</programlisting></para>
</listitem>
<listitem>
<para>Copy the AFS initialization file to the local directory for initialization files (by convention, <emphasis
role="bold">/sbin/init.d</emphasis> on HP-UX machines). Note the removal of the <emphasis role="bold">.rc</emphasis>
extension as you copy the file. <programlisting>
# <emphasis role="bold">cp usr/vice/etc/afs.rc /sbin/init.d/afs</emphasis>
</programlisting></para>
</listitem>
<listitem>
<para>Copy the file <emphasis role="bold">afs.driver</emphasis> to the local <emphasis
role="bold">/usr/conf/master.d</emphasis> directory, changing its name to <emphasis role="bold">afs</emphasis> as you
do. <programlisting>
# <emphasis role="bold">cp usr/vice/etc/afs.driver /usr/conf/master.d/afs</emphasis>
</programlisting></para>
</listitem>
<listitem>
<para>Copy the AFS kernel module to the local <emphasis role="bold">/usr/conf/lib</emphasis> directory.</para>
<para>If the machine's kernel supports NFS server functionality:</para>
<programlisting>
# <emphasis role="bold">cp bin/libafs.a /usr/conf/lib</emphasis>
</programlisting>
<para>If the machine's kernel does not support NFS server functionality, change the file's name as you copy it:</para>
<programlisting>
# <emphasis role="bold">cp bin/libafs.nonfs.a /usr/conf/lib/libafs.a</emphasis>
</programlisting>
</listitem>
<listitem>
<para>Incorporate the AFS driver into the kernel, either using the <emphasis role="bold">SAM</emphasis> program or a
series of individual commands. <itemizedlist>
<listitem>
<para>To use the <emphasis role="bold">SAM</emphasis> program: <orderedlist>
<listitem>
<para>Invoke the <emphasis role="bold">SAM</emphasis> program, specifying the hostname of the local machine
as <replaceable>local_hostname</replaceable>. The <emphasis role="bold">SAM</emphasis> graphical user
interface pops up. <programlisting>
# <emphasis role="bold">sam -display</emphasis> <replaceable>local_hostname</replaceable><emphasis role="bold">:0</emphasis>
</programlisting></para>
</listitem>
<listitem>
<para>Choose the <emphasis role="bold">Kernel Configuration</emphasis> icon, then the <emphasis
role="bold">Drivers</emphasis> icon. From the list of drivers, select <emphasis
role="bold">afs</emphasis>.</para>
</listitem>
<listitem>
<para>Open the pull-down <emphasis role="bold">Actions</emphasis> menu and choose the <emphasis
role="bold">Add Driver to Kernel</emphasis> option.</para>
</listitem>
<listitem>
<para>Open the <emphasis role="bold">Actions</emphasis> menu again and choose the <emphasis
role="bold">Create a New Kernel</emphasis> option.</para>
</listitem>
<listitem>
<para>Confirm your choices by choosing <emphasis role="bold">Yes</emphasis> and <emphasis
role="bold">OK</emphasis> when prompted by subsequent pop-up windows. The <emphasis
role="bold">SAM</emphasis> program builds the kernel and reboots the system.</para>
</listitem>
<listitem>
<para>Login again as the superuser <emphasis role="bold">root</emphasis>. <programlisting>
login: <emphasis role="bold">root</emphasis>
Password: <replaceable>root_password</replaceable>
</programlisting></para>
</listitem>
</orderedlist></para>
</listitem>
<listitem>
<para>To use individual commands: <orderedlist>
<listitem>
<para>Edit the file <emphasis role="bold">/stand/system</emphasis>, adding an entry for <emphasis
role="bold">afs</emphasis> to the <computeroutput>Subsystems</computeroutput> section.</para>
</listitem>
<listitem>
<para>Change to the <emphasis role="bold">/stand/build</emphasis> directory and issue the <emphasis
role="bold">mk_kernel</emphasis> command to build the kernel. <programlisting>
# <emphasis role="bold">cd /stand/build</emphasis>
# <emphasis role="bold">mk_kernel</emphasis>
</programlisting></para>
</listitem>
<listitem>
<para>Move the new kernel to the standard location (<emphasis role="bold">/stand/vmunix</emphasis>), reboot
the machine to start using it, and login again as the superuser <emphasis role="bold">root</emphasis>.
<programlisting>
# <emphasis role="bold">mv /stand/build/vmunix_test /stand/vmunix</emphasis>
# <emphasis role="bold">cd /</emphasis>
# <emphasis role="bold">shutdown -r now</emphasis>
login: <emphasis role="bold">root</emphasis>
Password: <replaceable>root_password</replaceable>
</programlisting></para>
</listitem>
</orderedlist></para>
</listitem>
</itemizedlist></para>
</listitem>
</orderedlist></para>
<indexterm>
<primary>configuring</primary>
<secondary>AFS server partition on first AFS machine</secondary>
<tertiary>HP-UX</tertiary>
</indexterm>
<indexterm>
<primary>AFS server partition</primary>
<secondary>configuring on first AFS machine</secondary>
<tertiary>HP-UX</tertiary>
</indexterm>
<indexterm>
<primary>first AFS machine</primary>
<secondary>AFS server partition</secondary>
<tertiary>on HP-UX</tertiary>
</indexterm>
<indexterm>
<primary>HP-UX</primary>
<secondary>AFS server partition</secondary>
<tertiary>on first AFS machine</tertiary>
</indexterm>
</sect2>
<sect2 id="HDRWQ33">
<title>Configuring Server Partitions on HP-UX Systems</title>
<para>Every AFS file server machine must have at least one partition or logical volume dedicated to storing AFS volumes. Each
server partition is mounted at a directory named <emphasis role="bold">/vicep</emphasis><replaceable>xx</replaceable>, where
<replaceable>xx</replaceable> is one or two lowercase letters. The <emphasis
role="bold">/vicep</emphasis><replaceable>xx</replaceable> directories must reside in the file server machine's root
directory, not in one of its subdirectories (for example, <emphasis role="bold">/usr/vicepa</emphasis> is not an acceptable
directory location). For additional information, see <link linkend="HDRWQ20">Performing Platform-Specific Procedures</link>.
<orderedlist>
<listitem>
<para>Create a directory called <emphasis role="bold">/vicep</emphasis><replaceable>xx</replaceable> for each AFS server
partition you are configuring (there must be at least one). Repeat the command for each partition. <programlisting>
# <emphasis role="bold">mkdir /vicep</emphasis><replaceable>xx</replaceable>
</programlisting></para>
</listitem>
<listitem>
<para>Use the <emphasis role="bold">SAM</emphasis> program to create a file system on each partition. For instructions,
consult the HP-UX documentation.</para>
</listitem>
<listitem>
<para>On some HP-UX systems that use logical volumes, the <emphasis role="bold">SAM</emphasis> program automatically
mounts the partitions. If it has not, mount each partition by issuing either the <emphasis role="bold">mount
-a</emphasis> command to mount all partitions at once or the <emphasis role="bold">mount</emphasis> command to mount
each partition in turn.</para>
</listitem>
</orderedlist></para>
<indexterm>
<primary>replacing fsck program</primary>
<secondary>first AFS machine</secondary>
<tertiary>HP-UX</tertiary>
</indexterm>
<indexterm>
<primary>fsck program</primary>
<secondary>on first AFS machine</secondary>
<tertiary>HP-UX</tertiary>
</indexterm>
<indexterm>
<primary>first AFS machine</primary>
<secondary>fsck program</secondary>
<tertiary>on HP-UX</tertiary>
</indexterm>
<indexterm>
<primary>HP-UX</primary>
<secondary>fsck program</secondary>
<tertiary>on first AFS machine</tertiary>
</indexterm>
</sect2>
<sect2 id="HDRWQ34">
<title>Configuring the AFS-modified fsck Program on HP-UX Systems</title>
<para>In this section, you make modifications to guarantee that the appropriate <emphasis role="bold">fsck</emphasis> program
runs on AFS server partitions. The <emphasis role="bold">fsck</emphasis> program provided with the operating system must never
run on AFS server partitions. Because it does not recognize the structures that the File Server uses to organize volume data,
it removes all of the data. To repeat:</para>
<para><emphasis role="bold">Never run the standard fsck program on AFS server partitions. It discards AFS
volumes.</emphasis></para>
<para>On HP-UX systems, there are several configuration files to install in addition to the AFS-modified <emphasis
role="bold">fsck</emphasis> program (the <emphasis role="bold">vfsck</emphasis> binary). <orderedlist>
<listitem>
<para>Create the command configuration file <emphasis role="bold">/sbin/lib/mfsconfig.d/afs</emphasis>. Use a text
editor to place the indicated two lines in it: <programlisting>
format_revision 1
fsck 0 m,P,p,d,f,b:c:y,n,Y,N,q,
</programlisting></para>
</listitem>
<listitem>
<para>Create and change directory to an AFS-specific command directory called <emphasis
role="bold">/sbin/fs/afs</emphasis>. <programlisting>
# <emphasis role="bold">mkdir /sbin/fs/afs</emphasis>
# <emphasis role="bold">cd /sbin/fs/afs</emphasis>
</programlisting></para>
</listitem>
<listitem>
<para>Copy the AFS-modified version of the <emphasis role="bold">fsck</emphasis> program (the <emphasis
role="bold">vfsck</emphasis> binary) and related files from the distribution directory to the new AFS-specific command
directory. <programlisting>
# <emphasis role="bold">cp -p /tmp/afsdist/hp_ux110/root.server/etc/* .</emphasis>
</programlisting></para>
</listitem>
<listitem>
<para>Change the <emphasis role="bold">vfsck</emphasis> binary's name to <emphasis role="bold">fsck</emphasis> and set
the mode bits appropriately on all of the files in the <emphasis role="bold">/sbin/fs/afs</emphasis> directory.
<programlisting>
# <emphasis role="bold">mv vfsck fsck</emphasis>
# <emphasis role="bold">chmod 755 *</emphasis>
</programlisting></para>
</listitem>
<listitem>
<para>Edit the <emphasis role="bold">/etc/fstab</emphasis> file, changing the file system type for each AFS server
partition from <computeroutput>hfs</computeroutput> to <computeroutput>afs</computeroutput>. This ensures that the
AFS-modified <emphasis role="bold">fsck</emphasis> program runs on the appropriate partitions.</para>
<para>The sixth line in the following example of an edited file shows an AFS server partition, <emphasis
role="bold">/vicepa</emphasis>.</para>
<programlisting>
/dev/vg00/lvol1 / hfs defaults 0 1
/dev/vg00/lvol4 /opt hfs defaults 0 2
/dev/vg00/lvol5 /tmp hfs defaults 0 2
/dev/vg00/lvol6 /usr hfs defaults 0 2
/dev/vg00/lvol8 /var hfs defaults 0 2
/dev/vg00/lvol9 /vicepa afs defaults 0 2
/dev/vg00/lvol7 /usr/vice/cache hfs defaults 0 2
</programlisting>
</listitem>
<listitem>
<para>If you plan to retain client functionality on this machine after completing the installation, proceed to <link
linkend="HDRWQ35">Enabling AFS Login on HP-UX Systems</link>. Otherwise, proceed to <link linkend="HDRWQ50">Starting the
BOS Server</link>.</para>
</listitem>
</orderedlist></para>
<indexterm>
<primary>enabling AFS login</primary>
<secondary>file server machine</secondary>
<tertiary>HP-UX</tertiary>
</indexterm>
<indexterm>
<primary>AFS login</primary>
<secondary>on file server machine</secondary>
<tertiary>HP-UX</tertiary>
</indexterm>
<indexterm>
<primary>first AFS machine</primary>
<secondary>AFS login</secondary>
<tertiary>on HP-UX</tertiary>
</indexterm>
<indexterm>
<primary>HP-UX</primary>
<secondary>AFS login</secondary>
<tertiary>on file server machine</tertiary>
</indexterm>
<indexterm>
<primary>PAM</primary>
<secondary>on HP-UX</secondary>
<tertiary>file server machine</tertiary>
</indexterm>
<indexterm>
<primary>Pluggable Authentication Module</primary>
<see>PAM</see>
</indexterm>
</sect2>
<sect2 id="HDRWQ35">
<title>Enabling AFS Login on HP-UX Systems</title>
<note><para>If you plan to remove client functionality from this machine after completing the installation, skip this section and proceed to <link linkend="HDRWQ50">Starting the BOS Server</link>.</para></note>
<para>At this point you incorporate AFS into the operating system's
Pluggable Authentication Module (PAM) scheme. PAM integrates all
authentication mechanisms on the machine, including login, to
provide the security infrastructure for authenticated access to and
from the machine.</para>
<para>In modern AFS installations, you should be using Kerberos v5
for user login, and obtaining AFS tokens subsequent to this
authentication step. OpenAFS does not currently distribute a PAM
module allowing AFS tokens to be obtained automatically at
login. Whilst there are a number of third-party modules providing
this functionality, it is not known whether they have been tested on
HP-UX.</para>
<para>Following login, users can obtain tokens by running the
<emphasis role="bold">aklog</emphasis> command.</para>
<para>Sites which still require <emphasis
role="bold">kaserver</emphasis> or external Kerberos v4
authentication should consult <link linkend="KAS013">Enabling
kaserver based AFS login on HP-UX systems</link> for details of how
to enable HP-UX login.</para>
<para>Proceed to <link linkend="HDRWQ50">Starting the BOS
Server</link> (or if referring to these instructions while
installing an additional file server machine, return to <link
linkend="HDRWQ108">Starting Server Programs</link>).</para>
</sect2>
</sect1>
<sect1 id="HDRWQ36">
<title>Getting Started on IRIX Systems</title>
<indexterm>
<primary>incorporating AFS kernel extensions</primary>
<secondary>first AFS machine</secondary>
<tertiary>IRIX</tertiary>
</indexterm>
<indexterm>
<primary>AFS kernel extensions</primary>
<secondary>on first AFS machine</secondary>
<tertiary>IRIX</tertiary>
</indexterm>
<indexterm>
<primary>first AFS machine</primary>
<secondary>AFS kernel extensions</secondary>
<tertiary>on IRIX</tertiary>
</indexterm>
<indexterm>
<primary>replacing fsck program</primary>
<secondary>not necessary on IRIX</secondary>
</indexterm>
<indexterm>
<primary>fsck program</primary>
<secondary>on first AFS machine</secondary>
<tertiary>IRIX</tertiary>
</indexterm>
<indexterm>
<primary>first AFS machine</primary>
<secondary>fsck program</secondary>
<tertiary>on IRIX</tertiary>
</indexterm>
<indexterm>
<primary>IRIX</primary>
<secondary>fsck program replacement not necessary</secondary>
</indexterm>
<para>To incorporate AFS into the kernel on IRIX systems, choose one of two methods: <itemizedlist>
<listitem>
<para>Run the AFS initialization script to invoke the <emphasis role="bold">ml</emphasis> program distributed by Silicon
Graphics, Incorporated (SGI), which dynamically loads AFS modifications into the kernel</para>
</listitem>
<listitem>
<para>Build a new static kernel</para>
</listitem>
</itemizedlist></para>
<para>Then create partitions for storing AFS volumes. You do not need to replace the IRIX <emphasis role="bold">fsck</emphasis>
program because SGI has already modified it to handle AFS volumes properly. If the machine is to remain an AFS client machine,
verify that the IRIX login utility installed on the machine grants an AFS token.</para>
<para>In preparation for either dynamic loading or kernel building, perform the following procedures: <orderedlist>
<listitem>
<para>Unpack the OpenAFS IRIX distribution tarball. The examples
below assume that you have unpacked the files into the
<emphasis role="bold">/tmp/afsdist</emphasis> directory. If you
pick a different location, substitute this in all of the following
examples. Once you have unpacked the distribution, change directory
as indicated.
<programlisting>
# <emphasis role="bold">cd /tmp/afsdist/sgi_65/root.client</emphasis>
</programlisting></para>
</listitem>
<listitem>
<para>Copy the AFS initialization script to the local directory for initialization files (by convention, <emphasis
role="bold">/etc/init.d</emphasis> on IRIX machines). Note the removal of the <emphasis role="bold">.rc</emphasis>
extension as you copy the script. <programlisting>
# <emphasis role="bold">cp -p usr/vice/etc/afs.rc /etc/init.d/afs</emphasis>
</programlisting></para>
</listitem>
<listitem>
<para>Issue the <emphasis role="bold">uname -m</emphasis> command to determine the machine's CPU board type. The <emphasis
role="bold">IP</emphasis><replaceable>xx</replaceable> value in the output must match one of the supported CPU board types
listed in the <emphasis>OpenAFS Release Notes</emphasis> for the current version of AFS. <programlisting>
# <emphasis role="bold">uname -m</emphasis>
</programlisting></para>
</listitem>
<listitem>
<para>Proceed to either <link linkend="HDRWQ37">Loading AFS into the IRIX Kernel</link> or <link
linkend="HDRWQ38">Building AFS into the IRIX Kernel</link>.</para>
</listitem>
</orderedlist></para>
<indexterm>
<primary>IRIX</primary>
<secondary>AFS kernel extensions</secondary>
<tertiary>on first AFS machine</tertiary>
</indexterm>
<indexterm>
<primary>afsml variable (IRIX)</primary>
<secondary>first AFS machine</secondary>
</indexterm>
<indexterm>
<primary>variables</primary>
<secondary>afsml (IRIX)</secondary>
<tertiary>first AFS machine</tertiary>
</indexterm>
<indexterm>
<primary>IRIX</primary>
<secondary>afsml variable</secondary>
<tertiary>first AFS machine</tertiary>
</indexterm>
<indexterm>
<primary>afsxnfs variable (IRIX)</primary>
<secondary>first AFS machine</secondary>
</indexterm>
<indexterm>
<primary>variables</primary>
<secondary>afsxnfs (IRIX)</secondary>
<tertiary>first AFS machine</tertiary>
</indexterm>
<indexterm>
<primary>IRIX</primary>
<secondary>afsxnfs variable</secondary>
<tertiary>first AFS machine</tertiary>
</indexterm>
<sect2 id="HDRWQ37">
<title>Loading AFS into the IRIX Kernel</title>
<para>The <emphasis role="bold">ml</emphasis> program is the dynamic kernel loader provided by SGI for IRIX systems. If you
use it rather than building AFS modifications into a static kernel, then for AFS to function correctly the <emphasis
role="bold">ml</emphasis> program must run each time the machine reboots. Therefore, the AFS initialization script (included
in the AFS distribution) invokes it automatically when the <emphasis role="bold">afsml</emphasis> configuration variable is
activated. In this section you activate the variable and run the script.</para>
<para>In later sections you verify that the script correctly initializes all AFS components, then create the links that
incorporate AFS into the IRIX startup and shutdown sequence. <orderedlist>
<listitem>
<para>Create the local <emphasis role="bold">/usr/vice/etc/sgiload</emphasis> directory to house the AFS kernel library
file. <programlisting>
# <emphasis role="bold">mkdir /usr/vice/etc/sgiload</emphasis>
</programlisting></para>
</listitem>
<listitem>
<para>Copy the appropriate AFS kernel library file to the <emphasis role="bold">/usr/vice/etc/sgiload</emphasis>
directory. The <emphasis role="bold">IP</emphasis><replaceable>xx</replaceable> portion of the library file name must
match the value previously returned by the <emphasis role="bold">uname -m</emphasis> command. Also choose the file
appropriate to whether the machine's kernel supports NFS server functionality (NFS must be supported for the machine to
act as an NFS/AFS Translator). Single- and multiprocessor machines use the same library file.</para>
<para>(You can choose to copy all of the kernel library files into the <emphasis
role="bold">/usr/vice/etc/sgiload</emphasis> directory, but they require a significant amount of space.)</para>
<para>If the machine's kernel supports NFS server functionality:</para>
<programlisting>
# <emphasis role="bold">cp -p usr/vice/etc/sgiload/libafs.IP</emphasis><replaceable>xx</replaceable><emphasis role="bold">.o /usr/vice/etc/sgiload</emphasis>
</programlisting>
<para>If the machine's kernel does not support NFS server functionality:</para>
<programlisting>
# <emphasis role="bold">cp -p usr/vice/etc/sgiload/libafs.IP</emphasis><replaceable>xx</replaceable><emphasis role="bold">.nonfs.o</emphasis> \
<emphasis role="bold">/usr/vice/etc/sgiload</emphasis>
</programlisting>
</listitem>
<listitem>
<para>Issue the <emphasis role="bold">chkconfig</emphasis> command to activate the <emphasis
role="bold">afsml</emphasis> configuration variable. <programlisting>
# <emphasis role="bold">/etc/chkconfig -f afsml on</emphasis>
</programlisting></para>
<para>If the machine is to function as an NFS/AFS Translator and the kernel supports NFS server functionality, activate
the <emphasis role="bold">afsxnfs</emphasis> variable.</para>
<programlisting>
# <emphasis role="bold">/etc/chkconfig -f afsxnfs on</emphasis>
</programlisting>
</listitem>
<listitem>
<para>Run the <emphasis role="bold">/etc/init.d/afs</emphasis> script to load AFS extensions into the kernel. The script
invokes the <emphasis role="bold">ml</emphasis> command, automatically determining which kernel library file to use
based on this machine's CPU type and the activation state of the <emphasis role="bold">afsxnfs</emphasis>
variable.</para>
<para>You can ignore any error messages about the inability to start the BOS Server or the Cache Manager or AFS
client.</para>
<programlisting>
# <emphasis role="bold">/etc/init.d/afs start</emphasis>
</programlisting>
</listitem>
<listitem>
<para>Proceed to <link linkend="HDRWQ39">Configuring Server Partitions on IRIX Systems</link>.</para>
</listitem>
</orderedlist></para>
<indexterm>
<primary>IRIX</primary>
<secondary>AFS-modified kernel</secondary>
<tertiary>on first AFS machine</tertiary>
</indexterm>
</sect2>
<sect2 id="HDRWQ38">
<title>Building AFS into the IRIX Kernel</title>
<para>Use the following instructions to build AFS modifications into the kernel on an IRIX system. <orderedlist>
<listitem>
<para>Copy the kernel initialization file <emphasis role="bold">afs.sm</emphasis> to the local <emphasis
role="bold">/var/sysgen/system</emphasis> directory, and the kernel master file <emphasis role="bold">afs</emphasis> to
the local <emphasis role="bold">/var/sysgen/master.d</emphasis> directory. <programlisting>
# <emphasis role="bold">cp -p bin/afs.sm /var/sysgen/system</emphasis>
# <emphasis role="bold">cp -p bin/afs /var/sysgen/master.d</emphasis>
</programlisting></para>
</listitem>
<listitem>
<para>Copy the appropriate AFS kernel library file to the local file <emphasis
role="bold">/var/sysgen/boot/afs.a</emphasis>; the <emphasis role="bold">IP</emphasis><replaceable>xx</replaceable>
portion of the library file name must match the value previously returned by the <emphasis role="bold">uname
-m</emphasis> command. Also choose the file appropriate to whether the machine's kernel supports NFS server
functionality (NFS must be supported for the machine to act as an NFS/AFS Translator). Single- and multiprocessor
machines use the same library file.</para>
<para>If the machine's kernel supports NFS server functionality:</para>
<programlisting>
# <emphasis role="bold">cp -p bin/libafs.IP</emphasis><replaceable>xx</replaceable><emphasis role="bold">.a /var/sysgen/boot/afs.a</emphasis>
</programlisting>
<para>If the machine's kernel does not support NFS server functionality:</para>
<programlisting>
# <emphasis role="bold">cp -p bin/libafs.IP</emphasis><replaceable>xx</replaceable><emphasis role="bold">.nonfs.a /var/sysgen/boot/afs.a</emphasis>
</programlisting>
</listitem>
<listitem>
<para>Issue the <emphasis role="bold">chkconfig</emphasis> command to deactivate the <emphasis
role="bold">afsml</emphasis> configuration variable. <programlisting>
# <emphasis role="bold">/etc/chkconfig -f afsml off</emphasis>
</programlisting></para>
<para>If the machine is to function as an NFS/AFS Translator and the kernel supports NFS server functionality, activate
the <emphasis role="bold">afsxnfs</emphasis> variable.</para>
<programlisting>
# <emphasis role="bold">/etc/chkconfig -f afsxnfs on</emphasis>
</programlisting>
</listitem>
<listitem>
<para>Copy the existing kernel file, <emphasis role="bold">/unix</emphasis>, to a safe location. Compile the new kernel,
which is created in the file <emphasis role="bold">/unix.install</emphasis>. It overwrites the existing <emphasis
role="bold">/unix</emphasis> file when the machine reboots in the next step. <programlisting>
# <emphasis role="bold">cp /unix /unix_noafs</emphasis>
# <emphasis role="bold">autoconfig</emphasis>
</programlisting></para>
</listitem>
<listitem>
<para>Reboot the machine to start using the new kernel, and login again as the superuser <emphasis
role="bold">root</emphasis>. <programlisting>
# <emphasis role="bold">cd /</emphasis>
# <emphasis role="bold">shutdown -i6 -g0 -y</emphasis>
login: <emphasis role="bold">root</emphasis>
Password: <replaceable>root_password</replaceable>
</programlisting></para>
</listitem>
</orderedlist></para>
<indexterm>
<primary>configuring</primary>
<secondary>AFS server partition on first AFS machine</secondary>
<tertiary>IRIX</tertiary>
</indexterm>
<indexterm>
<primary>AFS server partition</primary>
<secondary>configuring on first AFS machine</secondary>
<tertiary>IRIX</tertiary>
</indexterm>
<indexterm>
<primary>first AFS machine</primary>
<secondary>AFS server partition</secondary>
<tertiary>on IRIX</tertiary>
</indexterm>
<indexterm>
<primary>IRIX</primary>
<secondary>AFS server partition</secondary>
<tertiary>on first AFS machine</tertiary>
</indexterm>
</sect2>
<sect2 id="HDRWQ39">
<title>Configuring Server Partitions on IRIX Systems</title>
<para>Every AFS file server machine must have at least one partition or logical volume dedicated to storing AFS volumes. Each
server partition is mounted at a directory named <emphasis role="bold">/vicep</emphasis><replaceable>xx</replaceable>, where
<replaceable>xx</replaceable> is one or two lowercase letters. The <emphasis
role="bold">/vicep</emphasis><replaceable>xx</replaceable> directories must reside in the file server machine's root
directory, not in one of its subdirectories (for example, <emphasis role="bold">/usr/vicepa</emphasis> is not an acceptable
directory location). For additional information, see <link linkend="HDRWQ20">Performing Platform-Specific
Procedures</link>.</para>
<para>AFS supports use of both EFS and XFS partitions for housing AFS volumes. SGI encourages use of XFS partitions.
<orderedlist>
<listitem>
<para>Create a directory called <emphasis role="bold">/vicep</emphasis><replaceable>xx</replaceable> for each AFS server
partition you are configuring (there must be at least one). Repeat the command for each partition. <programlisting>
# <emphasis role="bold">mkdir /vicep</emphasis><replaceable>xx</replaceable>
</programlisting></para>
</listitem>
<listitem>
<para>Add a line with the following format to the file systems registry file, <emphasis
role="bold">/etc/fstab</emphasis>, for each partition (or logical volume created with the XLV volume manager) to be
mounted on one of the directories created in the previous step.</para>
<para>For an XFS partition or logical volume:</para>
<programlisting>
/dev/dsk/<replaceable>disk</replaceable> /vicep<replaceable>xx</replaceable> xfs rw,raw=/dev/rdsk/<replaceable>disk</replaceable> 0 0
</programlisting>
<para>For an EFS partition:</para>
<programlisting>
/dev/dsk/<replaceable>disk</replaceable> /vicep<replaceable>xx</replaceable> efs rw,raw=/dev/rdsk/<replaceable>disk</replaceable> 0 0
</programlisting>
<para>The following are examples of an entry for each file system type:</para>
<programlisting>
/dev/dsk/dks0d2s6 /vicepa xfs rw,raw=/dev/rdsk/dks0d2s6 0 0
/dev/dsk/dks0d3s1 /vicepb efs rw,raw=/dev/rdsk/dks0d3s1 0 0
</programlisting>
</listitem>
<listitem>
<para>Create a file system on each partition that is to be mounted on a <emphasis
role="bold">/vicep</emphasis><replaceable>xx</replaceable> directory. The following commands are probably appropriate,
but consult the IRIX documentation for more information. In both cases, <replaceable>raw_device</replaceable> is a raw
device name like <emphasis role="bold">/dev/rdsk/dks0d0s0</emphasis> for a single disk partition or <emphasis
role="bold">/dev/rxlv/xlv0</emphasis> for a logical volume.</para>
<para>For XFS file systems, include the indicated options to configure the partition or logical volume with inodes large
enough to accommodate AFS-specific information:</para>
<programlisting>
# <emphasis role="bold">mkfs -t xfs -i size=512 -l size=4000b</emphasis> <replaceable>raw_device</replaceable>
</programlisting>
<para>For EFS file systems:</para>
<programlisting>
# <emphasis role="bold">mkfs -t efs</emphasis> <replaceable>raw_device</replaceable>
</programlisting>
</listitem>
<listitem>
<para>Mount each partition by issuing either the <emphasis role="bold">mount -a</emphasis> command to mount all
partitions at once or the <emphasis role="bold">mount</emphasis> command to mount each partition in turn.</para>
</listitem>
<listitem>
<para><emphasis role="bold">(Optional)</emphasis> If you have configured partitions or logical volumes to use XFS, issue
the following command to verify that the inodes are configured properly (are large enough to accommodate AFS-specific
information). If the configuration is correct, the command returns no output. Otherwise, it specifies the command to run
in order to configure each partition or logical volume properly. <programlisting>
# <emphasis role="bold">/usr/afs/bin/xfs_size_check</emphasis>
</programlisting></para>
</listitem>
<listitem>
<para>If you plan to retain client functionality on this machine after completing the installation, proceed to <link
linkend="HDRWQ40">Enabling AFS Login on IRIX Systems</link>. Otherwise, proceed to <link linkend="HDRWQ50">Starting the
BOS Server</link>.</para>
</listitem>
</orderedlist></para>
<indexterm>
<primary>enabling AFS login</primary>
<secondary>file server machine</secondary>
<tertiary>IRIX</tertiary>
</indexterm>
<indexterm>
<primary>AFS login</primary>
<secondary>on file server machine</secondary>
<tertiary>IRIX</tertiary>
</indexterm>
<indexterm>
<primary>first AFS machine</primary>
<secondary>AFS login</secondary>
<tertiary>on IRIX</tertiary>
</indexterm>
<indexterm>
<primary>IRIX</primary>
<secondary>AFS login</secondary>
</indexterm>
</sect2>
<sect2 id="HDRWQ40">
<title>Enabling AFS Login on IRIX Systems</title>
<note>
<para>If you plan to remove client functionality from this machine after completing the installation, skip this section and
proceed to <link linkend="HDRWQ50">Starting the BOS Server</link>.</para>
</note>
<para>Whilst the standard IRIX command-line
<emphasis role="bold">login</emphasis> program and the
graphical <emphasis role="bold">xdm</emphasis> login program both have
the ability to grant AFS tokens, this ability relies upon the deprecated
kaserver authentication system.</para>
<para>Users who have been successfully authenticated via Kerberos 5
authentication may obtain AFS tokens following login by running the
<emphasis role="bold">aklog</emphasis> command.</para>
<para>Sites which still require <emphasis role="bold">kaserver</emphasis>
or external Kerberos v4 authentication should consult
<link linkend="KAS014">Enabling kaserver based AFS Login on IRIX Systems</link>
for details of how to enable IRIX login.</para>
<para>After taking any necessary action, proceed to
<link linkend="HDRWQ50">Starting the BOS Server</link>.</para>
</sect2>
</sect1>
<sect1 id="HDRWQ41">
<title>Getting Started on Linux Systems</title>
<indexterm>
<primary>replacing fsck program</primary>
<secondary>not necessary on Linux</secondary>
</indexterm>
<indexterm>
<primary>fsck program</primary>
<secondary>on first AFS machine</secondary>
<tertiary>Linux</tertiary>
</indexterm>
<indexterm>
<primary>first AFS machine</primary>
<secondary>fsck program</secondary>
<tertiary>on Linux</tertiary>
</indexterm>
<indexterm>
<primary>Linux</primary>
<secondary>fsck program replacement not necessary</secondary>
</indexterm>
<para>Since this guide was originally written, the procedure for starting
OpenAFS has diverged significantly between different Linux distributions.
The instructions that follow are appropriate for both the Fedora and
    Red Hat Enterprise Linux packages distributed by OpenAFS. Additional
instructions are provided for those building from source.</para>
<para>Begin by running the AFS client startup scripts, which call the
<emphasis role="bold">modprobe</emphasis> program, which dynamically
loads AFS modifications into the kernel. Then create partitions for
storing AFS volumes. You do not need to replace the Linux <emphasis
role="bold">fsck</emphasis> program. If the machine is to remain an
AFS client machine, incorporate AFS into the machine's Pluggable
Authentication Module (PAM) scheme. <indexterm>
<primary>incorporating AFS kernel extensions</primary>
<secondary>first AFS machine</secondary>
<tertiary>Linux</tertiary>
</indexterm> <indexterm>
<primary>AFS kernel extensions</primary>
<secondary>on first AFS machine</secondary>
<tertiary>Linux</tertiary>
</indexterm> <indexterm>
<primary>first AFS machine</primary>
<secondary>AFS kernel extensions</secondary>
<tertiary>on Linux</tertiary>
</indexterm> <indexterm>
<primary>Linux</primary>
<secondary>AFS kernel extensions</secondary>
<tertiary>on first AFS machine</tertiary>
</indexterm></para>
<sect2 id="HDRWQ42">
<title>Loading AFS into the Linux Kernel</title>
<para>The <emphasis role="bold">modprobe</emphasis> program is the dynamic kernel loader for Linux. Linux does not support
incorporation of AFS modifications during a kernel build.</para>
<para>For AFS to function correctly, the <emphasis role="bold">modprobe</emphasis> program must run each time the machine
reboots, so your distribution's AFS initialization script invokes it automatically. The script also includes
commands that select the appropriate AFS library file automatically. In this section you run the script.</para>
<para>In later sections you verify that the script correctly initializes all AFS components, then activate a configuration
variable, which results in the script being incorporated into the Linux startup and shutdown sequence.</para>
<para>The procedure for starting up OpenAFS depends upon your distribution</para>
<sect3>
<title>Fedora and RedHat Enterprise Linux</title>
<para>OpenAFS ship RPMS for all current Fedora and RHEL releases.
<orderedlist>
<listitem>
<para>Download and install the RPM set for your operating system.
RPMs are available from the OpenAFS web site. You will need the
<emphasis role="bold">openafs</emphasis>
<emphasis role="bold">openafs-client></emphasis>
<emphasis role="bold">openafs-server</emphasis> packages, along with
an <emphasis role="bold">openafs-kernel</emphasis> package matching
your current, running, kernel.</para>
<para>You can find the version of your current kernel by running
<programlisting>
# uname -r
<replaceable>2.6.20-1.2933.fc6</replaceable>
</programlisting></para>
<para>Once downloaded, the packages may be installed with the
<emphasis role="bold">rpm</emphasis> command
<programlisting>
# rpm -U openafs-* openafs-client-* openafs-server-* openafs-kernel-*
</programlisting></para>
</listitem>
<!-- If you do this with current RHEL and Fedora releases you end up with
a dynroot'd client running - this breaks setting up the root.afs volume
as described later in this guide
<listitem>
<para>Run the AFS initialization script to load AFS extensions into
the kernel. You can ignore any error messages about the inability
to start the BOS Server or the Cache Manager or AFS client.</para>
<programlisting>
# <emphasis role="bold">/etc/rc.d/init.d/openafs-client start</emphasis>
</programlisting>
</listitem>
-->
</orderedlist>
</para>
</sect3>
<sect3>
<title>Systems packaged as tar files</title>
<para>If you are running a system where the OpenAFS Binary Distribution
is provided as a tar file, or where you have built the system from
source yourself, you need to install the relevant components by hand
</para>
<orderedlist>
<listitem>
<para>Unpack the distribution tarball. The examples below assume
that you have unpacked the files into the
<emphasis role="bold">/tmp/afsdist</emphasis>directory. If you
pick a different location, substitute this in all of the following
examples. Once you have unpacked the distribution,
change directory as indicated.
<programlisting>
# <emphasis role="bold">cd /tmp/afsdist/linux/root.client/usr/vice/etc</emphasis>
</programlisting></para>
</listitem>
<listitem>
<para>Copy the AFS kernel library files to the local <emphasis role="bold">/usr/vice/etc/modload</emphasis> directory.
The filenames for the libraries have the format <emphasis
role="bold">libafs-</emphasis><replaceable>version</replaceable><emphasis role="bold">.o</emphasis>, where
<replaceable>version</replaceable> indicates the kernel build level. The string <emphasis role="bold">.mp</emphasis> in
the <replaceable>version</replaceable> indicates that the file is appropriate for machines running a multiprocessor
kernel. <programlisting>
# <emphasis role="bold">cp -rp modload /usr/vice/etc</emphasis>
</programlisting></para>
</listitem>
<listitem>
<para>Copy the AFS initialization script to the local directory for initialization files (by convention, <emphasis
role="bold">/etc/rc.d/init.d</emphasis> on Linux machines). Note the removal of the <emphasis role="bold">.rc</emphasis>
extension as you copy the script. <programlisting>
# <emphasis role="bold">cp -p afs.rc /etc/rc.d/init.d/afs</emphasis>
</programlisting></para>
</listitem>
<!-- I don't think we need to do this for Linux, and it complicates things if
dynroot is enabled ...
<listitem>
<para>Run the AFS initialization script to load AFS extensions into the kernel. You can ignore any error messages about
the inability to start the BOS Server or the Cache Manager or AFS client.</para>
<programlisting>
# <emphasis role="bold">/etc/rc.d/init.d/afs start</emphasis>
</programlisting>
</listitem>
-->
</orderedlist>
<indexterm>
<primary>configuring</primary>
<secondary>AFS server partition on first AFS machine</secondary>
<tertiary>Linux</tertiary>
</indexterm>
<indexterm>
<primary>AFS server partition</primary>
<secondary>configuring on first AFS machine</secondary>
<tertiary>Linux</tertiary>
</indexterm>
<indexterm>
<primary>first AFS machine</primary>
<secondary>AFS server partition</secondary>
<tertiary>on Linux</tertiary>
</indexterm>
<indexterm>
<primary>Linux</primary>
<secondary>AFS server partition</secondary>
<tertiary>on first AFS machine</tertiary>
</indexterm>
</sect3>
</sect2>
<sect2 id="HDRWQ43">
<title>Configuring Server Partitions on Linux Systems</title>
<para>Every AFS file server machine must have at least one partition or logical volume dedicated to storing AFS volumes. Each
server partition is mounted at a directory named <emphasis role="bold">/vicep</emphasis><replaceable>xx</replaceable>, where
<replaceable>xx</replaceable> is one or two lowercase letters. The <emphasis
role="bold">/vicep</emphasis><replaceable>xx</replaceable> directories must reside in the file server machine's root
directory, not in one of its subdirectories (for example, <emphasis role="bold">/usr/vicepa</emphasis> is not an acceptable
directory location). For additional information, see <link linkend="HDRWQ20">Performing Platform-Specific Procedures</link>.
<orderedlist>
<listitem>
<para>Create a directory called <emphasis role="bold">/vicep</emphasis><replaceable>xx</replaceable> for each AFS server
partition you are configuring (there must be at least one). Repeat the command for each partition. <programlisting>
# <emphasis role="bold">mkdir /vicep</emphasis><replaceable>xx</replaceable>
</programlisting></para>
</listitem>
<listitem>
<para>Add a line with the following format to the file systems registry file, <emphasis
role="bold">/etc/fstab</emphasis>, for each directory just created. The entry maps the directory name to the disk
partition to be mounted on it. <programlisting>
/dev/<replaceable>disk</replaceable> /vicep<replaceable>xx</replaceable> ext2 defaults 0 2
</programlisting></para>
<para>The following is an example for the first partition being configured.</para>
<programlisting>
/dev/sda8 /vicepa ext2 defaults 0 2
</programlisting>
</listitem>
<listitem>
<para>Create a file system on each partition that is to be mounted at a <emphasis
role="bold">/vicep</emphasis><replaceable>xx</replaceable> directory. The following command is probably appropriate, but
consult the Linux documentation for more information. <programlisting>
# <emphasis role="bold">mkfs -v /dev/</emphasis><replaceable>disk</replaceable>
</programlisting></para>
</listitem>
<listitem>
<para>Mount each partition by issuing either the <emphasis role="bold">mount -a</emphasis> command to mount all
partitions at once or the <emphasis role="bold">mount</emphasis> command to mount each partition in turn.</para>
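            <para>For example (a minimal sketch assuming the <emphasis role="bold">/vicepa</emphasis> entry shown above), you can
              mount a single partition by name and confirm the mount with the <emphasis role="bold">df</emphasis> command.</para>
            <programlisting>
# <emphasis role="bold">mount /vicepa</emphasis>
# <emphasis role="bold">df /vicepa</emphasis>
</programlisting>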
</listitem>
<listitem>
<para>If you plan to retain client functionality on this machine after completing the installation, proceed to <link
linkend="HDRWQ44">Enabling AFS Login on Linux Systems</link>. Otherwise, proceed to <link linkend="HDRWQ50">Starting the
BOS Server</link>.</para>
</listitem>
</orderedlist></para>
<indexterm>
<primary>enabling AFS login</primary>
<secondary>file server machine</secondary>
<tertiary>Linux</tertiary>
</indexterm>
<indexterm>
<primary>AFS login</primary>
<secondary>on file server machine</secondary>
<tertiary>Linux</tertiary>
</indexterm>
<indexterm>
<primary>first AFS machine</primary>
<secondary>AFS login</secondary>
<tertiary>on Linux</tertiary>
</indexterm>
<indexterm>
<primary>Linux</primary>
<secondary>AFS login</secondary>
<tertiary>on file server machine</tertiary>
</indexterm>
<indexterm>
<primary>PAM</primary>
<secondary>on Linux</secondary>
<tertiary>file server machine</tertiary>
</indexterm>
</sect2>
<sect2 id="HDRWQ44">
<title>Enabling AFS Login on Linux Systems</title>
<note>
<para>If you plan to remove client functionality from this machine
after completing the installation, skip this section and proceed
to <link linkend="HDRWQ50">Starting the BOS Server</link>.</para>
</note>
<para>At this point you incorporate AFS into the operating system's
Pluggable Authentication Module (PAM) scheme. PAM integrates all
authentication mechanisms on the machine, including login, to provide
the security infrastructure for authenticated access to and from the
machine.</para>
<para>You should first configure your system to obtain Kerberos v5
tickets as part of the authentication process, and then run an AFS PAM
module to obtain tokens from those tickets after authentication. Many
Linux distributions come with a Kerberos v5 PAM module (usually called
pam-krb5 or pam_krb5), or you can download and install <ulink
url="http://www.eyrie.org/~eagle/software/pam-krb5">Russ Allbery's
Kerberos v5 PAM module</ulink>, which is tested regularly with AFS.
See the instructions of whatever PAM module you use for how to
configure it.</para>
<para>Some Kerberos v5 PAM modules do come with native AFS support
(usually requiring the Heimdal Kerberos implementation rather than the
MIT Kerberos implementation). If you are using one of those PAM
modules, you can configure it to obtain AFS tokens. It's more common,
however, to separate the AFS token acquisition into a separate PAM
module.</para>
<para>The recommended AFS PAM module is <ulink
url="http://www.eyrie.org/~eagle/software/pam-afs-session/">Russ
Allbery's pam-afs-session module</ulink>. It should work with any of
the Kerberos v5 PAM modules. To add it to the PAM configuration, you
often only need to add configuration to the session group:</para>
<example>
<title>Linux PAM session example</title>
<literallayout>session required pam_afs_session.so</literallayout>
</example>
<para>If you also want to obtain AFS tokens for <command>scp</command>
and similar commands that don't open a session, you will also need to
add the AFS PAM module to the auth group so that the PAM
<function>setcred</function> call will obtain tokens. The
<literal>pam_afs_session</literal> module will always return success
for authentication so that it can be added to the auth group only for
<function>setcred</function>, so make sure that it's not marked as
<literal>sufficient</literal>.</para>
<example>
<title>Linux PAM auth example</title>
<literallayout>auth [success=ok default=1] pam_krb5.so
auth [default=done] pam_afs_session.so
auth required pam_unix.so try_first_pass</literallayout>
</example>
<para>This example will work if you want to try Kerberos v5 first and
then fall back to regular Unix authentication.
<literal>success=ok</literal> for the Kerberos PAM module followed by
<literal>default=done</literal> for the AFS PAM module will cause a
successful Kerberos login to run the AFS PAM module and then skip the
Unix authentication module. <literal>default=1</literal> on the
Kerberos PAM module causes failure of that module to skip the next
module (the AFS PAM module) and fall back to the Unix module. If you
want to try Unix authentication first and rearrange the order, be sure
to use <literal>default=die</literal> instead.</para>
<para>The PAM configuration is stored in different places in different
Linux distributions. On Red Hat, look in
<filename>/etc/pam.d/system-auth</filename>. On Debian and
derivatives, look in <filename>/etc/pam.d/common-session</filename>
and <filename>/etc/pam.d/common-auth</filename>.</para>
<para>For additional configuration examples and the configuration
options of the AFS PAM module, see its documentation. For more
details on the available options for the PAM configuration, see the
Linux PAM documentation.</para>
<para>Sites which still require <command>kaserver</command> or
external Kerberos v4 authentication should consult <link
linkend="KAS015">Enabling kaserver based AFS Login on Linux
Systems</link> for details of how to enable AFS login on Linux.</para>
<para>Proceed to <link linkend="HDRWQ50">Starting the BOS
Server</link> (or if referring to these instructions while installing
an additional file server machine, return to <link
linkend="HDRWQ108">Starting Server Programs</link>).</para>
</sect2>
</sect1>
<sect1 id="HDRWQ45">
<title>Getting Started on Solaris Systems</title>
<para>Begin by running the AFS initialization script to call the <emphasis role="bold">modload</emphasis> program distributed by
Sun Microsystems, which dynamically loads AFS modifications into the kernel. Then create partitions for storing AFS volumes, and
install and configure the AFS-modified <emphasis role="bold">fsck</emphasis> program to run on AFS server partitions. If the
machine is to remain an AFS client machine, incorporate AFS into the machine's Pluggable Authentication Module (PAM) scheme.
<indexterm>
<primary>incorporating AFS kernel extensions</primary>
<secondary>first AFS machine</secondary>
<tertiary>Solaris</tertiary>
</indexterm> <indexterm>
<primary>AFS kernel extensions</primary>
<secondary>on first AFS machine</secondary>
<tertiary>Solaris</tertiary>
</indexterm> <indexterm>
<primary>first AFS machine</primary>
<secondary>AFS kernel extensions</secondary>
<tertiary>on Solaris</tertiary>
</indexterm> <indexterm>
<primary>Solaris</primary>
<secondary>AFS kernel extensions</secondary>
<tertiary>on first AFS machine</tertiary>
</indexterm></para>
<sect2 id="HDRWQ46">
<title>Loading AFS into the Solaris Kernel</title>
<para>The <emphasis role="bold">modload</emphasis> program is the dynamic kernel loader provided by Sun Microsystems for
Solaris systems. Solaris does not support incorporation of AFS modifications during a kernel build.</para>
<para>For AFS to function correctly, the <emphasis role="bold">modload</emphasis> program must run each time the machine
      reboots, so the AFS initialization script (included in the OpenAFS distribution) invokes it automatically. In this section you copy the
appropriate AFS library file to the location where the <emphasis role="bold">modload</emphasis> program accesses it and then
run the script.</para>
<para>In later sections you verify that the script correctly initializes all AFS components, then create the links that
incorporate AFS into the Solaris startup and shutdown sequence. <orderedlist>
<listitem>
<para>Unpack the OpenAFS Solaris distribution tarball. The examples
below assume that you have unpacked the files into the
<emphasis role="bold">/tmp/afsdist</emphasis> directory. If you
            pick a different location, substitute this in all of the following
            examples. Once you have unpacked the distribution, change directory
as indicated.
<programlisting>
# <emphasis role="bold">cd /tmp/afsdist/sun4x_56/root.client/usr/vice/etc</emphasis>
</programlisting></para>
</listitem>
<listitem>
<para>Copy the AFS initialization script to the local directory for initialization files (by convention, <emphasis
role="bold">/etc/init.d</emphasis> on Solaris machines). Note the removal of the <emphasis role="bold">.rc</emphasis>
extension as you copy the script. <programlisting>
# <emphasis role="bold">cp -p afs.rc /etc/init.d/afs</emphasis>
</programlisting></para>
</listitem>
<listitem>
<para>Copy the appropriate AFS kernel library file to the local file <emphasis
role="bold">/kernel/fs/afs</emphasis>.</para>
<para>If the machine is running Solaris 2.6 or the 32-bit version of Solaris 7, its kernel supports NFS server
functionality, and the <emphasis role="bold">nfsd</emphasis> process is running:</para>
<programlisting>
# <emphasis role="bold">cp -p modload/libafs.o /kernel/fs/afs</emphasis>
</programlisting>
<para>If the machine is running Solaris 2.6 or the 32-bit version of Solaris 7, and its kernel does not support NFS
server functionality or the <emphasis role="bold">nfsd</emphasis> process is not running:</para>
<programlisting>
# <emphasis role="bold">cp -p modload/libafs.nonfs.o /kernel/fs/afs</emphasis>
</programlisting>
<para>If the machine is running the 64-bit version of Solaris 7, its kernel supports NFS server functionality, and the
<emphasis role="bold">nfsd</emphasis> process is running:</para>
<programlisting>
# <emphasis role="bold">cp -p modload/libafs64.o /kernel/fs/sparcv9/afs</emphasis>
</programlisting>
<para>If the machine is running the 64-bit version of Solaris 7, and its kernel does not support NFS server
functionality or the <emphasis role="bold">nfsd</emphasis> process is not running:</para>
<programlisting>
# <emphasis role="bold">cp -p modload/libafs64.nonfs.o /kernel/fs/sparcv9/afs</emphasis>
</programlisting>
</listitem>
<listitem>
<para>Run the AFS initialization script to load AFS modifications into the kernel. You can ignore any error messages
about the inability to start the BOS Server or the Cache Manager or AFS client. <programlisting>
# <emphasis role="bold">/etc/init.d/afs start</emphasis>
</programlisting></para>
<para>When an entry called <computeroutput>afs</computeroutput> does not already exist in the local <emphasis
role="bold">/etc/name_to_sysnum</emphasis> file, the script automatically creates it and reboots the machine to start
using the new version of the file. If this happens, log in again as the superuser <emphasis role="bold">root</emphasis>
after the reboot and run the initialization script again. This time the required entry exists in the <emphasis
role="bold">/etc/name_to_sysnum</emphasis> file, and the <emphasis role="bold">modload</emphasis> program runs.</para>
<programlisting>
login: <emphasis role="bold">root</emphasis>
Password: <replaceable>root_password</replaceable>
# <emphasis role="bold">/etc/init.d/afs start</emphasis>
</programlisting>
</listitem>
</orderedlist></para>
<indexterm>
<primary>replacing fsck program</primary>
<secondary>first AFS machine</secondary>
<tertiary>Solaris</tertiary>
</indexterm>
<indexterm>
<primary>fsck program</primary>
<secondary>on first AFS machine</secondary>
<tertiary>Solaris</tertiary>
</indexterm>
<indexterm>
<primary>first AFS machine</primary>
<secondary>fsck program</secondary>
<tertiary>on Solaris</tertiary>
</indexterm>
<indexterm>
<primary>Solaris</primary>
<secondary>fsck program</secondary>
<tertiary>on first AFS machine</tertiary>
</indexterm>
</sect2>
<sect2 id="HDRWQ47">
<title>Configuring the AFS-modified fsck Program on Solaris Systems</title>
<para>In this section, you make modifications to guarantee that the appropriate <emphasis role="bold">fsck</emphasis> program
runs on AFS server partitions. The <emphasis role="bold">fsck</emphasis> program provided with the operating system must never
run on AFS server partitions. Because it does not recognize the structures that the File Server uses to organize volume data,
it removes all of the data. To repeat:</para>
<para><emphasis role="bold">Never run the standard fsck program on AFS server partitions. It discards AFS volumes.</emphasis>
<orderedlist>
<listitem>
<para>Create the <emphasis role="bold">/usr/lib/fs/afs</emphasis> directory to house the AFS-modified <emphasis
role="bold">fsck</emphasis> program and related files. <programlisting>
# <emphasis role="bold">mkdir /usr/lib/fs/afs</emphasis>
# <emphasis role="bold">cd /usr/lib/fs/afs</emphasis>
</programlisting></para>
</listitem>
<listitem>
<para>Copy the <emphasis role="bold">vfsck</emphasis> binary to the newly created directory, changing the name as you do
so. <programlisting>
# <emphasis role="bold">cp /tmp/afsdist/sun4x_56/root.server/etc/vfsck fsck</emphasis>
</programlisting></para>
</listitem>
<listitem>
<para>Working in the <emphasis role="bold">/usr/lib/fs/afs</emphasis> directory, create the following links to Solaris
libraries: <programlisting>
# <emphasis role="bold">ln -s /usr/lib/fs/ufs/clri</emphasis>
# <emphasis role="bold">ln -s /usr/lib/fs/ufs/df</emphasis>
# <emphasis role="bold">ln -s /usr/lib/fs/ufs/edquota</emphasis>
# <emphasis role="bold">ln -s /usr/lib/fs/ufs/ff</emphasis>
# <emphasis role="bold">ln -s /usr/lib/fs/ufs/fsdb</emphasis>
# <emphasis role="bold">ln -s /usr/lib/fs/ufs/fsirand</emphasis>
# <emphasis role="bold">ln -s /usr/lib/fs/ufs/fstyp</emphasis>
# <emphasis role="bold">ln -s /usr/lib/fs/ufs/labelit</emphasis>
# <emphasis role="bold">ln -s /usr/lib/fs/ufs/lockfs</emphasis>
# <emphasis role="bold">ln -s /usr/lib/fs/ufs/mkfs</emphasis>
# <emphasis role="bold">ln -s /usr/lib/fs/ufs/mount</emphasis>
# <emphasis role="bold">ln -s /usr/lib/fs/ufs/ncheck</emphasis>
# <emphasis role="bold">ln -s /usr/lib/fs/ufs/newfs</emphasis>
# <emphasis role="bold">ln -s /usr/lib/fs/ufs/quot</emphasis>
# <emphasis role="bold">ln -s /usr/lib/fs/ufs/quota</emphasis>
# <emphasis role="bold">ln -s /usr/lib/fs/ufs/quotaoff</emphasis>
# <emphasis role="bold">ln -s /usr/lib/fs/ufs/quotaon</emphasis>
# <emphasis role="bold">ln -s /usr/lib/fs/ufs/repquota</emphasis>
# <emphasis role="bold">ln -s /usr/lib/fs/ufs/tunefs</emphasis>
# <emphasis role="bold">ln -s /usr/lib/fs/ufs/ufsdump</emphasis>
# <emphasis role="bold">ln -s /usr/lib/fs/ufs/ufsrestore</emphasis>
# <emphasis role="bold">ln -s /usr/lib/fs/ufs/volcopy</emphasis>
</programlisting></para>
</listitem>
<listitem>
<para>Append the following line to the end of the file <emphasis role="bold">/etc/dfs/fstypes</emphasis>.
<programlisting>
afs AFS Utilities
</programlisting></para>
</listitem>
<listitem>
<para>Edit the <emphasis role="bold">/sbin/mountall</emphasis> file, making two changes. <itemizedlist>
<listitem>
<para>Add an entry for AFS to the <computeroutput>case</computeroutput> statement for option 2, so that it reads
as follows: <programlisting>
case "$2" in
ufs) foptions="-o p"
;;
afs) foptions="-o p"
;;
s5) foptions="-y -t /var/tmp/tmp$$ -D"
;;
*) foptions="-y"
;;
</programlisting></para>
</listitem>
<listitem>
<para>Edit the file so that all AFS and UFS partitions are checked in parallel. Replace the following section of
code: <programlisting>
# For fsck purposes, we make a distinction between ufs and
# other file systems
#
if [ "$fstype" = "ufs" ]; then
ufs_fscklist="$ufs_fscklist $fsckdev"
saveentry $fstype "$OPTIONS" $special $mountp
continue
fi
</programlisting></para>
<para>with the following section of code:</para>
<programlisting>
# For fsck purposes, we make a distinction between ufs/afs
# and other file systems.
#
if [ "$fstype" = "ufs" -o "$fstype" = "afs" ]; then
ufs_fscklist="$ufs_fscklist $fsckdev"
saveentry $fstype "$OPTIONS" $special $mountp
continue
fi
</programlisting>
</listitem>
</itemizedlist></para>
</listitem>
</orderedlist></para>
<indexterm>
<primary>configuring</primary>
<secondary>AFS server partition on first AFS machine</secondary>
<tertiary>Solaris</tertiary>
</indexterm>
<indexterm>
<primary>AFS server partition</primary>
<secondary>configuring on first AFS machine</secondary>
<tertiary>Solaris</tertiary>
</indexterm>
<indexterm>
<primary>first AFS machine</primary>
<secondary>AFS server partition</secondary>
<tertiary>on Solaris</tertiary>
</indexterm>
<indexterm>
<primary>Solaris</primary>
<secondary>AFS server partition</secondary>
<tertiary>on first AFS machine</tertiary>
</indexterm>
</sect2>
<sect2 id="HDRWQ48">
<title>Configuring Server Partitions on Solaris Systems</title>
<para>Every AFS file server machine must have at least one partition or logical volume dedicated to storing AFS volumes. Each
server partition is mounted at a directory named <emphasis role="bold">/vicep</emphasis><replaceable>xx</replaceable>, where
<replaceable>xx</replaceable> is one or two lowercase letters. The <emphasis
role="bold">/vicep</emphasis><replaceable>xx</replaceable> directories must reside in the file server machine's root
directory, not in one of its subdirectories (for example, <emphasis role="bold">/usr/vicepa</emphasis> is not an acceptable
directory location). For additional information, see <link linkend="HDRWQ20">Performing Platform-Specific Procedures</link>.
<orderedlist>
<listitem>
<para>Create a directory called <emphasis role="bold">/vicep</emphasis><replaceable>xx</replaceable> for each AFS server
partition you are configuring (there must be at least one). Repeat the command for each partition. <programlisting>
# <emphasis role="bold">mkdir /vicep</emphasis><replaceable>xx</replaceable>
</programlisting></para>
</listitem>
<listitem>
<para>Add a line with the following format to the file systems registry file, <emphasis
role="bold">/etc/vfstab</emphasis>, for each partition to be mounted on a directory created in the previous step. Note
the value <computeroutput>afs</computeroutput> in the fourth field, which tells Solaris to use the AFS-modified
<emphasis role="bold">fsck</emphasis> program on this partition. <programlisting>
/dev/dsk/<replaceable>disk</replaceable> /dev/rdsk/<replaceable>disk</replaceable> /vicep<replaceable>xx</replaceable> afs <replaceable>boot_order</replaceable> yes
</programlisting></para>
<para>The following is an example for the first partition being configured.</para>
<programlisting>
/dev/dsk/c0t6d0s1 /dev/rdsk/c0t6d0s1 /vicepa afs 3 yes
</programlisting>
</listitem>
<listitem>
<para>Create a file system on each partition that is to be mounted at a <emphasis
role="bold">/vicep</emphasis><replaceable>xx</replaceable> directory. The following command is probably appropriate, but
consult the Solaris documentation for more information. <programlisting>
# <emphasis role="bold">newfs -v /dev/rdsk/</emphasis><replaceable>disk</replaceable>
</programlisting></para>
</listitem>
<listitem>
<para>Issue the <emphasis role="bold">mountall</emphasis> command to mount all partitions at once.</para>
</listitem>
<listitem>
<para>If you plan to retain client functionality on this machine after completing the installation, proceed to <link
linkend="HDRWQ49">Enabling AFS Login and Editing the File Systems Clean-up Script on Solaris Systems</link>. Otherwise,
proceed to <link linkend="HDRWQ50">Starting the BOS Server</link>.</para>
</listitem>
</orderedlist></para>
</sect2>
<sect2 id="HDRWQ49">
<title>Enabling AFS Login on Solaris Systems</title>
<indexterm>
<primary>enabling AFS login</primary>
<secondary>file server machine</secondary>
<tertiary>Solaris</tertiary>
</indexterm>
<indexterm>
<primary>AFS login</primary>
<secondary>on file server machine</secondary>
<tertiary>Solaris</tertiary>
</indexterm>
<indexterm>
<primary>first AFS machine</primary>
<secondary>AFS login</secondary>
<tertiary>on Solaris</tertiary>
</indexterm>
<indexterm>
<primary>Solaris</primary>
<secondary>AFS login</secondary>
<tertiary>on file server machine</tertiary>
</indexterm>
<indexterm>
<primary>PAM</primary>
<secondary>on Solaris</secondary>
<tertiary>file server machine</tertiary>
</indexterm>
<note>
<para>If you plan to remove client functionality from this machine after completing the installation, skip this section and
proceed to <link linkend="HDRWQ50">Starting the BOS Server</link>.</para>
</note>
<para>At this point you incorporate AFS into the operating system's
Pluggable Authentication Module (PAM) scheme. PAM integrates all
authentication mechanisms on the machine, including login, to provide
the security infrastructure for authenticated access to and from the
machine.</para>
<para>Explaining PAM is beyond the scope of this document. It is
assumed that you understand the syntax and meanings of settings in the
PAM configuration file (for example, how the
<computeroutput>other</computeroutput> entry works, the effect of
marking an entry as <computeroutput>required</computeroutput>,
<computeroutput>optional</computeroutput>, or
<computeroutput>sufficient</computeroutput>, and so on).</para>
<para>You should first configure your system to obtain Kerberos v5
tickets as part of the authentication process, and then run an AFS PAM
module to obtain tokens from those tickets after authentication.
Current versions of Solaris come with a Kerberos v5 PAM module that
will work, or you can download and install <ulink
url="http://www.eyrie.org/~eagle/software/pam-krb5">Russ Allbery's
Kerberos v5 PAM module</ulink>, which is tested regularly with AFS.
See the instructions of whatever PAM module you use for how to
configure it.</para>
<para>Some Kerberos v5 PAM modules do come with native AFS support
(usually requiring the Heimdal Kerberos implementation rather than the
MIT Kerberos implementation). If you are using one of those PAM
modules, you can configure it to obtain AFS tokens. It's more common,
however, to separate the AFS token acquisition into a separate PAM
module.</para>
<para>The recommended AFS PAM module is <ulink
url="http://www.eyrie.org/~eagle/software/pam-afs-session/">Russ
Allbery's pam-afs-session module</ulink>. It should work with any of
the Kerberos v5 PAM modules. To add it to the PAM configuration, you
often only need to add configuration to the session group in
<filename>pam.conf</filename>:</para>
<example>
<title>Solaris PAM session example</title>
<literallayout>login session required pam_afs_session.so</literallayout>
</example>
<para>This example enables PAM authentication only for console login.
You may want to add a similar line for the ssh service and for any
other login service that you use, including possibly the
<literal>other</literal> service (which serves as a catch-all). You
may also want to add options to the AFS PAM session module
(particularly <literal>retain_after_close</literal>, which is
    necessary for some versions of Solaris).</para>
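    <para>As an illustration only (service names, and whether you need the
    <literal>retain_after_close</literal> option, can differ between Solaris
    releases), corresponding entries for the ssh service and the catch-all
    <literal>other</literal> service might look like the following.</para>
    <example>
      <title>Solaris PAM session example for additional services</title>
      <literallayout>sshd    session required pam_afs_session.so retain_after_close
other   session required pam_afs_session.so retain_after_close</literallayout>
    </example>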
<para>For additional configuration examples and the configuration
options of the AFS PAM module, see its documentation. For more
details on the available options for the PAM configuration, see the
<filename>pam.conf</filename> manual page.</para>
<para>Sites which still require <emphasis
role="bold">kaserver</emphasis> or external Kerberos v4 authentication
should consult <link linkend="KAS016">Enabling kaserver based AFS
    Login on Solaris Systems</link> for details of how to enable AFS
login on Solaris.</para>
<para>Proceed to <link linkend="HDRWQ49a">Editing the File Systems
Clean-up Script on Solaris Systems</link></para>
</sect2>
<sect2 id="HDRWQ49a">
<title>Editing the File Systems Clean-up Script on Solaris Systems</title>
<indexterm>
<primary>Solaris</primary>
<secondary>file systems clean-up script</secondary>
<tertiary>on file server machine</tertiary>
</indexterm>
<indexterm>
<primary>file systems clean-up script (Solaris)</primary>
<secondary>file server machine</secondary>
</indexterm>
<indexterm>
<primary>scripts</primary>
<secondary>file systems clean-up (Solaris)</secondary>
<tertiary>file server machine</tertiary>
</indexterm>
<orderedlist>
<listitem>
<para>Some Solaris distributions include a script that locates and removes unneeded files from various file systems. Its
conventional location is <emphasis role="bold">/usr/lib/fs/nfs/nfsfind</emphasis>. The script generally uses an argument
to the <emphasis role="bold">find</emphasis> command to define which file systems to search. In this step you modify the
command to exclude the <emphasis role="bold">/afs</emphasis> directory. Otherwise, the command traverses the AFS
filespace of every cell that is accessible from the machine, which can take many hours. The following alterations are
possibilities, but you must verify that they are appropriate for your cell.</para>
<para>The first possible alteration is to add the <emphasis role="bold">-local</emphasis> flag to the existing command,
so that it looks like the following:</para>
<programlisting>
find $dir -local -name .nfs\* -mtime +7 -mount -exec rm -f {} \;
</programlisting>
<para>Another alternative is to exclude any directories whose names begin with the lowercase letter <emphasis
role="bold">a</emphasis> or a non-alphabetic character.</para>
<programlisting>
find /[A-Zb-z]* <replaceable>remainder of existing command</replaceable>
</programlisting>
<para>Do not use the following command, which still searches under the <emphasis role="bold">/afs</emphasis> directory,
looking for a subdirectory of type <emphasis role="bold">4.2</emphasis>.</para>
<programlisting>
find / -fstype 4.2 /* <replaceable>do not use</replaceable> */
</programlisting>
</listitem>
<listitem>
<para>Proceed to <link linkend="HDRWQ50">Starting the BOS Server</link> (or if referring to these instructions while
installing an additional file server machine, return to <link linkend="HDRWQ108">Starting Server
Programs</link>).</para>
</listitem>
</orderedlist>
<indexterm>
<primary>Basic OverSeer Server</primary>
<see>BOS Server</see>
</indexterm>
<indexterm>
<primary>BOS Server</primary>
<secondary>starting</secondary>
<tertiary>first AFS machine</tertiary>
</indexterm>
<indexterm>
<primary>starting</primary>
<secondary>BOS Server</secondary>
<tertiary>first AFS machine</tertiary>
</indexterm>
<indexterm>
<primary>first AFS machine</primary>
<secondary>BOS Server</secondary>
</indexterm>
<indexterm>
<primary>authorization checking (disabling)</primary>
<secondary>first AFS machine</secondary>
</indexterm>
<indexterm>
<primary>disabling authorization checking</primary>
<secondary>first AFS machine</secondary>
</indexterm>
<indexterm>
<primary>first AFS machine</primary>
<secondary>authorization checking (disabling)</secondary>
</indexterm>
</sect2>
</sect1>
<sect1 id="HDRWQ50">
<title>Starting the BOS Server</title>
<para>You are now ready to start the AFS server processes on this machine.
If you are not working from a packaged distribution, begin by copying the
AFS server binaries from the distribution to the conventional local disk
location, the <emphasis role="bold">/usr/afs/bin</emphasis> directory. The
following instructions also create files in other subdirectories of the
<emphasis role="bold">/usr/afs</emphasis> directory.</para>
<para>Then issue the <emphasis role="bold">bosserver</emphasis> command to initialize the Basic OverSeer (BOS) Server, which
monitors and controls other AFS server processes on its server machine. Include the <emphasis role="bold">-noauth</emphasis>
flag to disable authorization checking. Because you have not yet configured your cell's AFS authentication and authorization
mechanisms, the BOS Server cannot perform authorization checking as it does during normal operation. In no-authorization mode,
it does not verify the identity or privilege of the issuer of a <emphasis role="bold">bos</emphasis> command, and so performs
any operation for anyone.</para>
<para>Disabling authorization checking gravely compromises cell security. You must complete all subsequent steps in one
uninterrupted pass and must not leave the machine unattended until you restart the BOS Server with authorization checking
enabled, in <link linkend="HDRWQ72">Verifying the AFS Initialization Script</link>.</para>
<para>As it initializes for the first time, the BOS Server creates the following directories and files, setting the owner to the
local superuser <emphasis role="bold">root</emphasis> and the mode bits to limit the ability to write (and in some cases, read)
them. For a description of the contents and function of these directories and files, see the chapter in the <emphasis>OpenAFS
Administration Guide</emphasis> about administering server machines. For further discussion of the mode bit settings, see <link
linkend="HDRWQ96">Protecting Sensitive AFS Directories</link>. <indexterm>
<primary>Binary Distribution</primary>
<secondary>copying server files from</secondary>
<tertiary>first AFS machine</tertiary>
</indexterm> <indexterm>
<primary>first AFS machine</primary>
<secondary>subdirectories of /usr/afs</secondary>
</indexterm> <indexterm>
<primary>creating</primary>
<secondary>/usr/afs/bin directory</secondary>
<tertiary>first AFS machine</tertiary>
</indexterm> <indexterm>
<primary>creating</primary>
<secondary>/usr/afs/etc directory</secondary>
<tertiary>first AFS machine</tertiary>
</indexterm> <indexterm>
<primary>copying</primary>
<secondary>server files to local disk</secondary>
<tertiary>first AFS machine</tertiary>
</indexterm> <indexterm>
<primary>first AFS machine</primary>
<secondary>copying</secondary>
<tertiary>server files to local disk</tertiary>
</indexterm> <indexterm>
<primary>usr/afs/bin directory</primary>
<secondary>first AFS machine</secondary>
</indexterm> <indexterm>
<primary>usr/afs/etc directory</primary>
<secondary>first AFS machine</secondary>
</indexterm> <indexterm>
<primary>usr/afs/db directory</primary>
</indexterm> <indexterm>
<primary>usr/afs/local directory</primary>
</indexterm> <indexterm>
<primary>usr/afs/logs directory</primary>
</indexterm> <itemizedlist>
<listitem>
<para><emphasis role="bold">/usr/afs/db</emphasis></para>
</listitem>
<listitem>
<para><emphasis role="bold">/usr/afs/etc/CellServDB</emphasis></para>
</listitem>
<listitem>
<para><emphasis role="bold">/usr/afs/etc/ThisCell</emphasis></para>
</listitem>
<listitem>
<para><emphasis role="bold">/usr/afs/local</emphasis></para>
</listitem>
<listitem>
<para><emphasis role="bold">/usr/afs/logs</emphasis></para>
</listitem>
</itemizedlist></para>
<para>The BOS Server also creates symbolic links called <emphasis role="bold">/usr/vice/etc/ThisCell</emphasis> and <emphasis
role="bold">/usr/vice/etc/CellServDB</emphasis> to the corresponding files in the <emphasis role="bold">/usr/afs/etc</emphasis>
directory. The AFS command interpreters consult the <emphasis role="bold">CellServDB</emphasis> and <emphasis
role="bold">ThisCell</emphasis> files in the <emphasis role="bold">/usr/vice/etc</emphasis> directory because they generally run
on client machines. On machines that are AFS servers only (as this machine currently is), the files reside only in the <emphasis
role="bold">/usr/afs/etc</emphasis> directory; the links enable the command interpreters to retrieve the information they need.
Later instructions for installing the client functionality replace the links with actual files. <orderedlist>
<listitem>
<para>If you are not working from a packaged distribution, you may need to copy files from the distribution media to the local <emphasis role="bold">/usr/afs</emphasis> directory.
<programlisting>
# <emphasis role="bold">cd /tmp/afsdist/</emphasis><replaceable>sysname</replaceable><emphasis role="bold">/root.server/usr/afs</emphasis>
# <emphasis role="bold">cp -rp * /usr/afs</emphasis>
</programlisting> <indexterm>
<primary>commands</primary>
<secondary>bosserver</secondary>
</indexterm> <indexterm>
<primary>bosserver command</primary>
</indexterm></para>
</listitem>
<listitem>
<para>Issue the <emphasis role="bold">bosserver</emphasis> command. Include the <emphasis role="bold">-noauth</emphasis>
flag to disable authorization checking. <programlisting>
# <emphasis role="bold">/usr/afs/bin/bosserver -noauth &amp;</emphasis>
</programlisting></para>
</listitem>
<listitem>
<para>Verify that the BOS Server created <emphasis role="bold">/usr/vice/etc/ThisCell</emphasis> and <emphasis
role="bold">/usr/vice/etc/CellServDB</emphasis> as symbolic links to the corresponding files in the <emphasis
role="bold">/usr/afs/etc</emphasis> directory. <programlisting>
# <emphasis role="bold">ls -l /usr/vice/etc</emphasis>
</programlisting></para>
<para>If either or both of <emphasis role="bold">/usr/vice/etc/ThisCell</emphasis> and <emphasis
role="bold">/usr/vice/etc/CellServDB</emphasis> do not exist, or are not links, issue the following commands.</para>
<programlisting>
# <emphasis role="bold">cd /usr/vice/etc</emphasis>
# <emphasis role="bold">ln -s /usr/afs/etc/ThisCell</emphasis>
# <emphasis role="bold">ln -s /usr/afs/etc/CellServDB</emphasis>
</programlisting>
</listitem>
</orderedlist></para>
<indexterm>
<primary>cell name</primary>
<secondary>defining during installation of first machine</secondary>
</indexterm>
<indexterm>
<primary>defining</primary>
<secondary>cell name during installation of first machine</secondary>
</indexterm>
<indexterm>
<primary>cell name</primary>
<secondary>setting in server ThisCell file</secondary>
<tertiary>first AFS machine</tertiary>
</indexterm>
<indexterm>
<primary>setting</primary>
<secondary>cell name in server ThisCell file</secondary>
<tertiary>first AFS machine</tertiary>
</indexterm>
<indexterm>
<primary>first AFS machine</primary>
<secondary>ThisCell file (server)</secondary>
</indexterm>
<indexterm>
<primary>usr/afs/etc/ThisCell</primary>
<see>ThisCell file (server)</see>
</indexterm>
<indexterm>
<primary>ThisCell file (server)</primary>
<secondary>first AFS machine</secondary>
</indexterm>
<indexterm>
<primary>files</primary>
<secondary>ThisCell (server)</secondary>
</indexterm>
<indexterm>
<primary>database server machine</primary>
<secondary>entry in server CellServDB file</secondary>
<tertiary>on first AFS machine</tertiary>
</indexterm>
<indexterm>
<primary>first AFS machine</primary>
<secondary>cell membership, defining</secondary>
<tertiary>for server processes</tertiary>
</indexterm>
<indexterm>
<primary>usr/afs/etc/CellServDB file</primary>
<see>CellServDB file (server)</see>
</indexterm>
<indexterm>
<primary>CellServDB file (server)</primary>
<secondary>creating</secondary>
<tertiary>on first AFS machine</tertiary>
</indexterm>
<indexterm>
<primary>creating</primary>
<secondary>CellServDB file (server)</secondary>
<tertiary>first AFS machine</tertiary>
</indexterm>
<indexterm>
<primary>files</primary>
<secondary>CellServDB (server)</secondary>
</indexterm>
<indexterm>
<primary>first AFS machine</primary>
<secondary>CellServDB file (server)</secondary>
</indexterm>
<indexterm>
<primary>first AFS machine</primary>
<secondary>defining</secondary>
<tertiary>as database server</tertiary>
</indexterm>
<indexterm>
<primary>defining</primary>
<secondary>first AFS machine as database server</secondary>
</indexterm>
</sect1>
<sect1 id="HDRWQ51">
<title>Defining Cell Name and Membership for Server Processes</title>
<para>Now assign your cell's name. The chapter in the <emphasis>OpenAFS Administration Guide</emphasis> about cell configuration
and administration issues discusses the important considerations, explains why changing the name is difficult, and outlines the
restrictions on name format. Two of the most important restrictions are that the name cannot include uppercase letters or more
than 64 characters.</para>
<para>Use the <emphasis role="bold">bos setcellname</emphasis> command to assign the cell name. It creates two files:
<itemizedlist>
<listitem>
<para><emphasis role="bold">/usr/afs/etc/ThisCell</emphasis>, which defines this machine's cell membership</para>
</listitem>
<listitem>
<para><emphasis role="bold">/usr/afs/etc/CellServDB</emphasis>, which lists the cell's database server machines; the
machine named on the command line is placed on the list automatically</para>
</listitem>
</itemizedlist> <note>
<para>In the following and every instruction in this guide, for the <replaceable>machine name</replaceable> argument
substitute the fully-qualified hostname (such as <emphasis role="bold">fs1.example.com</emphasis>) of the machine you are
installing. For the <replaceable>cell name</replaceable> argument substitute your cell's complete name (such as <emphasis
role="bold">example.com</emphasis>).</para>
</note></para>
<indexterm>
<primary>commands</primary>
<secondary>bos setcellname</secondary>
</indexterm>
<indexterm>
<primary>bos commands</primary>
<secondary>setcellname</secondary>
</indexterm>
<orderedlist>
<listitem>
<para>If necessary, add the directory containing the <emphasis role="bold">bos</emphasis> command to your path.
<programlisting>
# <emphasis role="bold">export PATH=$PATH:/usr/afs/bin</emphasis>
</programlisting>
</para>
</listitem>
<listitem>
<para>Issue the <emphasis role="bold">bos setcellname</emphasis> command to set the cell name. <programlisting>
# <emphasis role="bold">bos setcellname</emphasis> &lt;<replaceable>machine name</replaceable>&gt; &lt;<replaceable>cell name</replaceable>&gt; <emphasis
role="bold">-noauth</emphasis>
</programlisting></para>
<para>Because you are not authenticated and authorization checking is disabled, the <emphasis role="bold">bos</emphasis>
command interpreter possibly produces error messages about being unable to obtain tickets and running unauthenticated. You
can safely ignore the messages. <indexterm>
<primary>commands</primary>
<secondary>bos listhosts</secondary>
</indexterm> <indexterm>
<primary>bos commands</primary>
<secondary>listhosts</secondary>
</indexterm> <indexterm>
<primary>CellServDB file (server)</primary>
<secondary>displaying entries</secondary>
</indexterm> <indexterm>
<primary>displaying</primary>
<secondary>CellServDB file (server) entries</secondary>
</indexterm></para>
</listitem>
<listitem>
<para>Issue the <emphasis role="bold">bos listhosts</emphasis> command to verify that the machine you are installing is now
registered as the cell's first database server machine. <programlisting>
# <emphasis role="bold">bos listhosts</emphasis> &lt;<replaceable>machine name</replaceable>&gt; <emphasis role="bold">-noauth</emphasis>
Cell name is <replaceable>cell_name</replaceable>
Host 1 is <replaceable>machine_name</replaceable>
</programlisting></para>
</listitem>
</orderedlist>
<indexterm>
<primary>database server machine</primary>
<secondary>installing</secondary>
<tertiary>first</tertiary>
</indexterm>
<indexterm>
<primary>instructions</primary>
<secondary>database server machine, installing first</secondary>
</indexterm>
<indexterm>
<primary>installing</primary>
<secondary>database server machine</secondary>
<tertiary>first</tertiary>
</indexterm>
<indexterm>
<primary>Backup Server</primary>
<secondary>starting</secondary>
<tertiary>first AFS machine</tertiary>
</indexterm>
<indexterm>
<primary>buserver process</primary>
<see>Backup Server</see>
</indexterm>
<indexterm>
<primary>starting</primary>
<secondary>Backup Server</secondary>
<tertiary>first AFS machine</tertiary>
</indexterm>
<indexterm>
<primary>first AFS machine</primary>
<secondary>Backup Server</secondary>
</indexterm>
<indexterm>
<primary>Protection Server</primary>
<secondary>starting</secondary>
<tertiary>first AFS machine</tertiary>
</indexterm>
<indexterm>
<primary>ptserver process</primary>
<see>Protection Server</see>
</indexterm>
<indexterm>
<primary>starting</primary>
<secondary>Protection Server</secondary>
<tertiary>first AFS machine</tertiary>
</indexterm>
<indexterm>
<primary>first AFS machine</primary>
<secondary>Protection Server</secondary>
</indexterm>
<indexterm>
<primary>VL Server (vlserver process)</primary>
<secondary>starting</secondary>
<tertiary>first AFS machine</tertiary>
</indexterm>
<indexterm>
<primary>Volume Location Server</primary>
<see>VL Server</see>
</indexterm>
<indexterm>
<primary>starting</primary>
<secondary>VL Server</secondary>
<tertiary>first AFS machine</tertiary>
</indexterm>
<indexterm>
<primary>first AFS machine</primary>
<secondary>VL Server</secondary>
</indexterm>
<indexterm>
<primary>usr/afs/local/BosConfig</primary>
<see>BosConfig file</see>
</indexterm>
<indexterm>
<primary>BosConfig file</primary>
<secondary>adding entries</secondary>
<tertiary>first AFS machine</tertiary>
</indexterm>
<indexterm>
<primary>adding</primary>
<secondary>entries to BosConfig file</secondary>
<tertiary>first AFS machine</tertiary>
</indexterm>
<indexterm>
<primary>files</primary>
<secondary>BosConfig</secondary>
</indexterm>
<indexterm>
<primary>initializing</primary>
<secondary>server process</secondary>
<see>starting</see>
</indexterm>
<indexterm>
<primary>server process</primary>
<secondary>see also entry for each server's name</secondary>
</indexterm>
</sect1>
<sect1 id="HDRWQ52">
<title>Starting the Database Server Processes</title>
<para>Next use the <emphasis role="bold">bos create</emphasis> command to create entries for the three database server processes
in the <emphasis role="bold">/usr/afs/local/BosConfig</emphasis> file and start them running. The three processes run on database
server machines only: <itemizedlist>
<listitem>
<para>The Backup Server (the <emphasis role="bold">buserver</emphasis> process) maintains the Backup Database</para>
</listitem>
<listitem>
<para>The Protection Server (the <emphasis role="bold">ptserver</emphasis> process) maintains the Protection
Database</para>
</listitem>
<listitem>
<para>The Volume Location (VL) Server (the <emphasis role="bold">vlserver</emphasis> process) maintains the Volume
Location Database (VLDB)</para>
</listitem>
</itemizedlist></para>
<indexterm>
<primary>Kerberos</primary>
</indexterm>
<note>
<para>AFS ships with an additional database server named 'kaserver', which
was historically used to provide authentication services to AFS cells.
kaserver was based on <emphasis>Kerberos v4</emphasis>, as such, it is
not recommended for new cells. This guide assumes you have already
configured a Kerberos v5 realm for your site, and details the procedures
required to use AFS with this realm. If you do wish to use
<emphasis role="bold">kaserver</emphasis>, please see the modifications
to these instructions detailed in
<link linkend="KAS006">Starting the kaserver Database Server Process</link>
</para>
</note>
<para>The remaining instructions in this chapter include the <emphasis role="bold">-cell</emphasis> argument on all applicable
commands. Provide the cell name you assigned in <link linkend="HDRWQ51">Defining Cell Name and Membership for Server
Processes</link>. If a command appears on multiple lines, it is only for legibility. <indexterm>
<primary>commands</primary>
<secondary>bos create</secondary>
</indexterm> <indexterm>
<primary>bos commands</primary>
<secondary>create</secondary>
</indexterm> <orderedlist>
<listitem>
<para>Issue the <emphasis role="bold">bos create</emphasis> command to start the Backup Server. <programlisting>
# <emphasis role="bold">./bos create</emphasis> &lt;<replaceable>machine name</replaceable>&gt; <emphasis role="bold">buserver simple /usr/afs/bin/buserver</emphasis> \
<emphasis role="bold"> -cell</emphasis> &lt;<replaceable>cell name</replaceable>&gt; <emphasis role="bold">-noauth</emphasis>
</programlisting></para>
</listitem>
<listitem>
<para>Issue the <emphasis role="bold">bos create</emphasis> command to start the Protection Server. <programlisting>
# <emphasis role="bold">./bos create</emphasis> &lt;<replaceable>machine name</replaceable>&gt; <emphasis role="bold">ptserver simple /usr/afs/bin/ptserver</emphasis> \
<emphasis role="bold"> -cell</emphasis> &lt;<replaceable>cell name</replaceable>&gt; <emphasis role="bold">-noauth</emphasis>
</programlisting></para>
</listitem>
<listitem>
<para>Issue the <emphasis role="bold">bos create</emphasis> command to start the VL Server. <programlisting>
# <emphasis role="bold">./bos create</emphasis> &lt;<replaceable>machine name</replaceable>&gt; <emphasis role="bold">vlserver simple /usr/afs/bin/vlserver</emphasis> \
<emphasis role="bold"> -cell</emphasis> &lt;<replaceable>cell name</replaceable>&gt; <emphasis role="bold">-noauth</emphasis>
</programlisting></para>
</listitem>
</orderedlist></para>
<indexterm>
<primary>admin account</primary>
<secondary>creating</secondary>
</indexterm>
<indexterm>
<primary>afs entry in Kerberos Database</primary>
</indexterm>
<indexterm>
<primary>Kerberos Database</primary>
</indexterm>
<indexterm>
<primary>creating</primary>
<secondary>afs entry in Kerberos Database</secondary>
</indexterm>
<indexterm>
<primary>creating</primary>
<secondary>admin account in Kerberos Database</secondary>
</indexterm>
<indexterm>
<primary>security</primary>
<secondary>initializing cell-wide</secondary>
</indexterm>
<indexterm>
<primary>cell</primary>
<secondary>initializing security mechanisms</secondary>
</indexterm>
<indexterm>
<primary>initializing</primary>
<secondary>cell security mechanisms</secondary>
</indexterm>
<indexterm>
<primary>usr/afs/etc/KeyFile</primary>
<see>KeyFile file</see>
</indexterm>
<indexterm>
<primary>KeyFile file</primary>
<secondary>first AFS machine</secondary>
</indexterm>
<indexterm>
<primary>files</primary>
<secondary>KeyFile</secondary>
</indexterm>
<indexterm>
<primary>key</primary>
<see>server encryption key</see>
</indexterm>
<indexterm>
<primary>encryption key</primary>
<see>server encryption key</see>
</indexterm>
</sect1>
<sect1 id="HDRWQ53">
<title>Initializing Cell Security </title>
<para>If you are working with an existing cell which uses
<emphasis role="bold">kaserver</emphasis> or Kerberos v4 for authentication,
please see
<link linkend="HDRWQ53">Initializing Cell Security with kaserver</link>
for installation instructions which replace this section.</para>
<para>Now initialize the cell's security mechanisms. Begin by creating the following two entries in your site's Kerberos database: <itemizedlist>
<listitem>
<para>A generic administrative account, called <emphasis role="bold">admin</emphasis> by convention. If you choose to
assign a different name, substitute it throughout the remainder of this document.</para>
<para>After you complete the installation of the first machine, you can continue to have all administrators use the
<emphasis role="bold">admin</emphasis> account, or you can create a separate administrative account for each of them. The
latter scheme implies somewhat more overhead, but provides a more informative audit trail for administrative
operations.</para>
</listitem>
<listitem>
<para>The entry for AFS server processes, called either
<emphasis role="bold">afs</emphasis> or
<emphasis role="bold">afs/<replaceable>cell</replaceable></emphasis>.
No user logs in under this identity, but it is used to encrypt the
server tickets that are granted to AFS clients for presentation to
server processes during mutual authentication. (The
chapter in the <emphasis>OpenAFS Administration Guide</emphasis> about cell configuration and administration describes the
role of server encryption keys in mutual authentication.)</para>
<para>In Step <link linkend="LIWQ58">7</link>, you also place the initial AFS server encryption key into the <emphasis
role="bold">/usr/afs/etc/KeyFile</emphasis> file. The AFS server processes refer to this file to learn the server
encryption key when they need to decrypt server tickets.</para>
</listitem>
</itemizedlist></para>
<para>You also issue several commands that enable the new <emphasis role="bold">admin</emphasis> user to issue privileged
commands in all of the AFS suites.</para>
<para>The following instructions do not configure all of the security mechanisms related to the AFS Backup System. See the
chapter in the <emphasis>OpenAFS Administration Guide</emphasis> about configuring the Backup System.</para>
<para>The examples below assume you are using MIT Kerberos. If you are
using a different KDC implementation, please refer to the documentation
for its administrative interface.</para>
<orderedlist>
<listitem>
<para>Enter <emphasis role="bold">kadmin</emphasis> interactive mode.
<programlisting>
# <emphasis role="bold">kadmin</emphasis>
Authenticating as principal <replaceable>you</replaceable>/admin@<replaceable>YOUR REALM</replaceable> with password
Password for <replaceable>you/admin@REALM</replaceable>: <replaceable>your_password</replaceable>
</programlisting> <indexterm>
<primary>server encryption key</primary>
<secondary>in Kerberos Database</secondary>
</indexterm> <indexterm>
<primary>creating</primary>
<secondary>server encryption key</secondary>
<tertiary>Kerberos Database</tertiary>
</indexterm></para>
</listitem>
<listitem>
<para><anchor id="LIWQ54" />Issue the
<emphasis role="bold">add_principal</emphasis> command to create
Kerberos Database entries called
<emphasis role="bold">admin</emphasis> and
<emphasis role="bold">afs/&lt;<replaceable>cell name</replaceable>&gt;</emphasis>.</para>
<para>You should make the <replaceable>admin_password</replaceable> as
long and complex as possible, but keep in mind that administrators
need to enter it often. It must be at least six characters long.</para>
<para>Note that when creating the
<emphasis role="bold">afs/&lt;<replaceable>cell name</replaceable>&gt;</emphasis>
entry, the encryption types should be restricted to des-cbc-crc:v4.
For more details regarding encryption types, see the documentation
for your Kerberos installation.
<programlisting>
kadmin: <emphasis role="bold">add_principal -randkey -e des-cbc-crc:v4 afs/</emphasis>&lt;<replaceable>cell name</replaceable>&gt;
Principal "afs/<replaceable>cell name</replaceable>@<replaceable>REALM</replaceable>" created.
kadmin: <emphasis role="bold">add_principal admin</emphasis>
Enter password for principal "admin@<replaceable>REALM</replaceable>": <emphasis role="bold"><replaceable>admin_password</replaceable></emphasis>
Principal "admin@<replaceable>REALM</replaceable>" created.
</programlisting>
</para>
<indexterm>
<primary>commands</primary>
<secondary>kas examine</secondary>
</indexterm>
<indexterm>
<primary>kas commands</primary>
<secondary>examine</secondary>
</indexterm>
<indexterm>
<primary>displaying</primary>
<secondary>server encryption key</secondary>
<tertiary>Authentication Database</tertiary>
</indexterm>
</listitem>
<listitem>
<para><anchor id="LIWQ55" />Issue the <emphasis role="bold">kadmin
get_principal</emphasis> command to display the <emphasis
role="bold">afs/</emphasis>&lt;<replaceable>cell name</replaceable>&gt; entry.
<programlisting>
kadmin: <emphasis role="bold">get_principal afs/&lt;<replaceable>cell name</replaceable>&gt;</emphasis>
Principal: afs/<replaceable>cell</replaceable>
[ ... ]
Key: vno 2, DES cbc mode with CRC-32, no salt
[ ... ]
</programlisting>
</para>
</listitem>
<listitem>
<para>Extract the newly created key for <emphasis role="bold">afs/<replaceable>cell</replaceable></emphasis> to a keytab on the local machine. We will use <emphasis role="bold">/etc/afs.keytab</emphasis> as the location for this keytab.</para>
<para>The keytab contains the key material that ensures the security of your AFS cell. You should ensure that it is kept in a secure location at all times.</para>
<programlisting>
kadmin: <emphasis role="bold">ktadd -k /etc/afs.keytab -e des-cbc-crc:v4 afs/&lt;<replaceable>cell name</replaceable>&gt;</emphasis>
Entry for principal afs/&lt;<replaceable>cell name</replaceable>&gt; with kvno 3, encryption type DES cbc mode with CRC-32 added to keytab WRFILE:/etc/afs.keytab
</programlisting>
<para>Make a note of the key version number (kvno) given in the
response, as you will need it when loading the key into the
<emphasis role="bold">KeyFile</emphasis> in a later step.</para>
<note><para>Each time you run
<emphasis role="bold">ktadd</emphasis>, a new key is generated
for the principal being extracted. This means that you cannot run ktadd
multiple times and end up with the same key material each time.
</para></note>
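<para>If you later need to recheck the key version number, you can also
list the keytab entries; with MIT Kerberos the output resembles the
following (the kvno and principal shown are examples only). <programlisting>
# <emphasis role="bold">klist -k -e /etc/afs.keytab</emphasis>
Keytab name: FILE:/etc/afs.keytab
KVNO Principal
---- --------------------------------------------------------------
   3 afs/&lt;<replaceable>cell name</replaceable>&gt;@<replaceable>REALM</replaceable> (DES cbc mode with CRC-32)
</programlisting></para>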
</listitem>
<listitem>
<para>Issue the <emphasis role="bold">kadmin quit</emphasis> command to leave <emphasis role="bold">kadmin</emphasis>
interactive mode. <programlisting>
kadmin: <emphasis role="bold">quit</emphasis>
</programlisting> <indexterm>
<primary>commands</primary>
<secondary>bos adduser</secondary>
</indexterm> <indexterm>
<primary>bos commands</primary>
<secondary>adduser</secondary>
</indexterm> <indexterm>
<primary>usr/afs/etc/UserList</primary>
<see>UserList file</see>
</indexterm> <indexterm>
<primary>UserList file</primary>
<secondary>first AFS machine</secondary>
</indexterm> <indexterm>
<primary>files</primary>
<secondary>UserList</secondary>
</indexterm> <indexterm>
<primary>creating</primary>
<secondary>UserList file entry</secondary>
</indexterm> <indexterm>
<primary>admin account</primary>
<secondary>adding</secondary>
<tertiary>to UserList file</tertiary>
</indexterm></para>
</listitem>
<listitem>
<para><anchor id="LIWQ57" />Issue the <emphasis role="bold">bos adduser</emphasis> command to add the <emphasis
role="bold">admin</emphasis> user to the <emphasis role="bold">/usr/afs/etc/UserList</emphasis> file. This enables the
<emphasis role="bold">admin</emphasis> user to issue privileged <emphasis role="bold">bos</emphasis> and <emphasis
role="bold">vos</emphasis> commands. <programlisting>
# <emphasis role="bold">./bos adduser</emphasis> &lt;<replaceable>machine name</replaceable>&gt; <emphasis role="bold">admin -cell</emphasis> &lt;<replaceable>cell name</replaceable>&gt; <emphasis
role="bold">-noauth</emphasis>
</programlisting>
<indexterm>
<primary>commands</primary>
<secondary>asetkey</secondary>
</indexterm>
<indexterm>
<primary>creating</primary>
<secondary>server encryption key</secondary>
<tertiary>KeyFile file</tertiary>
</indexterm>
<indexterm>
<primary>server encryption key</primary>
<secondary>in KeyFile file</secondary>
</indexterm></para>
</listitem>
<listitem>
<para><anchor id="LIWQ58" />Issue the
<emphasis role="bold">asetkey</emphasis> command to set the AFS
server encryption key in the
<emphasis role="bold">/usr/afs/etc/KeyFile</emphasis> file. This key
is created from the <emphasis role="bold">/etc/afs.keytab</emphasis>
file created earlier.</para>
<para>The <emphasis role="bold">asetkey</emphasis> command requires the
key version number (or kvno) of the
<emphasis role="bold">afs/</emphasis><replaceable>cell</replaceable>
key. You should have noted this down when creating the key earlier.
The key version number can also be found by running the
<emphasis role="bold">kvno</emphasis> command.</para>
<programlisting>
# <emphasis role="bold">kvno afs/</emphasis>&lt;<replaceable>cell name</replaceable>&gt;
</programlisting>
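<para>With MIT Kerberos, the output of this command resembles the
following; the key version number reported (3 in this example) is the
value to use. <programlisting>
afs/&lt;<replaceable>cell name</replaceable>&gt;@<replaceable>REALM</replaceable>: kvno = 3
</programlisting></para>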
<para>Once the kvno is known, the key can be imported from the keytab
into the <emphasis role="bold">KeyFile</emphasis> using
<emphasis role="bold">asetkey</emphasis>.</para>
<programlisting>
# <emphasis role="bold">asetkey add</emphasis> &lt;<replaceable>kvno</replaceable>&gt; <emphasis role="bold">/etc/afs.keytab afs/</emphasis>&lt;<replaceable>cell name</replaceable>&gt;
</programlisting>
<indexterm>
<primary>commands</primary>
<secondary>bos listkeys</secondary>
</indexterm>
<indexterm>
<primary>bos commands</primary>
<secondary>listkeys</secondary>
</indexterm>
<indexterm>
<primary>displaying</primary>
<secondary>server encryption key</secondary>
<tertiary>KeyFile file</tertiary>
</indexterm>
</listitem>
<listitem>
<para><anchor id="LIWQ59" />Issue the
<emphasis role="bold">bos listkeys</emphasis> command to verify that
the key version number for the new key in the
<emphasis role="bold">KeyFile</emphasis> file is the same as the key
version number in the Authentication Database's
<emphasis role="bold">afs/<replaceable>cell name</replaceable></emphasis>
entry, which you displayed in Step <link linkend="LIWQ55">3</link>.
<programlisting>
# <emphasis role="bold">./bos listkeys</emphasis> &lt;<replaceable>machine name</replaceable>&gt; <emphasis role="bold">-cell</emphasis> &lt;<replaceable>cell name</replaceable>&gt; <emphasis
role="bold">-noauth</emphasis>
key 0 has cksum <replaceable>checksum</replaceable>
</programlisting></para>
<para>You can safely ignore any error messages indicating that <emphasis role="bold">bos</emphasis> failed to get tickets
or that authentication failed.</para>
</listitem>
</orderedlist>
</sect1>
<sect1 id="HDRWQ53a">
<title>Initializing the Protection Database</title>
<para>Now continue to configure your cell's security systems by
populating the Protection Database with the newly created
<emphasis role="bold">admin</emphasis> user, and permitting it
to issue privileged commands on the AFS filesystem.</para>
<orderedlist>
<listitem>
<indexterm>
<primary>commands</primary>
<secondary>pts createuser</secondary>
</indexterm>
<indexterm>
<primary>pts commands</primary>
<secondary>createuser</secondary>
</indexterm>
<indexterm>
<primary>Protection Database</primary>
</indexterm>
<para>Issue the <emphasis role="bold">pts createuser</emphasis> command to create a Protection Database entry for the
<emphasis role="bold">admin</emphasis> user.</para>
<para>By default, the Protection Server assigns AFS UID 1 (one) to the <emphasis role="bold">admin</emphasis> user,
because it is the first user entry you are creating. If the local password file (<emphasis
role="bold">/etc/passwd</emphasis> or equivalent) already has an entry for <emphasis role="bold">admin</emphasis> that
assigns it a UNIX UID other than 1, it is best to use the <emphasis role="bold">-id</emphasis> argument to the <emphasis
role="bold">pts createuser</emphasis> command to make the new AFS UID match the existing UNIX UID. Otherwise, it is best
to accept the default.</para>
<programlisting>
# <emphasis role="bold">pts createuser -name admin -cell</emphasis> &lt;<replaceable>cell name</replaceable>&gt; [<emphasis
role="bold">-id</emphasis> &lt;<replaceable>AFS UID</replaceable>&gt;] <emphasis role="bold">-noauth</emphasis>
User admin has id <replaceable>AFS UID</replaceable>
</programlisting>
<indexterm>
<primary>commands</primary>
<secondary>pts adduser</secondary>
</indexterm>
<indexterm>
<primary>pts commands</primary>
<secondary>adduser</secondary>
</indexterm>
<indexterm>
<primary>system:administrators group</primary>
</indexterm>
<indexterm>
<primary>admin account</primary>
<secondary>adding</secondary>
<tertiary>to system:administrators group</tertiary>
</indexterm>
</listitem>
<listitem>
<para>Issue the <emphasis role="bold">pts adduser</emphasis> command to make the <emphasis role="bold">admin</emphasis>
user a member of the <emphasis role="bold">system:administrators</emphasis> group, and the <emphasis role="bold">pts
membership</emphasis> command to verify the new membership. Membership in the group enables the <emphasis
role="bold">admin</emphasis> user to issue privileged <emphasis role="bold">pts</emphasis> commands and some privileged
<emphasis role="bold">fs</emphasis> commands. <programlisting>
# <emphasis role="bold">./pts adduser admin system:administrators -cell</emphasis> &lt;<replaceable>cell name</replaceable>&gt; <emphasis
role="bold">-noauth</emphasis>
# <emphasis role="bold">./pts membership admin -cell</emphasis> &lt;<replaceable>cell name</replaceable>&gt; <emphasis
role="bold">-noauth</emphasis>
Groups admin (id: 1) is a member of:
system:administrators
</programlisting> <indexterm>
<primary>commands</primary>
<secondary>bos restart</secondary>
<tertiary>on first AFS machine</tertiary>
</indexterm> <indexterm>
<primary>bos commands</primary>
<secondary>restart</secondary>
<tertiary>on first AFS machine</tertiary>
</indexterm> <indexterm>
<primary>restarting server process</primary>
<secondary>on first AFS machine</secondary>
</indexterm> <indexterm>
<primary>server process</primary>
<secondary>restarting</secondary>
<tertiary>on first AFS machine</tertiary>
</indexterm></para>
</listitem>
<listitem>
<para>Issue the <emphasis role="bold">bos restart</emphasis> command with the <emphasis role="bold">-all</emphasis> flag
to restart the database server processes, so that they start using the new server encryption key. <programlisting>
# <emphasis role="bold">./bos restart</emphasis> &lt;<replaceable>machine name</replaceable>&gt; <emphasis role="bold">-all -cell</emphasis> &lt;<replaceable>cell name</replaceable>&gt; <emphasis
role="bold">-noauth</emphasis>
</programlisting></para>
</listitem>
</orderedlist>
<indexterm>
<primary>File Server</primary>
<secondary>first AFS machine</secondary>
</indexterm>
<indexterm>
<primary>fileserver process</primary>
<see>File Server</see>
</indexterm>
<indexterm>
<primary>starting</primary>
<secondary>File Server</secondary>
<tertiary>first AFS machine</tertiary>
</indexterm>
<indexterm>
<primary>first AFS machine</primary>
<secondary>File Server, fs process</secondary>
</indexterm>
<indexterm>
<primary>Volume Server</primary>
<secondary>first AFS machine</secondary>
</indexterm>
<indexterm>
<primary>volserver process</primary>
<see>Volume Server</see>
</indexterm>
<indexterm>
<primary>starting</primary>
<secondary>Volume Server</secondary>
<tertiary>first AFS machine</tertiary>
</indexterm>
<indexterm>
<primary>first AFS machine</primary>
<secondary>Volume Server</secondary>
</indexterm>
<indexterm>
<primary>Salvager (salvager process)</primary>
<secondary>first AFS machine</secondary>
</indexterm>
<indexterm>
<primary>fs process</primary>
<secondary>first AFS machine</secondary>
</indexterm>
<indexterm>
<primary>starting</primary>
<secondary>fs process</secondary>
<tertiary>first AFS machine</tertiary>
</indexterm>
<indexterm>
<primary>first AFS machine</primary>
<secondary>Salvager</secondary>
</indexterm>
</sect1>
<sect1 id="HDRWQ60">
<title>Starting the File Server, Volume Server, and Salvager</title>
<para>Start the <emphasis role="bold">fs</emphasis> process, which consists of the File Server, Volume Server, and Salvager
(<emphasis role="bold">fileserver</emphasis>, <emphasis role="bold">volserver</emphasis> and <emphasis
role="bold">salvager</emphasis> processes). <orderedlist>
<listitem>
<para>Issue the <emphasis role="bold">bos create</emphasis> command to start the <emphasis role="bold">fs</emphasis>
process. The command appears here on multiple lines only for legibility. <programlisting>
# <emphasis role="bold">./bos create</emphasis> &lt;<replaceable>machine name</replaceable>&gt; <emphasis role="bold">fs fs /usr/afs/bin/fileserver</emphasis> \
<emphasis role="bold">/usr/afs/bin/volserver /usr/afs/bin/salvager</emphasis> \
<emphasis role="bold">-cell</emphasis> &lt;<replaceable>cell name</replaceable>&gt; <emphasis role="bold">-noauth</emphasis>
</programlisting></para>
<para>Sometimes a message about Volume Location Database (VLDB) initialization appears, along with one or more instances
of an error message similar to the following:</para>
<programlisting>
FSYNC_clientInit temporary failure (will retry)
</programlisting>
<para>This message appears when the <emphasis role="bold">volserver</emphasis> process tries to start before the <emphasis
role="bold">fileserver</emphasis> process has completed its initialization. Wait a few minutes after the last such message
before continuing, to guarantee that both processes have started successfully. <indexterm>
<primary>commands</primary>
<secondary>bos status</secondary>
</indexterm> <indexterm>
<primary>bos commands</primary>
<secondary>status</secondary>
</indexterm></para>
<para>You can verify that the <emphasis role="bold">fs</emphasis> process has started successfully by issuing the
<emphasis role="bold">bos status</emphasis> command. Its output mentions two <computeroutput>proc
starts</computeroutput>.</para>
<programlisting>
# <emphasis role="bold">./bos status</emphasis> &lt;<replaceable>machine name</replaceable>&gt; <emphasis role="bold">fs -long -noauth</emphasis>
</programlisting>
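<para>When both processes are running, the output includes lines similar
to the following (dates and other details vary). <programlisting>
Instance fs, (type is fs) currently running normally.
    Auxiliary status is: file server running.
    Process last started at <replaceable>date</replaceable> (2 proc starts)
</programlisting></para>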
</listitem>
<listitem>
<para>Your next action depends on whether you have ever run AFS file server machines in the cell: <itemizedlist>
<indexterm>
<primary>commands</primary>
<secondary>vos create</secondary>
<tertiary>root.afs volume</tertiary>
</indexterm>
<indexterm>
<primary>vos commands</primary>
<secondary>create</secondary>
<tertiary>root.afs volume</tertiary>
</indexterm>
<indexterm>
<primary>root.afs volume</primary>
<secondary>creating</secondary>
</indexterm>
<indexterm>
<primary>volume</primary>
<secondary>creating</secondary>
<tertiary>root.afs</tertiary>
</indexterm>
<indexterm>
<primary>creating</primary>
<secondary>root.afs volume</secondary>
</indexterm>
<listitem>
<para>If you are installing the first AFS server machine ever in the cell (that is, you are not upgrading the AFS
software from a previous version), create the first AFS volume, <emphasis role="bold">root.afs</emphasis>.</para>
<para>For the <replaceable>partition name</replaceable> argument, substitute the name of one of the machine's AFS
server partitions (such as <emphasis role="bold">/vicepa</emphasis>).</para>
<programlisting>
# <emphasis role="bold">./vos create</emphasis> &lt;<replaceable>machine name</replaceable>&gt; &lt;<replaceable>partition name</replaceable>&gt; <emphasis
role="bold">root.afs</emphasis> \
<emphasis role="bold">-cell</emphasis> &lt;<replaceable>cell name</replaceable>&gt; <emphasis role="bold">-noauth</emphasis>
</programlisting>
<para>The Volume Server produces a message confirming that it created the volume on the specified partition. You can
ignore error messages indicating that tokens are missing, or that authentication failed. <indexterm>
<primary>commands</primary>
<secondary>vos syncvldb</secondary>
</indexterm> <indexterm>
<primary>vos commands</primary>
<secondary>syncvldb</secondary>
</indexterm> <indexterm>
<primary>commands</primary>
<secondary>vos syncserv</secondary>
</indexterm> <indexterm>
<primary>vos commands</primary>
<secondary>syncserv</secondary>
</indexterm></para>
</listitem>
<listitem>
<para>If there are existing AFS file server machines and volumes in the cell, issue the <emphasis role="bold">vos
syncvldb</emphasis> and <emphasis role="bold">vos syncserv</emphasis> commands to synchronize the VLDB with the
actual state of volumes on the local machine. To follow the progress of the synchronization operation, which can
take several minutes, use the <emphasis role="bold">-verbose</emphasis> flag. <programlisting>
# <emphasis role="bold">./vos syncvldb</emphasis> &lt;<replaceable>machine name</replaceable>&gt; <emphasis role="bold">-cell</emphasis> &lt;<replaceable>cell name</replaceable>&gt; <emphasis
role="bold">-verbose -noauth</emphasis>
# <emphasis role="bold">./vos syncserv</emphasis> &lt;<replaceable>machine name</replaceable>&gt; <emphasis role="bold">-cell</emphasis> &lt;<replaceable>cell name</replaceable>&gt; <emphasis
role="bold">-verbose -noauth</emphasis>
</programlisting></para>
<para>You can ignore error messages indicating that tokens are missing, or that authentication failed.</para>
</listitem>
</itemizedlist></para>
</listitem>
</orderedlist></para>
<indexterm>
<primary>Update Server</primary>
<secondary>starting server portion</secondary>
<tertiary>first AFS machine</tertiary>
</indexterm>
<indexterm>
<primary>upserver process</primary>
<see>Update Server</see>
</indexterm>
<indexterm>
<primary>starting</primary>
<secondary>Update Server server portion</secondary>
<tertiary>first AFS machine</tertiary>
</indexterm>
<indexterm>
<primary>first AFS machine</primary>
<secondary>Update Server server portion</secondary>
</indexterm>
<indexterm>
<primary>first AFS machine</primary>
<secondary>defining</secondary>
<tertiary>as binary distribution machine</tertiary>
</indexterm>
<indexterm>
<primary>first AFS machine</primary>
<secondary>defining</secondary>
<tertiary>as system control machine</tertiary>
</indexterm>
<indexterm>
<primary>system control machine</primary>
</indexterm>
<indexterm>
<primary>binary distribution machine</primary>
</indexterm>
</sect1>
<sect1 id="HDRWQ61">
<title>Starting the Server Portion of the Update Server</title>
<para>Start the server portion of the Update Server (the <emphasis role="bold">upserver</emphasis> process), to distribute the
contents of directories on this machine to other server machines in the cell. It becomes active when you configure the client
portion of the Update Server on additional server machines.</para>
<para>Distributing the contents of its <emphasis role="bold">/usr/afs/etc</emphasis> directory makes this machine the cell's
<emphasis>system control machine</emphasis>. The other server machines in the cell run the <emphasis
role="bold">upclientetc</emphasis> process (an instance of the client portion of the Update Server) to retrieve the
configuration files. Use the <emphasis role="bold">-crypt</emphasis> argument to the <emphasis role="bold">upserver</emphasis>
initialization command to specify that the Update Server distributes the contents of the <emphasis
role="bold">/usr/afs/etc</emphasis> directory only in encrypted form, as shown in the following instruction. Several of the
files in the directory, particularly the <emphasis role="bold">KeyFile</emphasis> file, are crucial to cell security and so must
never cross the network unencrypted.</para>
<para>(You can choose not to configure a system control machine, in which case you must update the configuration files in each
server machine's <emphasis role="bold">/usr/afs/etc</emphasis> directory individually. The <emphasis role="bold">bos</emphasis>
commands used for this purpose also encrypt data before sending it across the network.)</para>
<para>Distributing the contents of its <emphasis role="bold">/usr/afs/bin</emphasis> directory to other server machines of its
system type makes this machine a <emphasis>binary distribution machine</emphasis>. The other server machines of its system type
run the <emphasis role="bold">upclientbin</emphasis> process (an instance of the client portion of the Update Server) to
retrieve the binaries. If your platform has a package management system,
such as 'rpm' or 'apt', running the Update Server to distribute binaries
may interfere with this system.</para>
<para>The binaries in the <emphasis role="bold">/usr/afs/bin</emphasis> directory are not sensitive, so it is not necessary to
encrypt them before transfer across the network. Include the <emphasis role="bold">-clear</emphasis> argument to the <emphasis
role="bold">upserver</emphasis> initialization command to specify that the Update Server distributes the contents of the
<emphasis role="bold">/usr/afs/bin</emphasis> directory in unencrypted form unless an <emphasis
role="bold">upclientbin</emphasis> process requests encrypted transfer.</para>
<para>Note that the server and client portions of the Update Server always mutually authenticate with one another, regardless of
whether you use the <emphasis role="bold">-clear</emphasis> or <emphasis role="bold">-crypt</emphasis> arguments. This protects
their communications from eavesdropping to some degree.</para>
<para>For more information on the <emphasis role="bold">upclient</emphasis> and <emphasis role="bold">upserver</emphasis>
processes, see their reference pages in the <emphasis>OpenAFS Administration Reference</emphasis>. The commands appear on
multiple lines here only for legibility. <orderedlist>
<listitem>
<para>Issue the <emphasis role="bold">bos create</emphasis> command to start the <emphasis role="bold">upserver</emphasis>
process. <programlisting>
# <emphasis role="bold">./bos create</emphasis> &lt;<replaceable>machine name&gt;</replaceable> <emphasis role="bold">upserver simple</emphasis> \
<emphasis role="bold">"/usr/afs/bin/upserver -crypt /usr/afs/etc</emphasis> \
<emphasis role="bold">-clear /usr/afs/bin" -cell</emphasis> &lt;<replaceable>cell name</replaceable>&gt; <emphasis
role="bold">-noauth</emphasis>
</programlisting></para>
</listitem>
</orderedlist></para>
</sect1>
<sect1 id="HDRWQ62">
<title>Starting the Controller for NTPD</title>
<para>Keeping the clocks on all server and client machines in your cell synchronized is crucial to several functions, and in
particular to the correct operation of AFS's distributed database technology, Ubik. The chapter in the <emphasis>OpenAFS
Administration Guide</emphasis> about administering server machines explains how time skew can disturb Ubik's performance and
cause service outages in your cell.</para>
<para>Historically, AFS distributed its own version of the Network
Time Protocol Daemon. While this is still provided for existing sites, we
recommend that you configure and install your time service independently of
AFS. A reliable time service is also required by your Kerberos realm,
and so may already be available at your site.</para>
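<para>For example, on a machine that is already running the standard
<emphasis role="bold">ntpd</emphasis> daemon, you can confirm that the
clock is being synchronized by listing the daemon's peers with the
<emphasis role="bold">ntpq</emphasis> command (part of the NTP
distribution, not of AFS). <programlisting>
# <emphasis role="bold">ntpq -p</emphasis>
</programlisting></para>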
<indexterm>
<primary>overview</primary>
<secondary>installing client functionality on first machine</secondary>
</indexterm>
<indexterm>
<primary>first AFS machine</primary>
<secondary>client functionality</secondary>
<tertiary>installing</tertiary>
</indexterm>
<indexterm>
<primary>installing</primary>
<secondary>client functionality</secondary>
<tertiary>first AFS machine</tertiary>
</indexterm>
</sect1>
<sect1 id="HDRWQ63">
<title>Overview: Installing Client Functionality</title>
<para>The machine you are installing is now an AFS file server machine,
database server machine, system control machine, and binary distribution
machine. Now make it a client machine by completing the following tasks:
<orderedlist>
<listitem>
<para>Define the machine's cell membership for client processes</para>
</listitem>
<listitem>
<para>Create the client version of the <emphasis role="bold">CellServDB</emphasis> file</para>
</listitem>
<listitem>
<para>Define cache location and size</para>
</listitem>
<listitem>
<para>Create the <emphasis role="bold">/afs</emphasis> directory and start the Cache Manager</para>
</listitem>
</orderedlist></para>
<indexterm>
<primary>Distribution</primary>
<secondary>copying client files from</secondary>
<tertiary>first AFS machine</tertiary>
</indexterm>
<indexterm>
<primary>first AFS machine</primary>
<secondary>copying</secondary>
<tertiary>client files to local disk</tertiary>
</indexterm>
<indexterm>
<primary>copying</primary>
<secondary>client files to local disk</secondary>
<tertiary>first AFS machine</tertiary>
</indexterm>
</sect1>
<sect1 id="HDRWQ64">
<title>Copying Client Files to the Local Disk</title>
<para>You need only undertake the steps in this section if you are using
a tar file distribution, or one built from scratch. Packaged distributions,
such as RPMs or DEBs, will already have installed the necessary files in
the correct locations.</para>
<para>Before installing and configuring the AFS client, copy the necessary files from the tarball to the local <emphasis
role="bold">/usr/vice/etc</emphasis> directory. <orderedlist>
<listitem>
<para>If you have not already done so, unpack the distribution
tarball for this machine's system type into a suitable location on
the filesystem, such as <emphasis role="bold">/tmp/afsdist</emphasis>.
If you use a different location, substitute that location in the examples that
follow.</para>
</listitem>
<listitem>
<para>Copy files to the local <emphasis role="bold">/usr/vice/etc</emphasis> directory.</para>
<para>This step places a copy of the AFS initialization script (and related files, if applicable) into the <emphasis
role="bold">/usr/vice/etc</emphasis> directory. In the preceding instructions for incorporating AFS into the kernel, you
copied the script directly to the operating system's conventional location for initialization files. When you incorporate
AFS into the machine's startup sequence in a later step, you can choose to link the two files.</para>
<para>On some system types that use a dynamic kernel loader program, you previously copied AFS library files into a
subdirectory of the <emphasis role="bold">/usr/vice/etc</emphasis> directory. On other system types, you copied the
appropriate AFS library file directly to the directory where the operating system accesses it. The following commands do
not copy or recopy the AFS library files into the <emphasis role="bold">/usr/vice/etc</emphasis> directory, because on
some system types the library files consume a large amount of space. If you want to copy them, add the <emphasis
role="bold">-r</emphasis> flag to the first <emphasis role="bold">cp</emphasis> command and skip the second <emphasis
role="bold">cp</emphasis> command.</para>
<programlisting>
# <emphasis role="bold">cd /tmp/afsdist/</emphasis><replaceable>sysname</replaceable><emphasis role="bold">/root.client/usr/vice/etc</emphasis>
# <emphasis role="bold">cp -p * /usr/vice/etc</emphasis>
# <emphasis role="bold">cp -rp C /usr/vice/etc</emphasis>
</programlisting>
</listitem>
</orderedlist></para>
<indexterm>
<primary>cell name</primary>
<secondary>setting in client ThisCell file</secondary>
<tertiary>first AFS machine</tertiary>
</indexterm>
<indexterm>
<primary>setting</primary>
<secondary>cell name in client ThisCell file</secondary>
<tertiary>first AFS machine</tertiary>
</indexterm>
<indexterm>
<primary>first AFS machine</primary>
<secondary>ThisCell file (client)</secondary>
</indexterm>
<indexterm>
<primary>first AFS machine</primary>
<secondary>cell membership, defining</secondary>
<tertiary>for client processes</tertiary>
</indexterm>
<indexterm>
<primary>usr/vice/etc/ThisCell</primary>
<see>ThisCell file (client)</see>
</indexterm>
<indexterm>
<primary>ThisCell file (client)</primary>
<secondary>first AFS machine</secondary>
</indexterm>
<indexterm>
<primary>files</primary>
<secondary>ThisCell (client)</secondary>
</indexterm>
</sect1>
<sect1 id="HDRWQ65">
<title>Defining Cell Membership for Client Processes</title>
<para>Every AFS client machine has a copy of the <emphasis role="bold">/usr/vice/etc/ThisCell</emphasis> file on its local disk
to define the machine's cell membership for the AFS client programs that run on it. The <emphasis
role="bold">ThisCell</emphasis> file you created in the <emphasis role="bold">/usr/afs/etc</emphasis> directory (in <link
linkend="HDRWQ51">Defining Cell Name and Membership for Server Processes</link>) is used only by server processes.</para>
<para>Among other functions, the <emphasis role="bold">ThisCell</emphasis> file on a client machine determines the following:
<itemizedlist>
<listitem>
<para>The cell in which users gain tokens when they log onto the
machine, assuming it is using an AFS-modified login utility</para>
</listitem>
<listitem>
<para>The cell in which users gain tokens by default when they issue
the <emphasis role="bold">aklog</emphasis> command</para>
</listitem>
<listitem>
<para>The cell membership of the AFS server processes that the AFS
command interpreters on this machine contact by default</para>
</listitem>
</itemizedlist>
<orderedlist>
<listitem>
<para>Change to the <emphasis role="bold">/usr/vice/etc</emphasis> directory and remove the symbolic link created in <link
linkend="HDRWQ50">Starting the BOS Server</link>. <programlisting>
# <emphasis role="bold">cd /usr/vice/etc</emphasis>
# <emphasis role="bold">rm ThisCell</emphasis>
</programlisting></para>
</listitem>
<listitem>
<para>Create the <emphasis role="bold">ThisCell</emphasis> file as a copy of the <emphasis
role="bold">/usr/afs/etc/ThisCell</emphasis> file. Defining the same local cell for both server and client processes leads
to the most consistent AFS performance. <programlisting>
# <emphasis role="bold">cp /usr/afs/etc/ThisCell ThisCell</emphasis>
</programlisting></para>
</listitem>
</orderedlist></para>
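<para>To confirm the result, display the new file; it contains only the
cell name, as in the following example. <programlisting>
# <emphasis role="bold">cat /usr/vice/etc/ThisCell</emphasis>
example.com
</programlisting></para>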
<indexterm>
<primary>database server machine</primary>
<secondary>entry in client CellServDB file</secondary>
<tertiary>on first AFS machine</tertiary>
</indexterm>
<indexterm>
<primary>usr/vice/etc/CellServDB</primary>
<see>CellServDB file (client)</see>
</indexterm>
<indexterm>
<primary>CellServDB file (client)</primary>
<secondary>creating</secondary>
<tertiary>on first AFS machine</tertiary>
</indexterm>
<indexterm>
<primary>creating</primary>
<secondary>CellServDB file (client)</secondary>
<tertiary>first AFS machine</tertiary>
</indexterm>
<indexterm>
<primary>CellServDB file (client)</primary>
<secondary>required format</secondary>
</indexterm>
<indexterm>
<primary>requirements</primary>
<secondary>CellServDB file format (client version)</secondary>
</indexterm>
<indexterm>
<primary>files</primary>
<secondary>CellServDB (client)</secondary>
</indexterm>
<indexterm>
<primary>first AFS machine</primary>
<secondary>CellServDB file (client)</secondary>
</indexterm>
</sect1>
<sect1 id="HDRWQ66">
<title>Creating the Client CellServDB File</title>
<para>The <emphasis role="bold">/usr/vice/etc/CellServDB</emphasis> file on a client machine's local disk lists the database
server machines for each cell that the local Cache Manager can contact. If there is no entry in the file for a cell, or if the
list of database server machines is wrong, then users working on this machine cannot access the cell. The chapter in the
<emphasis>OpenAFS Administration Guide</emphasis> about administering client machines explains how to maintain the file after
creating it.</para>
<para>As the <emphasis role="bold">afsd</emphasis> program initializes the Cache Manager, it copies the contents of the
<emphasis role="bold">CellServDB</emphasis> file into kernel memory. The Cache Manager always consults the list in kernel memory
rather than the <emphasis role="bold">CellServDB</emphasis> file itself. Between reboots of the machine, you can use the
<emphasis role="bold">fs newcell</emphasis> command to update the list in kernel memory directly; see the chapter in the
<emphasis>OpenAFS Administration Guide</emphasis> about administering client machines.</para>
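<para>For example, once the Cache Manager is running, a command similar to
the following (using the fictional <emphasis role="bold">stateu.edu</emphasis>
cell and addresses shown in the example at the end of this section) adds a
cell's database server machines to the list in kernel memory without a
reboot. <programlisting>
# <emphasis role="bold">fs newcell stateu.edu 138.255.68.93 138.255.68.72 138.255.33.154</emphasis>
</programlisting></para>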
<para>The AFS distribution includes the file
<emphasis role="bold">CellServDB.dist</emphasis>. It includes an entry for
all AFS cells that agreed to share their database server machine
information at the time the distribution was
created. The definitive copy of this file is maintained at
grand.central.org, and updates may be obtained from
/afs/grand.central.org/service/CellServDB or
<ulink url="http://grand.central.org/dl/cellservdb/CellServDB">
http://grand.central.org/dl/cellservdb/CellServDB</ulink></para>
<para>The <emphasis role="bold">CellServDB.dist</emphasis> file can be a
good basis for the client <emphasis role="bold">CellServDB</emphasis> file,
because all of the entries in it use the correct format. You can add or
remove cell entries as you see fit. Depending on your cache manager
configuration, additional steps (as detailed in
<link linkend="HDRWQ91">Enabling Access to Foreign Cells</link>) may be
required to enable the Cache Manager to actually reach the cells.</para>
<para>In this section, you add an entry for the local cell to the local <emphasis role="bold">CellServDB</emphasis> file. The
current working directory is still <emphasis role="bold">/usr/vice/etc</emphasis>. <orderedlist>
<listitem>
<para>Remove the symbolic link created in <link linkend="HDRWQ50">Starting the BOS Server</link> and rename the <emphasis
role="bold">CellServDB.sample</emphasis> file to <emphasis role="bold">CellServDB</emphasis>. <programlisting>
# <emphasis role="bold">rm CellServDB</emphasis>
# <emphasis role="bold">mv CellServDB.sample CellServDB</emphasis>
</programlisting></para>
</listitem>
<listitem>
<para>Add an entry for the local cell to the <emphasis role="bold">CellServDB</emphasis> file. One easy method is to use
the <emphasis role="bold">cat</emphasis> command to append the contents of the server <emphasis
role="bold">/usr/afs/etc/CellServDB</emphasis> file to the client version. <programlisting>
# <emphasis role="bold">cat /usr/afs/etc/CellServDB &gt;&gt; CellServDB</emphasis>
</programlisting></para>
<para>Then open the file in a text editor to verify that there are no blank lines, and that all entries have the required
format, which is described just below. The ordering of cells is not significant, but it can be convenient to have the
client machine's home cell at the top; move it there now if you wish. <itemizedlist>
<listitem>
<para>The first line of a cell's entry has the following format: <programlisting>
&gt;<replaceable>cell_name</replaceable> #<replaceable>organization</replaceable>
</programlisting></para>
<para>where <replaceable>cell_name</replaceable> is the cell's complete Internet domain name (for example, <emphasis
role="bold">example.com</emphasis>) and <replaceable>organization</replaceable> is an optional field that follows any
number of spaces and the number sign (<computeroutput>#</computeroutput>). By convention it names the organization
to which the cell corresponds (for example, the Example Corporation).</para>
</listitem>
<listitem>
<para>After the first line comes a separate line for each database server machine. Each line has the following
format: <programlisting>
<replaceable>IP_address</replaceable> #<replaceable>machine_name</replaceable>
</programlisting></para>
<para>where <replaceable>IP_address</replaceable> is the machine's IP address in dotted decimal format (for example,
192.12.105.3). Following any number of spaces and the number sign (<computeroutput>#</computeroutput>) is
<replaceable>machine_name</replaceable>, the machine's fully-qualified hostname (for example, <emphasis
role="bold">db1.example.com</emphasis>). In this case, the number sign does not indicate a comment;
<replaceable>machine_name</replaceable> is a required field.</para>
</listitem>
</itemizedlist></para>
</listitem>
<listitem>
<para>If the file includes cells that you do not wish users of this machine to access, remove their entries.</para>
</listitem>
</orderedlist></para>
<para>The following example shows entries for two cells, each of which has three database server machines:</para>
<programlisting>
&gt;example.com #Example Corporation (home cell)
192.12.105.3 #db1.example.com
192.12.105.4 #db2.example.com
192.12.105.55 #db3.example.com
&gt;stateu.edu #State University cell
138.255.68.93 #serverA.stateu.edu
138.255.68.72 #serverB.stateu.edu
138.255.33.154 #serverC.stateu.edu
</programlisting>
<indexterm>
<primary>cache</primary>
<secondary>configuring</secondary>
<tertiary>first AFS machine</tertiary>
</indexterm>
<indexterm>
<primary>configuring</primary>
<secondary>cache</secondary>
<tertiary>first AFS machine</tertiary>
</indexterm>
<indexterm>
<primary>setting</primary>
<secondary>cache size and location</secondary>
<tertiary>first AFS machine</tertiary>
</indexterm>
<indexterm>
<primary>first AFS machine</primary>
<secondary>cache size and location</secondary>
</indexterm>
</sect1>
<sect1 id="HDRWQ67">
<title>Configuring the Cache</title>
<para>The Cache Manager uses a cache on the local disk or in machine memory to store local copies of files fetched from file
server machines. As the <emphasis role="bold">afsd</emphasis> program initializes the Cache Manager, it sets basic cache
configuration parameters according to definitions in the local <emphasis role="bold">/usr/vice/etc/cacheinfo</emphasis> file.
The file has three fields: <orderedlist>
<listitem>
<para>The first field names the local directory on which to mount the AFS filespace. The conventional location is the
<emphasis role="bold">/afs</emphasis> directory.</para>
</listitem>
<listitem>
<para>The second field defines the local disk directory to use for the disk cache. The conventional location is the
<emphasis role="bold">/usr/vice/cache</emphasis> directory, but you can specify an alternate directory if another
partition has more space available. There must always be a value in this field, but the Cache Manager ignores it if the
machine uses a memory cache.</para>
</listitem>
<listitem>
<para>The third field specifies the number of kilobyte (1024 byte) blocks to allocate for the cache.</para>
</listitem>
</orderedlist></para>
<para>The values you define must meet the following requirements. <itemizedlist>
<listitem>
<para>On a machine using a disk cache, the Cache Manager expects always to be able to use the amount of space specified in
the third field. Failure to meet this requirement can cause serious problems, some of which can be repaired only by
rebooting. You must prevent non-AFS processes from filling up the cache partition. The simplest way is to devote a
partition to the cache exclusively.</para>
</listitem>
<listitem>
<para>The amount of space available in memory or on the partition housing the disk cache directory imposes an absolute
limit on cache size.</para>
</listitem>
<listitem>
<para>The maximum supported cache size can vary in each AFS release; see the <emphasis>OpenAFS Release Notes</emphasis>
for the current version.</para>
</listitem>
<listitem>
<para>For a disk cache, you cannot specify a value in the third field that exceeds 95% of the space available on the
partition mounted at the directory named in the second field. If you violate this restriction, the <emphasis
role="bold">afsd</emphasis> program exits without starting the Cache Manager and prints an appropriate message on the
standard output stream. A value of 90% is more appropriate on most machines. Some operating systems (such as AIX) do not
automatically reserve some space to prevent the partition from filling completely; for them, a smaller value (say, 80% to
85% of the space available) is more appropriate.</para>
</listitem>
<listitem>
<para>For a memory cache, you must leave enough memory for other processes and applications to run. If you try to allocate
more memory than is actually available, the <emphasis role="bold">afsd</emphasis> program exits without initializing the
Cache Manager and produces the following message on the standard output stream. <programlisting>
afsd: memCache allocation failure at <replaceable>number</replaceable> KB
</programlisting></para>
<para>The <replaceable>number</replaceable> value is how many kilobytes were allocated just before the failure, and so
indicates the approximate amount of memory available.</para>
</listitem>
</itemizedlist></para>
<para>Within these hard limits, the factors that determine appropriate cache size include the number of users working on the
machine, the size of the files with which they work, and (for a memory cache) the number of processes that run on the machine.
The higher the demand from these factors, the larger the cache needs to be to maintain good performance.</para>
<para>Disk caches smaller than 10 MB do not generally perform well. Machines serving multiple users usually perform better with
a cache of at least 60 to 70 MB. The point at which enlarging the cache further does not really improve performance depends on
the factors mentioned previously and is difficult to predict.</para>
<para>Memory caches smaller than 1 MB are nonfunctional, and the performance of caches smaller than 5 MB is usually
unsatisfactory. Suitable upper limits are similar to those for disk caches but are probably determined more by the demands on
memory from other sources on the machine (number of users and processes). Machines running only a few processes possibly can use
a smaller memory cache.</para>
<sect2 id="HDRWQ68">
<title>Configuring a Disk Cache</title>
<note>
<para>Not all file system types that an operating system supports are necessarily supported for use as the cache partition.
For possible restrictions, see the <emphasis>OpenAFS Release Notes</emphasis>.</para>
</note>
<para>To configure the disk cache, perform the following procedures: <orderedlist>
<listitem>
<para>Create the local directory to use for caching. The following instruction shows the conventional location,
<emphasis role="bold">/usr/vice/cache</emphasis>. If you are devoting a partition exclusively to caching, as
recommended, you must also configure it, make a file system on it, and mount it at the directory created in this step (a sample command sequence follows this list).
<programlisting>
# <emphasis role="bold">mkdir /usr/vice/cache</emphasis>
</programlisting></para>
</listitem>
<listitem>
<para>Create the <emphasis role="bold">cacheinfo</emphasis> file to define the configuration parameters discussed
previously. The following instruction shows the standard mount location, <emphasis role="bold">/afs</emphasis>, and the
standard cache location, <emphasis role="bold">/usr/vice/cache</emphasis>. <programlisting>
# <emphasis role="bold">echo "/afs:/usr/vice/cache:</emphasis><replaceable>#blocks</replaceable><emphasis role="bold">" &gt; /usr/vice/etc/cacheinfo</emphasis>
</programlisting></para>
<para>The following example defines the disk cache size as 50,000 KB:</para>
<programlisting>
# <emphasis role="bold">echo "/afs:/usr/vice/cache:50000" &gt; /usr/vice/etc/cacheinfo</emphasis>
</programlisting>
</listitem>
</orderedlist></para>
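<para>As an illustration of dedicating a partition to the cache (step 1
above), the following commands assume a Linux system and a hypothetical
partition <emphasis role="bold">/dev/sdb1</emphasis>; substitute the device
name, file system type, and commands appropriate for your platform, and add
a corresponding entry to the machine's file system table so that the
partition is mounted automatically at each reboot. <programlisting>
# <emphasis role="bold">mkfs -t ext3 /dev/sdb1</emphasis>
# <emphasis role="bold">mount /dev/sdb1 /usr/vice/cache</emphasis>
</programlisting></para>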
</sect2>
<sect2 id="HDRWQ69">
<title>Configuring a Memory Cache</title>
<para>To configure a memory cache, create the <emphasis role="bold">cacheinfo</emphasis> file to define the configuration
parameters discussed previously. The following instruction shows the standard mount location, <emphasis
role="bold">/afs</emphasis>, and the standard cache location, <emphasis role="bold">/usr/vice/cache</emphasis> (though the
exact value of the latter is irrelevant for a memory cache).</para>
<programlisting>
# <emphasis role="bold">echo "/afs:/usr/vice/cache:</emphasis><replaceable>#blocks</replaceable><emphasis role="bold">" &gt; /usr/vice/etc/cacheinfo</emphasis>
</programlisting>
<para>The following example allocates 25,000 KB of memory for the cache.</para>
<programlisting>
# <emphasis role="bold">echo "/afs:/usr/vice/cache:25000" &gt; /usr/vice/etc/cacheinfo</emphasis>
</programlisting>
<indexterm>
<primary>Cache Manager</primary>
<secondary>first AFS machine</secondary>
</indexterm>
<indexterm>
<primary>configuring</primary>
<secondary>Cache Manager</secondary>
<tertiary>first AFS machine</tertiary>
</indexterm>
<indexterm>
<primary>first AFS machine</primary>
<secondary>Cache Manager</secondary>
</indexterm>
<indexterm>
<primary>afs (/afs) directory</primary>
<secondary>creating</secondary>
<tertiary>first AFS machine</tertiary>
</indexterm>
<indexterm>
<primary>AFS initialization script</primary>
<secondary>setting afsd parameters</secondary>
<tertiary>first AFS machine</tertiary>
</indexterm>
<indexterm>
<primary>first AFS machine</primary>
<secondary>afsd command parameters</secondary>
</indexterm>
</sect2>
</sect1>
<sect1 id="HDRWQ70">
<title>Configuring the Cache Manager</title>
<para>By convention, the Cache Manager mounts the AFS filespace on the local <emphasis role="bold">/afs</emphasis> directory. In
this section you create that directory.</para>
<para>The <emphasis role="bold">afsd</emphasis> program sets several cache configuration parameters as it initializes the Cache
Manager, and starts daemons that improve performance. You can use the <emphasis role="bold">afsd</emphasis> command's arguments
to override the parameters' default values and to change the number of some of the daemons. Depending on the machine's cache
size, its amount of RAM, and how many people work on it, you can sometimes improve Cache Manager performance by overriding the
default values. For a discussion of all of the <emphasis role="bold">afsd</emphasis> command's arguments, see its reference page
in the <emphasis>OpenAFS Administration Reference</emphasis>.</para>
<para>On platforms using the standard <emphasis role="bold">afs</emphasis> initialization script (this does not apply to Fedora or RHEL based distributions), the <emphasis role="bold">afsd</emphasis> command line in the script includes an
<computeroutput>OPTIONS</computeroutput> variable. You can use it to set nondefault values for the command's arguments, in one
of the following ways: <itemizedlist>
<listitem>
<para>You can create an <emphasis role="bold">afsd</emphasis> <emphasis>options file</emphasis> that sets values for
arguments to the <emphasis role="bold">afsd</emphasis> command. If the file exists, its contents are automatically
substituted for the <computeroutput>OPTIONS</computeroutput> variable in the AFS initialization script. The AFS
distribution for some system types includes an options file; on other system types, you must create it.</para>
<para>You use two variables in the AFS initialization script to specify the path to the options file:
<computeroutput>CONFIG</computeroutput> and <computeroutput>AFSDOPT</computeroutput>. On system types that define a
conventional directory for configuration files, the <computeroutput>CONFIG</computeroutput> variable indicates it by
default; otherwise, the variable indicates an appropriate location.</para>
<para>List the desired <emphasis role="bold">afsd</emphasis> options on a single line in the options file, separating each
option with one or more spaces. The following example sets the <emphasis role="bold">-stat</emphasis> argument to 2500,
the <emphasis role="bold">-daemons</emphasis> argument to 4, and the <emphasis role="bold">-volumes</emphasis> argument to
100.</para>
<programlisting>
-stat 2500 -daemons 4 -volumes 100
</programlisting>
</listitem>
<listitem>
<para>On a machine that uses a disk cache, you can set the <computeroutput>OPTIONS</computeroutput> variable in the AFS
initialization script to one of <computeroutput>$SMALL</computeroutput>, <computeroutput>$MEDIUM</computeroutput>, or
<computeroutput>$LARGE</computeroutput>. The AFS initialization script uses one of these settings if the <emphasis
role="bold">afsd</emphasis> options file named by the <computeroutput>AFSDOPT</computeroutput> variable does not exist. In
the script as distributed, the <computeroutput>OPTIONS</computeroutput> variable is set to the value
<computeroutput>$MEDIUM</computeroutput>.</para>
<note>
<para>Do not set the <computeroutput>OPTIONS</computeroutput> variable to <computeroutput>$SMALL</computeroutput>,
<computeroutput>$MEDIUM</computeroutput>, or <computeroutput>$LARGE</computeroutput> on a machine that uses a memory
cache. The arguments it sets are appropriate only on a machine that uses a disk cache.</para>
</note>
<para>The script (or on some system types the <emphasis role="bold">afsd</emphasis> options file named by the
<computeroutput>AFSDOPT</computeroutput> variable) defines a value for each of <computeroutput>SMALL</computeroutput>,
<computeroutput>MEDIUM</computeroutput>, and <computeroutput>LARGE</computeroutput> that sets <emphasis
role="bold">afsd</emphasis> command arguments appropriately for client machines of different sizes: <itemizedlist>
<listitem>
<para><computeroutput>SMALL</computeroutput> is suitable for a small machine that serves one or two users and has
approximately 8 MB of RAM and a 20-MB cache</para>
</listitem>
<listitem>
<para><computeroutput>MEDIUM</computeroutput> is suitable for a medium-sized machine that serves two to six users
and has 16 MB of RAM and a 40-MB cache</para>
</listitem>
<listitem>
<para><computeroutput>LARGE</computeroutput> is suitable for a large machine that serves five to ten users and has
32 MB of RAM and a 100-MB cache</para>
</listitem>
</itemizedlist></para>
</listitem>
<listitem>
<para>You can choose not to create an <emphasis role="bold">afsd</emphasis> options file and to set the
<computeroutput>OPTIONS</computeroutput> variable in the initialization script to a null value rather than to the default
<computeroutput>$MEDIUM</computeroutput> value. You can then either set arguments directly on the <emphasis
role="bold">afsd</emphasis> command line in the script, or set no arguments (and so accept default values for all Cache
Manager parameters).</para>
</listitem>
</itemizedlist>
<note>
<para>If you are running on a Fedora or RHEL based system, the
openafs-client initialization script behaves differently from that
described above. It sources <emphasis role="bold">/etc/sysconfig/openafs</emphasis>, in which the
<computeroutput>AFSD_ARGS</computeroutput> variable may be set to contain any or all of the
<emphasis role="bold">afsd</emphasis> options detailed above. Note that this script does not support
setting an <computeroutput>OPTIONS</computeroutput> variable, or the SMALL, MEDIUM, and LARGE
methods of defining cache size.
</para>
</note>
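For example, on such a system you can approximate the options file shown
earlier by placing a line similar to the following (the argument values are
illustrative only) in <emphasis role="bold">/etc/sysconfig/openafs</emphasis>: <programlisting>
AFSD_ARGS="-stat 2500 -daemons 4 -volumes 100"
</programlisting>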
<orderedlist>
<listitem>
<para>Create the local directory on which to mount the AFS filespace, by convention <emphasis role="bold">/afs</emphasis>.
If the directory already exists, verify that it is empty. <programlisting>
# <emphasis role="bold">mkdir /afs</emphasis>
</programlisting></para>
</listitem>
<listitem>
<para>On AIX systems, add the following line to the <emphasis role="bold">/etc/vfs</emphasis> file. It enables AIX to
unmount AFS correctly during shutdown. <programlisting>
afs 4 none none
</programlisting></para>
</listitem>
<listitem>
<para>On non-package based Linux systems, copy the <emphasis role="bold">afsd</emphasis> options file from the <emphasis
role="bold">/usr/vice/etc</emphasis> directory to the <emphasis role="bold">/etc/sysconfig</emphasis> directory, removing
the <emphasis role="bold">.conf</emphasis> extension as you do so. <programlisting>
# <emphasis role="bold">cp /usr/vice/etc/afs.conf /etc/sysconfig/afs</emphasis>
</programlisting></para>
</listitem>
<listitem>
<para>Edit the machine's AFS initialization script or <emphasis role="bold">afsd</emphasis> options file to set
appropriate values for <emphasis role="bold">afsd</emphasis> command parameters. The script resides in the indicated
location on each system type: <itemizedlist>
<listitem>
<para>On AIX systems, <emphasis role="bold">/etc/rc.afs</emphasis></para>
</listitem>
<listitem>
<para>On HP-UX systems, <emphasis role="bold">/sbin/init.d/afs</emphasis></para>
</listitem>
<listitem>
<para>On IRIX systems, <emphasis role="bold">/etc/init.d/afs</emphasis></para>
</listitem>
<listitem>
<para>On Fedora and RHEL systems, <emphasis role="bold">/etc/sysconfig/openafs</emphasis></para>
</listitem>
<listitem>
<para>On non-package based Linux systems, <emphasis role="bold">/etc/sysconfig/afs</emphasis> (the <emphasis
role="bold">afsd</emphasis> options file)</para>
</listitem>
<listitem>
<para>On Solaris systems, <emphasis role="bold">/etc/init.d/afs</emphasis></para>
</listitem>
</itemizedlist></para>
<para>Use one of the methods described in the introduction to this section to add the following flags to the <emphasis
role="bold">afsd</emphasis> command line. If you intend for the machine to remain an AFS client, also set any
performance-related arguments you wish. <itemizedlist>
<listitem>
<para>Add the <emphasis role="bold">-memcache</emphasis> flag if the machine is to use a memory cache.</para>
</listitem>
<listitem>
<para>Add the <emphasis role="bold">-verbose</emphasis> flag to display a trace of the Cache Manager's
initialization on the standard output stream.</para>
</listitem>
</itemizedlist></para>
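          <para>For example, on a non-package based Linux system one way to add
          these flags is to set the <computeroutput>OPTIONS</computeroutput>
          variable in the <emphasis role="bold">/etc/sysconfig/afs</emphasis>
          options file. This is an illustrative sketch only; include any other
          arguments your cell requires. <programlisting>
   OPTIONS="-memcache -verbose"
</programlisting></para>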
</listitem>
</orderedlist>
  <note><para>In order to successfully complete the instructions in the
  remainder of this guide, it is important that the machine does not have
  a synthetic root (as discussed in <link linkend="HDRWQ91">Enabling Access
  to Foreign Cells</link>). As some distributions ship with this enabled, it
  may be necessary to remove any occurrences of the
  <emphasis role="bold">-dynroot</emphasis> and
  <emphasis role="bold">-afsdb</emphasis> options from both the AFS
  initialization script and options file. If this functionality is
  required, it may be re-enabled as detailed in
  <link linkend="HDRWQ91">Enabling Access to Foreign Cells</link>.
  </para></note>
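  One way to check whether these options are present is to search the script
  and options file for them; the file names shown here are examples and vary
  by system type. <programlisting>
   # <emphasis role="bold">grep -e '-dynroot' -e '-afsdb' /etc/sysconfig/openafs /etc/sysconfig/afs</emphasis>
</programlisting>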
</para>
<indexterm>
<primary>overview</primary>
<secondary>completing installation of first machine</secondary>
</indexterm>
<indexterm>
<primary>first AFS machine</primary>
<secondary>completion of installation</secondary>
</indexterm>
</sect1>
<sect1 id="HDRWQ71">
<title>Overview: Completing the Installation of the First AFS Machine</title>
<para>The machine is now configured as an AFS file server and client machine. In this final phase of the installation, you
initialize the Cache Manager and then create the upper levels of your AFS filespace, among other procedures. The procedures are:
<orderedlist>
<listitem>
<para>Verify that the initialization script works correctly, and incorporate it into the operating system's startup and
shutdown sequence</para>
</listitem>
<listitem>
<para>Create and mount top-level volumes</para>
</listitem>
<listitem>
<para>Create and mount volumes to store system binaries in AFS</para>
</listitem>
<listitem>
<para>Enable access to foreign cells</para>
</listitem>
<listitem>
<para>Institute additional security measures</para>
</listitem>
<listitem>
<para>Remove client functionality if desired</para>
</listitem>
</orderedlist></para>
<indexterm>
<primary>AFS initialization script</primary>
<secondary>verifying on first AFS machine</secondary>
</indexterm>
<indexterm>
<primary>AFS initialization script</primary>
<secondary>running</secondary>
<tertiary>first AFS machine</tertiary>
</indexterm>
<indexterm>
<primary>first AFS machine</primary>
<secondary>AFS initialization script</secondary>
<tertiary>running/verifying</tertiary>
</indexterm>
<indexterm>
<primary>running AFS init. script</primary>
<secondary>first AFS machine</secondary>
</indexterm>
<indexterm>
<primary>invoking AFS init. script</primary>
<see>running</see>
</indexterm>
</sect1>
<sect1 id="HDRWQ72">
<title>Verifying the AFS Initialization Script</title>
<para>At this point you run the AFS initialization script to verify that it correctly invokes all of the necessary programs and
AFS processes, and that they start correctly. The following are the relevant commands: <itemizedlist>
<listitem>
<para>The command that dynamically loads AFS modifications into the kernel, on some system types (not applicable if the
kernel has AFS modifications built in)</para>
</listitem>
<listitem>
<para>The <emphasis role="bold">bosserver</emphasis> command, which starts the BOS Server; it in turn starts the server
processes for which you created entries in the <emphasis role="bold">/usr/afs/local/BosConfig</emphasis> file</para>
</listitem>
<listitem>
<para>The <emphasis role="bold">afsd</emphasis> command, which initializes the Cache Manager</para>
</listitem>
</itemizedlist></para>
<para>On system types that use a dynamic loader program, you must reboot the machine before running the initialization script,
so that it can freshly load AFS modifications into the kernel.</para>
<para>If there are problems during the initialization, attempt to resolve them. The OpenAFS mailing lists can provide assistance if necessary.
<orderedlist>
<indexterm>
<primary>commands</primary>
<secondary>bos shutdown</secondary>
</indexterm>
<indexterm>
<primary>bos commands</primary>
<secondary>shutdown</secondary>
</indexterm>
<listitem>
<para>Issue the <emphasis role="bold">bos shutdown</emphasis> command to shut down the AFS server processes other than the
BOS Server. Include the <emphasis role="bold">-wait</emphasis> flag to delay return of the command shell prompt until all
processes shut down completely. <programlisting>
# <emphasis role="bold">/usr/afs/bin/bos shutdown</emphasis> &lt;<replaceable>machine name</replaceable>&gt; <emphasis
role="bold">-wait</emphasis>
</programlisting></para>
</listitem>
<listitem>
<para>Issue the <emphasis role="bold">ps</emphasis> command to learn the <emphasis role="bold">bosserver</emphasis>
process's process ID number (PID), and then the <emphasis role="bold">kill</emphasis> command to stop it. <programlisting>
# <emphasis role="bold">ps</emphasis> <replaceable>appropriate_ps_options</replaceable> <emphasis role="bold">| grep bosserver</emphasis>
# <emphasis role="bold">kill</emphasis> <replaceable>bosserver_PID</replaceable>
</programlisting></para>
</listitem>
<listitem>
<para>Issue the appropriate commands to run the AFS initialization script for this system type.</para>
<indexterm>
<primary>AIX</primary>
<secondary>AFS initialization script</secondary>
<tertiary>on first AFS machine</tertiary>
</indexterm>
<para><emphasis role="bold">On AIX systems:</emphasis> <orderedlist>
<listitem>
<para>Reboot the machine and log in again as the local superuser <emphasis role="bold">root</emphasis>.
<programlisting>
# <emphasis role="bold">cd /</emphasis>
# <emphasis role="bold">shutdown -r now</emphasis>
login: <emphasis role="bold">root</emphasis>
Password: <replaceable>root_password</replaceable>
</programlisting></para>
</listitem>
<listitem>
<para>Run the AFS initialization script. <programlisting>
# <emphasis role="bold">/etc/rc.afs</emphasis>
</programlisting></para>
</listitem>
</orderedlist></para>
<indexterm>
<primary>HP-UX</primary>
<secondary>AFS initialization script</secondary>
<tertiary>on first AFS machine</tertiary>
</indexterm>
<para><emphasis role="bold">On HP-UX systems:</emphasis> <orderedlist>
<listitem>
<para>Run the AFS initialization script. <programlisting>
# <emphasis role="bold">/sbin/init.d/afs start</emphasis>
</programlisting></para>
</listitem>
</orderedlist></para>
<indexterm>
<primary>IRIX</primary>
<secondary>AFS initialization script</secondary>
<tertiary>on first AFS machine</tertiary>
</indexterm>
<indexterm>
<primary>afsclient variable (IRIX)</primary>
<secondary>first AFS machine</secondary>
</indexterm>
<indexterm>
<primary>variables</primary>
<secondary>afsclient (IRIX)</secondary>
<tertiary>first AFS machine</tertiary>
</indexterm>
<indexterm>
<primary>IRIX</primary>
<secondary>afsclient variable</secondary>
<tertiary>first AFS machine</tertiary>
</indexterm>
<indexterm>
<primary>afsserver variable (IRIX)</primary>
<secondary>first AFS machine</secondary>
</indexterm>
<indexterm>
<primary>variables</primary>
<secondary>afsserver (IRIX)</secondary>
<tertiary>first AFS machine</tertiary>
</indexterm>
<indexterm>
<primary>IRIX</primary>
<secondary>afsserver variable</secondary>
<tertiary>first AFS machine</tertiary>
</indexterm>
<para><emphasis role="bold">On IRIX systems:</emphasis> <orderedlist>
<listitem>
<para>If you have configured the machine to use the <emphasis role="bold">ml</emphasis> dynamic loader program,
reboot the machine and log in again as the local superuser <emphasis role="bold">root</emphasis>. <programlisting>
# <emphasis role="bold">cd /</emphasis>
# <emphasis role="bold">shutdown -i6 -g0 -y</emphasis>
login: <emphasis role="bold">root</emphasis>
Password: <replaceable>root_password</replaceable>
</programlisting></para>
</listitem>
<listitem>
<para>Issue the <emphasis role="bold">chkconfig</emphasis> command to activate the <emphasis
role="bold">afsserver</emphasis> and <emphasis role="bold">afsclient</emphasis> configuration variables.
<programlisting>
# <emphasis role="bold">/etc/chkconfig -f afsserver on</emphasis>
# <emphasis role="bold">/etc/chkconfig -f afsclient on</emphasis>
</programlisting></para>
</listitem>
<listitem>
<para>Run the AFS initialization script. <programlisting>
# <emphasis role="bold">/etc/init.d/afs start</emphasis>
</programlisting></para>
</listitem>
</orderedlist></para>
<indexterm>
<primary>Linux</primary>
<secondary>AFS initialization script</secondary>
<tertiary>on first AFS machine</tertiary>
</indexterm>
<para><emphasis role="bold">On Linux systems:</emphasis> <orderedlist>
<listitem>
<para>Reboot the machine and log in again as the local superuser <emphasis role="bold">root</emphasis>.
<programlisting>
# <emphasis role="bold">cd /</emphasis>
# <emphasis role="bold">shutdown -r now</emphasis>
login: <emphasis role="bold">root</emphasis>
Password: <replaceable>root_password</replaceable>
</programlisting></para>
</listitem>
<listitem>
<para>Run the AFS initialization scripts.
<programlisting>
# <emphasis role="bold">/etc/rc.d/init.d/openafs-client start</emphasis>
# <emphasis role="bold">/etc/rc.d/init.d/openafs-server start</emphasis>
</programlisting></para>
</listitem>
</orderedlist></para>
<indexterm>
<primary>Solaris</primary>
<secondary>AFS initialization script</secondary>
<tertiary>on first AFS machine</tertiary>
</indexterm>
<para><emphasis role="bold">On Solaris systems:</emphasis> <orderedlist>
<listitem>
<para>Reboot the machine and log in again as the local superuser <emphasis role="bold">root</emphasis>.
<programlisting>
# <emphasis role="bold">cd /</emphasis>
# <emphasis role="bold">shutdown -i6 -g0 -y</emphasis>
login: <emphasis role="bold">root</emphasis>
Password: <replaceable>root_password</replaceable>
</programlisting></para>
</listitem>
<listitem>
<para>Run the AFS initialization script. <programlisting>
# <emphasis role="bold">/etc/init.d/afs start</emphasis>
</programlisting></para>
</listitem>
</orderedlist></para>
</listitem>
<listitem>
<para>Wait for the message that confirms that Cache Manager initialization is complete.</para>
<para>On machines that use a disk cache, it can take a while to initialize the Cache Manager for the first time, because
the <emphasis role="bold">afsd</emphasis> program must create all of the <emphasis
role="bold">V</emphasis><replaceable>n</replaceable> files in the cache directory. Subsequent Cache Manager
initializations do not take nearly as long, because the <emphasis role="bold">V</emphasis><replaceable>n</replaceable>
files already exist.</para>
</listitem>
<listitem>
<indexterm>
<primary>commands</primary>
<secondary>aklog</secondary>
</indexterm>
<indexterm>
<primary>aklog command</primary>
</indexterm>
<para>If you are working with an existing cell which uses
<emphasis role="bold">kaserver</emphasis> for authentication,
please recall the note in
<link linkend="KAS003">Using this Appendix</link> detailing the
substitution of <emphasis role="bold">kinit</emphasis> and
<emphasis role="bold">aklog</emphasis> with
<emphasis role="bold">klog</emphasis>.</para>
<para>As a basic test of correct AFS functioning, issue the
<emphasis role="bold">kinit</emphasis> and
<emphasis role="bold">aklog</emphasis> commands to authenticate
as the <emphasis role="bold">admin</emphasis> user.
Provide the password (<replaceable>admin_passwd</replaceable>) you
defined in <link linkend="HDRWQ53">Initializing Cell Security</link>.</para>
<programlisting>
# <emphasis role="bold">kinit admin</emphasis>
Password: <replaceable>admin_passwd</replaceable>
# <emphasis role="bold">aklog</emphasis>
</programlisting>
<indexterm>
<primary>commands</primary>
<secondary>tokens</secondary>
</indexterm>
<indexterm>
<primary>tokens command</primary>
</indexterm>
</listitem>
<listitem>
<para>Issue the <emphasis role="bold">tokens</emphasis> command to
verify that the <emphasis role="bold">aklog</emphasis>
command worked correctly. If it did, the output looks similar to the following example for the <emphasis
role="bold">example.com</emphasis> cell, where <emphasis role="bold">admin</emphasis>'s AFS UID is 1. If the output does not
              seem correct, resolve the problem; changes to the AFS initialization script may be necessary. The OpenAFS mailing lists can provide assistance if necessary. <programlisting>
# <emphasis role="bold">tokens</emphasis>
Tokens held by the Cache Manager:
User's (AFS ID 1) tokens for afs@example.com [Expires May 22 11:52]
--End of list--
</programlisting></para>
</listitem>
<listitem>
<para>Issue the <emphasis role="bold">bos status</emphasis> command to verify that the output for each process reads
<computeroutput>Currently running normally</computeroutput>. <programlisting>
# <emphasis role="bold">/usr/afs/bin/bos status</emphasis> &lt;<replaceable>machine name</replaceable>&gt;
</programlisting> <indexterm>
<primary>fs commands</primary>
<secondary>checkvolumes</secondary>
</indexterm> <indexterm>
<primary>commands</primary>
<secondary>fs checkvolumes</secondary>
</indexterm></para>
</listitem>
<listitem>
<para>Change directory to the local file system root (<emphasis role="bold">/</emphasis>) and issue the <emphasis
role="bold">fs checkvolumes</emphasis> command. <programlisting>
# <emphasis role="bold">cd /</emphasis>
# <emphasis role="bold">/usr/afs/bin/fs checkvolumes</emphasis>
</programlisting></para>
</listitem>
</orderedlist></para>
<indexterm>
<primary>AFS initialization script</primary>
<secondary>adding to machine startup sequence</secondary>
<tertiary>first AFS machine</tertiary>
</indexterm>
<indexterm>
<primary>installing</primary>
<secondary>AFS initialization script</secondary>
<tertiary>first AFS machine</tertiary>
</indexterm>
<indexterm>
<primary>first AFS machine</primary>
<secondary>AFS initialization script</secondary>
<tertiary>activating</tertiary>
</indexterm>
<indexterm>
<primary>activating AFS init. script</primary>
<see>installing</see>
</indexterm>
</sect1>
<sect1 id="HDRWQ73">
<title>Activating the AFS Initialization Script</title>
<para>Now that you have confirmed that the AFS initialization script works correctly, take the action necessary to have it run
automatically at each reboot. Proceed to the instructions for your system type: <itemizedlist>
<listitem>
<para><link linkend="HDRWQ74">Activating the Script on AIX Systems</link></para>
</listitem>
<listitem>
<para><link linkend="HDRWQ76">Activating the Script on HP-UX Systems</link></para>
</listitem>
<listitem>
<para><link linkend="HDRWQ77">Activating the Script on IRIX Systems</link></para>
</listitem>
<listitem>
<para><link linkend="HDRWQ78">Activating the Script on Linux Systems</link></para>
</listitem>
<listitem>
<para><link linkend="HDRWQ79">Activating the Script on Solaris Systems</link></para>
</listitem>
</itemizedlist></para>
<indexterm>
<primary>AIX</primary>
<secondary>AFS initialization script</secondary>
<tertiary>on first AFS machine</tertiary>
</indexterm>
<sect2 id="HDRWQ74">
<title>Activating the Script on AIX Systems</title>
<orderedlist>
<listitem>
<para>Edit the AIX initialization file, <emphasis role="bold">/etc/inittab</emphasis>, adding the following line to invoke
the AFS initialization script. Place it just after the line that starts NFS daemons. <programlisting>
rcafs:2:wait:/etc/rc.afs &gt; /dev/console 2&gt;&amp;1 # Start AFS services
</programlisting></para>
</listitem>
<listitem>
<para><emphasis role="bold">(Optional)</emphasis> There are now copies of the AFS initialization file in both the
<emphasis role="bold">/usr/vice/etc</emphasis> and <emphasis role="bold">/etc</emphasis> directories. If you want to avoid
potential confusion by guaranteeing that they are always the same, create a link between them. You can always retrieve the
          original script from the OpenAFS distribution if necessary. <programlisting>
# <emphasis role="bold">cd /usr/vice/etc</emphasis>
# <emphasis role="bold">rm rc.afs</emphasis>
# <emphasis role="bold">ln -s /etc/rc.afs</emphasis>
</programlisting></para>
</listitem>
<listitem>
<para>Proceed to <link linkend="HDRWQ80">Configuring the Top Levels of the AFS Filespace</link>.</para>
</listitem>
</orderedlist>
<indexterm>
<primary>HP-UX</primary>
<secondary>AFS initialization script</secondary>
<tertiary>on first AFS machine</tertiary>
</indexterm>
</sect2>
<sect2 id="HDRWQ76">
<title>Activating the Script on HP-UX Systems</title>
<orderedlist>
<listitem>
<para>Change to the <emphasis role="bold">/sbin/init.d</emphasis> directory and issue the <emphasis role="bold">ln
-s</emphasis> command to create symbolic links that incorporate the AFS initialization script into the HP-UX startup and
shutdown sequence. <programlisting>
# <emphasis role="bold">cd /sbin/init.d</emphasis>
# <emphasis role="bold">ln -s ../init.d/afs /sbin/rc2.d/S460afs</emphasis>
# <emphasis role="bold">ln -s ../init.d/afs /sbin/rc2.d/K800afs</emphasis>
</programlisting></para>
</listitem>
<listitem>
<para><emphasis role="bold">(Optional)</emphasis> There are now copies of the AFS initialization file in both the
<emphasis role="bold">/usr/vice/etc</emphasis> and <emphasis role="bold">/sbin/init.d</emphasis> directories. If you want
to avoid potential confusion by guaranteeing that they are always the same, create a link between them. You can always
          retrieve the original script from the OpenAFS distribution if necessary. <programlisting>
# <emphasis role="bold">cd /usr/vice/etc</emphasis>
# <emphasis role="bold">rm afs.rc</emphasis>
# <emphasis role="bold">ln -s /sbin/init.d/afs afs.rc</emphasis>
</programlisting></para>
</listitem>
<listitem>
<para>Proceed to <link linkend="HDRWQ80">Configuring the Top Levels of the AFS Filespace</link>.</para>
</listitem>
</orderedlist>
<indexterm>
<primary>IRIX</primary>
<secondary>AFS initialization script</secondary>
<tertiary>on first AFS machine</tertiary>
</indexterm>
</sect2>
<sect2 id="HDRWQ77">
<title>Activating the Script on IRIX Systems</title>
<orderedlist>
<listitem>
<para>Change to the <emphasis role="bold">/etc/init.d</emphasis> directory and issue the <emphasis role="bold">ln
-s</emphasis> command to create symbolic links that incorporate the AFS initialization script into the IRIX startup and
shutdown sequence. <programlisting>
# <emphasis role="bold">cd /etc/init.d</emphasis>
# <emphasis role="bold">ln -s ../init.d/afs /etc/rc2.d/S35afs</emphasis>
# <emphasis role="bold">ln -s ../init.d/afs /etc/rc0.d/K35afs</emphasis>
</programlisting></para>
</listitem>
<listitem>
<para><emphasis role="bold">(Optional)</emphasis> There are now copies of the AFS initialization file in both the
<emphasis role="bold">/usr/vice/etc</emphasis> and <emphasis role="bold">/etc/init.d</emphasis> directories. If you want
to avoid potential confusion by guaranteeing that they are always the same, create a link between them. You can always
          retrieve the original script from the OpenAFS distribution if necessary. <programlisting>
# <emphasis role="bold">cd /usr/vice/etc</emphasis>
# <emphasis role="bold">rm afs.rc</emphasis>
# <emphasis role="bold">ln -s /etc/init.d/afs afs.rc</emphasis>
</programlisting></para>
</listitem>
<listitem>
<para>Proceed to <link linkend="HDRWQ80">Configuring the Top Levels of the AFS Filespace</link>.</para>
</listitem>
</orderedlist>
<indexterm>
<primary>Linux</primary>
<secondary>AFS initialization script</secondary>
<tertiary>on first AFS machine</tertiary>
</indexterm>
</sect2>
<sect2 id="HDRWQ78">
<title>Activating the Script on Linux Systems</title>
<orderedlist>
<listitem>
<para>Issue the <emphasis role="bold">chkconfig</emphasis> command to activate the <emphasis role="bold">openafs-client</emphasis> and <emphasis role="bold">openafs-server</emphasis>
configuration variables. Based on the instruction in the AFS initialization file that begins with the string
<computeroutput>#chkconfig</computeroutput>, the command automatically creates the symbolic links that incorporate the
script into the Linux startup and shutdown sequence. <programlisting>
# <emphasis role="bold">/sbin/chkconfig --add openafs-client</emphasis>
# <emphasis role="bold">/sbin/chkconfig --add openafs-server</emphasis>
</programlisting></para>
</listitem>
<listitem>
<para><emphasis role="bold">(Optional)</emphasis> There are now copies of the AFS initialization file in both the
<emphasis role="bold">/usr/vice/etc</emphasis> and <emphasis role="bold">/etc/rc.d/init.d</emphasis> directories, and
copies of the <emphasis role="bold">afsd</emphasis> options file in both the <emphasis
role="bold">/usr/vice/etc</emphasis> and <emphasis role="bold">/etc/sysconfig</emphasis> directories. If you want to avoid
potential confusion by guaranteeing that the two copies of each file are always the same, create a link between them. You
          can always retrieve the original script or options file from the OpenAFS distribution if necessary. <programlisting>
# <emphasis role="bold">cd /usr/vice/etc</emphasis>
# <emphasis role="bold">rm afs.rc afs.conf</emphasis>
# <emphasis role="bold">ln -s /etc/rc.d/init.d/afs afs.rc</emphasis>
# <emphasis role="bold">ln -s /etc/sysconfig/afs afs.conf</emphasis>
</programlisting></para>
</listitem>
<listitem>
<para>Proceed to <link linkend="HDRWQ80">Configuring the Top Levels of the AFS Filespace</link>.</para>
</listitem>
</orderedlist>
<indexterm>
<primary>Solaris</primary>
<secondary>AFS initialization script</secondary>
<tertiary>on first AFS machine</tertiary>
</indexterm>
</sect2>
<sect2 id="HDRWQ79">
<title>Activating the Script on Solaris Systems</title>
<orderedlist>
<listitem>
<para>Change to the <emphasis role="bold">/etc/init.d</emphasis> directory and issue the <emphasis role="bold">ln
-s</emphasis> command to create symbolic links that incorporate the AFS initialization script into the Solaris startup and
shutdown sequence. <programlisting>
# <emphasis role="bold">cd /etc/init.d</emphasis>
# <emphasis role="bold">ln -s ../init.d/afs /etc/rc3.d/S99afs</emphasis>
# <emphasis role="bold">ln -s ../init.d/afs /etc/rc0.d/K66afs</emphasis>
</programlisting></para>
</listitem>
<listitem>
<para><emphasis role="bold">(Optional)</emphasis> There are now copies of the AFS initialization file in both the
<emphasis role="bold">/usr/vice/etc</emphasis> and <emphasis role="bold">/etc/init.d</emphasis> directories. If you want
to avoid potential confusion by guaranteeing that they are always the same, create a link between them. You can always
          retrieve the original script from the OpenAFS distribution if necessary. <programlisting>
# <emphasis role="bold">cd /usr/vice/etc</emphasis>
# <emphasis role="bold">rm afs.rc</emphasis>
# <emphasis role="bold">ln -s /etc/init.d/afs afs.rc</emphasis>
</programlisting></para>
</listitem>
</orderedlist>
<indexterm>
<primary>AFS filespace</primary>
<secondary>configuring top levels</secondary>
</indexterm>
<indexterm>
<primary>configuring</primary>
<secondary>AFS filespace (top levels)</secondary>
</indexterm>
</sect2>
</sect1>
<sect1 id="HDRWQ80">
<title>Configuring the Top Levels of the AFS Filespace</title>
<para>If you have not previously run AFS in your cell, you now configure the top levels of your cell's AFS filespace. If you
have run a previous version of AFS, the filespace is already configured. Proceed to <link linkend="HDRWQ83">Storing AFS Binaries
in AFS</link>. <indexterm>
<primary>root.cell volume</primary>
<secondary>creating and replicating</secondary>
</indexterm> <indexterm>
<primary>volume</primary>
<secondary>creating</secondary>
<tertiary>root.cell</tertiary>
</indexterm> <indexterm>
<primary>creating</primary>
<secondary>root.cell volume</secondary>
</indexterm></para>
<para>You created the <emphasis role="bold">root.afs</emphasis> volume in <link linkend="HDRWQ60">Starting the File Server,
Volume Server, and Salvager</link>, and the Cache Manager mounted it automatically on the local <emphasis
role="bold">/afs</emphasis> directory when you ran the AFS initialization script in <link linkend="HDRWQ72">Verifying the AFS
Initialization Script</link>. You now set the access control list (ACL) on the <emphasis role="bold">/afs</emphasis> directory;
creating, mounting, and setting the ACL are the three steps required when creating any volume.</para>
<para>After setting the ACL on the <emphasis role="bold">root.afs</emphasis> volume, you create your cell's <emphasis
role="bold">root.cell</emphasis> volume, mount it as a subdirectory of the <emphasis role="bold">/afs</emphasis> directory, and
set the ACL. Create both a read/write and a regular mount point for the <emphasis role="bold">root.cell</emphasis> volume. The
read/write mount point enables you to access the read/write version of replicated volumes when necessary. Creating both mount
points essentially creates separate read-only and read-write copies of your filespace, and enables the Cache Manager to traverse
the filespace on a read-only path or read/write path as appropriate. For further discussion of these concepts, see the chapter
in the <emphasis>OpenAFS Administration Guide</emphasis> about administering volumes. <indexterm>
<primary>root.afs volume</primary>
<secondary>replicating</secondary>
</indexterm> <indexterm>
<primary>volume</primary>
<secondary>replicating root.afs and root.cell</secondary>
</indexterm> <indexterm>
<primary>replicating volumes</primary>
</indexterm></para>
<para>Then replicate both the <emphasis role="bold">root.afs</emphasis> and <emphasis role="bold">root.cell</emphasis> volumes.
This is required if you want to replicate any other volumes in your cell, because all volumes mounted above a replicated volume
must themselves be replicated in order for the Cache Manager to access the replica.</para>
<para>When the <emphasis role="bold">root.afs</emphasis> volume is replicated, the Cache Manager is programmed to access its
read-only version (<emphasis role="bold">root.afs.readonly</emphasis>) whenever possible. To make changes to the contents of the
<emphasis role="bold">root.afs</emphasis> volume (when, for example, you mount another cell's <emphasis
role="bold">root.cell</emphasis> volume at the second level in your filespace), you must mount the <emphasis
role="bold">root.afs</emphasis> volume temporarily, make the changes, release the volume and remove the temporary mount point.
For instructions, see <link linkend="HDRWQ91">Enabling Access to Foreign Cells</link>. <indexterm>
<primary>fs commands</primary>
<secondary>setacl</secondary>
</indexterm> <indexterm>
<primary>commands</primary>
<secondary>fs setacl</secondary>
</indexterm> <indexterm>
<primary>access control list (ACL), setting</primary>
</indexterm> <indexterm>
<primary>setting</primary>
<secondary>ACL</secondary>
</indexterm> <orderedlist>
<listitem>
<para>Issue the <emphasis role="bold">fs setacl</emphasis> command to edit the ACL on the <emphasis
role="bold">/afs</emphasis> directory. Add an entry that grants the <emphasis role="bold">l</emphasis> (<emphasis
role="bold">lookup</emphasis>) and <emphasis role="bold">r</emphasis> (<emphasis role="bold">read</emphasis>) permissions
to the <emphasis role="bold">system:anyuser</emphasis> group, to enable all AFS users who can reach your cell to traverse
through the directory. If you prefer to enable access only to locally authenticated users, substitute the <emphasis
role="bold">system:authuser</emphasis> group.</para>
<para>Note that there is already an ACL entry that grants all seven access rights to the <emphasis
role="bold">system:administrators</emphasis> group. It is a default entry that AFS places on every new volume's root
directory.</para>
<programlisting>
# <emphasis role="bold">/usr/afs/bin/fs setacl /afs system:anyuser rl</emphasis>
</programlisting>
<indexterm>
<primary>commands</primary>
<secondary>vos create</secondary>
<tertiary>root.cell volume</tertiary>
</indexterm>
<indexterm>
<primary>vos commands</primary>
<secondary>create</secondary>
<tertiary>root.cell volume</tertiary>
</indexterm>
<indexterm>
<primary>fs commands</primary>
<secondary>mkmount</secondary>
</indexterm>
<indexterm>
<primary>commands</primary>
<secondary>fs mkmount</secondary>
</indexterm>
<indexterm>
<primary>mount point</primary>
</indexterm>
<indexterm>
<primary>creating</primary>
<secondary>mount point</secondary>
</indexterm>
<indexterm>
<primary>volume</primary>
<secondary>mounting</secondary>
</indexterm>
</listitem>
<listitem>
<para><anchor id="LIWQ81" />Issue the <emphasis role="bold">vos create</emphasis> command to create the <emphasis
role="bold">root.cell</emphasis> volume. Then issue the <emphasis role="bold">fs mkmount</emphasis> command to mount it as
a subdirectory of the <emphasis role="bold">/afs</emphasis> directory, where it serves as the root of your cell's local
AFS filespace. Finally, issue the <emphasis role="bold">fs setacl</emphasis> command to create an ACL entry for the
<emphasis role="bold">system:anyuser</emphasis> group (or <emphasis role="bold">system:authuser</emphasis> group).</para>
<para>For the <replaceable>partition name</replaceable> argument, substitute the name of one of the machine's AFS server
partitions (such as <emphasis role="bold">/vicepa</emphasis>). For the <replaceable>cellname</replaceable> argument,
              substitute your cell's fully-qualified Internet domain name (such as <emphasis role="bold">example.com</emphasis>).</para>
<programlisting>
# <emphasis role="bold">/usr/afs/bin/vos create</emphasis> &lt;<replaceable>machine name</replaceable>&gt; &lt;<replaceable>partition name</replaceable>&gt; <emphasis
role="bold">root.cell</emphasis>
# <emphasis role="bold">/usr/afs/bin/fs mkmount /afs/</emphasis><replaceable>cellname</replaceable> <emphasis role="bold">root.cell</emphasis>
# <emphasis role="bold">/usr/afs/bin/fs setacl /afs/</emphasis><replaceable>cellname</replaceable> <emphasis role="bold">system:anyuser rl</emphasis>
</programlisting>
<indexterm>
<primary>creating</primary>
<secondary>symbolic link</secondary>
<tertiary>for abbreviated cell name</tertiary>
</indexterm>
<indexterm>
<primary>symbolic link</primary>
<secondary>for abbreviated cell name</secondary>
</indexterm>
<indexterm>
<primary>cell name</primary>
<secondary>symbolic link for abbreviated</secondary>
</indexterm>
</listitem>
<listitem>
<para><emphasis role="bold">(Optional)</emphasis> Create a symbolic link to a shortened cell name, to reduce the length of
            pathnames for users in the local cell. For example, in the <emphasis role="bold">example.com</emphasis> cell, <emphasis
            role="bold">/afs/example</emphasis> is a link to <emphasis role="bold">/afs/example.com</emphasis>. <programlisting>
# <emphasis role="bold">cd /afs</emphasis>
# <emphasis role="bold">ln -s</emphasis> <replaceable>full_cellname</replaceable> <replaceable>short_cellname</replaceable>
</programlisting> <indexterm>
<primary>read/write mount point for root.afs volume</primary>
</indexterm> <indexterm>
<primary>root.afs volume</primary>
<secondary>read/write mount point</secondary>
</indexterm> <indexterm>
<primary>creating</primary>
<secondary>read/write mount point</secondary>
</indexterm></para>
</listitem>
<listitem>
<para>Issue the <emphasis role="bold">fs mkmount</emphasis> command to create a read/write mount point for the <emphasis
role="bold">root.cell</emphasis> volume (you created a regular mount point in Step <link
linkend="LIWQ81">2</link>).</para>
<para>By convention, the name of a read/write mount point begins with a period, both to distinguish it from the regular
mount point and to make it visible only when the <emphasis role="bold">-a</emphasis> flag is used on the <emphasis
role="bold">ls</emphasis> command.</para>
<para>Change directory to <emphasis role="bold">/usr/afs/bin</emphasis> to make it easier to access the command
binaries.</para>
<programlisting>
# <emphasis role="bold">cd /usr/afs/bin</emphasis>
# <emphasis role="bold">./fs mkmount /afs/.</emphasis><replaceable>cellname</replaceable> <emphasis role="bold">root.cell -rw</emphasis>
</programlisting>
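            <para>Once both mount points exist, listing the <emphasis role="bold">/afs</emphasis> directory with the <emphasis
            role="bold">-a</emphasis> flag shows the read/write mount point alongside the regular one (illustrative output, which
            also includes any optional short-name link you created): <programlisting>
   # <emphasis role="bold">ls -a /afs</emphasis>
   .   ..   .<replaceable>cellname</replaceable>   <replaceable>cellname</replaceable>
</programlisting></para>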
<indexterm>
<primary>commands</primary>
<secondary>vos addsite</secondary>
</indexterm>
<indexterm>
<primary>vos commands</primary>
<secondary>addsite</secondary>
</indexterm>
<indexterm>
<primary>volume</primary>
<secondary>defining replication site</secondary>
</indexterm>
<indexterm>
<primary>defining</primary>
<secondary>replication site for volume</secondary>
</indexterm>
</listitem>
<listitem>
<para><anchor id="LIWQ82" />Issue the <emphasis role="bold">vos addsite</emphasis> command to define a replication site
for both the <emphasis role="bold">root.afs</emphasis> and <emphasis role="bold">root.cell</emphasis> volumes. In each
case, substitute for the <replaceable>partition name</replaceable> argument the partition where the volume's read/write
version resides. When you install additional file server machines, it is a good idea to create replication sites on them
as well. <programlisting>
# <emphasis role="bold">./vos addsite</emphasis> &lt;<replaceable>machine name</replaceable>&gt; &lt;<replaceable>partition name</replaceable>&gt; <emphasis
role="bold">root.afs</emphasis>
# <emphasis role="bold">./vos addsite</emphasis> &lt;<replaceable>machine name</replaceable>&gt; &lt;<replaceable>partition name</replaceable>&gt; <emphasis
role="bold">root.cell</emphasis>
</programlisting> <indexterm>
<primary>fs commands</primary>
<secondary>examine</secondary>
</indexterm> <indexterm>
<primary>commands</primary>
<secondary>fs examine</secondary>
</indexterm></para>
</listitem>
<listitem>
<para>Issue the <emphasis role="bold">fs examine</emphasis> command to verify that the Cache Manager can access both the
<emphasis role="bold">root.afs</emphasis> and <emphasis role="bold">root.cell</emphasis> volumes, before you attempt to
            replicate them. The output lists each volume's name, volume ID number, quota, size, and the size of the partition that
houses them. If you get an error message instead, do not continue before taking corrective action. <programlisting>
# <emphasis role="bold">./fs examine /afs</emphasis>
# <emphasis role="bold">./fs examine /afs/</emphasis><replaceable>cellname</replaceable>
</programlisting> <indexterm>
<primary>commands</primary>
<secondary>vos release</secondary>
</indexterm> <indexterm>
<primary>vos commands</primary>
<secondary>release</secondary>
</indexterm> <indexterm>
<primary>volume</primary>
<secondary>releasing replicated</secondary>
</indexterm> <indexterm>
<primary>releasing replicated volume</primary>
</indexterm></para>
</listitem>
<listitem>
<para>Issue the <emphasis role="bold">vos release</emphasis> command to release a replica of the <emphasis
role="bold">root.afs</emphasis> and <emphasis role="bold">root.cell</emphasis> volumes to the sites you defined in Step
<link linkend="LIWQ82">5</link>. <programlisting>
# <emphasis role="bold">./vos release root.afs</emphasis>
# <emphasis role="bold">./vos release root.cell</emphasis>
</programlisting> <indexterm>
<primary>fs commands</primary>
<secondary>checkvolumes</secondary>
</indexterm> <indexterm>
<primary>commands</primary>
<secondary>fs checkvolumes</secondary>
</indexterm></para>
</listitem>
<listitem>
            <para>Issue the <emphasis role="bold">fs checkvolumes</emphasis> command to force the Cache Manager to notice that you have
released read-only versions of the volumes, then issue the <emphasis role="bold">fs examine</emphasis> command again. This
time its output mentions the read-only version of the volumes (<emphasis role="bold">root.afs.readonly</emphasis> and
<emphasis role="bold">root.cell.readonly</emphasis>) instead of the read/write versions, because of the Cache Manager's
bias to access the read-only version of the <emphasis role="bold">root.afs</emphasis> volume if it exists.
<programlisting>
# <emphasis role="bold">./fs checkvolumes</emphasis>
# <emphasis role="bold">./fs examine /afs</emphasis>
# <emphasis role="bold">./fs examine /afs/</emphasis><replaceable>cellname</replaceable>
</programlisting></para>
</listitem>
</orderedlist></para>
<indexterm>
<primary>storing</primary>
<secondary>AFS binaries in volumes</secondary>
</indexterm>
<indexterm>
<primary>creating</primary>
<secondary>volume</secondary>
<tertiary>for AFS binaries</tertiary>
</indexterm>
<indexterm>
<primary>volume</primary>
<secondary>for AFS binaries</secondary>
</indexterm>
<indexterm>
<primary>binaries</primary>
<secondary>storing AFS in volume</secondary>
</indexterm>
<indexterm>
<primary>usr/afsws directory</primary>
</indexterm>
<indexterm>
<primary>directories</primary>
<secondary>/usr/afsws</secondary>
</indexterm>
</sect1>
<sect1 id="HDRWQ83">
<title>Storing AFS Binaries in AFS</title>
<note><para>Sites with existing binary distribution mechanisms, including
those which use packaging systems such as RPM, may wish to skip this step,
and use tools native to their operating system to manage AFS configuration
information.</para></note>
<para>In the conventional configuration, you make AFS client binaries and configuration files available in the subdirectories of
the <emphasis role="bold">/usr/afsws</emphasis> directory on client machines (<emphasis role="bold">afsws</emphasis> is an
acronym for <emphasis role="bold">AFS w</emphasis><emphasis>ork</emphasis><emphasis
role="bold">s</emphasis><emphasis>tation</emphasis>). You can conserve local disk space by creating <emphasis
role="bold">/usr/afsws</emphasis> as a link to an AFS volume that houses the AFS client binaries and configuration files for
this system type.</para>
<para>In this section you create the necessary volumes. The conventional location to which to link <emphasis
role="bold">/usr/afsws</emphasis> is <emphasis role="bold">/afs/</emphasis><replaceable>cellname</replaceable><emphasis
role="bold">/</emphasis><replaceable>sysname</replaceable><emphasis role="bold">/usr/afsws</emphasis>, where
<replaceable>sysname</replaceable> is the appropriate system type name as specified in the <emphasis>OpenAFS Release
Notes</emphasis>. The instructions in <link linkend="HDRWQ133">Installing Additional Client Machines</link> assume that you have
followed the instructions in this section.</para>
    <para>If you have previously run AFS in the cell, the volumes may already exist. If so, you need to perform Step <link
linkend="LIWQ86">8</link> only.</para>
<para>The current working directory is still <emphasis role="bold">/usr/afs/bin</emphasis>, which houses the <emphasis
role="bold">fs</emphasis> and <emphasis role="bold">vos</emphasis> command suite binaries. In the following commands, it is
possible you still need to specify the pathname to the commands, depending on how your PATH environment variable is set.
<orderedlist>
<indexterm>
<primary>commands</primary>
<secondary>vos create</secondary>
<tertiary>volume for AFS binaries</tertiary>
</indexterm>
<indexterm>
<primary>vos commands</primary>
<secondary>create</secondary>
<tertiary>volume for AFS binaries</tertiary>
</indexterm>
<listitem>
<para><anchor id="LIWQ84" />Issue the <emphasis role="bold">vos create</emphasis> command to create volumes for storing
the AFS client binaries for this system type. The following example instruction creates volumes called
<replaceable>sysname</replaceable>, <replaceable>sysname</replaceable>.<emphasis role="bold">usr</emphasis>, and
<replaceable>sysname</replaceable>.<emphasis role="bold">usr.afsws</emphasis>. Refer to the <emphasis>OpenAFS Release
Notes</emphasis> to learn the proper value of <replaceable>sysname</replaceable> for this system type. <programlisting>
# <emphasis role="bold">vos create</emphasis> &lt;<replaceable>machine name</replaceable>&gt; &lt;<replaceable>partition name</replaceable>&gt; <replaceable>sysname</replaceable>
# <emphasis role="bold">vos create</emphasis> &lt;<replaceable>machine name</replaceable>&gt; &lt;<replaceable>partition name</replaceable>&gt; <replaceable>sysname</replaceable><emphasis
role="bold">.usr</emphasis>
# <emphasis role="bold">vos create</emphasis> &lt;<replaceable>machine name</replaceable>&gt; &lt;<replaceable>partition name</replaceable>&gt; <replaceable>sysname</replaceable><emphasis
role="bold">.usr.afsws</emphasis>
</programlisting></para>
</listitem>
<listitem>
<para>Issue the <emphasis role="bold">fs mkmount</emphasis> command to mount the newly created volumes. Because the
<emphasis role="bold">root.cell</emphasis> volume is replicated, you must precede the <emphasis>cellname</emphasis> part
of the pathname with a period to specify the read/write mount point, as shown. Then issue the <emphasis role="bold">vos
release</emphasis> command to release a new replica of the <emphasis role="bold">root.cell</emphasis> volume, and the
<emphasis role="bold">fs checkvolumes</emphasis> command to force the local Cache Manager to access them. <programlisting>
# <emphasis role="bold">fs mkmount -dir /afs/.</emphasis><replaceable>cellname</replaceable><emphasis role="bold">/</emphasis><replaceable>sysname</replaceable> <emphasis
role="bold">-vol</emphasis> <replaceable>sysname</replaceable>
# <emphasis role="bold">fs mkmount -dir /afs/.</emphasis><replaceable>cellname</replaceable><emphasis role="bold">/</emphasis><replaceable>sysname</replaceable><emphasis
role="bold">/usr</emphasis> <emphasis role="bold">-vol</emphasis> <replaceable>sysname</replaceable><emphasis
role="bold">.usr</emphasis>
# <emphasis role="bold">fs mkmount -dir /afs/.</emphasis><replaceable>cellname</replaceable><emphasis role="bold">/</emphasis><replaceable>sysname</replaceable><emphasis
role="bold">/usr/afsws</emphasis> <emphasis role="bold">-vol</emphasis> <replaceable>sysname</replaceable><emphasis
role="bold">.usr.afsws</emphasis>
# <emphasis role="bold">vos release root.cell</emphasis>
# <emphasis role="bold">fs checkvolumes</emphasis>
</programlisting></para>
</listitem>
<listitem>
<para>Issue the <emphasis role="bold">fs setacl</emphasis> command to grant the <emphasis role="bold">l</emphasis>
(<emphasis role="bold">lookup</emphasis>) and <emphasis role="bold">r</emphasis> (<emphasis role="bold">read</emphasis>)
permissions to the <emphasis role="bold">system:anyuser</emphasis> group on each new directory's ACL. <programlisting>
# <emphasis role="bold">cd /afs/.</emphasis><replaceable>cellname</replaceable><emphasis role="bold">/</emphasis><replaceable>sysname</replaceable>
# <emphasis role="bold">fs setacl -dir . usr usr/afsws -acl system:anyuser rl</emphasis>
</programlisting> <indexterm>
<primary>commands</primary>
<secondary>fs setquota</secondary>
</indexterm> <indexterm>
<primary>fs commands</primary>
<secondary>setquota</secondary>
</indexterm> <indexterm>
<primary>quota for volume</primary>
</indexterm> <indexterm>
<primary>volume</primary>
<secondary>setting quota</secondary>
</indexterm> <indexterm>
<primary>setting</primary>
<secondary>volume quota</secondary>
</indexterm></para>
</listitem>
<listitem>
<para><anchor id="LIWQ85" />Issue the <emphasis role="bold">fs setquota</emphasis> command to set an unlimited quota on
the volume mounted at the <emphasis role="bold">/afs/</emphasis><replaceable>cellname</replaceable><emphasis
role="bold">/</emphasis><replaceable>sysname</replaceable><emphasis role="bold">/usr/afsws</emphasis> directory. This
            enables you to copy all of the appropriate files from the distribution into the volume without exceeding the volume's
quota.</para>
<para>If you wish, you can set the volume's quota to a finite value after you complete the copying operation. At that
point, use the <emphasis role="bold">vos examine</emphasis> command to determine how much space the volume is occupying.
Then issue the <emphasis role="bold">fs setquota</emphasis> command to set a quota that is slightly larger.</para>
<programlisting>
# <emphasis role="bold">fs setquota /afs/.</emphasis><replaceable>cellname</replaceable><emphasis role="bold">/</emphasis><replaceable>sysname</replaceable><emphasis
role="bold">/usr/afsws 0</emphasis>
</programlisting>
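          <para>As a sketch of the optional later resizing described above (the 50000 kilobyte quota shown is illustrative only):
          <programlisting>
   # <emphasis role="bold">vos examine</emphasis> <replaceable>sysname</replaceable><emphasis role="bold">.usr.afsws</emphasis>
   # <emphasis role="bold">fs setquota /afs/.</emphasis><replaceable>cellname</replaceable><emphasis role="bold">/</emphasis><replaceable>sysname</replaceable><emphasis role="bold">/usr/afsws 50000</emphasis>
</programlisting></para>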
</listitem>
<listitem>
<para>Unpack the distribution tarball into the <emphasis role="bold">/tmp/afsdist</emphasis> directory,
if it is not already. <indexterm>
<primary>copying</primary>
<secondary>AFS binaries into volume</secondary>
</indexterm> <indexterm>
<primary>CD-ROM</primary>
<secondary>copying AFS binaries into volume</secondary>
</indexterm> <indexterm>
<primary>first AFS machine</primary>
<secondary>copying</secondary>
<tertiary>AFS binaries into volume</tertiary>
</indexterm></para>
</listitem>
<listitem>
<para>Copy the contents of the indicated directories from the
distribution into the <emphasis
role="bold">/afs/</emphasis><replaceable>cellname</replaceable><emphasis
role="bold">/</emphasis><replaceable>sysname</replaceable><emphasis role="bold">/usr/afsws</emphasis> directory.
<programlisting>
# <emphasis role="bold">cd /afs/.</emphasis><replaceable>cellname</replaceable><emphasis role="bold">/</emphasis><replaceable>sysname</replaceable><emphasis
role="bold">/usr/afsws</emphasis>
# <emphasis role="bold">cp -rp /tmp/afsdist/</emphasis><replaceable>sysname</replaceable><emphasis role="bold">/bin .</emphasis>
# <emphasis role="bold">cp -rp /tmp/afsdist/</emphasis><replaceable>sysname</replaceable><emphasis role="bold">/etc .</emphasis>
# <emphasis role="bold">cp -rp /tmp/afsdist/</emphasis><replaceable>sysname</replaceable><emphasis role="bold">/include .</emphasis>
# <emphasis role="bold">cp -rp /tmp/afsdist/</emphasis><replaceable>sysname</replaceable><emphasis role="bold">/lib .</emphasis>
</programlisting>
<indexterm>
<primary>creating</primary>
<secondary>symbolic link</secondary>
<tertiary>to AFS binaries</tertiary>
</indexterm> <indexterm>
<primary>symbolic link</primary>
<secondary>to AFS binaries from local disk</secondary>
</indexterm></para>
</listitem>
<listitem>
<para><anchor id="LIWQ86" />Create <emphasis role="bold">/usr/afsws</emphasis> on the local disk as a symbolic link to the
directory <emphasis role="bold">/afs/</emphasis><replaceable>cellname</replaceable><emphasis
role="bold">/@sys/usr/afsws</emphasis>. You can specify the actual system name instead of <emphasis
role="bold">@sys</emphasis> if you wish, but the advantage of using <emphasis role="bold">@sys</emphasis> is that it
remains valid if you upgrade this machine to a different system type. <programlisting>
# <emphasis role="bold">ln -s /afs/</emphasis><replaceable>cellname</replaceable><emphasis role="bold">/@sys/usr/afsws /usr/afsws</emphasis>
</programlisting> <indexterm>
<primary>PATH environment variable for users</primary>
</indexterm> <indexterm>
<primary>variables</primary>
<secondary>PATH, setting for users</secondary>
</indexterm></para>
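          <para>To confirm which value <emphasis role="bold">@sys</emphasis> resolves to on this machine, issue the <emphasis
          role="bold">fs sysname</emphasis> command, which reports the Cache Manager's current system type name: <programlisting>
   # <emphasis role="bold">fs sysname</emphasis>
</programlisting></para>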
</listitem>
<listitem>
<para><emphasis role="bold">(Optional)</emphasis> To enable users to issue commands from the AFS suites (such as <emphasis
role="bold">fs</emphasis>) without having to specify a pathname to their binaries, include the <emphasis
role="bold">/usr/afsws/bin</emphasis> and <emphasis role="bold">/usr/afsws/etc</emphasis> directories in the PATH
environment variable you define in each user's shell initialization file (such as <emphasis
role="bold">.cshrc</emphasis>).</para>
</listitem>
</orderedlist></para>
<indexterm>
<primary>storing</primary>
<secondary>AFS documentation in volumes</secondary>
</indexterm>
<indexterm>
<primary>creating</primary>
<secondary>volume</secondary>
<tertiary>for AFS documentation</tertiary>
</indexterm>
<indexterm>
<primary>volume</primary>
<secondary>for AFS documentation</secondary>
</indexterm>
<indexterm>
<primary>documentation, creating volume for AFS</primary>
</indexterm>
<indexterm>
<primary>usr/afsdoc directory</primary>
</indexterm>
<indexterm>
<primary>directories</primary>
<secondary>/usr/afsdoc</secondary>
</indexterm>
</sect1>
<sect1 id="HDRWQ87">
<title>Storing AFS Documents in AFS</title>
<para>The AFS distribution includes the following documents: <itemizedlist>
<listitem>
<para><emphasis>OpenAFS Release Notes</emphasis></para>
</listitem>
<listitem>
<para><emphasis>OpenAFS Quick Beginnings</emphasis></para>
</listitem>
<listitem>
<para><emphasis>OpenAFS User Guide</emphasis></para>
</listitem>
<listitem>
<para><emphasis>OpenAFS Administration Reference</emphasis></para>
</listitem>
<listitem>
<para><emphasis>OpenAFS Administration Guide</emphasis></para>
</listitem>
</itemizedlist></para>
<note><para>OpenAFS Documentation is not currently provided with all
distributions, but may be downloaded separately from the OpenAFS
website</para></note>
<para>The OpenAFS Documentation Distribution has a directory for each
document format provided. The different formats are suitable for online
viewing, printing, or both.</para>
<para>This section explains how to create and mount a volume to house the documents, making them available to your users. The
recommended mount point for the volume is <emphasis role="bold">/afs/</emphasis><replaceable>cellname</replaceable><emphasis
role="bold">/afsdoc</emphasis>. If you wish, you can create a link to the mount point on each client machine's local disk,
called <emphasis role="bold">/usr/afsdoc</emphasis>. Alternatively, you can create a link to the mount point in each user's home
directory. You can also choose to permit users to access only certain documents (most probably, the <emphasis>OpenAFS User
Guide</emphasis>) by creating different mount points or setting different ACLs on different document directories.</para>
<para>The current working directory is still <emphasis role="bold">/usr/afs/bin</emphasis>, which houses the <emphasis
role="bold">fs</emphasis> and <emphasis role="bold">vos</emphasis> command suite binaries you use to create and mount volumes.
In the following commands, it is possible you still need to specify the pathname to the commands, depending on how your PATH
environment variable is set. <orderedlist>
<indexterm>
<primary>commands</primary>
<secondary>vos create</secondary>
<tertiary>volume for AFS documentation</tertiary>
</indexterm>
<indexterm>
<primary>vos commands</primary>
<secondary>create</secondary>
<tertiary>volume for AFS documentation</tertiary>
</indexterm>
<listitem>
<para>Issue the <emphasis role="bold">vos create</emphasis> command to create a volume for storing the AFS documentation.
Include the <emphasis role="bold">-maxquota</emphasis> argument to set an unlimited quota on the volume. This enables you
            to copy all of the appropriate files from the distribution into the volume without exceeding the volume's quota.</para>
<para>If you wish, you can set the volume's quota to a finite value after you complete the copying operations. At that
point, use the <emphasis role="bold">vos examine</emphasis> command to determine how much space the volume is occupying.
Then issue the <emphasis role="bold">fs setquota</emphasis> command to set a quota that is slightly larger.</para>
<programlisting>
# <emphasis role="bold">vos create</emphasis> &lt;<replaceable>machine name</replaceable>&gt; &lt;<replaceable>partition name</replaceable>&gt; <emphasis
role="bold">afsdoc -maxquota 0</emphasis>
</programlisting>
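          <para>A sketch of the optional later resizing described above (the 20000 kilobyte quota shown is illustrative only):
          <programlisting>
   # <emphasis role="bold">vos examine afsdoc</emphasis>
   # <emphasis role="bold">fs setquota /afs/.</emphasis><replaceable>cellname</replaceable><emphasis role="bold">/afsdoc 20000</emphasis>
</programlisting></para>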
</listitem>
<listitem>
<para>Issue the <emphasis role="bold">fs mkmount</emphasis> command to mount the new volume. Because the <emphasis
role="bold">root.cell</emphasis> volume is replicated, you must precede the <emphasis>cellname</emphasis> with a period to
specify the read/write mount point, as shown. Then issue the <emphasis role="bold">vos release</emphasis> command to
release a new replica of the <emphasis role="bold">root.cell</emphasis> volume, and the <emphasis role="bold">fs
checkvolumes</emphasis> command to force the local Cache Manager to access them. <programlisting>
# <emphasis role="bold">fs mkmount -dir /afs/.</emphasis><replaceable>cellname</replaceable><emphasis role="bold">/afsdoc</emphasis> <emphasis
role="bold">-vol</emphasis> <emphasis role="bold">afsdoc</emphasis>
# <emphasis role="bold">vos release root.cell</emphasis>
# <emphasis role="bold">fs checkvolumes</emphasis>
</programlisting></para>
</listitem>
<listitem>
<para>Issue the <emphasis role="bold">fs setacl</emphasis> command to grant the <emphasis role="bold">rl</emphasis>
permissions to the <emphasis role="bold">system:anyuser</emphasis> group on the new directory's ACL. <programlisting>
# <emphasis role="bold">cd /afs/.</emphasis><replaceable>cellname</replaceable><emphasis role="bold">/afsdoc</emphasis>
# <emphasis role="bold">fs setacl . system:anyuser rl</emphasis>
</programlisting></para>
</listitem>
<listitem>
<para>Unpack the OpenAFS documentation distribution into the
<emphasis role="bold">/tmp/afsdocs</emphasis> directory. You may use
a different directory, in which case the location you use should be
            substituted in the following examples. For instructions on unpacking
the distribution, consult the documentation for your operating
system's <emphasis role="bold">tar</emphasis> command.
<indexterm>
<primary>copying</primary>
<secondary>AFS documentation from distribution</secondary>
</indexterm> <indexterm>
<primary>OpenAFS Distribution</primary>
<secondary>copying AFS documentation from</secondary>
</indexterm> <indexterm>
<primary>first AFS machine</primary>
<secondary>copying</secondary>
<tertiary>AFS documentation from OpenAFS distribution</tertiary>
</indexterm> <indexterm>
<primary>index.htm file</primary>
</indexterm> <indexterm>
<primary>files</primary>
<secondary>index.htm</secondary>
</indexterm></para>
</listitem>
<listitem>
<para>Copy the AFS documents in one or more formats from the unpacked distribution into subdirectories of the <emphasis
role="bold">/afs/</emphasis><replaceable>cellname</replaceable><emphasis role="bold">/afsdoc</emphasis> directory. Repeat
the commands for each format. <programlisting>
# <emphasis role="bold">mkdir</emphasis> <replaceable>format_name</replaceable>
# <emphasis role="bold">cd</emphasis> <replaceable>format_name</replaceable>
# <emphasis role="bold">cp -rp /tmp/afsdocs/</emphasis><replaceable>format</replaceable> <emphasis role="bold">.</emphasis>
</programlisting></para>
<para>If you choose to store the HTML version of the documents in AFS, note that in addition to a subdirectory for each
document there are several files with a <emphasis role="bold">.gif</emphasis> extension, which enable readers to move
easily between sections of a document. The file called <emphasis role="bold">index.htm</emphasis> is an introductory HTML
page that contains a hyperlink to each of the documents. For online viewing to work properly, these files must remain in
the top-level HTML directory (the one named, for example, <emphasis
role="bold">/afs/</emphasis><replaceable>cellname</replaceable><emphasis role="bold">/afsdoc/html</emphasis>).</para>
</listitem>
<listitem>
<para><emphasis role="bold">(Optional)</emphasis> If you believe it is helpful to your users to access the AFS documents
in a certain format via a local disk directory, create <emphasis role="bold">/usr/afsdoc</emphasis> on the local disk as a
symbolic link to the documentation directory in AFS (<emphasis
role="bold">/afs/</emphasis><replaceable>cellname</replaceable><emphasis
role="bold">/afsdoc/</emphasis><replaceable>format_name</replaceable>). <programlisting>
# <emphasis role="bold">ln -s /afs/</emphasis><replaceable>cellname</replaceable><emphasis role="bold">/afsdoc/</emphasis><replaceable>format_name</replaceable> <emphasis
role="bold">/usr/afsdoc</emphasis>
</programlisting></para>
<para>An alternative is to create a link in each user's home directory to the <emphasis
role="bold">/afs/</emphasis><replaceable>cellname</replaceable><emphasis
role="bold">/afsdoc/</emphasis><replaceable>format_name</replaceable> directory.</para>
</listitem>
</orderedlist></para>
<indexterm>
<primary>storing</primary>
<secondary>system binaries in volumes</secondary>
</indexterm>
<indexterm>
<primary>creating</primary>
<secondary>volume</secondary>
<tertiary>for system binaries</tertiary>
</indexterm>
<indexterm>
<primary>volume</primary>
<secondary>for system binaries</secondary>
</indexterm>
<indexterm>
<primary>binaries</primary>
<secondary>storing system in volumes</secondary>
</indexterm>
</sect1>
<sect1 id="HDRWQ88">
<title>Storing System Binaries in AFS</title>
<para>You can also choose to store other system binaries in AFS volumes, such as the standard UNIX programs conventionally
located in local disk directories such as <emphasis role="bold">/etc</emphasis>, <emphasis role="bold">/bin</emphasis>, and
<emphasis role="bold">/lib</emphasis>. Storing such binaries in an AFS volume not only frees local disk space, but makes it
easier to update binaries on all client machines.</para>
<para>The following is a suggested scheme for storing system binaries in AFS. It does not include instructions, but you can use
the instructions in <link linkend="HDRWQ83">Storing AFS Binaries in AFS</link> (which are for AFS-specific binaries) as a
template.</para>
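    <para>For example, a sketch of creating, mounting, and protecting the <replaceable>sysname</replaceable>.<emphasis
    role="bold">bin</emphasis> volume from the chart below, following the same pattern as the earlier instructions (substitute your
    own machine, partition, and cell names, and adjust the ACL to suit your security policy): <programlisting>
   # <emphasis role="bold">vos create</emphasis> &lt;<replaceable>machine name</replaceable>&gt; &lt;<replaceable>partition name</replaceable>&gt; <replaceable>sysname</replaceable><emphasis role="bold">.bin</emphasis>
   # <emphasis role="bold">fs mkmount -dir /afs/.</emphasis><replaceable>cellname</replaceable><emphasis role="bold">/</emphasis><replaceable>sysname</replaceable><emphasis role="bold">/bin -vol</emphasis> <replaceable>sysname</replaceable><emphasis role="bold">.bin</emphasis>
   # <emphasis role="bold">fs setacl /afs/.</emphasis><replaceable>cellname</replaceable><emphasis role="bold">/</emphasis><replaceable>sysname</replaceable><emphasis role="bold">/bin system:authuser rl</emphasis>
</programlisting></para>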
<para>Some files must remain on the local disk for use when AFS is inaccessible (during bootup and file server or network
outages). The required binaries include the following: <itemizedlist>
<listitem>
<para>A text editor, network commands, and so on</para>
</listitem>
<listitem>
<para>Files used during the boot sequence before the <emphasis role="bold">afsd</emphasis> program runs, such as
initialization and configuration files, and binaries for commands that mount file systems</para>
</listitem>
<listitem>
<para>Files used by dynamic kernel loader programs</para>
</listitem>
</itemizedlist></para>
<para>In most cases, it is more secure to enable only locally authenticated users to access system binaries, by granting the
<emphasis role="bold">l</emphasis> (<emphasis role="bold">lookup</emphasis>) and <emphasis role="bold">r</emphasis> (<emphasis
role="bold">read</emphasis>) permissions to the <emphasis role="bold">system:authuser</emphasis> group on the ACLs of
directories that contain the binaries. If users need to access a binary while unauthenticated, however, the ACL on its directory
must grant those permissions to the <emphasis role="bold">system:anyuser</emphasis> group.</para>
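    <para>As an illustration only, the following commands grant the <emphasis role="bold">l</emphasis> and <emphasis
    role="bold">r</emphasis> permissions to the <emphasis role="bold">system:authuser</emphasis> group on one of the binary
    directories suggested in the chart that follows, and remove any entry for the <emphasis
    role="bold">system:anyuser</emphasis> group; adjust the directory names to match your own scheme. <programlisting>
   # <emphasis role="bold">/usr/afs/bin/fs setacl -dir /afs/.</emphasis><replaceable>cellname</replaceable>/<replaceable>sysname</replaceable><emphasis role="bold">/bin -acl system:authuser rl</emphasis>
   # <emphasis role="bold">/usr/afs/bin/fs setacl -dir /afs/.</emphasis><replaceable>cellname</replaceable>/<replaceable>sysname</replaceable><emphasis role="bold">/bin -acl system:anyuser none</emphasis>
</programlisting></para>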
<para>The following chart summarizes the suggested volume and mount point names for storing system binaries. It uses a separate
volume for each directory. You already created a volume called <replaceable>sysname</replaceable> for this machine's system type
when you followed the instructions in <link linkend="HDRWQ83">Storing AFS Binaries in AFS</link>.</para>
    <para>You can name volumes in any way you wish, and mount them at locations other than those suggested here. However, this
    scheme has several advantages: <itemizedlist>
<listitem>
<para>Volume names clearly identify volume contents</para>
</listitem>
<listitem>
        <para>Using the <replaceable>sysname</replaceable> prefix on every volume makes it easy to back up all of the volumes
        together, because the AFS Backup System enables you to define sets of volumes based on a string included in all of their
        names</para>
</listitem>
<listitem>
<para>It makes it easy to track related volumes, keeping them together on the same file server machine if desired</para>
</listitem>
<listitem>
<para>There is a clear relationship between volume name and mount point name</para>
</listitem>
</itemizedlist></para>
<informaltable frame="none">
<tgroup cols="2">
<colspec colwidth="50*" />
<colspec colwidth="50*" />
<thead>
<row>
<entry><emphasis role="bold">Volume Name</emphasis></entry>
<entry><emphasis role="bold">Mount Point</emphasis></entry>
</row>
</thead>
<tbody>
<row>
<entry><replaceable>sysname</replaceable></entry>
<entry><emphasis
role="bold">/afs/</emphasis><replaceable>cellname</replaceable>/<replaceable>sysname</replaceable></entry>
</row>
<row>
<entry><replaceable>sysname</replaceable>.<emphasis role="bold">bin</emphasis></entry>
<entry><emphasis
role="bold">/afs/</emphasis><replaceable>cellname</replaceable>/<replaceable>sysname</replaceable><emphasis
role="bold">/bin</emphasis></entry>
</row>
<row>
<entry><replaceable>sysname</replaceable>.<emphasis role="bold">etc</emphasis></entry>
<entry><emphasis
role="bold">/afs/</emphasis><replaceable>cellname</replaceable>/<replaceable>sysname</replaceable><emphasis
role="bold">/etc</emphasis></entry>
</row>
<row>
<entry><replaceable>sysname</replaceable>.<emphasis role="bold">usr</emphasis></entry>
<entry><emphasis
role="bold">/afs/</emphasis><replaceable>cellname</replaceable>/<replaceable>sysname</replaceable><emphasis
role="bold">/usr</emphasis></entry>
</row>
<row>
<entry><replaceable>sysname</replaceable>.<emphasis role="bold">usr.afsws</emphasis></entry>
<entry><emphasis
role="bold">/afs/</emphasis><replaceable>cellname</replaceable>/<replaceable>sysname</replaceable><emphasis
role="bold">/usr/afsws</emphasis></entry>
</row>
<row>
<entry><replaceable>sysname</replaceable>.<emphasis role="bold">usr.bin</emphasis></entry>
<entry><emphasis
role="bold">/afs/</emphasis><replaceable>cellname</replaceable>/<replaceable>sysname</replaceable><emphasis
role="bold">/usr/bin</emphasis></entry>
</row>
<row>
<entry><replaceable>sysname</replaceable>.<emphasis role="bold">usr.etc</emphasis></entry>
<entry><emphasis
role="bold">/afs/</emphasis><replaceable>cellname</replaceable>/<replaceable>sysname</replaceable><emphasis
role="bold">/usr/etc</emphasis></entry>
</row>
<row>
<entry><replaceable>sysname</replaceable>.<emphasis role="bold">usr.inc</emphasis></entry>
<entry><emphasis
role="bold">/afs/</emphasis><replaceable>cellname</replaceable>/<replaceable>sysname</replaceable><emphasis
role="bold">/usr/include</emphasis></entry>
</row>
<row>
<entry><replaceable>sysname</replaceable>.<emphasis role="bold">usr.lib</emphasis></entry>
<entry><emphasis
role="bold">/afs/</emphasis><replaceable>cellname</replaceable>/<replaceable>sysname</replaceable><emphasis
role="bold">/usr/lib</emphasis></entry>
</row>
<row>
<entry><replaceable>sysname</replaceable>.<emphasis role="bold">usr.loc</emphasis></entry>
<entry><emphasis
role="bold">/afs/</emphasis><replaceable>cellname</replaceable>/<replaceable>sysname</replaceable><emphasis
role="bold">/usr/local</emphasis></entry>
</row>
<row>
<entry><replaceable>sysname</replaceable>.<emphasis role="bold">usr.man</emphasis></entry>
<entry><emphasis
role="bold">/afs/</emphasis><replaceable>cellname</replaceable>/<replaceable>sysname</replaceable><emphasis
role="bold">/usr/man</emphasis></entry>
</row>
<row>
<entry><replaceable>sysname</replaceable>.<emphasis role="bold">usr.sys</emphasis></entry>
<entry><emphasis
role="bold">/afs/</emphasis><replaceable>cellname</replaceable>/<replaceable>sysname</replaceable><emphasis
role="bold">/usr/sys</emphasis></entry>
</row>
</tbody>
</tgroup>
</informaltable>
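    <para>As a sketch of how to apply the template mentioned above, the following commands create one of the volumes in the chart
    and mount it at the suggested location; the machine and partition names are placeholders for values appropriate to your site.
    After creating each volume, set its ACL as discussed earlier in this section and copy the relevant binaries into the new
    directory. <programlisting>
   # <emphasis role="bold">/usr/afs/bin/vos create</emphasis> &lt;<replaceable>machine name</replaceable>&gt; &lt;<replaceable>partition name</replaceable>&gt; <replaceable>sysname</replaceable><emphasis role="bold">.etc</emphasis>
   # <emphasis role="bold">/usr/afs/bin/fs mkmount -dir /afs/.</emphasis><replaceable>cellname</replaceable>/<replaceable>sysname</replaceable><emphasis role="bold">/etc -vol</emphasis> <replaceable>sysname</replaceable><emphasis role="bold">.etc</emphasis>
</programlisting></para>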
<indexterm>
<primary>foreign cell, enabling access</primary>
</indexterm>
<indexterm>
<primary>cell</primary>
<secondary>enabling access to foreign</secondary>
</indexterm>
<indexterm>
<primary>access</primary>
<secondary>to local and foreign cells</secondary>
</indexterm>
<indexterm>
<primary>AFS filespace</primary>
<secondary>enabling access to foreign cells</secondary>
</indexterm>
<indexterm>
<primary>root.cell volume</primary>
<secondary>mounting for foreign cells in local filespace</secondary>
</indexterm>
<indexterm>
<primary>database server machine</primary>
<secondary>entry in client CellServDB file</secondary>
<tertiary>for foreign cell</tertiary>
</indexterm>
<indexterm>
<primary>CellServDB file (client)</primary>
<secondary>adding entry</secondary>
<tertiary>for foreign cell</tertiary>
</indexterm>
</sect1>
<sect1 id="HDRWQ91">
<title>Enabling Access to Foreign Cells</title>
    <para>Current OpenAFS releases provide a number of mechanisms for
    granting access to foreign cells. You can add a mount point in your AFS
    filespace for each foreign cell you wish users to access, or you can
    enable a 'synthetic' AFS root, which contains mount points either for all
    AFS cells defined in the client machine's local
    <emphasis role="bold">/usr/vice/etc/CellServDB</emphasis> file, or for all cells
    providing location information in the DNS.
    </para>
<sect2>
<title>Enabling a Synthetic AFS root</title>
      <para>When a synthetic root is enabled, the client machine's Cache Manager generates its
      own <emphasis role="bold">root.afs</emphasis> volume locally, rather than using the one served by your cell. This
      allows clients to access all cells in the
      <emphasis role="bold">CellServDB</emphasis> file and, optionally, all cells
      registered in the DNS, without requiring system administrator action to
      enable this access. Using a synthetic root has the additional advantage that
      it allows a client to start its AFS service without the network being available, as
      it is no longer necessary to contact a file server to obtain the root volume.
      </para>
      <para>OpenAFS supports two complementary mechanisms for creating the
      synthetic root. Starting the cache manager with the
      <emphasis role="bold">-dynroot</emphasis> option adds all cells listed
      in <emphasis role="bold">/usr/vice/etc/CellServDB</emphasis> to the client's
      AFS root. Adding the <emphasis role="bold">-afsdb</emphasis> option as well
      enables DNS lookups for any cells that are not found in
      the client's <emphasis role="bold">CellServDB</emphasis> file. Both of these options are added to the AFS
      initialisation script, or options file, as detailed in
      <link linkend="HDRWQ70">Configuring the Cache Manager</link>.</para>
</sect2>
<sect2>
<title>Adding foreign cells to a conventional root volume</title>
<para>In this section you create a mount point in your AFS filespace for the <emphasis role="bold">root.cell</emphasis> volume
of each foreign cell that you want to enable your users to access. For users working on a client machine to access the cell,
there must in addition be an entry for it in the client machine's local <emphasis
role="bold">/usr/vice/etc/CellServDB</emphasis> file. (The instructions in <link linkend="HDRWQ66">Creating the Client
CellServDB File</link> suggest that you use the <emphasis role="bold">CellServDB.sample</emphasis> file included in the AFS
distribution as the basis for your cell's client <emphasis role="bold">CellServDB</emphasis> file. The sample file lists all of
      the cells that had agreed to participate in the AFS global namespace at the time your OpenAFS distribution was created. As mentioned in
      that section, an up-to-date copy of the file is also maintained at grand.central.org.)</para>
<para>The chapter in the <emphasis>OpenAFS Administration Guide</emphasis> about cell administration and configuration issues
discusses the implications of participating in the global AFS namespace. The chapter about administering client machines
explains how to maintain knowledge of foreign cells on client machines, and includes suggestions for maintaining a central
version of the file in AFS. <orderedlist>
<listitem>
<para>Issue the <emphasis role="bold">fs mkmount</emphasis> command to mount each foreign cell's <emphasis
role="bold">root.cell</emphasis> volume on a directory called <emphasis
role="bold">/afs/</emphasis><replaceable>foreign_cell</replaceable>. Because the <emphasis role="bold">root.afs</emphasis>
volume is replicated, you must create a temporary mount point for its read/write version in a directory to which you have
write access (such as your cell's <emphasis role="bold">/afs/.</emphasis><replaceable>cellname</replaceable> directory).
Create the mount points, issue the <emphasis role="bold">vos release</emphasis> command to release new replicas to the
read-only sites for the <emphasis role="bold">root.afs</emphasis> volume, and issue the <emphasis role="bold">fs
checkvolumes</emphasis> command to force the local Cache Manager to access the new replica.</para>
<note>
<para>You need to issue the <emphasis role="bold">fs mkmount</emphasis> command only once for each foreign cell's
<emphasis role="bold">root.cell</emphasis> volume. You do not need to repeat the command on each client machine.</para>
</note>
<para>Substitute your cell's name for <replaceable>cellname</replaceable>.</para>
<programlisting>
# <emphasis role="bold">cd /afs/.</emphasis><replaceable>cellname</replaceable>
# <emphasis role="bold">/usr/afs/bin/fs mkmount temp root.afs</emphasis>
</programlisting>
<para>Repeat the <emphasis role="bold">fs mkmount</emphasis> command for each foreign cell you wish to mount at this
time.</para>
<programlisting>
# <emphasis role="bold">/usr/afs/bin/fs mkmount temp/</emphasis><replaceable>foreign_cell</replaceable> <emphasis role="bold">root.cell -c</emphasis> <replaceable>foreign_cell</replaceable>
</programlisting>
<para>Issue the following commands only once.</para>
<programlisting>
# <emphasis role="bold">/usr/afs/bin/fs rmmount temp</emphasis>
# <emphasis role="bold">/usr/afs/bin/vos release root.afs</emphasis>
# <emphasis role="bold">/usr/afs/bin/fs checkvolumes</emphasis>
</programlisting>
<indexterm>
<primary>fs commands</primary>
<secondary>newcell</secondary>
</indexterm>
<indexterm>
<primary>commands</primary>
<secondary>fs newcell</secondary>
</indexterm>
</listitem>
<listitem>
<para><anchor id="LIWQ92" />If this machine is going to remain an AFS client after you complete the installation, verify
that the local <emphasis role="bold">/usr/vice/etc/CellServDB</emphasis> file includes an entry for each foreign
cell.</para>
<para>For each cell that does not already have an entry, complete the following instructions: <orderedlist>
<listitem>
<para>Create an entry in the <emphasis role="bold">CellServDB</emphasis> file. Be sure to comply with the formatting
instructions in <link linkend="HDRWQ66">Creating the Client CellServDB File</link>.</para>
</listitem>
<listitem>
<para>Issue the <emphasis role="bold">fs newcell</emphasis> command to add an entry for the cell directly to the
list that the Cache Manager maintains in kernel memory. Provide each database server machine's fully qualified
hostname. <programlisting>
# <emphasis role="bold">/usr/afs/bin/fs newcell</emphasis> &lt;<replaceable>foreign_cell</replaceable>&gt; &lt;<replaceable>dbserver1&gt;</replaceable> \
[&lt;<replaceable>dbserver2&gt;</replaceable>] [&lt;<replaceable>dbserver3&gt;</replaceable>]
</programlisting></para>
</listitem>
<listitem>
<para>If you plan to maintain a central version of the <emphasis role="bold">CellServDB</emphasis> file (the
conventional location is <emphasis role="bold">/afs/</emphasis><replaceable>cellname</replaceable><emphasis
role="bold">/common/etc/CellServDB</emphasis>), create it now as a copy of the local <emphasis
role="bold">/usr/vice/etc/CellServDB</emphasis> file. Verify that it includes an entry for each foreign cell you
want your users to be able to access. <programlisting>
# <emphasis role="bold">mkdir common</emphasis>
# <emphasis role="bold">mkdir common/etc</emphasis>
# <emphasis role="bold">cp /usr/vice/etc/CellServDB common/etc</emphasis>
# <emphasis role="bold">/usr/afs/bin/vos release root.cell</emphasis>
</programlisting></para>
</listitem>
</orderedlist></para>
</listitem>
<listitem>
<para>Issue the <emphasis role="bold">ls</emphasis> command to verify that the new cell's mount point is visible in your
filespace. The output lists the directories at the top level of the new cell's AFS filespace. <programlisting>
# <emphasis role="bold">ls /afs/</emphasis><replaceable>foreign_cell</replaceable>
</programlisting></para>
</listitem>
<listitem>
            <para>If you wish to participate in the global AFS namespace, and intend
            to run only one database server, please
            register your cell with grand.central.org at this time.
            To do so, email the <emphasis role="bold">CellServDB</emphasis> fragment
            describing your cell (see the example following this list), together with a contact name and email address
            for any queries, to cellservdb@grand.central.org. If you intend
            to deploy multiple database servers, please wait until you have
            installed all of them before registering your cell.</para>
</listitem>
<listitem>
<para>If you wish to allow your cell to be located through DNS lookups,
at this time you should also add the necessary configuration to your
DNS.</para>
            <para>AFS database servers can be located by creating AFSDB records
            in the DNS for the domain name corresponding to the name of your cell;
            a sample record appears after this list. An in-depth description of
            managing or configuring your site's DNS is outside the scope of this
            guide; consult the documentation for your DNS server for further
            details on AFSDB records.</para>
</listitem>
</orderedlist></para>
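      <para>As an illustration of the registration step above, a <emphasis role="bold">CellServDB</emphasis> fragment for a
      hypothetical cell named <emphasis role="bold">example.com</emphasis> with two database server machines might look like the
      following (the names and addresses are placeholders only): <programlisting>
&gt;example.com            #Example Organization cell
192.0.2.10               #afsdb1.example.com
192.0.2.11               #afsdb2.example.com
</programlisting></para>
      <para>Similarly, as an illustration of the DNS configuration mentioned in the final step, a BIND-style zone file for the
      same hypothetical cell might advertise its database server machines with AFSDB records of subtype 1: <programlisting>
example.com.        IN  AFSDB   1 afsdb1.example.com.
example.com.        IN  AFSDB   1 afsdb2.example.com.
</programlisting></para>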
</sect2>
</sect1>
<sect1 id="HDRWQ93">
<title>Improving Cell Security</title>
<indexterm>
<primary>cell</primary>
<secondary>improving security</secondary>
</indexterm>
<indexterm>
<primary>security</primary>
<secondary>improving</secondary>
</indexterm>
<indexterm>
<primary>root superuser</primary>
<secondary>controlling access</secondary>
</indexterm>
<indexterm>
<primary>access</primary>
<secondary>to root and admin accounts</secondary>
</indexterm>
<indexterm>
<primary>admin account</primary>
<secondary>controlling access to</secondary>
</indexterm>
<indexterm>
<primary>AFS filespace</primary>
<secondary>controlling access by root superuser</secondary>
</indexterm>
<para>This section discusses ways to improve the security of AFS data
in your cell. Also see the chapter in the <emphasis>OpenAFS
Administration Guide</emphasis> about configuration and administration
issues.</para>
<sect2 id="HDRWQ94">
<title>Controlling root Access</title>
<para>As on any machine, it is important to prevent unauthorized users from logging onto an AFS server or client machine as
the local superuser <emphasis role="bold">root</emphasis>. Take care to keep the <emphasis role="bold">root</emphasis>
password secret.</para>
<para>The local <emphasis role="bold">root</emphasis> superuser does not have special access to AFS data through the Cache
Manager (as members of the <emphasis role="bold">system:administrators</emphasis> group do), but it does have the following
privileges: <itemizedlist>
<listitem>
<para>On client machines, the ability to issue commands from the <emphasis role="bold">fs</emphasis> suite that affect
AFS performance</para>
</listitem>
<listitem>
<para>On server machines, the ability to disable authorization checking, or to install rogue process binaries</para>
</listitem>
</itemizedlist></para>
</sect2>
<sect2 id="HDRWQ95">
<title>Controlling System Administrator Access</title>
<para>Following are suggestions for managing AFS administrative privilege: <itemizedlist>
<listitem>
            <para>Create an administrative account for each administrator, named
            something like
            <replaceable>username</replaceable><emphasis role="bold">.admin</emphasis>.
            Administrators authenticate under these identities only when
            performing administrative tasks, and destroy the administrative
            tokens immediately after finishing the task (either by issuing the
            <emphasis role="bold">unlog</emphasis> command, or by issuing the
            <emphasis role="bold">kinit</emphasis> and
            <emphasis role="bold">aklog</emphasis> commands to re-adopt their
            regular identity). An example sequence appears after this list.</para>
</listitem>
<listitem>
            <para>Set a short ticket lifetime for administrator accounts (for example, 20 minutes) by using the
            facilities of your KDC. For instance, with an MIT Kerberos KDC, this
            can be done using the
            <emphasis role="bold">-maxlife</emphasis> argument to
            the <emphasis role="bold">kadmin modify_principal</emphasis>
            command. Do not, however, use a short lifetime for users
            who issue long-running <emphasis role="bold">backup</emphasis> commands.</para>
</listitem>
<listitem>
<para>Limit the number of system administrators in your cell, especially those who belong to the <emphasis
role="bold">system:administrators</emphasis> group. By default they have all ACL rights on all directories in the local
AFS filespace, and therefore must be trusted not to examine private files.</para>
</listitem>
<listitem>
<para>Limit the use of system administrator accounts on machines in public areas. It is especially important not to
leave such machines unattended without first destroying the administrative tokens.</para>
</listitem>
<listitem>
<para>Limit the use by administrators of standard UNIX commands that make connections to remote machines (such as the
<emphasis role="bold">telnet</emphasis> utility). Many of these programs send passwords across the network without
encrypting them.</para>
</listitem>
</itemizedlist></para>
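      <para>The following sequence illustrates the suggested pattern of adopting and discarding administrative tokens; it is a
      sketch only, with <replaceable>username</replaceable> standing in for an actual account name. <programlisting>
   # <emphasis role="bold">kinit</emphasis> <replaceable>username</replaceable><emphasis role="bold">.admin</emphasis>
   # <emphasis role="bold">aklog</emphasis>
     (perform the administrative task)
   # <emphasis role="bold">unlog</emphasis>
   # <emphasis role="bold">kinit</emphasis> <replaceable>username</replaceable>
   # <emphasis role="bold">aklog</emphasis>
</programlisting></para>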
<indexterm>
<primary>BOS Server</primary>
<secondary>checking mode bits on AFS directories</secondary>
</indexterm>
<indexterm>
<primary>mode bits on local AFS directories</primary>
</indexterm>
<indexterm>
<primary>UNIX mode bits on local AFS directories</primary>
</indexterm>
</sect2>
<sect2 id="HDRWQ96">
<title>Protecting Sensitive AFS Directories</title>
<para>Some subdirectories of the <emphasis role="bold">/usr/afs</emphasis> directory contain files crucial to cell security.
Unauthorized users must not read or write to these files because of the potential for misuse of the information they
contain.</para>
<para>As the BOS Server initializes for the first time on a server machine, it creates several files and directories (as
mentioned in <link linkend="HDRWQ50">Starting the BOS Server</link>). It sets their owner to the local superuser <emphasis
role="bold">root</emphasis> and sets their mode bits to enable writing by the owner only; in some cases, it also restricts
reading.</para>
      <para>At each subsequent restart, the BOS Server checks that the owner and mode bits on these files are still set
      appropriately. If they are not, it writes the following message to the <emphasis role="bold">/usr/afs/logs/BosLog</emphasis>
      file:</para>
<programlisting>
Bosserver reports inappropriate access on server directories
</programlisting>
<para>The BOS Server does not reset the mode bits, which enables you to set alternate values if you wish.</para>
      <para>The following chart lists the expected mode bit settings. A question mark indicates that the BOS Server does not check
      that mode bit.</para>
<informaltable frame="none">
<tgroup cols="2">
<colspec colwidth="30*" />
<colspec colwidth="70*" />
<tbody>
<row>
<entry><emphasis role="bold">/usr/afs</emphasis></entry>
<entry><computeroutput>drwxr</computeroutput>?<computeroutput>xr-x</computeroutput></entry>
</row>
<row>
<entry><emphasis role="bold">/usr/afs/backup</emphasis></entry>
<entry><computeroutput>drwx</computeroutput>???<computeroutput>---</computeroutput></entry>
</row>
<row>
<entry><emphasis role="bold">/usr/afs/bin</emphasis></entry>
<entry><computeroutput>drwxr</computeroutput>?<computeroutput>xr-x</computeroutput></entry>
</row>
<row>
<entry><emphasis role="bold">/usr/afs/db</emphasis></entry>
<entry><computeroutput>drwx</computeroutput>???<computeroutput>---</computeroutput></entry>
</row>
<row>
<entry><emphasis role="bold">/usr/afs/etc</emphasis></entry>
<entry><computeroutput>drwxr</computeroutput>?<computeroutput>xr-x</computeroutput></entry>
</row>
<row>
<entry><emphasis role="bold">/usr/afs/etc/KeyFile</emphasis></entry>
<entry><computeroutput>-rw</computeroutput>????<computeroutput>---</computeroutput></entry>
</row>
<row>
<entry><emphasis role="bold">/usr/afs/etc/UserList</emphasis></entry>
<entry><computeroutput>-rw</computeroutput>?????<computeroutput>--</computeroutput></entry>
</row>
<row>
<entry><emphasis role="bold">/usr/afs/local</emphasis></entry>
<entry><computeroutput>drwx</computeroutput>???<computeroutput>---</computeroutput></entry>
</row>
<row>
<entry><emphasis role="bold">/usr/afs/logs</emphasis></entry>
<entry><computeroutput>drwxr</computeroutput>?<computeroutput>xr-x</computeroutput></entry>
</row>
</tbody>
</tgroup>
</informaltable>
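      <para>To inspect the current owner and mode bit settings yourself, you can issue a command such as the following on the
      server machine (the exact output format varies by operating system): <programlisting>
   # <emphasis role="bold">ls -ld /usr/afs /usr/afs/backup /usr/afs/bin /usr/afs/db /usr/afs/etc \
          /usr/afs/etc/KeyFile /usr/afs/etc/UserList /usr/afs/local /usr/afs/logs</emphasis>
</programlisting></para>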
<indexterm>
<primary>first AFS machine</primary>
<secondary>client functionality</secondary>
<tertiary>removing</tertiary>
</indexterm>
<indexterm>
<primary>removing</primary>
<secondary>client functionality from first AFS machine</secondary>
</indexterm>
</sect2>
</sect1>
<sect1 id="HDRWQ98">
<title>Removing Client Functionality</title>
<para>Follow the instructions in this section only if you do not wish this machine to remain an AFS client. Removing client
functionality means that you cannot use this machine to access AFS files. <orderedlist>
<listitem>
          <para>Remove the files from the <emphasis role="bold">/usr/vice/etc</emphasis> directory. These commands do not remove the
          directory for files used by the dynamic kernel loader program, if it exists on this system type. Those files are still
          needed on a server-only machine. <programlisting>
# <emphasis role="bold">cd /usr/vice/etc</emphasis>
# <emphasis role="bold">rm *</emphasis>
# <emphasis role="bold">rm -rf C</emphasis>
</programlisting></para>
</listitem>
<listitem>
<para>Create symbolic links to the <emphasis role="bold">ThisCell</emphasis> and <emphasis
role="bold">CellServDB</emphasis> files in the <emphasis role="bold">/usr/afs/etc</emphasis> directory. This makes it
possible to issue commands from the AFS command suites (such as <emphasis role="bold">bos</emphasis> and <emphasis
role="bold">fs</emphasis>) on this machine. <programlisting>
# <emphasis role="bold">ln -s /usr/afs/etc/ThisCell ThisCell</emphasis>
# <emphasis role="bold">ln -s /usr/afs/etc/CellServDB CellServDB</emphasis>
</programlisting></para>
</listitem>
<listitem>
<para>On IRIX systems, issue the <emphasis role="bold">chkconfig</emphasis> command to deactivate the <emphasis
role="bold">afsclient</emphasis> configuration variable. <programlisting>
# <emphasis role="bold">/etc/chkconfig -f afsclient off</emphasis>
</programlisting></para>
</listitem>
<listitem>
<para>Reboot the machine. Most system types use the <emphasis role="bold">shutdown</emphasis> command, but the appropriate
options vary. <programlisting>
# <emphasis role="bold">cd /</emphasis>
# <emphasis role="bold">shutdown</emphasis> <replaceable>appropriate_options</replaceable>
</programlisting></para>
</listitem>
</orderedlist></para>
</sect1>
</chapter>