openafs/doc/html/QuickStartUnix/auqbg006.htm
Derrick Brashear d7da1acc31 initial-html-documentation-20010606
pull in all documentation from IBM
2001-06-06 19:09:07 +00:00


<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 4//EN">
<HTML><HEAD>
<TITLE>Quick Beginnings</TITLE>
<!-- Begin Header Records ========================================== -->
<!-- /tmp/idwt3574/auqbg000.scr converted by idb2h R4.2 (359) ID -->
<!-- Workbench Version (AIX) on 2 Oct 2000 at 12:25:35 -->
<META HTTP-EQUIV="updated" CONTENT="Mon, 02 Oct 2000 12:25:35">
<META HTTP-EQUIV="review" CONTENT="Tue, 02 Oct 2001 12:25:35">
<META HTTP-EQUIV="expires" CONTENT="Wed, 02 Oct 2002 12:25:35">
</HEAD>
<!-- (C) IBM Corporation 2000. All Rights Reserved -->
<BODY bgcolor="#ffffff">
<!-- End Header Records ============================================ -->
<A NAME="Top_Of_Page"></A>
<H1>Quick Beginnings</H1>
<HR><P ALIGN="center"> <A HREF="../index.htm"><IMG SRC="../books.gif" BORDER="0" ALT="[Return to Library]"></A> <A HREF="auqbg002.htm#ToC"><IMG SRC="../toc.gif" BORDER="0" ALT="[Contents]"></A> <A HREF="auqbg005.htm"><IMG SRC="../prev.gif" BORDER="0" ALT="[Previous Topic]"></A> <A HREF="#Bot_Of_Page"><IMG SRC="../bot.gif" BORDER="0" ALT="[Bottom of Topic]"></A> <A HREF="auqbg007.htm"><IMG SRC="../next.gif" BORDER="0" ALT="[Next Topic]"></A> <A HREF="auqbg009.htm#HDRINDEX"><IMG SRC="../index.gif" BORDER="0" ALT="[Index]"></A> <P>
<P>
<A NAME="IDX2688"></A>
<A NAME="IDX2689"></A>
<A NAME="IDX2690"></A>
<HR><H1><A NAME="HDRWQ99" HREF="auqbg002.htm#ToC_97">Installing Additional Server Machines</A></H1>
<P>Instructions for the following procedures appear in the
indicated section of this chapter.
<UL>
<P><LI><A HREF="#HDRWQ100">Installing an Additional File Server Machine</A>
<P><LI><A HREF="#HDRWQ114">Installing Database Server Functionality</A>
<P><LI><A HREF="#HDRWQ125">Removing Database Server Functionality</A>
</UL>
<P>The instructions make the following assumptions.
<UL>
<P><LI>You have already installed your cell's first file server machine by
following the instructions in <A HREF="auqbg005.htm#HDRWQ17">Installing the First AFS Machine</A>
<P><LI>You are logged in as the local superuser <B>root</B>
<P><LI>You are working at the console
<P><LI>A standard version of one of the operating systems supported by the
current version of AFS is running on the machine
<P><LI>You can access the data on the AFS CD-ROMs, either through a local CD-ROM
drive or via an NFS mount of a CD-ROM drive attached to a machine that is
accessible by network
</UL>
<A NAME="IDX2691"></A>
<HR><H2><A NAME="HDRWQ100" HREF="auqbg002.htm#ToC_98">Installing an Additional File Server Machine</A></H2>
<P>The procedure for installing a new file server machine is
similar to installing the first file server machine in your cell. There
are a few parts of the installation that differ depending on whether the
machine is the same AFS system type as an existing file server machine or is
the first file server machine of its system type in your cell. The
differences mostly concern the source for the needed binaries and files, and
what portions of the Update Server you install:
<UL>
<P><LI>On a new system type, you must load files and binaries from the AFS
CD-ROM. You install the server portion of the Update Server to make
this machine the binary distribution machine for its system type.
<P><LI>On an existing system type, you can copy files and binaries from a
previously installed file server machine, rather than from the CD-ROM.
You install the client portion of the Update Server to accept updates of
binaries, because a previously installed machine of this type was installed as
the binary distribution machine.
</UL>
<P>These instructions are brief; for more detailed information, refer to
the corresponding steps in <A HREF="auqbg005.htm#HDRWQ17">Installing the First AFS Machine</A>.
<A NAME="IDX2692"></A>
<P>To install a new file server machine, perform the following
procedures:
<OL TYPE=1>
<P><LI>Copy needed binaries and files onto this machine's local disk
<P><LI>Incorporate AFS modifications into the kernel
<P><LI>Configure partitions for storing volumes
<P><LI>Replace the standard <B>fsck</B> utility with the AFS-modified version
on some system types
<P><LI>Start the Basic OverSeer (BOS) Server
<P><LI>Start the appropriate portion of the Update Server
<P><LI>Start the <B>fs</B> process, which incorporates three component
processes: the File Server, Volume Server, and Salvager
<P><LI>Start the controller process (called <B>runntp</B>) for the Network
Time Protocol Daemon, which synchronizes clocks
</OL>
<P>After completing the instructions in this section, you can install database
server functionality on the machine according to the instructions in <A HREF="#HDRWQ114">Installing Database Server Functionality</A>.
<A NAME="IDX2693"></A>
<A NAME="IDX2694"></A>
<A NAME="IDX2695"></A>
<A NAME="IDX2696"></A>
<A NAME="IDX2697"></A>
<A NAME="IDX2698"></A>
<A NAME="IDX2699"></A>
<A NAME="IDX2700"></A>
<A NAME="IDX2701"></A>
<A NAME="IDX2702"></A>
<A NAME="IDX2703"></A>
<A NAME="IDX2704"></A>
<A NAME="IDX2705"></A>
<P><H3><A NAME="Header_99" HREF="auqbg002.htm#ToC_99">Creating AFS Directories and Performing Platform-Specific Procedures</A></H3>
<P>Create the <B>/usr/afs</B> and <B>/usr/vice/etc</B> directories
on the local disk. Subsequent instructions copy files from the AFS
distribution CD-ROM into them, at the appropriate point for each system
type.
<PRE>
# <B>mkdir /usr/afs</B>
# <B>mkdir /usr/afs/bin</B>
# <B>mkdir /usr/vice</B>
# <B>mkdir /usr/vice/etc</B>
# <B>mkdir /cdrom</B>
</PRE>
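<P>The same directories can be created in a single command with <B>mkdir -p</B>, which creates any missing parent directories. The following is a sketch only: the <B>/tmp/afs-demo</B> prefix is hypothetical, used so the commands can be rehearsed without superuser privileges; on a real server machine, run the equivalent command as <B>root</B> without the prefix.

```shell
# Sketch: create the standard AFS directories in one step with mkdir -p.
# The /tmp/afs-demo prefix is hypothetical, letting the sketch run without
# root privileges; omit it on a real server machine.
prefix=/tmp/afs-demo
mkdir -p "$prefix/usr/afs/bin" "$prefix/usr/vice/etc" "$prefix/cdrom"
ls "$prefix/usr"
```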
<P>As on the first file server machine, the initial procedures in installing
an additional file server machine vary a good deal from platform to
platform. For convenience, the following sections group together all of
the procedures for a system type. Most of the remaining procedures are
the same on every system type, but differences are noted as
appropriate. The initial procedures are the following.
<UL>
<P><LI>Incorporate AFS modifications into the kernel, either by using a dynamic
kernel loader program or by building a new static kernel
<P><LI>Configure server partitions to house AFS volumes
<P><LI>Replace the operating system vendor's <B>fsck</B> program with a
version that recognizes AFS data
<A NAME="IDX2706"></A>
<P><LI>If the machine is to remain an AFS client machine, modify the
machine's authentication system so that users obtain an AFS token as they
log into the local file system. (For this procedure only, the
instructions direct you to the platform-specific section in <A HREF="auqbg005.htm#HDRWQ17">Installing the First AFS Machine</A>.)
</UL>
<P>To continue, proceed to the section for this system type:
<UL>
<P><LI><A HREF="#HDRWQ101">Getting Started on AIX Systems</A>
<P><LI><A HREF="#HDRWQ102">Getting Started on Digital UNIX Systems</A>
<P><LI><A HREF="#HDRWQ103">Getting Started on HP-UX Systems</A>
<P><LI><A HREF="#HDRWQ104">Getting Started on IRIX Systems</A>
<P><LI><A HREF="#HDRWQ106">Getting Started on Linux Systems</A>
<P><LI><A HREF="#HDRWQ107">Getting Started on Solaris Systems</A>
</UL>
<P><H4><A NAME="HDRWQ101">Getting Started on AIX Systems</A></H4>
<P>Begin by running the AFS initialization script to call the
AIX kernel extension facility, which dynamically loads AFS modifications into
the kernel. Then configure partitions and replace the AIX
<B>fsck</B> program with a version that correctly handles AFS
volumes.
<OL TYPE=1>
<A NAME="IDX2707"></A>
<A NAME="IDX2708"></A>
<A NAME="IDX2709"></A>
<A NAME="IDX2710"></A>
<P><LI>Mount the AFS CD-ROM for AIX on the local <B>/cdrom</B>
directory. For instructions on mounting CD-ROMs (either locally or
remotely via NFS), see your AIX documentation. Then change directory as
indicated.
<PRE>
# <B>cd /cdrom/rs_aix42/root.client/usr/vice/etc</B>
</PRE>
<P><LI>Copy the AFS kernel library files to the local
<B>/usr/vice/etc/dkload</B> directory, and the AFS initialization script
to the <B>/etc</B> directory.
<PRE>
# <B>cp -rp dkload /usr/vice/etc</B>
# <B>cp -p rc.afs /etc/rc.afs</B>
</PRE>
<P><LI>Edit the <B>/etc/rc.afs</B> script, setting the <TT>NFS</TT>
variable as indicated.
<P>If the machine is not to function as an NFS/AFS Translator, set the
<TT>NFS</TT> variable as follows.
<PRE>
NFS=$NFS_NONE
</PRE>
<P>If the machine is to function as an NFS/AFS Translator and is running AIX
4.2.1 or higher, set the <TT>NFS</TT> variable as
follows. Note that NFS must already be loaded into the kernel, which
happens automatically on systems running AIX 4.1.1 and later, as
long as the file <B>/etc/exports</B> exists.
<PRE>
NFS=$NFS_IAUTH
</PRE>
<P><LI>Invoke the <B>/etc/rc.afs</B> script to load AFS modifications
into the kernel. You can ignore any error messages about the inability
to start the BOS Server or the Cache Manager or AFS client.
<PRE>
# <B>/etc/rc.afs</B>
</PRE>
<A NAME="IDX2711"></A>
<A NAME="IDX2712"></A>
<A NAME="IDX2713"></A>
<A NAME="IDX2714"></A>
<P><LI>Create a directory called <B>/vicep</B><VAR>xx</VAR> for each AFS server
partition you are configuring (there must be at least one). Repeat the
command for each partition.
<PRE>
# <B>mkdir /vicep</B><VAR>xx</VAR>
</PRE>
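<P>When configuring several server partitions, a shell loop saves repetition. The following is a sketch: the partition letters <B>a</B> through <B>c</B> are hypothetical examples, and the <B>/tmp</B> prefix lets the loop run without superuser privileges; on a real server machine, create the directories directly under <B>/</B> as <B>root</B>.

```shell
# Sketch: create several /vicepxx mount points in a loop. The letters
# a b c are hypothetical examples; the /tmp prefix avoids needing root.
prefix=/tmp/vice-demo
for p in a b c; do
    mkdir -p "$prefix/vicep$p"
done
ls "$prefix"
```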
<P><LI>Use the <B>SMIT</B> program to create a journaling file system on each
partition to be configured as an AFS server partition.
<P><LI>Mount each partition at one of the <B>/vicep</B><VAR>xx</VAR>
directories. Choose one of the following three methods:
<UL>
<P><LI>Use the <B>SMIT</B> program
<P><LI>Use the <B>mount -a</B> command to mount all partitions at once
<P><LI>Use the <B>mount</B> command on each partition in turn
</UL>
<P>Also configure the partitions so that they are mounted automatically at
each reboot. For more information, refer to the AIX
documentation.
<A NAME="IDX2715"></A>
<A NAME="IDX2716"></A>
<A NAME="IDX2717"></A>
<A NAME="IDX2718"></A>
<P><LI>Move the AIX <B>fsck</B> program helper to a safe location and install
the version from the AFS distribution in its place. The AFS CD-ROM must
still be mounted at the <B>/cdrom</B> directory.
<PRE>
# <B>cd /sbin/helpers</B>
# <B>mv v3fshelper v3fshelper.noafs</B>
# <B>cp -p /cdrom/rs_aix42/root.server/etc/v3fshelper v3fshelper</B>
</PRE>
<P><LI>If the machine is to remain an AFS client, incorporate AFS into its
authentication system, following the instructions in <A HREF="auqbg005.htm#HDRWQ25">Enabling AFS Login on AIX Systems</A>.
<P><LI>Proceed to <A HREF="#HDRWQ108">Starting Server Programs</A>.
</OL>
<P><H4><A NAME="HDRWQ102">Getting Started on Digital UNIX Systems</A></H4>
<P>Begin by building AFS modifications into the kernel, then
configure server partitions and replace the Digital UNIX <B>fsck</B>
program with a version that correctly handles AFS volumes.
<P>If the machine's hardware and software configuration exactly matches
that of another Digital UNIX machine on which AFS is already built into the
kernel, you can copy the kernel from that machine to this one. In general,
however, it is better to build AFS modifications into the kernel on each
machine according to the following instructions.
<OL TYPE=1>
<A NAME="IDX2719"></A>
<A NAME="IDX2720"></A>
<A NAME="IDX2721"></A>
<A NAME="IDX2722"></A>
<P><LI>Create a copy called <B>AFS</B> of the basic kernel configuration file
included in the Digital UNIX distribution as
<B>/usr/sys/conf/</B><VAR>machine_name</VAR>, where <VAR>machine_name</VAR> is
the machine's hostname in all uppercase letters.
<PRE>
# <B>cd /usr/sys/conf</B>
# <B>cp</B> <VAR>machine_name</VAR> <B>AFS</B>
</PRE>
<P><LI>Add AFS to the list of options in the configuration file you created in
the previous step, so that the result looks like the following:
<PRE> . .
. .
options UFS
options NFS
options AFS
. .
. .
</PRE>
<P><LI>Add an entry for AFS to two places in the file
<B>/usr/sys/conf/files</B>.
<UL>
<P><LI>Add a line for AFS to the list of <TT>OPTIONS</TT>, so that the result
looks like the following:
<PRE> . . .
. . .
OPTIONS/nfs optional nfs
OPTIONS/afs optional afs
OPTIONS/nfs_server optional nfs_server
. . .
. . .
</PRE>
<P><LI>Add an entry for AFS to the list of <TT>MODULES</TT>, so that the result
looks like the following:
<PRE> . . . .
. . . .
#
MODULE/nfs_server optional nfs_server Binary
nfs/nfs_server.c module nfs_server optimize -g3
nfs/nfs3_server.c module nfs_server optimize -g3
#
MODULE/afs optional afs Binary
afs/libafs.c module afs
#
</PRE>
</UL>
<P><LI>Add an entry for AFS to two places in the file
<B>/usr/sys/vfs/vfs_conf.c</B>.
<UL>
<P><LI>Add AFS to the list of defined file systems, so that the result looks like
the following:
<PRE> . .
. .
#include &lt;afs.h>
#if defined(AFS) &amp;&amp; AFS
extern struct vfsops afs_vfsops;
#endif
. .
. .
</PRE>
<P><LI>Put a declaration for AFS in the <B>vfssw[]</B> table's
MOUNT_ADDON slot, so that the result looks like the following:
<PRE> . . .
. . .
&amp;fdfs_vfsops, "fdfs", /* 12 = MOUNT_FDFS */
#if defined(AFS)
&amp;afs_vfsops, "afs",
#else
(struct vfsops *)0, "", /* 13 = MOUNT_ADDON */
#endif
#if NFS &amp;&amp; INFS_DYNAMIC
&amp;nfs3_vfsops, "nfsv3", /* 14 = MOUNT_NFS3 */
</PRE>
</UL>
<P><LI>Mount the AFS CD-ROM for Digital UNIX on the local <B>/cdrom</B>
directory. For instructions on mounting CD-ROMs (either locally or
remotely via NFS), see your Digital UNIX documentation. Then change
directory as indicated.
<PRE>
# <B>cd /cdrom/alpha_dux40/root.client</B>
</PRE>
<P><LI>Copy the AFS initialization script to the local directory for
initialization files (by convention, <B>/sbin/init.d</B> on Digital
UNIX machines). Note the removal of the <B>.rc</B> extension
as you copy the script.
<PRE>
# <B>cp usr/vice/etc/afs.rc /sbin/init.d/afs</B>
</PRE>
<P><LI>Copy the AFS kernel module to the local <B>/usr/sys/BINARY</B>
directory.
<P>If the machine's kernel supports NFS server functionality:
<PRE>
# <B>cp bin/libafs.o /usr/sys/BINARY/afs.mod</B>
</PRE>
<P>If the machine's kernel does not support NFS server
functionality:
<PRE>
# <B>cp bin/libafs.nonfs.o /usr/sys/BINARY/afs.mod</B>
</PRE>
<P><LI>Configure and build the kernel. Respond to any prompts by pressing
&lt;<B>Return</B>>. The resulting kernel resides in the file
<B>/sys/AFS/vmunix</B>.
<PRE>
# <B>doconfig -c AFS</B>
</PRE>
<P><LI>Rename the existing kernel file and copy the new, AFS-modified file to the
standard location.
<PRE>
# <B>mv /vmunix /vmunix_noafs</B>
# <B>cp /sys/AFS/vmunix /vmunix</B>
</PRE>
<P><LI>Reboot the machine to start using the new kernel, and login again as the
superuser <B>root</B>.
<PRE>
# <B>cd /</B>
# <B>shutdown -r now</B>
login: <B>root</B>
Password: <VAR>root_password</VAR>
</PRE>
<A NAME="IDX2723"></A>
<A NAME="IDX2724"></A>
<A NAME="IDX2725"></A>
<A NAME="IDX2726"></A>
<P><LI>Create a directory called <B>/vicep</B><VAR>xx</VAR> for each AFS server
partition you are configuring (there must be at least one). Repeat the
command for each partition.
<PRE>
# <B>mkdir /vicep</B><VAR>xx</VAR>
</PRE>
<P><LI>Add a line with the following format to the file systems registry file,
<B>/etc/fstab</B>, for each directory just created. The entry maps
the directory name to the disk partition to be mounted on it.
<PRE>
/dev/<VAR>disk</VAR> /vicep<VAR>xx</VAR> ufs rw 0 2
</PRE>
<P>The following is an example for the first partition being
configured.
<PRE>
/dev/rz3a /vicepa ufs rw 0 2
</PRE>
<P><LI>Create a file system on each partition that is to be mounted at a
<B>/vicep</B><VAR>xx</VAR> directory. The following command is
probably appropriate, but consult the Digital UNIX documentation for more
information.
<PRE>
# <B>newfs -v /dev/</B><VAR>disk</VAR>
</PRE>
<P><LI>Mount each partition by issuing either the <B>mount -a</B> command to
mount all partitions at once or the <B>mount</B> command to mount each
partition in turn.
<A NAME="IDX2727"></A>
<A NAME="IDX2728"></A>
<A NAME="IDX2729"></A>
<A NAME="IDX2730"></A>
<P><LI>Install the <B>vfsck</B> binary to the <B>/sbin</B> and
<B>/usr/sbin</B> directories. The AFS CD-ROM must still be mounted
at the <B>/cdrom</B> directory.
<PRE>
# <B>cd /cdrom/alpha_dux40/root.server/etc</B>
# <B>cp vfsck /sbin/vfsck</B>
# <B>cp vfsck /usr/sbin/vfsck</B>
</PRE>
<P><LI>Rename the Digital UNIX <B>fsck</B> binaries and create symbolic links
to the <B>vfsck</B> program.
<PRE>
# <B>cd /sbin</B>
# <B>mv ufs_fsck ufs_fsck.noafs</B>
# <B>ln -s vfsck ufs_fsck</B>
# <B>cd /usr/sbin</B>
# <B>mv ufs_fsck ufs_fsck.noafs</B>
# <B>ln -s vfsck ufs_fsck</B>
</PRE>
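<P>You can rehearse this rename-and-link pattern in a scratch directory before touching <B>/sbin</B>. Everything in the following sketch is a stand-in: the files are dummies created for illustration and the directory name is hypothetical.

```shell
# Sketch: rehearse the fsck rename-and-link pattern with dummy files in a
# scratch directory (all names here are stand-ins, not the real binaries).
dir=/tmp/fsck-demo
rm -rf "$dir"
mkdir -p "$dir"
cd "$dir"
echo vendor > ufs_fsck   # stand-in for the vendor fsck binary
echo afs > vfsck         # stand-in for the AFS-modified vfsck binary
mv ufs_fsck ufs_fsck.noafs
ln -s vfsck ufs_fsck
readlink ufs_fsck        # prints: vfsck
```

After the rename, the vendor binary survives as <B>ufs_fsck.noafs</B> and the symbolic link directs standard <B>fsck</B> invocations to the AFS-modified program.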
<P><LI>If the machine is to remain an AFS client, incorporate AFS into its
authentication system, following the instructions in <A HREF="auqbg005.htm#HDRWQ30">Enabling AFS Login on Digital UNIX Systems</A>.
<P><LI>Proceed to <A HREF="#HDRWQ108">Starting Server Programs</A>.
</OL>
<P><H4><A NAME="HDRWQ103">Getting Started on HP-UX Systems</A></H4>
<P>Begin by building AFS modifications into the kernel, then
configure server partitions and replace the HP-UX <B>fsck</B> program with
a version that correctly handles AFS volumes.
<P>If the machine's hardware and software configuration exactly matches
that of another HP-UX machine on which AFS is already built into the kernel,
you can copy the kernel from that machine to this one. In general, however, it
is better to build AFS modifications into the kernel on each machine according
to the following instructions.
<OL TYPE=1>
<A NAME="IDX2731"></A>
<A NAME="IDX2732"></A>
<A NAME="IDX2733"></A>
<A NAME="IDX2734"></A>
<P><LI>Move the existing kernel-related files to a safe location.
<PRE>
# <B>cp /stand/vmunix /stand/vmunix.noafs</B>
# <B>cp /stand/system /stand/system.noafs</B>
</PRE>
<P><LI>Mount the AFS CD-ROM for HP-UX on the local <B>/cdrom</B>
directory. For instructions on mounting CD-ROMs (either locally or
remotely via NFS), see your HP-UX documentation. Then change directory
as indicated.
<PRE>
# <B>cd /cdrom/hp_ux110/root.client</B>
</PRE>
<P><LI>Copy the AFS initialization file to the local directory for initialization
files (by convention, <B>/sbin/init.d</B> on HP-UX
machines). Note the removal of the <B>.rc</B> extension as
you copy the file.
<PRE>
# <B>cp usr/vice/etc/afs.rc /sbin/init.d/afs</B>
</PRE>
<P><LI>Copy the file <B>afs.driver</B> to the local
<B>/usr/conf/master.d</B> directory, changing its name to
<B>afs</B> as you do.
<PRE>
# <B>cp usr/vice/etc/afs.driver /usr/conf/master.d/afs</B>
</PRE>
<P><LI>Copy the AFS kernel module to the local <B>/usr/conf/lib</B>
directory.
<P>If the machine's kernel supports NFS server functionality:
<PRE>
# <B>cp bin/libafs.a /usr/conf/lib</B>
</PRE>
<P>If the machine's kernel does not support NFS server functionality,
change the file's name as you copy it:
<PRE>
# <B>cp bin/libafs.nonfs.a /usr/conf/lib/libafs.a</B>
</PRE>
<P><LI>Incorporate the AFS driver into the kernel, either using the
<B>SAM</B> program or a series of individual commands.
<UL>
<P><LI>To use the <B>SAM</B> program:
<OL TYPE=a>
<P><LI>Invoke the <B>SAM</B> program, specifying the hostname of the local
machine as <VAR>local_hostname</VAR>. The <B>SAM</B> graphical user
interface pops up.
<PRE>
# <B>sam -display</B> <VAR>local_hostname</VAR><B>:0</B>
</PRE>
<P><LI>Choose the <B>Kernel Configuration</B> icon, then the
<B>Drivers</B> icon. From the list of drivers, select
<B>afs</B>.
<P><LI>Open the pull-down <B>Actions</B> menu and choose the <B>Add Driver
to Kernel</B> option.
<P><LI>Open the <B>Actions</B> menu again and choose the <B>Create a New
Kernel</B> option.
<P><LI>Confirm your choices by choosing <B>Yes</B> and <B>OK</B> when
prompted by subsequent pop-up windows. The <B>SAM</B> program
builds the kernel and reboots the system.
<P><LI>Login again as the superuser <B>root</B>.
<PRE>
login: <B>root</B>
Password: <VAR>root_password</VAR>
</PRE>
</OL>
<P><LI>To use individual commands:
<OL TYPE=a>
<P><LI>Edit the file <B>/stand/system</B>, adding an entry for <B>afs</B>
to the <TT>Subsystems</TT> section.
<P><LI>Change to the <B>/stand/build</B> directory and issue the
<B>mk_kernel</B> command to build the kernel.
<PRE>
# <B>cd /stand/build</B>
# <B>mk_kernel</B>
</PRE>
<P><LI>Move the new kernel to the standard location (<B>/stand/vmunix</B>),
reboot the machine to start using it, and login again as the superuser
<B>root</B>.
<PRE>
# <B>mv /stand/build/vmunix_test /stand/vmunix</B>
# <B>cd /</B>
# <B>shutdown -r now</B>
login: <B>root</B>
Password: <VAR>root_password</VAR>
</PRE>
</OL>
</UL>
<A NAME="IDX2735"></A>
<A NAME="IDX2736"></A>
<A NAME="IDX2737"></A>
<A NAME="IDX2738"></A>
<P><LI>Create a directory called <B>/vicep</B><VAR>xx</VAR> for each AFS server
partition you are configuring (there must be at least one). Repeat the
command for each partition.
<PRE>
# <B>mkdir /vicep</B><VAR>xx</VAR>
</PRE>
<P><LI>Use the <B>SAM</B> program to create a file system on each
partition. For instructions, consult the HP-UX documentation.
<P><LI>On some HP-UX systems that use logical volumes, the <B>SAM</B> program
automatically mounts the partitions. If it has not, mount each
partition by issuing either the <B>mount -a</B> command to mount all
partitions at once or the <B>mount</B> command to mount each partition in
turn.
<A NAME="IDX2739"></A>
<A NAME="IDX2740"></A>
<A NAME="IDX2741"></A>
<A NAME="IDX2742"></A>
<P><LI>Create the command configuration file
<B>/sbin/lib/mfsconfig.d/afs</B>. Use a text editor to place
the indicated two lines in it:
<PRE>
format_revision 1
fsck 0 m,P,p,d,f,b:c:y,n,Y,N,q,
</PRE>
<P><LI>Create and change directory to an AFS-specific command directory called
<B>/sbin/fs/afs</B>.
<PRE>
# <B>mkdir /sbin/fs/afs</B>
# <B>cd /sbin/fs/afs</B>
</PRE>
<P><LI>Copy the AFS-modified version of the <B>fsck</B> program (the
<B>vfsck</B> binary) and related files from the distribution directory to
the new AFS-specific command directory.
<PRE>
# <B>cp -p /cdrom/hp_ux110/root.server/etc/* .</B>
</PRE>
<P><LI>Change the <B>vfsck</B> binary's name to <B>fsck</B> and set
the mode bits appropriately on all of the files in the <B>/sbin/fs/afs</B>
directory.
<PRE>
# <B>mv vfsck fsck</B>
# <B>chmod 755 *</B>
</PRE>
<P><LI>Edit the <B>/etc/fstab</B> file, changing the file system type for
each AFS server partition from <TT>hfs</TT> to <TT>afs</TT>. This
ensures that the AFS-modified <B>fsck</B> program runs on the appropriate
partitions.
<P>The sixth line in the following example of an edited file shows an AFS
server partition, <B>/vicepa</B>.
<PRE>
/dev/vg00/lvol1 / hfs defaults 0 1
/dev/vg00/lvol4 /opt hfs defaults 0 2
/dev/vg00/lvol5 /tmp hfs defaults 0 2
/dev/vg00/lvol6 /usr hfs defaults 0 2
/dev/vg00/lvol8 /var hfs defaults 0 2
/dev/vg00/lvol9 /vicepa afs defaults 0 2
/dev/vg00/lvol7 /usr/vice/cache hfs defaults 0 2
</PRE>
<P><LI>If the machine is to remain an AFS client, incorporate AFS into its
authentication system, following the instructions in <A HREF="auqbg005.htm#HDRWQ35">Enabling AFS Login on HP-UX Systems</A>.
<P><LI>Proceed to <A HREF="#HDRWQ108">Starting Server Programs</A>.
</OL>
<P><H4><A NAME="HDRWQ104">Getting Started on IRIX Systems</A></H4>
<P>Begin by incorporating AFS modifications into the
kernel. Either use the <B>ml</B> dynamic loader program, or build a
static kernel. Then configure partitions to house AFS volumes.
AFS supports use of both EFS and XFS partitions for housing AFS
volumes. SGI encourages use of XFS partitions.
<A NAME="IDX2743"></A>
<A NAME="IDX2744"></A>
<P>You do not need to replace the IRIX <B>fsck</B> program, because the
version that SGI distributes handles AFS volumes properly.
<OL TYPE=1>
<A NAME="IDX2745"></A>
<A NAME="IDX2746"></A>
<A NAME="IDX2747"></A>
<P><LI>Prepare for incorporating AFS into the kernel by performing the following
procedures.
<OL TYPE=a>
<P><LI>Mount the AFS CD-ROM for IRIX on the <B>/cdrom</B> directory.
For instructions on mounting CD-ROMs (either locally or remotely via NFS), see
your IRIX documentation. Then change directory as indicated.
<PRE>
# <B>cd /cdrom/sgi_65/root.client</B>
</PRE>
<P><LI>Copy the AFS initialization script to the local directory for
initialization files (by convention, <B>/etc/init.d</B> on IRIX
machines). Note the removal of the <B>.rc</B> extension as
you copy the script.
<PRE>
# <B>cp -p usr/vice/etc/afs.rc /etc/init.d/afs</B>
</PRE>
<P><LI>Issue the <B>uname -m</B> command to determine the machine's CPU
board type. The <B>IP</B><VAR>xx</VAR> value in the output must match
one of the supported CPU board types listed in the <I>IBM AFS Release
Notes</I> for the current version of AFS.
<PRE>
# <B>uname -m</B>
</PRE>
</OL>
<P><LI>Incorporate AFS into the kernel, either using the <B>ml</B> program or
by building AFS modifications into a static kernel.
<UL>
<A NAME="IDX2748"></A>
<P><LI>To use the <B>ml</B> program:
<A NAME="IDX2749"></A>
<A NAME="IDX2750"></A>
<A NAME="IDX2751"></A>
<A NAME="IDX2752"></A>
<A NAME="IDX2753"></A>
<A NAME="IDX2754"></A>
<OL TYPE=a>
<P><LI>Create the local <B>/usr/vice/etc/sgiload</B> directory to house the
AFS kernel library file.
<PRE>
# <B>mkdir /usr/vice/etc/sgiload</B>
</PRE>
<P><LI>Copy the appropriate AFS kernel library file to the
<B>/usr/vice/etc/sgiload</B> directory. The
<B>IP</B><VAR>xx</VAR> portion of the library file name must match the value
previously returned by the <B>uname -m</B> command. Also choose the
file appropriate to whether the machine's kernel supports NFS server
functionality (NFS must be supported for the machine to act as an NFS/AFS
Translator). Single- and multiprocessor machines use the same library
file.
<P>(You can choose to copy all of the kernel library files into the
<B>/usr/vice/etc/sgiload</B> directory, but they require a significant amount
of space.)
<P>If the machine's kernel supports NFS server functionality:
<PRE>
# <B>cp -p usr/vice/etc/sgiload/libafs.IP</B><VAR>xx</VAR><B>.o /usr/vice/etc/sgiload</B>
</PRE>
<P>If the machine's kernel does not support NFS server
functionality:
<PRE>
# <B>cp -p usr/vice/etc/sgiload/libafs.IP</B><VAR>xx</VAR><B>.nonfs.o</B> \
<B>/usr/vice/etc/sgiload</B>
</PRE>
<P><LI>Issue the <B>chkconfig</B> command to activate the <B>afsml</B>
configuration variable.
<PRE>
# <B>/etc/chkconfig -f afsml on</B>
</PRE>
<P>If the machine is to function as an NFS/AFS Translator and the kernel
supports NFS server functionality, activate the <B>afsxnfs</B>
variable.
<PRE>
# <B>/etc/chkconfig -f afsxnfs on</B>
</PRE>
<P><LI>Run the <B>/etc/init.d/afs</B> script to load AFS extensions
into the kernel. The script invokes the <B>ml</B> command,
automatically determining which kernel library file to use based on this
machine's CPU type and the activation state of the <B>afsxnfs</B>
variable.
<P>You can ignore any error messages about the inability to start the BOS
Server or the Cache Manager or AFS client.
<PRE>
# <B>/etc/init.d/afs start</B>
</PRE>
<P><LI>Proceed to Step <A HREF="#LIWQ105">3</A>.
</OL>
<A NAME="IDX2755"></A>
<P><LI>If you prefer to build a kernel, and the machine's hardware and
software configuration exactly matches that of another IRIX machine on which
AFS is already built into the kernel, you can copy the kernel from that machine to
this one. In general, however, it is better to build AFS modifications
into the kernel on each machine according to the following
instructions.
<OL TYPE=a>
<P><LI>Copy the kernel initialization file <B>afs.sm</B> to the local
<B>/var/sysgen/system</B> directory, and the kernel master file
<B>afs</B> to the local <B>/var/sysgen/master.d</B>
directory.
<PRE>
# <B>cp -p bin/afs.sm /var/sysgen/system</B>
# <B>cp -p bin/afs /var/sysgen/master.d</B>
</PRE>
<P><LI>Copy the appropriate AFS kernel library file to the local file
<B>/var/sysgen/boot/afs.a</B>; the <B>IP</B><VAR>xx</VAR>
portion of the library file name must match the value previously returned by
the <B>uname -m</B> command. Also choose the file appropriate to
whether the machine's kernel supports NFS server functionality (NFS must
be supported for the machine to act as an NFS/AFS Translator). Single-
and multiprocessor machines use the same library file.
<P>If the machine's kernel supports NFS server functionality:
<PRE>
# <B>cp -p bin/libafs.IP</B><VAR>xx</VAR><B>.a /var/sysgen/boot/afs.a</B>
</PRE>
<P>If the machine's kernel does not support NFS server
functionality:
<PRE>
# <B>cp -p bin/libafs.IP</B><VAR>xx</VAR><B>.nonfs.a /var/sysgen/boot/afs.a</B>
</PRE>
<P><LI>Issue the <B>chkconfig</B> command to deactivate the <B>afsml</B>
configuration variable.
<PRE>
# <B>/etc/chkconfig -f afsml off</B>
</PRE>
<P>If the machine is to function as an NFS/AFS Translator and the kernel
supports NFS server functionality, activate the <B>afsxnfs</B>
variable.
<PRE>
# <B>/etc/chkconfig -f afsxnfs on</B>
</PRE>
<P><LI>Copy the existing kernel file, <B>/unix</B>, to a safe
location. Compile the new kernel, which is created in the file
<B>/unix.install</B>. It overwrites the existing
<B>/unix</B> file when the machine reboots in the next step.
<PRE>
# <B>cp /unix /unix_noafs</B>
# <B>autoconfig</B>
</PRE>
<P><LI>Reboot the machine to start using the new kernel, and login again as the
superuser <B>root</B>.
<PRE>
# <B>cd /</B>
# <B>shutdown -i6 -g0 -y</B>
login: <B>root</B>
Password: <VAR>root_password</VAR>
</PRE>
</OL>
</UL>
<A NAME="IDX2756"></A>
<A NAME="IDX2757"></A>
<A NAME="IDX2758"></A>
<A NAME="IDX2759"></A>
<P><LI><A NAME="LIWQ105"></A>Create a directory called <B>/vicep</B><VAR>xx</VAR> for each
AFS server partition you are configuring (there must be at least one).
Repeat the command for each partition.
<PRE>
# <B>mkdir /vicep</B><VAR>xx</VAR>
</PRE>
<P><LI>Add a line with the following format to the file systems registry file,
<B>/etc/fstab</B>, for each partition (or logical volume created with the
XLV volume manager) to be mounted on one of the directories created in the
previous step.
<P>For an XFS partition or logical volume:
<PRE>
/dev/dsk/<VAR>disk</VAR> /vicep<VAR>xx</VAR> xfs rw,raw=/dev/rdsk/<VAR>disk</VAR> 0 0
</PRE>
<P>For an EFS partition:
<PRE>
/dev/dsk/<VAR>disk</VAR> /vicep<VAR>xx</VAR> efs rw,raw=/dev/rdsk/<VAR>disk</VAR> 0 0
</PRE>
<P>The following are examples of an entry for each file system type:
<PRE>
/dev/dsk/dks0d2s6 /vicepa xfs rw,raw=/dev/rdsk/dks0d2s6 0 0
/dev/dsk/dks0d3s1 /vicepb efs rw,raw=/dev/rdsk/dks0d3s1 0 0
</PRE>
<P><LI>Create a file system on each partition that is to be mounted on a
<B>/vicep</B><VAR>xx</VAR> directory. The following commands are
probably appropriate, but consult the IRIX documentation for more
information. In both cases, <VAR>raw_device</VAR> is a raw device name
like <B>/dev/rdsk/dks0d0s0</B> for a single disk partition or
<B>/dev/rxlv/xlv0</B> for a logical volume.
<P>For XFS file systems, include the indicated options to configure the
partition or logical volume with inodes large enough to accommodate
AFS-specific information:
<PRE>
# <B>mkfs -t xfs -i size=512 -l size=4000b</B> <VAR>raw_device</VAR>
</PRE>
<P>For EFS file systems:
<PRE>
# <B>mkfs -t efs</B> <VAR>raw_device</VAR>
</PRE>
<P><LI>Mount each partition by issuing either the <B>mount -a</B> command to
mount all partitions at once or the <B>mount</B> command to mount each
partition in turn.
<P><LI><B>(Optional)</B> If you have configured partitions or logical volumes
to use XFS, issue the following command to verify that the inodes are
configured properly (are large enough to accommodate AFS-specific
information). If the configuration is correct, the command returns no
output. Otherwise, it specifies the command to run in order to
configure each partition or logical volume properly.
<PRE>
# <B>/usr/afs/bin/xfs_size_check</B>
</PRE>
<P><LI>If the machine is to remain an AFS client, incorporate AFS into its
authentication system, following the instructions in <A HREF="auqbg005.htm#HDRWQ40">Enabling AFS Login on IRIX Systems</A>.
<P><LI>Proceed to <A HREF="#HDRWQ108">Starting Server Programs</A>.
</OL>
<P><H4><A NAME="HDRWQ106">Getting Started on Linux Systems</A></H4>
<A NAME="IDX2760"></A>
<A NAME="IDX2761"></A>
<P>Begin by running the AFS initialization script to call the
<B>insmod</B> program, which dynamically loads AFS modifications into the
kernel. Then create partitions for storing AFS volumes. You do
not need to replace the Linux <B>fsck</B> program.
<OL TYPE=1>
<A NAME="IDX2762"></A>
<A NAME="IDX2763"></A>
<A NAME="IDX2764"></A>
<A NAME="IDX2765"></A>
<P><LI>Mount the AFS CD-ROM for Linux on the local <B>/cdrom</B>
directory. For instructions on mounting CD-ROMs (either locally or
remotely via NFS), see your Linux documentation. Then change directory
as indicated.
<PRE>
# <B>cd /cdrom/i386_linux22/root.client/usr/vice/etc</B>
</PRE>
<P><LI>Copy the AFS kernel library files to the local
<B>/usr/vice/etc/modload</B> directory. The filenames for the
libraries have the format
<B>libafs-</B><VAR>version</VAR><B>.o</B>, where <VAR>version</VAR>
indicates the kernel build level. The string <B>.mp</B> in
the <VAR>version</VAR> indicates that the file is appropriate for machines
running a multiprocessor kernel.
<PRE>
# <B>cp -rp modload /usr/vice/etc</B>
</PRE>
<P><LI>Copy the AFS initialization script to the local directory for
initialization files (by convention, <B>/etc/rc.d/init.d</B>
on Linux machines). Note the removal of the <B>.rc</B>
extension as you copy the script.
<PRE>
# <B>cp -p afs.rc /etc/rc.d/init.d/afs</B>
</PRE>
<P><LI>Run the AFS initialization script to load AFS extensions into the
kernel. You can ignore any error messages about the inability to start
the BOS Server, the Cache Manager, or the AFS client.
<PRE>
# <B>/etc/rc.d/init.d/afs start</B>
</PRE>
<A NAME="IDX2766"></A>
<A NAME="IDX2767"></A>
<A NAME="IDX2768"></A>
<A NAME="IDX2769"></A>
<P><LI>Create a directory called <B>/vicep</B><VAR>xx</VAR> for each AFS server
partition you are configuring (there must be at least one). Repeat the
command for each partition.
<PRE>
# <B>mkdir /vicep</B><VAR>xx</VAR>
</PRE>
<P><LI>Add a line with the following format to the file systems registry file,
<B>/etc/fstab</B>, for each directory just created. The entry maps
the directory name to the disk partition to be mounted on it.
<PRE>
/dev/<VAR>disk</VAR> /vicep<VAR>xx</VAR> ext2 defaults 0 2
</PRE>
<P>The following is an example for the first partition being
configured.
<PRE>
/dev/sda8 /vicepa ext2 defaults 0 2
</PRE>
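<P>For several partitions, the two preceding steps can be scripted. The
sketch below is illustrative only: the device names <B>sda8</B> and
<B>sdb3</B> are hypothetical, and it writes to scratch paths instead of
<B>/</B> and <B>/etc/fstab</B> so it can be tried safely.

```shell
# Hedged sketch: create /vicep directories and matching fstab lines.
# Device names are hypothetical examples; scratch paths stand in for
# the real / and /etc/fstab.
root=./vicep_demo             # stand-in for /
fstab=./fstab_demo            # stand-in for /etc/fstab
mkdir -p "$root"
: > "$fstab"
letter=a
for disk in sda8 sdb3; do
    mkdir -p "$root/vicep$letter"
    printf '/dev/%s /vicep%s ext2 defaults 0 2\n' "$disk" "$letter" >> "$fstab"
    letter=$(printf '%s' "$letter" | tr 'a-y' 'b-z')   # a -> b -> c ...
done
cat "$fstab"
```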
<P><LI>Create a file system on each partition that is to be mounted at a
<B>/vicep</B><VAR>xx</VAR> directory. The following command is
probably appropriate, but consult the Linux documentation for more
information.
<PRE>
   # <B>mkfs -v /dev/</B><VAR>disk</VAR>
</PRE>
<P><LI>Mount each partition by issuing either the <B>mount -a</B> command to
mount all partitions at once or the <B>mount</B> command to mount each
partition in turn.
<P><LI>If the machine is to remain an AFS client, incorporate AFS into its
authentication system, following the instructions in <A HREF="auqbg005.htm#HDRWQ44">Enabling AFS Login on Linux Systems</A>.
<P><LI>Proceed to <A HREF="#HDRWQ108">Starting Server Programs</A>.
</OL>
<P><H4><A NAME="HDRWQ107">Getting Started on Solaris Systems</A></H4>
<P>Begin by running the AFS initialization script to call the
<B>modload</B> program, which dynamically loads AFS modifications into the
kernel. Then configure partitions and replace the Solaris
<B>fsck</B> program with a version that correctly handles AFS
volumes.
<OL TYPE=1>
<A NAME="IDX2770"></A>
<A NAME="IDX2771"></A>
<A NAME="IDX2772"></A>
<A NAME="IDX2773"></A>
<P><LI>Mount the AFS CD-ROM for Solaris on the <B>/cdrom</B>
directory. For instructions on mounting CD-ROMs (either locally or
remotely via NFS), see your Solaris documentation. Then change
directory as indicated.
<PRE>
# <B>cd /cdrom/sun4x_56/root.client/usr/vice/etc</B>
</PRE>
<P><LI>Copy the AFS initialization script to the local directory for
initialization files (by convention, <B>/etc/init.d</B> on Solaris
machines). Note the removal of the <B>.rc</B> extension as
you copy the script.
<PRE>
# <B>cp -p afs.rc /etc/init.d/afs</B>
</PRE>
<P><LI>Copy the appropriate AFS kernel library file to the local file
<B>/kernel/fs/afs</B>.
<P>If the machine is running Solaris 2.6 or the 32-bit version of
Solaris 7, its kernel supports NFS server functionality, and the
<B>nfsd</B> process is running:
<PRE>
# <B>cp -p modload/libafs.o /kernel/fs/afs</B>
</PRE>
<P>If the machine is running Solaris 2.6 or the 32-bit version of
Solaris 7, and its kernel does not support NFS server functionality or the
<B>nfsd</B> process is not running:
<PRE>
# <B>cp -p modload/libafs.nonfs.o /kernel/fs/afs</B>
</PRE>
<P>If the machine is running the 64-bit version of Solaris 7, its kernel
supports NFS server functionality, and the <B>nfsd</B> process is
running:
<PRE>
# <B>cp -p modload/libafs64.o /kernel/fs/sparcv9/afs</B>
</PRE>
<P>If the machine is running the 64-bit version of Solaris 7, and its
kernel does not support NFS server functionality or the <B>nfsd</B>
process is not running:
<PRE>
# <B>cp -p modload/libafs64.nonfs.o /kernel/fs/sparcv9/afs</B>
</PRE>
<P><LI>Run the AFS initialization script to load AFS modifications into the
kernel. You can ignore any error messages about the inability to start
the BOS Server, the Cache Manager, or the AFS client.
<PRE>
# <B>/etc/init.d/afs start</B>
</PRE>
<P>If an entry called <TT>afs</TT> does not already exist in the local
<B>/etc/name_to_sysnum</B> file, the script automatically creates it and
reboots the machine to start using the new version of the file. If this
happens, log in again as the superuser <B>root</B> after the reboot and
run the initialization script again. This time the required entry
exists in the <B>/etc/name_to_sysnum</B> file, and the <B>modload</B>
program runs.
<PRE>
login: <B>root</B>
Password: <VAR>root_password</VAR>
# <B>/etc/init.d/afs start</B>
</PRE>
<A NAME="IDX2774"></A>
<A NAME="IDX2775"></A>
<A NAME="IDX2776"></A>
<A NAME="IDX2777"></A>
<P><LI>Create the <B>/usr/lib/fs/afs</B> directory to house the AFS-modified
<B>fsck</B> program and related files.
<PRE>
# <B>mkdir /usr/lib/fs/afs</B>
# <B>cd /usr/lib/fs/afs</B>
</PRE>
<P><LI>Copy the <B>vfsck</B> binary to the newly created directory, changing
the name as you do so.
<PRE>
# <B>cp /cdrom/sun4x_56/root.server/etc/vfsck fsck</B>
</PRE>
<P><LI>Working in the <B>/usr/lib/fs/afs</B> directory, create the following
links to Solaris libraries:
<PRE>
# <B>ln -s /usr/lib/fs/ufs/clri</B>
# <B>ln -s /usr/lib/fs/ufs/df</B>
# <B>ln -s /usr/lib/fs/ufs/edquota</B>
# <B>ln -s /usr/lib/fs/ufs/ff</B>
# <B>ln -s /usr/lib/fs/ufs/fsdb</B>
# <B>ln -s /usr/lib/fs/ufs/fsirand</B>
# <B>ln -s /usr/lib/fs/ufs/fstyp</B>
# <B>ln -s /usr/lib/fs/ufs/labelit</B>
# <B>ln -s /usr/lib/fs/ufs/lockfs</B>
# <B>ln -s /usr/lib/fs/ufs/mkfs</B>
# <B>ln -s /usr/lib/fs/ufs/mount</B>
# <B>ln -s /usr/lib/fs/ufs/ncheck</B>
# <B>ln -s /usr/lib/fs/ufs/newfs</B>
# <B>ln -s /usr/lib/fs/ufs/quot</B>
# <B>ln -s /usr/lib/fs/ufs/quota</B>
# <B>ln -s /usr/lib/fs/ufs/quotaoff</B>
# <B>ln -s /usr/lib/fs/ufs/quotaon</B>
# <B>ln -s /usr/lib/fs/ufs/repquota</B>
# <B>ln -s /usr/lib/fs/ufs/tunefs</B>
# <B>ln -s /usr/lib/fs/ufs/ufsdump</B>
# <B>ln -s /usr/lib/fs/ufs/ufsrestore</B>
# <B>ln -s /usr/lib/fs/ufs/volcopy</B>
</PRE>
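<P>Because every link points into the same <B>/usr/lib/fs/ufs</B>
directory, the twenty-two commands above can also be issued as a loop.
This hedged sketch builds the identical set of links in a scratch
directory so it can be tried safely; on a real machine you would run the
loop while working in <B>/usr/lib/fs/afs</B> instead.

```shell
# Hedged sketch: create the same symbolic links with one loop instead
# of twenty-two commands. A scratch directory stands in for the real
# /usr/lib/fs/afs directory.
dest=./afs_fs_demo            # stand-in for /usr/lib/fs/afs
mkdir -p "$dest"
cd "$dest"
for tool in clri df edquota ff fsdb fsirand fstyp labelit lockfs mkfs \
            mount ncheck newfs quot quota quotaoff quotaon repquota \
            tunefs ufsdump ufsrestore volcopy; do
    ln -s "/usr/lib/fs/ufs/$tool"
done
ls | wc -l                    # expect 22 links
cd ..
```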
<P><LI>Append the following line to the end of the file
<B>/etc/dfs/fstypes</B>.
<PRE>
afs AFS Utilities
</PRE>
<P><LI>Edit the <B>/sbin/mountall</B> file, making two changes.
<UL>
<P><LI>Add an entry for AFS to the <TT>case</TT> statement for option 2, so
that it reads as follows:
<PRE>
case "$2" in
ufs) foptions="-o p"
;;
afs) foptions="-o p"
;;
s5) foptions="-y -t /var/tmp/tmp$$ -D"
;;
*) foptions="-y"
;;
</PRE>
<P><LI>Edit the file so that all AFS and UFS partitions are checked in
parallel. Replace the following section of code:
<PRE>
# For fsck purposes, we make a distinction between ufs and
# other file systems
#
if [ "$fstype" = "ufs" ]; then
ufs_fscklist="$ufs_fscklist $fsckdev"
saveentry $fstype "$OPTIONS" $special $mountp
continue
fi
</PRE>
<P>with the following section of code:
<PRE>
# For fsck purposes, we make a distinction between ufs/afs
# and other file systems.
#
if [ "$fstype" = "ufs" -o "$fstype" = "afs" ]; then
ufs_fscklist="$ufs_fscklist $fsckdev"
saveentry $fstype "$OPTIONS" $special $mountp
continue
fi
</PRE>
</UL>
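<P>If you prefer not to edit <B>/sbin/mountall</B> by hand, the second
change can be made with <B>sed</B>. The sketch below operates on a
here-document holding the stock code fragment rather than the real file;
on a real machine, run the same <B>sed</B> command against a copy of
<B>/sbin/mountall</B> and inspect the result before installing it.

```shell
# Hedged sketch: apply the ufs -> ufs/afs fsck change with sed to a
# COPY of the code, never directly to the live /sbin/mountall file.
# A here-document holding the stock fragment stands in for the file.
cat > mountall.orig <<'EOF'
if [ "$fstype" = "ufs" ]; then
        ufs_fscklist="$ufs_fscklist $fsckdev"
        saveentry $fstype "$OPTIONS" $special $mountp
        continue
fi
EOF
sed 's/"$fstype" = "ufs" ]/"$fstype" = "ufs" -o "$fstype" = "afs" ]/' \
    mountall.orig > mountall.new
cat mountall.new
```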
<A NAME="IDX2778"></A>
<A NAME="IDX2779"></A>
<A NAME="IDX2780"></A>
<A NAME="IDX2781"></A>
<P><LI>Create a directory called <B>/vicep</B><VAR>xx</VAR> for each AFS server
partition you are configuring (there must be at least one). Repeat the
command for each partition.
<PRE>
# <B>mkdir /vicep</B><VAR>xx</VAR>
</PRE>
<P><LI>Add a line with the following format to the file systems registry file,
<B>/etc/vfstab</B>, for each partition to be mounted on a directory
created in the previous step. Note the value <TT>afs</TT> in the
fourth field, which tells Solaris to use the AFS-modified <B>fsck</B>
program on this partition.
<PRE>
/dev/dsk/<VAR>disk</VAR> /dev/rdsk/<VAR>disk</VAR> /vicep<VAR>xx</VAR> afs <VAR>boot_order</VAR> yes
</PRE>
<P>The following is an example for the first partition being
configured.
<PRE>
/dev/dsk/c0t6d0s1 /dev/rdsk/c0t6d0s1 /vicepa afs 3 yes
</PRE>
<P><LI>Create a file system on each partition that is to be mounted at a
<B>/vicep</B><VAR>xx</VAR> directory. The following command is
probably appropriate, but consult the Solaris documentation for more
information.
<PRE>
# <B>newfs -v /dev/rdsk/</B><VAR>disk</VAR>
</PRE>
<P><LI>Issue the <B>mountall</B> command to mount all partitions at
once.
<P><LI>If the machine is to remain an AFS client, incorporate AFS into its
authentication system, following the instructions in <A HREF="auqbg005.htm#HDRWQ49">Enabling AFS Login and Editing the File Systems Clean-up Script on Solaris Systems</A>.
<P><LI>Proceed to <A HREF="#HDRWQ108">Starting Server Programs</A>.
</OL>
<A NAME="IDX2782"></A>
<A NAME="IDX2783"></A>
<P><H3><A NAME="HDRWQ108" HREF="auqbg002.htm#ToC_106">Starting Server Programs</A></H3>
<P>In this section you initialize the BOS Server, the Update
Server, the controller process for NTPD, and the <B>fs</B> process.
You begin by copying the necessary server files to the local disk.
<OL TYPE=1>
<A NAME="IDX2784"></A>
<A NAME="IDX2785"></A>
<A NAME="IDX2786"></A>
<P><LI>Copy file server binaries to the local <B>/usr/afs/bin</B>
directory.
<UL>
<P><LI>On a machine of an existing system type, you can either load files from
the AFS CD-ROM or use a remote file transfer protocol to copy files from an
existing server machine of the same system type. To load from the
CD-ROM, see the instructions just following for a machine of a new system
type. If using a remote file transfer protocol, copy the complete
contents of the existing server machine's <B>/usr/afs/bin</B>
directory.
<P><LI>On a machine of a new system type, you must use the following instructions
to copy files from the AFS CD-ROM.
<OL TYPE=a>
<P><LI>On the local <B>/cdrom</B> directory, mount the AFS CD-ROM for this
machine's system type, if it is not already. For instructions on
mounting CD-ROMs (either locally or remotely via NFS), consult the operating
system documentation.
<P><LI>Copy files from the CD-ROM to the local <B>/usr/afs</B>
directory.
<PRE>
# <B>cd /cdrom/</B><VAR>sysname</VAR><B>/root.server/usr/afs</B>
# <B>cp -rp * /usr/afs</B>
</PRE>
</OL>
</UL>
<A NAME="IDX2787"></A>
<A NAME="IDX2788"></A>
<A NAME="IDX2789"></A>
<A NAME="IDX2790"></A>
<A NAME="IDX2791"></A>
<A NAME="IDX2792"></A>
<A NAME="IDX2793"></A>
<A NAME="IDX2794"></A>
<A NAME="IDX2795"></A>
<A NAME="IDX2796"></A>
<A NAME="IDX2797"></A>
<A NAME="IDX2798"></A>
<P><LI>Copy the contents of the <B>/usr/afs/etc</B> directory from an
existing file server machine, using a remote file transfer protocol such as
<B>ftp</B> or NFS. If you use a system control machine, it is best
to copy the contents of its <B>/usr/afs/etc</B> directory. If you
choose not to run a system control machine, copy the directory's contents
from any existing file server machine.
<A NAME="IDX2799"></A>
<A NAME="IDX2800"></A>
<A NAME="IDX2801"></A>
<A NAME="IDX2802"></A>
<A NAME="IDX2803"></A>
<A NAME="IDX2804"></A>
<P><LI>Change to the <B>/usr/afs/bin</B> directory and start the BOS Server
(<B>bosserver</B> process). Include the <B>-noauth</B> flag to
prevent the AFS processes from performing authorization checking. This
is a grave compromise of security; finish the remaining instructions in
this section in an uninterrupted pass.
<PRE>
# <B>cd /usr/afs/bin</B>
# <B>./bosserver -noauth &amp;</B>
</PRE>
<A NAME="IDX2805"></A>
<A NAME="IDX2806"></A>
<A NAME="IDX2807"></A>
<A NAME="IDX2808"></A>
<A NAME="IDX2809"></A>
<A NAME="IDX2810"></A>
<P><LI><A NAME="LIWQ109"></A>If you run a system control machine, create the
<B>upclientetc</B> process as an instance of the client portion of the
Update Server. It accepts updates of the common configuration files
stored in the system control machine's <B>/usr/afs/etc</B> directory
from the <B>upserver</B> process (server portion of the Update Server)
running on that machine. The cell's first file server machine was
installed as the system control machine in <A HREF="auqbg005.htm#HDRWQ61">Starting the Server Portion of the Update Server</A>. (If you do not run a system control machine, you
must update the contents of the <B>/usr/afs/etc</B> directory on each file
server machine, using the appropriate <B>bos</B> commands.)
<P>By default, the Update Server performs updates every 300 seconds (five
minutes). Use the <B>-t</B> argument to specify a different number
of seconds. For the <VAR>machine&nbsp;name</VAR> argument, substitute the
name of the machine you are installing. The command appears on multiple
lines here only for legibility reasons.
<PRE>
# <B>./bos create</B> &lt;<VAR>machine&nbsp;name</VAR>> <B>upclientetc simple</B> \
<B>"/usr/afs/bin/upclient</B> &lt;<VAR>system&nbsp;control&nbsp;machine</VAR>> \
[<B>-t</B> &lt;<VAR>time</VAR>>] <B>/usr/afs/etc" -cell</B> &lt;<VAR>cell&nbsp;name</VAR>> <B>-noauth</B>
</PRE>
<A NAME="IDX2811"></A>
<A NAME="IDX2812"></A>
<A NAME="IDX2813"></A>
<P><LI><A NAME="LIWQ110"></A>Create an instance of the Update Server to handle distribution
of the file server binaries stored in the <B>/usr/afs/bin</B>
directory.
<UL>
<P><LI>If this is the first file server machine of its AFS system type, create
the <B>upserver</B> process as an instance of the server portion of the
Update Server. It distributes its copy of the file server process
binaries to the other file server machines of this system type that you
install in future. Creating this process makes this machine the binary
distribution machine for its type.
<PRE>
# <B>./bos create </B> &lt;<VAR>machine&nbsp;name</VAR>> <B>upserver simple</B> \
<B>"/usr/afs/bin/upserver -clear /usr/afs/bin" </B> \
<B>-cell</B> &lt;<VAR>cell&nbsp;name</VAR>> <B>-noauth</B>
</PRE>
<P><LI>If this machine is of an existing system type, create the
<B>upclientbin</B> process as an instance of the client portion of the
Update Server. It accepts updates of the AFS binaries from the
<B>upserver</B> process running on the binary distribution machine for its
system type. For distribution to work properly, the <B>upserver</B>
process must already be running on that machine.
<P>Use the <B>-clear</B> argument to specify that the
<B>upclientbin</B> process requests unencrypted transfer of the binaries
in the <B>/usr/afs/bin</B> directory. Binaries are not sensitive
and encrypting them is time-consuming.
<P>By default, the Update Server performs updates every 300 seconds (five
minutes). Use the <B>-t</B> argument to specify a different number
of seconds.
<PRE>
# <B>./bos create</B> &lt;<VAR>machine&nbsp;name</VAR>> <B>upclientbin simple</B> \
<B>"/usr/afs/bin/upclient</B> &lt;<VAR>binary&nbsp;distribution&nbsp;machine</VAR>> \
[<B>-t</B> &lt;<VAR>time</VAR>>] <B>-clear /usr/afs/bin" -cell</B> &lt;<VAR>cell&nbsp;name</VAR>> <B>-noauth</B>
</PRE>
</UL>
<A NAME="IDX2814"></A>
<A NAME="IDX2815"></A>
<A NAME="IDX2816"></A>
<A NAME="IDX2817"></A>
<P><LI>Start the <B>runntp</B> process, which configures the Network Time
Protocol Daemon (NTPD) to use a database server machine, chosen randomly
from the local <B>/usr/afs/etc/CellServDB</B> file, as its time
source. In the standard configuration, the first database server
machine installed in your cell refers to a time source outside the cell, and
serves as the basis for clock synchronization on all server machines.
<PRE>
# <B>./bos create</B> &lt;<VAR>machine&nbsp;name</VAR>> <B>runntp simple</B> \
<B>/usr/afs/bin/runntp -cell</B> &lt;<VAR>cell&nbsp;name</VAR>> <B>-noauth</B>
</PRE>
<TABLE><TR><TD ALIGN="LEFT" VALIGN="TOP"><B>Note:</B></TD><TD ALIGN="LEFT" VALIGN="TOP">Do not run the <B>runntp</B> process if NTPD or another time
synchronization protocol is already running on the machine. Some
versions of some operating systems run a time synchronization program by
default, as detailed in the <I>IBM AFS Release Notes</I>.
<P>Attempting to run multiple instances of the NTPD causes an error.
Running NTPD together with another time synchronization protocol is
unnecessary and can cause instability in the clock setting.
</TD></TR></TABLE>
<A NAME="IDX2818"></A>
<A NAME="IDX2819"></A>
<A NAME="IDX2820"></A>
<A NAME="IDX2821"></A>
<A NAME="IDX2822"></A>
<A NAME="IDX2823"></A>
<A NAME="IDX2824"></A>
<A NAME="IDX2825"></A>
<A NAME="IDX2826"></A>
<A NAME="IDX2827"></A>
<P><LI>Start the <B>fs</B> process, which binds together the File Server,
Volume Server, and Salvager.
<PRE>
# <B>./bos create </B> &lt;<VAR>machine&nbsp;name</VAR>> <B>fs fs </B> \
<B>/usr/afs/bin/fileserver /usr/afs/bin/volserver</B> \
<B>/usr/afs/bin/salvager -cell</B> &lt;<VAR>cell&nbsp;name</VAR>> <B>-noauth</B>
</PRE>
</OL>
<A NAME="IDX2828"></A>
<A NAME="IDX2829"></A>
<P><H3><A NAME="HDRWQ111" HREF="auqbg002.htm#ToC_107">Installing Client Functionality</A></H3>
<P>If you want this machine to be a client as well as a server,
follow the instructions in this section. Otherwise, skip to <A HREF="#HDRWQ112">Completing the Installation</A>.
<P>Begin by loading the necessary client files to the local disk. Then
create the necessary configuration files and start the Cache Manager.
For more detailed explanation of the procedures involved, see the
corresponding instructions in <A HREF="auqbg005.htm#HDRWQ17">Installing the First AFS Machine</A> (in the sections following <A HREF="auqbg005.htm#HDRWQ63">Overview: Installing Client Functionality</A>).
<P>If another AFS machine of this machine's system type exists, the AFS
binaries are probably already accessible in your AFS filespace (the
conventional location is
<B>/afs/</B><VAR>cellname</VAR><B>/</B><VAR>sysname</VAR><B>/usr/afsws</B>).
If not, or if this is the first AFS machine of its type, copy the AFS binaries
for this system type into an AFS volume by following the instructions in <A HREF="auqbg005.htm#HDRWQ83">Storing AFS Binaries in AFS</A>. Because this machine is not yet an AFS client, you
must perform the procedure on an existing AFS machine. However,
remember to perform the final step (linking the local directory
<B>/usr/afsws</B> to the appropriate location in the AFS file tree) on
this machine itself. If you also want to create AFS volumes to house
UNIX system binaries for the new system type, see <A HREF="auqbg005.htm#HDRWQ88">Storing System Binaries in AFS</A>.
<A NAME="IDX2830"></A>
<A NAME="IDX2831"></A>
<A NAME="IDX2832"></A>
<OL TYPE=1>
<P><LI>Copy client binaries and files to the local disk.
<UL>
<P><LI>On a machine of an existing system type, you can either load files from
the AFS CD-ROM or use a remote file transfer protocol to copy files from an
existing server machine of the same system type. To load from the
CD-ROM, see the instructions just following for a machine of a new system
type. If using a remote file transfer protocol, copy the complete
contents of the existing client machine's <B>/usr/vice/etc</B>
directory.
<P><LI>On a machine of a new system type, you must use the following instructions
to copy files from the AFS CD-ROM.
<OL TYPE=a>
<P><LI>On the local <B>/cdrom</B> directory, mount the AFS CD-ROM for this
machine's system type, if it is not already. For instructions on
mounting CD-ROMs (either locally or remotely via NFS), consult the operating
system documentation.
<P><LI>Copy files to the local <B>/usr/vice/etc</B> directory.
<P>This step places a copy of the AFS initialization script (and related
files, if applicable) into the <B>/usr/vice/etc</B> directory. In
the preceding instructions for incorporating AFS into the kernel, you copied
the script directly to the operating system's conventional location for
initialization files. When you incorporate AFS into the machine's
startup sequence in a later step, you can choose to link the two files.
<P>On some system types that use a dynamic kernel loader program, you
previously copied AFS library files into a subdirectory of the
<B>/usr/vice/etc</B> directory. On other system types, you copied
the appropriate AFS library file directly to the directory where the operating
system accesses it. The following commands do not copy or recopy the
AFS library files into the <B>/usr/vice/etc</B> directory, because on some
system types the library files consume a large amount of space. If you
want to copy them, add the <B>-r</B> flag to the first <B>cp</B>
command and skip the second <B>cp</B> command.
<PRE>
# <B>cd /cdrom/</B><VAR>sysname</VAR><B>/root.client/usr/vice/etc</B>
# <B>cp -p * /usr/vice/etc</B>
# <B>cp -rp C /usr/vice/etc</B>
</PRE>
</OL>
</UL>
<A NAME="IDX2833"></A>
<A NAME="IDX2834"></A>
<A NAME="IDX2835"></A>
<A NAME="IDX2836"></A>
<A NAME="IDX2837"></A>
<A NAME="IDX2838"></A>
<P><LI>Change to the <B>/usr/vice/etc</B> directory and create the
<B>ThisCell</B> file as a copy of the <B>/usr/afs/etc/ThisCell</B>
file. You must first remove the symbolic link to the
<B>/usr/afs/etc/ThisCell</B> file that the BOS Server created
automatically in <A HREF="#HDRWQ108">Starting Server Programs</A>.
<PRE>
# <B>cd /usr/vice/etc</B>
# <B>rm ThisCell</B>
# <B>cp /usr/afs/etc/ThisCell ThisCell</B>
</PRE>
<P><LI>Remove the symbolic link to the <B>/usr/afs/etc/CellServDB</B>
file.
<PRE>
# <B>rm CellServDB</B>
</PRE>
<A NAME="IDX2839"></A>
<A NAME="IDX2840"></A>
<A NAME="IDX2841"></A>
<P><LI>Create the <B>/usr/vice/etc/CellServDB</B> file. Use a network
file transfer program such as <B>ftp</B> or NFS to copy it from one of the
following sources, which are listed in decreasing order of preference:
<UL>
<P><LI>Your cell's central <B>CellServDB</B> source file (the
conventional location is
<B>/afs/</B><VAR>cellname</VAR><B>/common/etc/CellServDB</B>)
<P><LI>The global <B>CellServDB</B> file maintained by the AFS Product
Support group
<P><LI>An existing client machine in your cell
<P><LI>The <B>CellServDB.sample</B> file included in the
<VAR>sysname</VAR><B>/root.client/usr/vice/etc</B> directory of each
AFS CD-ROM; add an entry for the local cell by following the instructions
in <A HREF="auqbg005.htm#HDRWQ66">Creating the Client CellServDB File</A>
</UL>
<A NAME="IDX2842"></A>
<A NAME="IDX2843"></A>
<A NAME="IDX2844"></A>
<A NAME="IDX2845"></A>
<P><LI>Create the <B>cacheinfo</B> file for either a disk cache or a memory
cache. For a discussion of the appropriate values to record in the
file, see <A HREF="auqbg005.htm#HDRWQ67">Configuring the Cache</A>.
<P>To configure a disk cache, issue the following commands. If you are
devoting a partition exclusively to caching, as recommended, you must also
configure it, make a file system on it, and mount it at the directory created
in this step.
<PRE>
# <B>mkdir /usr/vice/cache</B>
# <B>echo "/afs:/usr/vice/cache:</B><VAR>#blocks</VAR><B>" > cacheinfo</B>
</PRE>
<P>To configure a memory cache:
<PRE>
# <B>echo "/afs:/usr/vice/cache:</B><VAR>#blocks</VAR><B>" > cacheinfo</B>
</PRE>
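<P>In either case, the <B>cacheinfo</B> file holds three colon-separated
fields: the AFS mount directory, the cache directory, and the cache size
in kilobyte blocks. The following hedged sketch writes a scratch copy
with a hypothetical 50000-block cache and reads the fields back to
confirm the format.

```shell
# Hedged sketch: write a cacheinfo file (scratch copy, hypothetical
# 50000-block cache size) and check its three colon-separated fields:
# AFS mount point : cache directory : cache size in kilobyte blocks.
echo "/afs:/usr/vice/cache:50000" > cacheinfo.demo
IFS=: read mountdir cachedir blocks < cacheinfo.demo
echo "mount=$mountdir cache=$cachedir blocks=$blocks"
```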
<A NAME="IDX2846"></A>
<A NAME="IDX2847"></A>
<A NAME="IDX2848"></A>
<A NAME="IDX2849"></A>
<A NAME="IDX2850"></A>
<A NAME="IDX2851"></A>
<P><LI>Create the local directory on which to mount the AFS filespace, by
convention <B>/afs</B>. If the directory already exists, verify
that it is empty.
<PRE>
# <B>mkdir /afs</B>
</PRE>
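<P>The following sketch shows one way to perform this check, using a
scratch path in place of <B>/afs</B> so it can be tried without touching
the root directory.

```shell
# Hedged sketch: create the mount point only if it is missing and
# confirm that it is empty; a scratch path stands in for /afs.
afsdir=./afs_mount_demo       # stand-in for /afs
mkdir -p "$afsdir"
if [ -n "$(ls -A "$afsdir")" ]; then
    echo "$afsdir is not empty -- move its contents aside first"
else
    echo "$afsdir is empty and ready"
fi
```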
<P><LI>On AIX systems, add the following line to the <B>/etc/vfs</B>
file. It enables AIX to unmount AFS correctly during shutdown.
<PRE>
afs 4 none none
</PRE>
<P><LI>On Linux systems, copy the <B>afsd</B> options file from the
<B>/usr/vice/etc</B> directory to the <B>/etc/sysconfig</B> directory,
removing the <B>.conf</B> extension as you do so.
<PRE>
# <B>cp /usr/vice/etc/afs.conf /etc/sysconfig/afs</B>
</PRE>
<P><LI>Edit the machine's AFS initialization script or <B>afsd</B>
options file to set appropriate values for <B>afsd</B> command
parameters. The script resides in the indicated location on each system
type:
<UL>
<P><LI>On AIX systems, <B>/etc/rc.afs</B>
<P><LI>On Digital UNIX systems, <B>/sbin/init.d/afs</B>
<P><LI>On HP-UX systems, <B>/sbin/init.d/afs</B>
<P><LI>On IRIX systems, <B>/etc/init.d/afs</B>
<P><LI>On Linux systems, <B>/etc/sysconfig/afs</B> (the <B>afsd</B>
options file)
<P><LI>On Solaris systems, <B>/etc/init.d/afs</B>
</UL>
<P>Use one of the methods described in <A HREF="auqbg005.htm#HDRWQ70">Configuring the Cache Manager</A> to add the following flags to the <B>afsd</B> command
line. If you intend for the machine to remain an AFS client, also set
any performance-related arguments you wish.
<UL>
<P><LI>Add the <B>-nosettime</B> flag, because this is a file server machine
that is also a client.
<P><LI>Add the <B>-memcache</B> flag if the machine is to use a memory
cache.
<P><LI>Add the <B>-verbose</B> flag to display a trace of the Cache
Manager's initialization on the standard output stream.
</UL>
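<P>On Linux, for example, such flags are typically collected into a
variable in the <B>afsd</B> options file. The fragment below is a
hypothetical illustration for a machine using a memory cache; the
variable name <TT>OPTIONS</TT> and the exact flag set are assumptions to
be checked against your own options file.

```shell
# Hypothetical afsd options line (Linux /etc/sysconfig/afs); the
# variable name OPTIONS is an assumption -- confirm it in your file.
# -nosettime : file server machine that is also a client
# -memcache  : use a memory cache instead of a disk cache
# -verbose   : trace Cache Manager initialization on the console
OPTIONS="-nosettime -memcache -verbose"
```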
<P><LI>If appropriate, follow the instructions in <A HREF="auqbg005.htm#HDRWQ83">Storing AFS Binaries in AFS</A> to copy the AFS binaries for this system type into an AFS
volume. See the introduction to this section for further
discussion.
</OL>
<P><H3><A NAME="HDRWQ112" HREF="auqbg002.htm#ToC_108">Completing the Installation</A></H3>
<P>At this point you run the machine's AFS initialization
script to verify that it correctly loads AFS modifications into the kernel and
starts the BOS Server, which starts the other server processes. If you
have installed client files, the script also starts the Cache Manager.
If the script works correctly, perform the steps that incorporate it into the
machine's startup and shutdown sequence. If there are problems
during the initialization, attempt to resolve them. The AFS Product
Support group can provide assistance if necessary.
<P>If the machine is configured as a client using a disk cache, it can take a
while for the <B>afsd</B> program to create all of the
<B>V</B><VAR>n</VAR> files in the cache directory. Messages on the
console trace the initialization process.
<OL TYPE=1>
<P><LI>Issue the <B>bos shutdown</B> command to shut down the AFS server
processes other than the BOS Server. Include the <B>-wait</B> flag
to delay return of the command shell prompt until all processes shut down
completely.
<PRE>
# <B>/usr/afs/bin/bos shutdown</B> &lt;<VAR>machine&nbsp;name</VAR>> <B>-wait</B>
</PRE>
<P><LI>Issue the <B>ps</B> command to learn the BOS Server's process ID
number (PID), and then the <B>kill</B> command to stop the
<B>bosserver</B> process.
<PRE>
# <B>ps</B> <VAR>appropriate_ps_options</VAR> <B>| grep bosserver</B>
# <B>kill</B> <VAR>bosserver_PID</VAR>
</PRE>
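<P>The pair of commands can be combined in a script. The following
hedged sketch demonstrates the same look-up-then-kill sequence on a
disposable <B>sleep</B> process standing in for <B>bosserver</B>, so it
is safe to try anywhere.

```shell
# Hedged sketch of the ps-then-kill sequence, demonstrated on a
# disposable "sleep" process standing in for bosserver.
sleep 60 &
target=$!                     # PID we expect ps to report
pid=$(ps -e -o pid= -o comm= | awk -v want="$target" '$1 == want {print $1}')
kill "$pid"
wait "$target" 2>/dev/null || true   # reap it; status reflects the signal
if kill -0 "$target" 2>/dev/null; then echo "still running"; else echo "stopped"; fi
```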
<A NAME="IDX2852"></A>
<A NAME="IDX2853"></A>
<A NAME="IDX2854"></A>
<A NAME="IDX2855"></A>
<A NAME="IDX2856"></A>
<A NAME="IDX2857"></A>
<P><LI>Run the AFS initialization script by issuing the appropriate commands for
this system type.
<P><B>On AIX systems:</B>
<OL TYPE=a>
<P><LI>Reboot the machine and log in again as the local superuser
<B>root</B>.
<PRE>
# <B>cd /</B>
# <B>shutdown -r now</B>
login: <B>root</B>
Password: <VAR>root_password</VAR>
</PRE>
<P><LI>Run the AFS initialization script.
<PRE>
# <B>/etc/rc.afs</B>
</PRE>
<P><LI>Edit the AIX initialization file, <B>/etc/inittab</B>, adding the
following line to invoke the AFS initialization script. Place it just
after the line that starts NFS daemons.
<PRE>
rcafs:2:wait:/etc/rc.afs > /dev/console 2>&amp;1 # Start AFS services
</PRE>
<P><LI><B>(Optional)</B> There are now copies of the AFS initialization file
in both the <B>/usr/vice/etc</B> and <B>/etc</B> directories.
If you want to avoid potential confusion by guaranteeing that they are always
the same, create a link between them. You can always retrieve the
original script from the AFS CD-ROM if necessary.
<PRE>
# <B>cd /usr/vice/etc</B>
# <B>rm rc.afs</B>
# <B>ln -s /etc/rc.afs</B>
</PRE>
<P><LI>Proceed to Step <A HREF="#LIWQ113">4</A>.
</OL>
<A NAME="IDX2858"></A>
<P><B>On Digital UNIX systems:</B>
<OL TYPE=a>
<P><LI>Run the AFS initialization script.
<PRE>
# <B>/sbin/init.d/afs start</B>
</PRE>
<P><LI>Change to the <B>/sbin/init.d</B> directory and issue the
<B>ln -s</B> command to create symbolic links that incorporate the AFS
initialization script into the Digital UNIX startup and shutdown
sequence.
<PRE>
# <B>cd /sbin/init.d</B>
# <B>ln -s ../init.d/afs /sbin/rc3.d/S67afs</B>
# <B>ln -s ../init.d/afs /sbin/rc0.d/K66afs</B>
</PRE>
<P><LI><B>(Optional)</B> There are now copies of the AFS initialization file
in both the <B>/usr/vice/etc</B> and <B>/sbin/init.d</B>
directories. If you want to avoid potential confusion by guaranteeing
that they are always the same, create a link between them. You can
always retrieve the original script from the AFS CD-ROM if necessary.
<PRE>
# <B>cd /usr/vice/etc</B>
# <B>rm afs.rc</B>
# <B>ln -s /sbin/init.d/afs afs.rc</B>
</PRE>
<P><LI>Proceed to Step <A HREF="#LIWQ113">4</A>.
</OL>
<A NAME="IDX2859"></A>
<P><B>On HP-UX systems:</B>
<OL TYPE=a>
<P><LI>Run the AFS initialization script.
<PRE>
# <B>/sbin/init.d/afs start</B>
</PRE>
<P><LI>Change to the <B>/sbin/init.d</B> directory and issue the
<B>ln -s</B> command to create symbolic links that incorporate the AFS
initialization script into the HP-UX startup and shutdown sequence.
<PRE>
# <B>cd /sbin/init.d</B>
# <B>ln -s ../init.d/afs /sbin/rc2.d/S460afs</B>
# <B>ln -s ../init.d/afs /sbin/rc2.d/K800afs</B>
</PRE>
<P><LI><B>(Optional)</B> There are now copies of the AFS initialization file
in both the <B>/usr/vice/etc</B> and <B>/sbin/init.d</B>
directories. If you want to avoid potential confusion by guaranteeing
that they are always the same, create a link between them. You can
always retrieve the original script from the AFS CD-ROM if necessary.
<PRE>
# <B>cd /usr/vice/etc</B>
# <B>rm afs.rc</B>
# <B>ln -s /sbin/init.d/afs afs.rc</B>
</PRE>
<P><LI>Proceed to Step <A HREF="#LIWQ113">4</A>.
</OL>
<A NAME="IDX2860"></A>
<A NAME="IDX2861"></A>
<A NAME="IDX2862"></A>
<A NAME="IDX2863"></A>
<A NAME="IDX2864"></A>
<A NAME="IDX2865"></A>
<A NAME="IDX2866"></A>
<P><B>On IRIX systems:</B>
<OL TYPE=a>
<P><LI>If you have configured the machine to use the <B>ml</B> dynamic loader
program, reboot the machine and log in again as the local superuser
<B>root</B>.
<PRE>
# <B>cd /</B>
# <B>shutdown -i6 -g0 -y</B>
login: <B>root</B>
Password: <VAR>root_password</VAR>
</PRE>
<P><LI>Issue the <B>chkconfig</B> command to activate the
<B>afsserver</B> configuration variable.
<PRE>
# <B>/etc/chkconfig -f afsserver on</B>
</PRE>
<P>If you have configured this machine as an AFS client and want it to remain
one, also issue the <B>chkconfig</B> command to activate the
<B>afsclient</B> configuration variable.
<PRE>
# <B>/etc/chkconfig -f afsclient on</B>
</PRE>
<P><LI>Run the AFS initialization script.
<PRE>
# <B>/etc/init.d/afs start</B>
</PRE>
<P><LI>Change to the <B>/etc/init.d</B> directory and issue the
<B>ln -s</B> command to create symbolic links that incorporate the AFS
initialization script into the IRIX startup and shutdown sequence.
<PRE>
# <B>cd /etc/init.d</B>
# <B>ln -s ../init.d/afs /etc/rc2.d/S35afs</B>
# <B>ln -s ../init.d/afs /etc/rc0.d/K35afs</B>
</PRE>
<P><LI><B>(Optional)</B> There are now copies of the AFS initialization file
in both the <B>/usr/vice/etc</B> and <B>/etc/init.d</B>
directories. If you want to avoid potential confusion by guaranteeing
that they are always the same, create a link between them. You can
always retrieve the original script from the AFS CD-ROM if necessary.
<PRE>
# <B>cd /usr/vice/etc</B>
# <B>rm afs.rc</B>
# <B>ln -s /etc/init.d/afs afs.rc</B>
</PRE>
<P><LI>Proceed to Step <A HREF="#LIWQ113">4</A>.
</OL>
<A NAME="IDX2867"></A>
<P><B>On Linux systems:</B>
<OL TYPE=a>
<P><LI>Reboot the machine and log in again as the local superuser
<B>root</B>.
<PRE>
# <B>cd /</B>
# <B>shutdown -r now</B>
login: <B>root</B>
Password: <VAR>root_password</VAR>
</PRE>
<P><LI>Run the AFS initialization script.
<PRE>
# <B>/etc/rc.d/init.d/afs start</B>
</PRE>
<P><LI>Issue the <B>chkconfig</B> command to activate the <B>afs</B>
configuration variable. Based on the instruction in the AFS
initialization file that begins with the string <TT>#chkconfig</TT>, the
command automatically creates the symbolic links that incorporate the script
into the Linux startup and shutdown sequence.
<PRE>
# <B>/sbin/chkconfig --add afs</B>
</PRE>
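<P>The <TT>#chkconfig</TT> instruction that the command reads is a comment header near the top of the initialization file, naming the default runlevels and the start and stop priorities used when building the rc symbolic links. The sketch below shows the general shape and how the fields are extracted; the values <TT>2345 49 51</TT>, the exact spacing, and the file path are illustrative, not necessarily what the AFS script ships with.

```shell
# Illustrative init-script header; the runlevels (2345), start priority
# (49), and stop priority (51) are assumed values, not the real ones
# from the AFS initialization file.
cat > /tmp/afs-demo <<'EOF'
#!/bin/sh
# chkconfig: 2345 49 51
# description: start and stop AFS server and client processes
EOF

# Pull out the three fields chkconfig uses to name the rc links,
# e.g. S49afs-demo in each listed runlevel and K51afs-demo elsewhere.
set -- $(sed -n 's/^# chkconfig: //p' /tmp/afs-demo)
echo "runlevels=$1 start=S$2 stop=K$3"
```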
<P><LI><B>(Optional)</B> There are now copies of the AFS initialization file
in both the <B>/usr/vice/etc</B> and
<B>/etc/rc.d/init.d</B> directories, and copies of the
<B>afsd</B> options file in both the <B>/usr/vice/etc</B> and
<B>/etc/sysconfig</B> directories. If you want to avoid potential
confusion by guaranteeing that the two copies of each file are always the
same, create a link between them. You can always retrieve the original
script or options file from the AFS CD-ROM if necessary.
<PRE>
# <B>cd /usr/vice/etc</B>
# <B>rm afs.rc afs.conf</B>
# <B>ln -s /etc/rc.d/init.d/afs afs.rc</B>
# <B>ln -s /etc/sysconfig/afs afs.conf</B>
</PRE>
<P><LI>Proceed to Step <A HREF="#LIWQ113">4</A>.
</OL>
<A NAME="IDX2868"></A>
<P><B>On Solaris systems:</B>
<OL TYPE=a>
<P><LI>Reboot the machine and log in again as the local superuser
<B>root</B>.
<PRE>
# <B>cd /</B>
# <B>shutdown -i6 -g0 -y</B>
login: <B>root</B>
Password: <VAR>root_password</VAR>
</PRE>
<P><LI>Run the AFS initialization script.
<PRE>
# <B>/etc/init.d/afs start</B>
</PRE>
<P><LI>Change to the <B>/etc/init.d</B> directory and issue the
<B>ln -s</B> command to create symbolic links that incorporate the AFS
initialization script into the Solaris startup and shutdown sequence.
<PRE>
# <B>cd /etc/init.d</B>
# <B>ln -s ../init.d/afs /etc/rc3.d/S99afs</B>
# <B>ln -s ../init.d/afs /etc/rc0.d/K66afs</B>
</PRE>
<P><LI><B>(Optional)</B> There are now copies of the AFS initialization file
in both the <B>/usr/vice/etc</B> and <B>/etc/init.d</B>
directories. If you want to avoid potential confusion by guaranteeing
that they are always the same, create a link between them. You can
always retrieve the original script from the AFS CD-ROM if necessary.
<PRE>
# <B>cd /usr/vice/etc</B>
# <B>rm afs.rc</B>
# <B>ln -s /etc/init.d/afs afs.rc</B>
</PRE>
</OL>
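<P>On all of these systems the number embedded in each link name controls ordering: the <B>init</B> program runs the scripts in an <B>rc</B> directory in lexical order, so a link such as <B>S99afs</B> starts after lower-numbered services at boot, and a <B>K</B> link runs at its numbered position during shutdown. The demonstration below uses invented filenames in a scratch directory.

```shell
# Invented rc-directory contents; init starts these in lexical order,
# so an S99 AFS link comes up after the network and cron services.
mkdir -p /tmp/rc3.d-demo
touch /tmp/rc3.d-demo/S20network /tmp/rc3.d-demo/S75cron /tmp/rc3.d-demo/S99afs
ls /tmp/rc3.d-demo | sort
```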
<P><LI><A NAME="LIWQ113"></A>Verify that <B>/usr/afs</B> and its subdirectories on the
new file server machine meet the ownership and mode bit requirements outlined
in <A HREF="auqbg005.htm#HDRWQ96">Protecting Sensitive AFS Directories</A>. If necessary, use the <B>chmod</B> command to
correct the mode bits.
<P><LI>To configure this machine as a database server machine, proceed to <A HREF="#HDRWQ114">Installing Database Server Functionality</A>.
</OL>
<A NAME="IDX2869"></A>
<A NAME="IDX2870"></A>
<HR><H2><A NAME="HDRWQ114" HREF="auqbg002.htm#ToC_109">Installing Database Server Functionality</A></H2>
<P>This section explains how to install database server
functionality. Database server machines have two defining
characteristics. First, they run the Authentication Server, Protection
Server, and Volume Location (VL) Server processes. They also run the
Backup Server if the cell uses the AFS Backup System, as is assumed in these
instructions. Second, they appear in the <B>CellServDB</B> file of
every machine in the cell (and of client machines in foreign cells, if they
are to access files in this cell).
<P>Note the following requirements for database server machines.
<UL>
<P><LI>In the conventional configuration, database server machines also serve as
file server machines (run the File Server, Volume Server and Salvager
processes). If you choose not to run file server functionality on a
database server machine, then the kernel does not have to incorporate AFS
modifications, but the local <B>/usr/afs</B> directory must house most of
the standard files and subdirectories. In particular, the
<B>/usr/afs/etc/KeyFile</B> file must contain the same keys as all other
server machines in the cell. If you run a system control machine, run
the <B>upclientetc</B> process on every database server machine other than
the system control machine; if you do not run a system control machine,
use the <B>bos addkey</B> command as instructed in the chapter in the
<I>IBM AFS Administration Guide</I> about maintaining server encryption
keys.
<P>The instructions in this section assume that the machine on which you are
installing database server functionality is already a file server
machine. Contact the AFS Product Support group to learn how to install
database server functionality on a non-file server machine.
<P><LI>During the installation of database server functionality, you must restart
all of the database server machines to force the election of a new Ubik
coordinator (synchronization site) for each database server process.
This can cause a system outage, which usually lasts less than five
minutes.
<P><LI>Updating the kernel memory list of database server machines on each client
machine is generally the most time-consuming part of installing a new database
server machine. It is, however, crucial for correct functioning in your
cell. Incorrect knowledge of your cell's database server machines
can prevent your users from authenticating, accessing files, and issuing AFS
commands.
<P>You update a client's kernel memory list by changing the
<B>/usr/vice/etc/CellServDB</B> file and then either rebooting or issuing
the <B>fs newcell</B> command. For instructions, see the chapter in
the <I>IBM AFS Administration Guide</I> about administering client
machines.
<P>The point at which you update your clients' knowledge of database
server machines depends on which of the database server machines has the
lowest IP address. The following instructions indicate the appropriate
place to update your client machines in either case.
<UL>
<P><LI>If the new database server machine has a lower IP address than any
existing database server machine, update the <B>CellServDB</B> file on
every client machine before restarting the database server processes.
If you do not, users can become unable to update (write to) any of the AFS
databases. This is because the machine with the lowest IP address is
usually elected as the Ubik coordinator, and only the coordinator accepts
database writes. On client machines that do not have the new list of
database server machines, the Cache Manager cannot locate the new
coordinator. (Be aware that if clients contact the new coordinator
before it is actually in service, they experience a timeout before contacting
another database server machine. This is a minor, and temporary,
problem compared to being unable to write to the database.)
<P><LI>If the new database server machine does not have the lowest IP address of
any database server machine, then it is better to update clients after
restarting the database server processes. Client machines do not start
using the new database server machine until you update their kernel memory
list, but that does not usually cause timeouts or update problems (because the
new machine is not likely to become the coordinator).
</UL>
</UL>
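<P>The practical effect of the lowest-address rule can be previewed by sorting the database server machines' addresses numerically; the machine that sorts first is the one Ubik usually elects as coordinator. The addresses below are hypothetical.

```shell
# Hypothetical database server addresses for a three-machine cell.
# Sorting each dotted-quad field numerically shows which machine is
# likely to become the Ubik coordinator (the lowest address).
printf '%s\n' 10.0.2.40 10.0.1.5 172.16.3.9 |
    sort -t . -k1,1n -k2,2n -k3,3n -k4,4n | head -1
```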
<A NAME="IDX2871"></A>
<P><H3><A NAME="Header_110" HREF="auqbg002.htm#ToC_110">Summary of Procedures</A></H3>
<P>To install a database server machine, perform the following
procedures.
<OL TYPE=1>
<P><LI>Install the <B>bos</B> suite of commands locally, as a precaution
<P><LI>Add the new machine to the <B>/usr/afs/etc/CellServDB</B> file on
existing file server machines
<P><LI>Update your cell's central <B>CellServDB</B> source file and the
file you make available to foreign cells
<P><LI>Update every client machine's <B>/usr/vice/etc/CellServDB</B>
file and kernel memory list of database server machines
<P><LI>Start the database server processes (Authentication Server, Backup Server,
Protection Server, and Volume Location Server)
<P><LI>Restart the database server processes on every database server machine
<P><LI>Notify the AFS Product Support group that you have installed a new
database server machine
</OL>
<A NAME="IDX2872"></A>
<A NAME="IDX2873"></A>
<A NAME="IDX2874"></A>
<P><H3><A NAME="Header_111" HREF="auqbg002.htm#ToC_111">Instructions</A></H3>
<TABLE><TR><TD ALIGN="LEFT" VALIGN="TOP"><B>Note:</B></TD><TD ALIGN="LEFT" VALIGN="TOP">It is assumed that your PATH environment variable includes the directory
that houses the AFS command binaries. If not, you might need to
precede each command name with its full pathname.
</TD></TR></TABLE>
<OL TYPE=1>
<P><LI>You can perform the following instructions on either a server or client
machine. Log in as an AFS administrator who is listed in the
<B>/usr/afs/etc/UserList</B> file on all server machines.
<PRE>
% <B>klog</B> <VAR>admin_user</VAR>
Password: <VAR>admin_password</VAR>
</PRE>
<P><LI>If you are working on a client machine configured in the conventional
manner, the <B>bos</B> command suite resides in the
<B>/usr/afsws/bin</B> directory, a symbolic link to an AFS
directory. An error during installation can potentially block access to
AFS, in which case it is helpful to have a copy of the <B>bos</B> binary
on the local disk. This step is not necessary if you are working on a
server machine, where the binary resides in the local <B>/usr/afs/bin</B>
directory.
<PRE>
% <B>cp /usr/afsws/bin/bos /tmp</B>
</PRE>
<A NAME="IDX2875"></A>
<A NAME="IDX2876"></A>
<A NAME="IDX2877"></A>
<A NAME="IDX2878"></A>
<A NAME="IDX2879"></A>
<P><LI><A NAME="LIWQ115"></A>Issue the <B>bos addhost</B> command to add the new
database server machine to the <B>/usr/afs/etc/CellServDB</B> file on
existing server machines (as well as the new database server machine
itself).
<P>Substitute the new database server machine's fully-qualified hostname
for the <VAR>host name</VAR> argument. If you run a system control
machine, substitute its fully-qualified hostname for the
<VAR>machine&nbsp;name</VAR> argument. If you do not run a system control
machine, repeat the <B>bos addhost</B> command once for each server
machine in your cell (including the new database server machine itself), by
substituting each one's fully-qualified hostname for the
<VAR>machine&nbsp;name</VAR> argument in turn.
<PRE>
% <B>bos addhost</B> &lt;<VAR>machine&nbsp;name</VAR>> &lt;<VAR>host&nbsp;name</VAR>>
</PRE>
<P>If you run a system control machine, wait for the Update Server to
distribute the new <B>CellServDB</B> file, which takes up to five minutes
by default. If you are issuing individual <B>bos addhost</B>
commands, attempt to issue all of them within five minutes.
<TABLE><TR><TD ALIGN="LEFT" VALIGN="TOP"><B>Note:</B></TD><TD ALIGN="LEFT" VALIGN="TOP">It is best to maintain a one-to-one mapping between hostnames and IP
addresses on a multihomed database server machine (the conventional
configuration for any AFS machine). The BOS Server uses the
<B>gethostbyname(&nbsp;)</B> routine to obtain the IP address
associated with the <VAR>host name</VAR> argument. If there is more than
one address, the BOS Server records in the <B>CellServDB</B> entry the one
that appears first in the list of addresses returned by the routine.
The routine can return addresses in a different order on different
machines, which can create inconsistency.
</TD></TR></TABLE>
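<P>The inconsistency the note warns about can be pictured with two hypothetical resolver results for the same multihomed host, returned in different orders on two machines; each BOS Server would record a different address in its <B>CellServDB</B> entry. All addresses here are invented.

```shell
# The same multihomed host as resolved on two different machines;
# gethostbyname() may return the addresses in either order.
addrs_on_machine1="10.0.1.5 192.168.7.5"
addrs_on_machine2="192.168.7.5 10.0.1.5"

set -- $addrs_on_machine1; first1=$1   # what machine 1's BOS Server records
set -- $addrs_on_machine2; first2=$1   # what machine 2's BOS Server records
echo "machine1 records $first1, machine2 records $first2"
```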
<P><LI><B>(Optional)</B> Issue the <B>bos listhosts</B> command on each
server machine to verify that the new database server machine appears in its
<B>CellServDB</B> file.
<PRE>
% <B>bos listhosts</B> &lt;<VAR>machine&nbsp;name</VAR>>
</PRE>
<P><LI><A NAME="LIWQ116"></A>Add the new database server machine to your cell's central
<B>CellServDB</B> source file, if you use one. The standard
location is
<B>/afs/</B><VAR>cellname</VAR><B>/common/etc/CellServDB</B>.
<P>If you are willing to make your cell accessible to users in foreign cells,
add the new database server machine to the file that lists your cell's
database server machines. The conventional location is
<B>/afs/</B><VAR>cellname</VAR><B>/service/etc/CellServDB.local</B>.
<A NAME="IDX2880"></A>
<A NAME="IDX2881"></A>
<A NAME="IDX2882"></A>
<P><LI><A NAME="LIWQ117"></A>If this machine's IP address is lower than any existing
database server machine's, update every client machine's
<B>/usr/vice/etc/CellServDB</B> file and kernel memory list to include
this machine. (If this machine's IP address is not the lowest, it
is acceptable to wait until Step <A HREF="#LIWQ123">12</A>.)
<P>There are several ways to update the <B>CellServDB</B> file on client
machines, as detailed in the chapter of the <I>IBM AFS Administration
Guide</I> about administering client machines. One option is to copy
over the central update source (which you updated in Step <A HREF="#LIWQ116">5</A>), with or without using the <B>package</B>
program. To update the machine's kernel memory list, you can
either reboot after changing the <B>CellServDB</B> file or issue the
<B>fs newcell</B> command.
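<P>For reference when editing by hand: in a client <B>/usr/vice/etc/CellServDB</B> file, each cell's stanza is one line beginning with <TT>&gt;</TT> and the cell name, followed by one line per database server machine giving its IP address and, after a <TT>#</TT>, its hostname. The sketch below appends a hypothetical new database server machine to a scratch copy; the cell name, addresses, and hostnames are all invented.

```shell
# Scratch copy of a client CellServDB stanza (invented cell and hosts).
cat > /tmp/CellServDB <<'EOF'
>example.com            #Example cell
10.0.1.5                #db1.example.com
10.0.1.6                #db2.example.com
EOF

# Append the new database server machine's line to the stanza.
printf '10.0.1.4                #db3.example.com\n' >> /tmp/CellServDB
cat /tmp/CellServDB
```

After changing the real file, the kernel memory list still must be updated with the <B>fs newcell</B> command or a reboot, as this step describes.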
<A NAME="IDX2883"></A>
<A NAME="IDX2884"></A>
<A NAME="IDX2885"></A>
<A NAME="IDX2886"></A>
<A NAME="IDX2887"></A>
<P><LI><A NAME="LIWQ118"></A>Start the Authentication Server (the <B>kaserver</B>
process).
<PRE>
% <B>bos create</B> &lt;<VAR>machine&nbsp;name</VAR>> <B>kaserver simple /usr/afs/bin/kaserver</B>
</PRE>
<A NAME="IDX2888"></A>
<A NAME="IDX2889"></A>
<P><LI><A NAME="LIWQ119"></A>Start the Backup Server (the <B>buserver</B>
process). You must perform other configuration procedures before
actually using the AFS Backup System, as detailed in the <I>IBM AFS
Administration Guide</I>.
<PRE>
% <B>bos create</B> &lt;<VAR>machine&nbsp;name</VAR>> <B>buserver simple /usr/afs/bin/buserver</B>
</PRE>
<A NAME="IDX2890"></A>
<A NAME="IDX2891"></A>
<P><LI><A NAME="LIWQ120"></A>Start the Protection Server (the <B>ptserver</B>
process).
<PRE>
% <B>bos create</B> &lt;<VAR>machine&nbsp;name</VAR>> <B>ptserver simple /usr/afs/bin/ptserver</B>
</PRE>
<A NAME="IDX2892"></A>
<A NAME="IDX2893"></A>
<P><LI><A NAME="LIWQ121"></A>Start the Volume Location (VL) Server (the <B>vlserver</B>
process).
<PRE>
% <B>bos create</B> &lt;<VAR>machine&nbsp;name</VAR>> <B>vlserver simple /usr/afs/bin/vlserver</B>
</PRE>
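<P>Because Steps <A HREF="#LIWQ118">7</A> through <A HREF="#LIWQ121">10</A> differ only in the process name, they can be issued in a loop. The sketch below is a dry run that only prints the commands (the machine name is a placeholder); removing the <TT>echo</TT> would execute the real <B>bos</B> binary.

```shell
BOS="echo bos"            # dry run; set BOS=bos to execute for real
machine=db3.example.com   # placeholder machine name

for proc in kaserver buserver ptserver vlserver; do
    $BOS create "$machine" "$proc" simple "/usr/afs/bin/$proc"
done > /tmp/bos-dryrun.txt

cat /tmp/bos-dryrun.txt
```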
<A NAME="IDX2894"></A>
<A NAME="IDX2895"></A>
<A NAME="IDX2896"></A>
<A NAME="IDX2897"></A>
<P><LI><A NAME="LIWQ122"></A>Issue the <B>bos restart</B> command on every database
server machine in the cell, including the new machine. The command
restarts the Authentication, Backup, Protection, and VL Servers, which forces
an election of a new Ubik coordinator for each process. The new machine
votes in the election and is considered a potential new coordinator.
<P>A cell-wide service outage is possible during the election of a new
coordinator for the VL Server, but it normally lasts less than five
minutes. Such an outage is particularly likely if you are installing
your cell's second database server machine. Messages tracing the
progress of the election appear on the console.
<P>Repeat this command on each of your cell's database server machines in
quick succession. Begin with the machine with the lowest IP
address.
<PRE>
% <B>bos restart</B> &lt;<VAR>machine&nbsp;name</VAR>> <B>kaserver buserver ptserver vlserver</B>
</PRE>
<P>If an error occurs, restart all server processes on the database server
machines again by using one of the following methods:
<UL>
<P><LI>Issue the <B>bos restart</B> command with the <B>-bosserver</B>
flag for each database server machine
<P><LI>Reboot each database server machine, either using the <B>bos exec</B>
command or at its console
</UL>
<P><LI><A NAME="LIWQ123"></A>If you did not update the <B>CellServDB</B> file on client
machines in Step <A HREF="#LIWQ117">6</A>, do so now.
<P><LI><A NAME="LIWQ124"></A>Send the new database server machine's name and IP address
to the AFS Product Support group.
<P>If you wish to participate in the AFS global name space, your cell's
entry appears in a <B>CellServDB</B> file that the AFS Product Support
group makes available to all AFS sites. Otherwise, they list your cell
in a private file that they do not share with other AFS sites.
</OL>
<A NAME="IDX2898"></A>
<A NAME="IDX2899"></A>
<A NAME="IDX2900"></A>
<A NAME="IDX2901"></A>
<HR><H2><A NAME="HDRWQ125" HREF="auqbg002.htm#ToC_112">Removing Database Server Functionality</A></H2>
<P>Removing database server machine functionality is nearly the
reverse of installing it.
<P><H3><A NAME="Header_113" HREF="auqbg002.htm#ToC_113">Summary of Procedures</A></H3>
<P>To decommission a database server machine, perform the following
procedures.
<OL TYPE=1>
<P><LI>Install the <B>bos</B> suite of commands locally, as a precaution
<P><LI>Notify the AFS Product Support group that you are decommissioning a
database server machine
<P><LI>Update your cell's central <B>CellServDB</B> source file and the
file you make available to foreign cells
<P><LI>Update every client machine's <B>/usr/vice/etc/CellServDB</B>
file and kernel memory list of database server machines
<P><LI>Remove the machine from the <B>/usr/afs/etc/CellServDB</B> file on
file server machines
<P><LI>Stop the database server processes and remove them from the
<B>/usr/afs/local/BosConfig</B> file if desired
<P><LI>Restart the database server processes on the remaining database server
machines
</OL>
<P><H3><A NAME="Header_114" HREF="auqbg002.htm#ToC_114">Instructions</A></H3>
<TABLE><TR><TD ALIGN="LEFT" VALIGN="TOP"><B>Note:</B></TD><TD ALIGN="LEFT" VALIGN="TOP">It is assumed that your PATH environment variable includes the directory
that houses the AFS command binaries. If not, you might need to
precede each command name with its full pathname.
</TD></TR></TABLE>
<OL TYPE=1>
<P><LI>You can perform the following instructions on either a server or client
machine. Log in as an AFS administrator who is listed in the
<B>/usr/afs/etc/UserList</B> file on all server machines.
<PRE>
% <B>klog</B> <VAR>admin_user</VAR>
Password: <VAR>admin_password</VAR>
</PRE>
<P><LI>If you are working on a client machine configured in the conventional
manner, the <B>bos</B> command suite resides in the
<B>/usr/afsws/bin</B> directory, a symbolic link to an AFS
directory. An error during installation can potentially block access to
AFS, in which case it is helpful to have a copy of the <B>bos</B> binary
on the local disk. This step is not necessary if you are working on a
server machine, where the binary resides in the local <B>/usr/afs/bin</B>
directory.
<PRE>
% <B>cp /usr/afsws/bin/bos /tmp</B>
</PRE>
<P><LI><A NAME="LIWQ126"></A>Send the revised list of your cell's database server
machines to the AFS Product Support group.
<P>This step is particularly important if your cell is included in the global
<B>CellServDB</B> file. If the administrators in foreign cells do
not learn about the change in your cell, they cannot update the
<B>CellServDB</B> file on their client machines. Users in foreign
cells continue to send database requests to the decommissioned machine, which
creates needless network traffic and activity on the machine. Also, the
users experience time-out delays while their requests are forwarded to a valid
database server machine.
<P><LI><A NAME="LIWQ127"></A>Remove the decommissioned machine from your cell's central
<B>CellServDB</B> source file, if you use one. The conventional
location is
<B>/afs/</B><VAR>cellname</VAR><B>/common/etc/CellServDB</B>.
<P>If you maintain a file that users in foreign cells can access to learn
about your cell's database server machines, update it also. The
conventional location is
<B>/afs/</B><VAR>cellname</VAR><B>/service/etc/CellServDB.local</B>.
<A NAME="IDX2902"></A>
<A NAME="IDX2903"></A>
<A NAME="IDX2904"></A>
<A NAME="IDX2905"></A>
<P><LI><A NAME="LIWQ128"></A>Update every client machine's
<B>/usr/vice/etc/CellServDB</B> file and kernel memory list to exclude
this machine. Altering the <B>CellServDB</B> file and kernel memory
list before stopping the actual database server processes avoids possible
time-out delays that result when users send requests to a decommissioned
database server machine that is still listed in the file.
<P>There are several ways to update the <B>CellServDB</B> file on client
machines, as detailed in the chapter of the <I>IBM AFS Administration
Guide</I> about administering client machines. One option is to copy
over the central update source (which you updated in Step <A HREF="#LIWQ127">4</A>), with or without using the <B>package</B>
program. To update the machine's kernel memory list, you can
either reboot after changing the <B>CellServDB</B> file or issue the
<B>fs newcell</B> command.
<A NAME="IDX2906"></A>
<A NAME="IDX2907"></A>
<A NAME="IDX2908"></A>
<A NAME="IDX2909"></A>
<P><LI><A NAME="LIWQ129"></A>Issue the <B>bos removehost</B> command to remove the
decommissioned database server machine from the
<B>/usr/afs/etc/CellServDB</B> file on server machines.
<P>Substitute the decommissioned database server machine's
fully-qualified hostname for the <VAR>host name</VAR> argument. If you
run a system control machine, substitute its fully-qualified hostname for the
<VAR>machine&nbsp;name</VAR> argument. If you do not run a system control
machine, repeat the <B>bos removehost</B> command once for each server
machine in your cell (including the decommissioned database server machine
itself), by substituting each one's fully-qualified hostname for the
<VAR>machine&nbsp;name</VAR> argument in turn.
<PRE>
% <B>bos removehost</B> &lt;<VAR>machine&nbsp;name</VAR>> &lt;<VAR>host&nbsp;name</VAR>>
</PRE>
<P>If you run a system control machine, wait for the Update Server to
distribute the new <B>CellServDB</B> file, which takes up to five minutes
by default. If you are issuing individual <B>bos removehost</B> commands,
attempt to issue all of them within five minutes.
<P><LI><B>(Optional)</B> Issue the <B>bos listhosts</B> command on each
server machine to verify that the decommissioned database server machine no
longer appears in its <B>CellServDB</B> file.
<PRE>
% <B>bos listhosts</B> &lt;<VAR>machine&nbsp;name</VAR>>
</PRE>
<A NAME="IDX2910"></A>
<A NAME="IDX2911"></A>
<A NAME="IDX2912"></A>
<A NAME="IDX2913"></A>
<A NAME="IDX2914"></A>
<A NAME="IDX2915"></A>
<A NAME="IDX2916"></A>
<P><LI><A NAME="LIWQ130"></A>Issue the <B>bos stop</B> command to stop the database
server processes on the machine, by substituting its fully-qualified hostname
for the <VAR>machine&nbsp;name</VAR> argument. The command changes each
process's status in the <B>/usr/afs/local/BosConfig</B> file to
<TT>NotRun</TT>, but does not remove its entry from the file.
<PRE>
% <B>bos stop</B> &lt;<VAR>machine&nbsp;name</VAR>> <B>kaserver buserver ptserver vlserver</B>
</PRE>
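<P>For orientation, each process has an entry in the <B>/usr/afs/local/BosConfig</B> file of roughly the following form, where the digit after the process name records the status goal (<TT>1</TT> for run, <TT>0</TT> for <TT>NotRun</TT>); the <B>bos stop</B> command changes that digit rather than deleting the entry. The exact layout can vary between AFS versions, so treat this fragment as illustrative only.

```
bnode simple kaserver 1
parm /usr/afs/bin/kaserver
end
```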
<A NAME="IDX2917"></A>
<A NAME="IDX2918"></A>
<A NAME="IDX2919"></A>
<A NAME="IDX2920"></A>
<A NAME="IDX2921"></A>
<P><LI><A NAME="LIWQ131"></A><B>(Optional)</B> Issue the <B>bos delete</B> command
to remove the entries for database server processes from the
<B>BosConfig</B> file. This step is unnecessary if you plan to
restart the database server functionality on this machine in the future.
<PRE>
% <B>bos delete</B> &lt;<VAR>machine&nbsp;name</VAR>> <B>kaserver buserver ptserver vlserver</B>
</PRE>
<A NAME="IDX2922"></A>
<A NAME="IDX2923"></A>
<A NAME="IDX2924"></A>
<A NAME="IDX2925"></A>
<P><LI><A NAME="LIWQ132"></A>Issue the <B>bos restart</B> command on every database
server machine in the cell, to restart the Authentication, Backup, Protection,
and VL Servers. This forces the election of a Ubik coordinator for each
process, ensuring that the remaining database server processes recognize that
the machine is no longer a database server.
<P>A cell-wide service outage is possible during the election of a new
coordinator for the VL Server, but it normally lasts less than five
minutes. Messages tracing the progress of the election appear on the
console.
<P>Repeat this command on each of your cell's database server machines in
quick succession. Begin with the machine with the lowest IP
address.
<PRE>
% <B>bos restart</B> &lt;<VAR>machine&nbsp;name</VAR>> <B>kaserver buserver ptserver vlserver</B>
</PRE>
<P>If an error occurs, restart all server processes on the database server
machines again by using one of the following methods:
<UL>
<P><LI>Issue the <B>bos restart</B> command with the <B>-bosserver</B>
flag for each database server machine
<P><LI>Reboot each database server machine, either using the <B>bos exec</B>
command or at its console
</UL>
</OL>
<HR><P ALIGN="center"> <A HREF="../index.htm"><IMG SRC="../books.gif" BORDER="0" ALT="[Return to Library]"></A> <A HREF="auqbg002.htm#ToC"><IMG SRC="../toc.gif" BORDER="0" ALT="[Contents]"></A> <A HREF="auqbg005.htm"><IMG SRC="../prev.gif" BORDER="0" ALT="[Previous Topic]"></A> <A HREF="#Top_Of_Page"><IMG SRC="../top.gif" BORDER="0" ALT="[Top of Topic]"></A> <A HREF="auqbg007.htm"><IMG SRC="../next.gif" BORDER="0" ALT="[Next Topic]"></A> <A HREF="auqbg009.htm#HDRINDEX"><IMG SRC="../index.gif" BORDER="0" ALT="[Index]"></A> <P>
<!-- Begin Footer Records ========================================== -->
<P><HR><B>
<br>&#169; <A HREF="http://www.ibm.com/">IBM Corporation 2000.</A> All Rights Reserved
</B>
<!-- End Footer Records ============================================ -->
<A NAME="Bot_Of_Page"></A>
</BODY></HTML>