openafs/doc/html/ReleaseNotes-3.6/aurns004.htm
Derrick Brashear d7da1acc31 initial-html-documentation-20010606
pull in all documentation from IBM
2001-06-06 19:09:07 +00:00


<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 4//EN">
<HTML><HEAD>
<TITLE>Release Notes</TITLE>
<!-- Begin Header Records ========================================== -->
<!-- /tmp/idwt3575/aurns000.scr converted by idb2h R4.2 (359) ID -->
<!-- Workbench Version (AIX) on 2 Oct 2000 at 12:27:59 -->
<META HTTP-EQUIV="updated" CONTENT="Mon, 02 Oct 2000 12:27:58">
<META HTTP-EQUIV="review" CONTENT="Tue, 02 Oct 2001 12:27:58">
<META HTTP-EQUIV="expires" CONTENT="Wed, 02 Oct 2002 12:27:58">
</HEAD>
<!-- (C) IBM Corporation 2000. All Rights Reserved -->
<BODY bgcolor="#ffffff">
<!-- End Header Records ============================================ -->
<A NAME="Top_Of_Page"></A>
<H1>Release Notes</H1>
<HR><P ALIGN="center"> <A HREF="../index.htm"><IMG SRC="../books.gif" BORDER="0" ALT="[Return to Library]"></A> <A HREF="aurns002.htm#ToC"><IMG SRC="../toc.gif" BORDER="0" ALT="[Contents]"></A> <A HREF="aurns003.htm"><IMG SRC="../prev.gif" BORDER="0" ALT="[Previous Topic]"></A> <A HREF="#Bot_Of_Page"><IMG SRC="../bot.gif" BORDER="0" ALT="[Bottom of Topic]"></A> <P>
<HR><H1><A NAME="Header_6" HREF="aurns002.htm#ToC_6">AFS 3.6 Release Notes</A></H1>
<P>This file documents new features, upgrade procedures, and remaining
limitations associated with the initial General Availability (GA) release of
AFS<SUP>(R)</SUP> 3.6 (build level <B>afs3.6
2.0</B>).
<TABLE><TR><TD ALIGN="LEFT" VALIGN="TOP"><B>Note:</B></TD><TD ALIGN="LEFT" VALIGN="TOP">This document includes all product information available at the time the
document was produced. For additional information that became available
later, see the <B>README.txt</B> file included on the AFS
CD-ROM.
</TD></TR></TABLE>
<HR><H2><A NAME="HDRSUMMARY" HREF="aurns002.htm#ToC_7">Summary of New Features</A></H2>
<P>AFS 3.6 includes the following new features.
<UL>
<P><LI>Support for the 64-bit version of Solaris 7.
<P><LI>Support for the 64-bit version of HP-UX 11.0.
<P><LI>The HP-UX 11.0 File Server uses the POSIX-compliant threading
package provided with HP-UX. (Other supported operating systems started
using native threads in AFS 3.5.) See <A HREF="#HDRREQ_HP">Product Notes for HP-UX Systems</A>.
<P><LI>There is a single edition of AFS 3.6, instead of separate United
States and international editions as in previous releases. The United
States government now permits export outside North America of the encryption
software that the Update Server uses to protect user-level data. With
AFS 3.6, cells outside North America can run a system control machine
to distribute the contents of the <B>/usr/afs/etc</B> directory among
server machines.
<P>The AFS 3.6 distribution includes a single CD-ROM for each system
type, which contains all AFS software. There is no CD-ROM labeled
<B>Encryption Files</B> or <B>Domestic Edition</B> in the AFS
3.6 distribution.
<P>Because they were produced before the change in export restrictions, the
<I>IBM AFS Administration Guide</I> and <I>IBM AFS Administration
Reference</I> still distinguish between United States and international
editions of AFS. However, AFS customers in any country can ignore the
distinction and use the United States instructions if they choose.
<P><LI>Support for volumes up to 8 GB in size. In previous versions of
AFS, the limit was 2 GB.
<P>Note that smaller volumes are still more practical than large ones in
general. The larger a volume, the longer it takes to move or clone it,
which introduces greater potential for an outage to halt the operation before
it completes.
<P><LI>Support for backing up AFS data to the Tivoli Storage Manager (TSM),
formerly called the ADSTAR Distributed Storage Manager (ADSM). TSM
implements the Open Group's Backup Service application programming
interface (API), also called XBSA. Support for additional
XBSA-compliant programs in future releases of AFS is possible. See <A HREF="#HDRTSM">Support for Backup to TSM</A>.
<P><LI>A new command and new options to existing commands. See <A HREF="#HDRCMD-CHANGES">Changes to AFS Commands, Files, and Functionality</A>.
</UL>
<HR><H2><A NAME="HDRSYSTYPES" HREF="aurns002.htm#ToC_8">Supported System Types</A></H2>
<P>AFS supports the following system types.
<BR>
<TABLE WIDTH="100%">
<TR>
<TD ALIGN="LEFT" VALIGN="TOP" WIDTH="20%"><B>alpha_dux40</B>
</TD><TD ALIGN="LEFT" VALIGN="TOP" WIDTH="80%">DEC AXP system with one or more processors running Digital UNIX
4.0d, 4.0e, or 4.0f
</TD></TR><TR>
<TD ALIGN="LEFT" VALIGN="TOP" WIDTH="20%"><B>hp_ux110</B>
</TD><TD ALIGN="LEFT" VALIGN="TOP" WIDTH="80%">Hewlett-Packard system with one or more processors running the 32-bit or
64-bit version of HP-UX 11.0
</TD></TR><TR>
<TD ALIGN="LEFT" VALIGN="TOP" WIDTH="20%"><B>i386_linux22</B>
</TD><TD ALIGN="LEFT" VALIGN="TOP" WIDTH="80%">IBM-compatible PC with one or more processors running Linux kernel
version 2.2.5-15 (the version in Red Hat Linux 6.0),
2.2.10, 2.2.12, 2.2.12-20 (the
version in Red Hat Linux 6.1), 2.2.13, or
2.2.14
</TD></TR><TR>
<TD ALIGN="LEFT" VALIGN="TOP" WIDTH="20%"><B>rs_aix42</B>
</TD><TD ALIGN="LEFT" VALIGN="TOP" WIDTH="80%">IBM RS/6000 with one or more 32-bit or 64-bit processors running AIX
4.2, 4.2.1, 4.3, 4.3.1,
4.3.2, or 4.3.3
</TD></TR><TR>
<TD ALIGN="LEFT" VALIGN="TOP" WIDTH="20%"><B>sgi_65</B>
</TD><TD ALIGN="LEFT" VALIGN="TOP" WIDTH="80%">Silicon Graphics system with one or more processors running IRIX
6.5 or 6.5.4. Support is provided for the
following CPU board types, as reported by the IRIX <B>uname -m</B>
command: IP19, IP20, IP21, IP22, IP25, IP26, IP27, IP28, IP30, IP32
</TD></TR><TR>
<TD ALIGN="LEFT" VALIGN="TOP" WIDTH="20%"><B>sun4x_56</B>
</TD><TD ALIGN="LEFT" VALIGN="TOP" WIDTH="80%">Sun SPARCstation with one or more processors running Solaris 2.6
</TD></TR><TR>
<TD ALIGN="LEFT" VALIGN="TOP" WIDTH="20%"><B>sun4x_57</B>
</TD><TD ALIGN="LEFT" VALIGN="TOP" WIDTH="80%">Sun SPARCstation with one or more processors running the 32-bit or 64-bit
version of Solaris 7
</TD></TR></TABLE>
<HR><H2><A NAME="HDRHWARE_REQS" HREF="aurns002.htm#ToC_9">Hardware and Software Requirements</A></H2>
<P>For a list of requirements for both server and client
machines, see the chapter titled <I>Installation Overview</I> in the
<I>IBM AFS Quick Beginnings</I> document.
<HR><H2><A NAME="HDRDOC" HREF="aurns002.htm#ToC_10">Accessing the AFS Binary Distribution and Documentation</A></H2>
<P>The AFS Binary Distribution includes a separate CD-ROM for
each supported operating system, containing all AFS binaries and files for
both server and client machines, plus the documentation set in multiple
formats. At the top level of the CD-ROM is a directory called
<B>Documentation</B> plus a directory containing the system-specific AFS
binaries, named using the values listed in <A HREF="#HDRSYSTYPES">Supported System Types</A>. The CD-ROM for some operating systems has more than
one system-specific directory; for example, the Solaris CD-ROM has
<B>sun4x_56</B> and <B>sun4x_57</B>.
<P>The instructions in <A HREF="#HDRINSTALL">Upgrading Server and Client Machines to AFS 3.6</A> specify when to mount the CD-ROM and which files or
directories to copy to the local disk or into an AFS volume.
<P>The documents are also available online at <A
HREF="http://www.transarc.com/Library/documentation/afs_doc.html"><B>http://www.transarc.com/Library/documentation/afs_doc.html</B></A>.
The documentation set includes the following documents:
<UL>
<P><LI><I>IBM AFS Release Notes</I> (this document)
<P><LI><I>IBM AFS Administration Guide</I> (called <I>AFS System
Administrator's Guide</I> in previous releases)
<P><LI><I>IBM AFS Administration Reference</I> (called <I>AFS Command
Reference Manual</I> in previous releases)
<P><LI><I>IBM AFS Quick Beginnings</I> (called <I>AFS Installation
Guide</I> in previous releases)
<P><LI><I>IBM AFS User Guide</I> (called <I>AFS User's Guide</I> in
previous releases)
</UL>
<P>Documents are provided in the following formats:
<UL>
<P><LI>HTML, suitable for online viewing in a Web browser or other HTML viewer
<P><LI>PDF, suitable for online viewing or for printing using the Acrobat Reader
program from Adobe
</UL>
<P>If you do not already have the Acrobat Reader program, you can download it
for free at <A
HREF="http://www.adobe.com/products/acrobat/readstep.html"><B>http://www.adobe.com/products/acrobat/readstep.html</B></A>.
<P>Adobe provides only an English-language version of Acrobat Reader for UNIX
platforms. The program can display PDF files written in any
language. It is the program interface (menus, messages, and so on) that
is available in English only.
<P>To make Reader's interface display properly in non-English language
locales, use one of two methods to set the program's language environment
to English:
<UL>
<P><LI>Set the LANG environment variable in the Reader initialization
script. The advantage of this method is that it ensures proper behavior
even when Reader is launched by other applications, such as a browser or an
application's Help menu. Editing the script usually requires local
superuser <B>root</B> privilege, however.
<P>If your Reader distribution includes the script, it is installed by
convention as <VAR>AcroRead_Dir</VAR><B>/bin/acroread</B>, where
<VAR>AcroRead_Dir</VAR> is the installation directory for Reader files.
<P>Add the following line to the script, directly after the
<TT>#!/bin/sh</TT> statement:
<PRE> LANG=C; export LANG
</PRE>
<P><LI>Set the LANG environment variable to the value <B>C</B> in the command
shell before starting the Reader program. The following command is
appropriate for the C shell (<B>csh</B>) and its derivatives.
<PRE> % <B>setenv LANG C</B>
</PRE>
<P>Note that this setting affects all programs started in the command shell,
with possibly undesirable results if they also use the LANG variable.
The preceding method affects Reader only.
</UL>
<HR><H2><A NAME="HDRLIMITS" HREF="aurns002.htm#ToC_11">Product Notes</A></H2>
<P>The following sections summarize limitations and
requirements that pertain to all system types and to individual system types,
and describe revisions to the AFS documents:
<UL>
<P><LI><A HREF="#HDRREQ_ALL">Product Notes for All System Types</A>
<P><LI><A HREF="#HDRREQ_AIX">Product Notes for AIX Systems</A>
<P><LI><A HREF="#HDRREQ_DUX">Product Notes for Digital UNIX Systems</A>
<P><LI><A HREF="#HDRREQ_HP">Product Notes for HP-UX Systems</A>
<P><LI><A HREF="#HDRREQ_IRIX">Product Notes for IRIX Systems</A>
<P><LI><A HREF="#HDRREQ_LNX">Product Notes for Linux Systems</A>
<P><LI><A HREF="#HDRREQ_SOL">Product Notes for Solaris Systems</A>
<P><LI><A HREF="#HDRDOC_NOTES">Documentation Notes</A>
</UL>
<P><H3><A NAME="HDRREQ_ALL" HREF="aurns002.htm#ToC_12">Product Notes for All System Types</A></H3>
<UL>
<P><LI><B>Limit on number of file server machine interfaces</B>
<P>AFS supports up to 15 addresses on a multihomed file server machine.
If more interfaces are configured with the operating system, AFS uses only the
first 15.
<P><LI><B>Limit on number of client machine interfaces</B>
<P>AFS supports up to 32 addresses on a multihomed client machine. Do
not configure more interfaces.
<P><LI><B>Limit on number of AFS server partitions</B>
<P>AFS supports up to 256 server (<B>/vicep</B>) partitions on a file
server machine. This corresponds to directory names <B>/vicepa</B>
through <B>/vicepz</B> and <B>/vicepaa</B> through
<B>/vicepiv</B>.
<P><LI><B>Limit on number of file server machines</B>
<P>The VLDB can store up to 255 server entries, each representing one file
server machine (single- or multihomed). This effectively determines the
maximum number of file server machines in the cell. To make room in the
VLDB for new server entries, use the <B>vos changeaddr</B> command's
<B>-remove</B> argument to remove the entries for decommissioned file
server machines.
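<P>As an illustration, the following command removes the VLDB server entry
for a decommissioned machine. The address shown is hypothetical; see the
<B>vos changeaddr</B> reference page in the <I>IBM AFS Administration
Reference</I> for the complete syntax.
<PRE>   % <B>vos changeaddr 192.12.105.33 -remove</B>
</PRE>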
<P><LI><B>Limit on file size</B>
<P>AFS supports a maximum file size of 2 GB.
<P><LI><B>Limit on volume and partition size</B>
<P>AFS supports a maximum volume size of 8 GB. In AFS version
3.5 and earlier, the limit is 2 GB. There is no limit on
partition size other than the one imposed by the operating system.
<P><LI><B>Limit on cache size</B>
<P>AFS supports a maximum disk cache size of 1 GB. In AFS version
3.1 and earlier, the limit is 700 MB.
<P><LI><B>Limit on number of File Server threads</B>
<P>The File Server (<B>fileserver</B> process) can use up to 128 threads,
unless the operating system imposes a lower limit. Testing for the AFS
3.6 GA release indicates that HP-UX sometimes imposes a lower limit,
depending on the resources available on a machine. See <A HREF="#HDRREQ_HP">Product Notes for HP-UX Systems</A>.
<P>The File Server always reserves seven threads for special uses, so the
maximum effective value for the <B>fileserver</B> command's
<B>-p</B> argument is seven less than the actual limit. On most
systems, the effective maximum is therefore <B>121</B>.
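<P>In practice, the File Server is started by the BOS Server, so the
<B>-p</B> argument is set in the <B>fs</B> process's creation command.
The following sketch uses hypothetical machine and cell names; adapt it to
your configuration and verify the syntax against the <B>bos create</B>
reference page.
<PRE>   # <B>bos create fs1.example.com fs fs -cmd "/usr/afs/bin/fileserver -p 121"</B> \
         <B>/usr/afs/bin/volserver /usr/afs/bin/salvager -cell example.com</B>
</PRE>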
<P><LI><B>Limit on number of volume site definitions</B>
<P>The VLDB entry for a volume can accommodate a maximum of 13 site
definitions. The site housing the read/write and backup versions of the
volume counts as one site, and each read-only site counts as an additional
site (even the read-only site defined on the same partition as the read/write
site counts as a separate site).
<P><LI><B>No support for VxFS as server or cache partition</B>
<P>AFS does not support use of the VxFS file system as either a client cache
partition or server (<B>/vicep</B>) partition. It is acceptable to
use both VxFS and AFS on the same machine, but the cache partition and all AFS
server partitions must use a supported file system type such as UFS.
See the following sections of this document for similar restrictions affecting
particular operating systems.
<P><LI><B>Run same version of a server process on all server machines</B>
<P>For predictable performance, run the same version of an AFS server process
on all server machines in a cell. For example, if you upgrade the
Volume Location Server process on a database server machine to AFS 3.6,
you must upgrade it on all of them. The upgrade instructions in <A HREF="#HDRINSTALL">Upgrading Server and Client Machines to AFS 3.6</A> have you upgrade the binaries for all server processes on
all machines to the same version, and in general that is the best
policy. Unless otherwise noted, it is acceptable to run different build
levels of a major version on different machines (for example, AFS 3.5
build 3.0 on one machine and AFS 3.5 build 3.11 on
another).
<P><LI><B>Single edition of AFS for all countries</B>
<P>There is a single edition of AFS 3.6 for both North American and
international customers. For details, see <A HREF="#HDRSUMMARY">Summary of New Features</A>.
<P><LI><B>TSM is the supported XBSA server</B>
<P>The AFS 3.6 Backup System can communicate with one XBSA server, the
Tivoli Storage Manager (TSM). There are several requirements and
limitations associated with its use, as detailed in <A HREF="#HDRTSM_REQ">Product Notes for Use of TSM</A>.
<P><LI><B>Use Netscape 4.0 or higher</B>
<P>If using a Netscape browser to read the HTML version of an AFS document,
use version 4.0 or higher. Some fonts used in the documents
possibly do not display properly in earlier versions.
<P><LI><B>Set the Acrobat Reader environment to English</B>
<P>The user interface to the Adobe Acrobat Reader program for displaying PDF
files works correctly only when the program's language environment is set
to English. Users in non-English language locales probably need to
adjust the language setting. See <A HREF="#HDRDOC">Accessing the AFS Binary Distribution and Documentation</A>.
<P><LI><B>No support for IPv6</B>
<P>AFS does not support version 6 of the Internet Protocol (IPv6). You
must continue to specify the IPv4 protocol names <TT>udp</TT> and
<TT>tcp</TT> in the entries for AFS-modified services in the
<B>inetd</B> configuration file, rather than the IPv6 names
<TT>udp6</TT> and <TT>tcp6</TT>. If you use the IPv6 version, the
AFS-modified <B>inetd</B> daemon cannot locate the service and does not
open the service's port.
<P>The <B>inetd</B> configuration file included with some operating system
revisions possibly specifies IPv6 protocols by default. You must modify
or replace the file in order to use the AFS-modified version of remote
services.
<P><LI><B>Limit on directory size when element names are long</B>
<P>If the name of every file system element (file, link, or subdirectory) in a
directory is 16 characters or more, then when there are about 31,700 elements
it becomes impossible to create any more elements with long names. It
is still possible to create elements with names shorter than 16
characters. This limitation is due to the way AFS implements
directories. For a more detailed explanation, contact your AFS product
support representative.
<P><LI><B>Setting setuid or setgid bit on file fails silently</B>
<P>Only members of the <B>system:administrators</B> group can turn
on the setuid or setgid mode bit on an AFS file or directory. However,
AFS generates an error message only when a regular user attempts to set the
bit on a directory. Attempts on a file fail silently.
<P><LI><B>The add instruction in the uss bulk input file does not work as
documented</B>
<P>The documentation specifies the following syntax for creating an
authentication-only account (entries in the Authentication and Protection
Databases only) by using an <B>add</B> instruction in the <B>uss</B>
bulk template file:
<PRE> add <VAR>username</VAR>[:]
</PRE>
<P>However, you must in fact follow the <VAR>username</VAR> value with two
colons for the <B>uss bulk</B> command to create the account:
<PRE> add <VAR>username</VAR>::
</PRE>
<P><LI><B>Running the backup savedb command blocks other Backup System
operations</B>
<P>The Backup Server locks the Backup Database as it performs the <B>backup
savedb</B> command, which can take a long time. Because other backup
operations cannot access the database during this time, they appear to
hang. Avoid running other backup operations after issuing the
<B>backup savedb</B> command.
<P>Actually, this limitation applies to any operation that locks the Backup
Database for a significant amount of time, but most other operations do
not. In any case, running the <B>backup savedb</B> command is
appropriate only in the rare case when the Backup Database is corrupted, so
this limitation usually does not have a significant impact.
<P><LI><B>NFS/AFS Translator sometimes performs poorly under heavy load</B>
<P>The NFS/AFS Translator does not always perform well under heavy
load. Sometimes the translator machine hangs, and sometimes NFS client
machines display the following error message.
<PRE> NFS Stale File Handle
</PRE>
<P><LI><B>Sample files for package program not included</B>
<P>The AFS distribution does not include the sample files referred to in the
chapter of the <I>IBM AFS Administration Guide</I> about the
<B>package</B> program (the files formerly installed by convention in the
<B>etc</B>, <B>lib</B>, and <B>src</B> subdirectories of the
<B>/afs/</B><VAR>cellname</VAR><B>/wsadmin</B> directory).
<I>IBM AFS Quick Beginnings</I> therefore does not include instructions
for installing the sample files. If you wish to use the
<B>package</B> program and the discussion in the <I>IBM AFS
Administration Guide</I> is not sufficient to guide you, contact your AFS
product support representative for assistance.
</UL>
<P><H3><A NAME="HDRREQ_AIX" HREF="aurns002.htm#ToC_13">Product Notes for AIX Systems</A></H3>
<UL>
<P><LI><B>The klog command's -setpag flag is supported on AIX
4.2.1 and 4.3.3 only</B>
<P>To use the <B>klog</B> command's <B>-setpag</B> flag, you must
install the indicated AIX APAR (Authorized Program Analysis Report), available
from IBM, on a machine running the indicated AIX version:
<UL>
<P><LI>APAR IY07834 on AIX 4.2.1 machines
<P><LI>APAR IY07835 on AIX 4.3.3 machines
</UL>
<P>To determine if the APAR is installed, issue the following command:
<PRE> % <B>instfix -i -k</B> <VAR>APAR_identifier</VAR>
</PRE>
<P>IBM provides an APAR for the indicated (latest) AIX versions only.
Therefore, the <B>-setpag</B> flag does not work correctly on machines
running the base level of AIX 4.2 or 4.3, or AIX
4.3.1 or 4.3.2.
<P><LI><B>Change to AFS installation procedure for AIX
4.3.3</B>
<P>If version 4.3.3.0 or higher of the AIX
<B>bos.rte.security</B> fileset is installed (usually true
on a machine using the AIX 4.3.3 kernel), you must modify the
procedure documented in <I>IBM AFS Quick Beginnings</I> for enabling
integrated AFS login. Instead of editing the
<B>/etc/security/login.cfg</B> file, you edit the
<B>/usr/lib/security/methods.cfg</B> file.
<P>To determine which version of the <B>bos.rte.security</B>
fileset is installed, issue the following command:
<PRE> # <B>lslpp -L bos.rte.security</B>
</PRE>
<P>The change affects Step 3 in the section titled <I>Enabling AFS Login on
AIX Systems</I> in each of two chapters in <I>IBM AFS Quick
Beginnings</I>: <I>Installing the First AFS Machine</I> and
<I>Installing Additional Client Machines</I>. For the complete text
of the modified step, see <A HREF="#HDRDOC_NOTES">Documentation Notes</A>.
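<P>As a sketch, the stanza involved resembles the following. The program
path is the convention used in <I>IBM AFS Quick Beginnings</I>; verify the
exact text against that document before editing the file.
<PRE>   DCE:
           program = /usr/vice/etc/afs_dynamic_auth
</PRE>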
<P><LI><B>No support for the NFS/AFS Translator with base level of AIX
4.2</B>
<P>AFS does not support the use of machines running the base level of AIX
4.2 as NFS/AFS Translator machines. The AFS distribution does
not include the required kernel extensions file, formerly installed by
convention as
<B>/usr/vice/etc/dkload/afs.ext.trans</B>. Do not set
the NFS variable to the value <B>$NFS_NFS</B> in the AFS initialization
script (by convention, <B>/etc/rc.afs</B>).
<P>Machines running AIX 4.2.1 and higher are supported as
NFS/AFS Translator machines. They use the
<B>afs.ext.iauth</B> kernel extensions file instead.
<P><LI><B>NFS/AFS Translator cannot coexist with NFS/DFS Gateway</B>
<P>A machine running AIX 4.2.1 or higher cannot act as both an
NFS/AFS Translator and a NFS/DFS Gateway Server at the same time, because both
translation protocols must have exclusive access to the AIX <B>iauth</B>
interface. An attempt by either file system to access the
<B>iauth</B> interface when the other file system is already using it
fails with an error message.
<P><LI><B>No support for NFS Version 3 software on NFS clients</B>
<P>Do not run NFS Version 3 software on NFS client machines that use an
NFS/AFS Translator machine running AIX. The NFS3 client software uses
the <B>readdir+</B> NFS command on directories, which can cause excessive
volume lookups on the translator machine. This can lead to timeouts,
especially when used in the <B>/afs</B> directory or other directories
with many volume mount points. Use NFS Version 2 instead.
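<P>Many NFS client implementations can be directed to use NFS Version 2 at
mount time. The following is an illustration only; the mount-option name
varies by operating system, and the translator hostname is hypothetical.
<PRE>   # <B>mount -o vers=2 translator.example.com:/afs /afs</B>
</PRE>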
<P><LI><B>No support for Large File Enabled Journalled File System as AFS
server partition</B>
<P>AFS does not support use of AIX's Large File Enabled Journalled File
System as an AFS server (<B>/vicep</B>) partition. If you configure
a partition that uses that file system as an AFS server partition, the File
Server ignores it and writes the following message to the
<B>/usr/afs/logs/FileLog</B> file:
<PRE> /vicep<VAR>xx</VAR> is a big files filesystem, ignoring it
</PRE>
<P>AFS supports use of the Large File Enabled Journalled File System as the
cache partition on a client machine.
<P><LI><B>PASSWORD_EXPIRES variable not set on AIX</B>
<P>The AIX secondary authentication system does not support setting the
PASSWORD_EXPIRES environment variable during login.
<P><LI><B>The chuser, chfn and chsh commands are inoperative</B>
<P>The <B>chuser</B>, <B>chfn</B>, and <B>chsh</B> commands are
inoperative on AFS machines running AIX. AFS authentication uses the
AIX secondary authentication system, and sets the <TT>registry</TT> variable
in the <B>/etc/security/user</B> file to <TT>DCE</TT> for the default
user. That is, the setting is
<PRE> registry = DCE
</PRE>
<P>as described in the sections of <I>IBM AFS Quick Beginnings</I> that
discuss enabling AFS login on AIX systems. However, when the
<TT>registry</TT> variable has any value other than <TT>registry =
files</TT>, AIX does not allow edits to <B>/etc/passwd</B> and related
files, and so disallows the <B>chuser</B>, <B>chfn</B> and
<B>chsh</B> commands. Attempts to edit entries by running these
commands on the command line result in error messages like the
following.
<UL>
<P><LI>From the <B>chuser</B> command:
<PRE> You can only change the HOME directory on the name server.
</PRE>
<P><LI>From the <B>chfn</B> command:
<PRE> You can only change the User INFORMATION on the name server.
</PRE>
<P><LI>From the <B>chsh</B> command:
<PRE> You can only change the Initial PROGRAM on the name server.
</PRE>
</UL>
<P>From within SMIT, using the <B>chuser</B> function results in an error
message like the following:
<P>
<PRE> 3004-716: You can only change the HOME directory on the name server
</PRE>
<P>It is not possible for AFS Development to alter this behavior, because AIX
imposes the restriction. Sites that wish to run these commands must
develop a solution appropriate for their needs.
</UL>
<P><H3><A NAME="HDRREQ_DUX" HREF="aurns002.htm#ToC_14">Product Notes for Digital UNIX Systems</A></H3>
<UL>
<P><LI><B>No support for the NFS/AFS Translator</B>
<P>AFS does not support use of Digital UNIX machines as NFS/AFS Translator
machines.
<P><LI><B>No support for AdvFS as server or cache partition</B>
<P>AFS does not support use of Digital UNIX's Advanced File System
(AdvFS) as either a client cache partition or a server (<B>/vicep</B>)
partition. It is acceptable to use both AdvFS and AFS on the same
machine, but the cache partition and all AFS server partitions must be UFS
partitions.
<P><LI><B>No support for real-time kernel preemption or related lock
modes</B>
<P>AFS does not function correctly on a Digital UNIX machine when real-time
preemption of system calls is enabled in the kernel. Do not enable this
feature in any manner, including the following:
<UL>
<P><LI>By including the following statement in the <B>/usr/sys/conf/AFS</B>
file:
<PRE> options RT_PREEMPT_OPT
</PRE>
<P><LI>By including either of the following instructions in the
<B>/etc/sysconfigtab</B> file:
<PRE> rt_preempt_opt=1
rt-preempt-opt=1
</PRE>
</UL>
<P>Also, AFS does not function correctly when the value of the kernel
<B>lockmode</B> option is other than <B>0</B> (zero, the default) or
<B>2</B>. Lock mode values <B>1</B>, <B>3</B>, and
<B>4</B> are unsupported because they imply that real-time preemption is
enabled (indeed, enabling real-time preemption sets the lock mode to
<B>1</B> automatically).
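<P>To check the current lock mode before installing AFS, you can query the
kernel configuration. The following assumes the <B>lockmode</B> attribute
resides in the <B>generic</B> subsystem; verify against your Digital UNIX
documentation.
<PRE>   # <B>sysconfig -q generic lockmode</B>
</PRE>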
<P><LI><B>Building AFS from source requires /usr/sys/AFS directory</B>
<P>Building AFS from source for Digital UNIX requires that certain header
files (such as <B>cpus.h</B>) reside in the local
<B>/usr/sys/AFS</B> directory. This directory exists only if you
have previously incorporated AFS modifications into the kernel of the machine
on which you are performing the compilation. Otherwise, the required
header files reside only in the local directory called
<B>/usr/sys/</B><VAR>machine_name</VAR>.
<P>If the <B>/usr/sys/AFS</B> directory does not exist, issue the
following command to create it as a link:
<PRE> # <B>ln -s /usr/sys/</B><VAR>machine_name</VAR> <B>/usr/sys/AFS</B>
</PRE>
<P>When the compilation is complete, remove the link.
</UL>
<P><H3><A NAME="HDRREQ_HP" HREF="aurns002.htm#ToC_15">Product Notes for HP-UX Systems</A></H3>
<UL>
<P><LI><B>No support for the NFS/AFS Translator</B>
<P>AFS does not support use of HP-UX 11.0 machines as NFS/AFS
Translator machines.
<P><LI><B>Upgrade kernel extensions when upgrading the File Server</B>
<P>The AFS 3.6 version of the File Server uses the native HP-UX
threading package. When upgrading to the new File Server on a machine
that previously ran File Server version 3.5 or earlier, you must also
upgrade the AFS kernel extensions to the AFS 3.6 version.
<P>For instructions on upgrading server and client machines, see <A HREF="#HDRINSTALL">Upgrading Server and Client Machines to AFS 3.6</A>.
<P><LI><B>Possible lower limit on number of File Server threads</B>
<P>On some machines, HP-UX reduces the number of threads available to the File
Server to fewer than the AFS default of 128. To determine the maximum
number of threads available to the File Server (or any single process) on an
HP-UX machine, issue the following command:
<PRE> % <B>getconf _SC_THREAD_THREADS_MAX</B>
</PRE>
<P>As on other system types, the HP-UX File Server reserves seven threads for
special uses, so the maximum effective value for the <B>fileserver</B>
command's <B>-p</B> argument is seven less than the number reported
by the <B>getconf</B> command.
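<P>For example, in a POSIX-compliant shell you can compute the effective
maximum value for the <B>-p</B> argument directly:
<PRE>   % <B>echo $(( $(getconf _SC_THREAD_THREADS_MAX) - 7 ))</B>
</PRE>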
<P><LI><B>PAM can succeed inappropriately when pam_dial_auth module is
optional</B>
<P>For AFS authentication to work correctly for a service, all entries for the
service in the HP-UX PAM configuration file (<B>/etc/pam.conf</B>
by convention) must have the value <TT>optional</TT> in the third field, as
specified in <I>IBM AFS Quick Beginnings</I>. However, when you
make the <B>login</B> entry that invokes the <B>pam_dial_auth</B>
module optional in this way, it can mean that PAM succeeds (the user can
login) even when the user does not meet all of the <B>pam_dial_auth</B>
module's requirements. This is not usually considered
desirable.
<P>If you do not use dial-up authentication, comment out or remove the entry
for the <B>login</B> service that invokes the <B>pam_dial_auth</B>
module. If you do use dial-up authentication, you must develop a
configuration that meets your needs; consult the HP-UX documentation for
PAM and the <B>pam_dial_auth</B> module.
<P><LI><B>HP patch PHCO_18572 enables PAM to change to home directory</B>
<P>You must install Hewlett-Packard patch PHCO_18572 to enable HP-UX's
standard PAM to change to a user's home directory during login.
The patch is accessible for download via the UNIX File Transfer Protocol
(<B>ftp</B>) at the following address:
<DL>
<DD><P><B>ftp://hpatlse.atl.hp.com/hp-ux_patches/s700_800/11.X/PHCO_18572</B>
</DL>
<P>The patch is also available from HP Electronic Support Centers at the
following URLs.
<UL>
<P><LI>In the Americas and Asia Pacific: <A
HREF="http://us-support.external.hp.com"><B>http://us-support.external.hp.com</B></A>
<P><LI>In Europe: <A
HREF="http://europe-support.external.hp.com"><B>http://europe-support.external.hp.com</B></A>
</UL>
</UL>
<P><H3><A NAME="HDRREQ_IRIX" HREF="aurns002.htm#ToC_16">Product Notes for IRIX Systems</A></H3>
<UL>
<P><LI><B>kdump program does not work with dynamically loaded kernels</B>
<P>The AFS kernel dump program, <B>kdump</B>, cannot retrieve kernel
information from an IRIX system on which the dynamic kernel loader,
<B>ml</B>, was used to load AFS extensions. The <B>kdump</B>
program can read only static kernels into which AFS is built.
<P><LI><B>No AFS-modified remote commands</B>
<P>The AFS distribution for IRIX machines does not include AFS-modified
versions of any of the remote (<B>r*</B>) commands except
<B>inetd.afs</B>. Silicon Graphics has already modified the
IRIX versions of the remote commands to be compatible with AFS.
<P><LI><B>Do not run the fsr program</B>
<P>Do not run the IRIX File System Reorganizer (<B>fsr</B> program) on a
client cache partition (<B>/usr/vice/cache</B> directory or equivalent) or
AFS server partition (<B>/vicep</B> directory). The program can
corrupt or remove AFS data.
<P><LI><B>The timed daemon runs by default</B>
<P>The IRIX 6.5 distribution includes and starts the <B>timed</B>
time-synchronization daemon by default. If you want to use the
<B>runntp</B> program and the Network Time Protocol Daemon (NTPD) on AFS
server machines, as documented in <I>IBM AFS Quick Beginnings</I>, issue
the following commands. They disable the <B>timed</B> daemon and
remove it from the machine's startup sequence.
<PRE> # <B>/etc/chkconfig -f timed off</B>
# <B>/sbin/killall timed</B>
</PRE>
<P><LI><B>Default login program does not grant AFS tokens</B>
<P>The IRIX 6.5 distribution includes the <B>clogin</B> program as
the default login utility. This graphical utility does not grant AFS
tokens. If you want your users to obtain tokens during login, you must
disable the <B>clogin</B> program and substitute either the standard
command-line <B>login</B> program or the <B>xdm</B> graphical login
utility, both of which grant AFS tokens if AFS modifications have been
incorporated into the kernel. Issue the following command to disable
the <B>clogin</B> program.
<PRE> # <B>/etc/chkconfig -f visuallogin off</B>
</PRE>
</UL>
<P><H3><A NAME="HDRREQ_LNX" HREF="aurns002.htm#ToC_17">Product Notes for Linux Systems</A></H3>
<UL>
<P><LI><B>Supported kernel versions</B>
<P>The General Availability release of AFS 3.6 supports Red Hat
Software's Linux 6.0 (which incorporates kernel version
2.2.5-15) and Linux 6.1 (which incorporates kernel
version 2.2.12-20). The distribution also includes
AFS kernel extensions for kernel versions 2.2.10,
2.2.12, 2.2.13, and 2.2.14.
The AFS initialization script included in the AFS 3.6 distribution
automatically selects the appropriate kernel extensions for the kernel version
in use on the local machine.
<P>Red Hat Linux 6.0 and 6.1 include a compiled kernel, but for
the other supported kernel versions you must obtain kernel source and compile
the kernel yourself. In this case, you must use version
2.7.2.3 or higher of the <B>gcc</B> program, which is
part of the Linux distribution. Do not use other compilers.
<P>The Linux kernel-building tools by default create a symmetric
multiprocessor (SMP) kernel, which can run on both uniprocessor and
multiprocessor machines. However, a uniprocessor machine generally
performs best with a uniprocessor kernel.
<P>You can obtain Linux kernel source via the UNIX File Transfer Protocol
(<B>ftp</B>) at <B>ftp.kernel.org</B> or one of its
mirror sites. There is also kernel upgrade information at <A
HREF="http://www.kernel.org"><B>http://www.kernel.org</B></A>.
<P><LI><B>AFS requires libc6</B>
<P>For correct AFS performance, the operating system must use the C library
called <B>libc6</B> (or <B>glibc2</B>), rather than <B>libc5</B>
(<B>glibc1</B>).
<P><LI><B>Modified insmod program required with some kernels</B>
<P>If using an SMP kernel or a uniprocessor kernel configured to use more than
1 GB of memory, you must use a modified version of the <B>insmod</B>
program. You do not need the modified program if using a standard
uniprocessor kernel.
<P>You can download the modified <B>insmod</B> program at the following
URLs:
<UL>
<P><LI><A
HREF="http://www.transarc.com/Support/afs/index.html"><B>http://www.transarc.com/Support/afs/index.html</B></A>.
See the <B>Downloads</B> section of the page. To comply with the
GNU Public License (GPL), the download site also makes available the complete
modified <B>insmod.c</B> source file and a source-code patch
against the original <B>insmod.c</B> file.
<P><LI><A
HREF="http://www.pi.se/blox/modutils/index.html"><B>http://www.pi.se/blox/modutils/index.html</B></A>.
Select the file listed at the top of the index. This is a site for
Linux <B>modutils</B> source code.
</UL>
<P><LI><B>No support for the NFS/AFS Translator</B>
<P>AFS does not support the use of Linux machines as NFS/AFS Translator
machines.
<P><LI><B>No AuthLog database</B>
<P>The Authentication Server running on a Linux machine creates and writes
messages to the <B>/usr/afs/logs/AuthLog</B> file, just as on other system
types. However, it does not create or use the two files which
constitute the auxiliary AuthLog database on other system types
(<B>AuthLog.dir</B> and <B>AuthLog.pag</B>). The
<B>kdb</B> command is therefore inoperative on Linux machines. The
auxiliary database is useful mostly for debugging and is not required for
normal operations.
<P><LI><B>Curses utility required for monitoring programs</B>
<P>For the <B>afsmonitor</B>, <B>scout</B> and <B>fms</B> programs
to work properly, the dynamic library <B>/usr/lib/libncurses.so</B>
must be installed on the machine. It is available in most Linux
distributions.
</UL>
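<P>The two library requirements above (libc6 and
<B>libncurses.so</B>) can be verified with a quick shell check.
The following is an illustrative sketch, not part of the AFS
distribution; it assumes a glibc-based Linux system with the
<B>ldd</B> and <B>ldconfig</B> tools available.

```shell
# Show which C library /bin/sh links against; AFS requires libc6
# (glibc2), whose shared object is named libc.so.6.
ldd /bin/sh | grep 'libc\.so'

# Report whether a libncurses shared library is registered with the
# dynamic linker (needed by the afsmonitor, scout, and fms programs).
ldconfig -p 2>/dev/null | grep -m 1 'libncurses' || echo "libncurses not found"
```

<P>If the first command reports <B>libc.so.5</B> rather than
<B>libc.so.6</B>, the machine does not meet the libc6 requirement.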
<P><H3><A NAME="HDRREQ_SOL" HREF="aurns002.htm#ToC_18">Product Notes for Solaris Systems</A></H3>
<UL>
<P><LI><B>Different location for 64-bit Solaris 7 kernel extensions</B>
<P>
<P>As noted in <A HREF="#HDRINSTALL">Upgrading Server and Client Machines to AFS 3.6</A>, the 64-bit version of Solaris 7 uses a different
location for kernel extension library files than previous versions of
Solaris: <B>/kernel/fs/sparcv9/afs</B>. The 32-bit
version of Solaris 7 uses the same location as Solaris 2.6,
<B>/kernel/fs/afs</B>.
<P><LI><B>SunSoft Patch 106541 for Solaris 7 replaces the /sbin/mountall
script</B>
<P>As part of replacing the standard <B>fsck</B> program on an AFS file
server machine that runs Solaris, you make two changes in the
<B>/sbin/mountall</B> script. If you use Solaris 7 and apply
SunSoft Patch 106541, it replaces the <B>/sbin/mountall</B> script.
This has two implications:
<OL TYPE=1>
<P><LI>If you apply the patch on an existing file server machine, the changes you
already made in the <B>/sbin/mountall</B> script are overwritten.
You must make the changes again in the new (replacement) script.
<P><LI>In the replacement script, the appearance of one of the sections of code
that you must alter is different than in the original script and as specified
in <I>IBM AFS Quick Beginnings</I>.
</OL>
<P>For more details, see <A HREF="#HDRDOC_NOTES">Documentation Notes</A>.
<P><LI><B>PAM can succeed inappropriately when pam_dial_auth module is
optional</B>
<P>For AFS authentication to work correctly for a service, all entries for the
service in the Solaris PAM configuration file (<B>/etc/pam.conf</B>
by convention) must have the value <TT>optional</TT> in the third field, as
specified in <I>IBM AFS Quick Beginnings</I>. However, when you
make the <B>login</B> entry that invokes the <B>pam_dial_auth</B>
module optional in this way, it can mean that PAM succeeds (the user can
login) even when the user does not meet all of the <B>pam_dial_auth</B>
module's required conditions. This is not usually considered
desirable.
<P>If you do not use dial-up authentication, comment out or remove the entry
for the <B>login</B> service that invokes the <B>pam_dial_auth</B>
module. If you do use dial-up authentication, you must develop a
configuration that meets your needs; consult the Solaris documentation
for PAM and the <B>pam_dial_auth</B> module.
<P>The AFS Development group has filed a Request for Enhancement (RFE
#4122186) with SunSoft for a design change that eliminates this problem with
the <B>pam_dial_auth</B> module. There is no projected solution
date. For further information, contact your AFS product support
representative.
<P><LI><B>Solaris 2.6 patches are required for CDE</B>
<P>There were several defects in the initial release of the Solaris 2.6
implementation of the Common Desktop Environment (CDE). They prevented
integrated AFS login from working consistently under CDE. SunSoft now
provides patches that correct the problems. To obtain support for use of
CDE from your AFS product support representative, you must install the
appropriate patches:
<UL>
<P><LI>If using version 1.2 of the Solaris CDE, install SunSoft patches
105703-03 and 106027-01 (or later revisions).
<P><LI>If using version 1.3 of the Solaris CDE, which is included on the
SDE CD-ROM, install SunSoft patch 106661-04 (or a later
revision).
</UL>
<P>Use the following command to determine which version of CDE you are
running:
<PRE> % <B>pkginfo -l SUNWdtdte</B>
</PRE>
</UL>
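<P>The PAM requirement described above (the value <TT>optional</TT> in
every entry's third field) can be checked mechanically. The following
sketch writes a hypothetical two-line <B>pam.conf</B> fragment to a
temporary file; the <B>awk</B> filter then prints the module path of any
<B>login</B> entry whose control flag is not <TT>optional</TT>, which is
the kind of entry that interferes with integrated AFS login.

```shell
# Hypothetical pam.conf fragment (Solaris format:
#   service  module-type  control-flag  module-path).
cat > /tmp/pam.conf.sample <<'EOF'
login   auth    optional        /usr/lib/security/pam_unix.so.1
login   auth    required        /usr/lib/security/pam_dial_auth.so.1
EOF

# Print the module path of each login entry that is NOT optional.
awk '$1 == "login" && $3 != "optional" {print $4}' /tmp/pam.conf.sample
# prints: /usr/lib/security/pam_dial_auth.so.1
```

<P>Running the same <B>awk</B> filter against the real
<B>/etc/pam.conf</B> file lists the entries that still need to be
changed to <TT>optional</TT>.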
<P><H3><A NAME="HDRDOC_NOTES" HREF="aurns002.htm#ToC_19">Documentation Notes</A></H3>
<UL>
<P><LI><B>Instructions for international edition of AFS are obsolete</B>
<P>As noted in <A HREF="#HDRSUMMARY">Summary of New Features</A>, the <I>IBM AFS Administration Guide</I> and <I>IBM
AFS Administration Reference</I> distinguish between United States and
international editions of AFS, because the documents were produced before a
relaxation of United States government export restrictions. AFS
3.6 includes just one edition. AFS customers in any country can
ignore the documented distinction between editions and use the United States
instructions if they choose.
<P><LI><B>Clarification on obtaining technical support</B>
<P>The AFS documents refer you to the <I>AFS Product Support group</I> for
technical assistance with AFS problems and questions. This is intended
to be a generic term. To learn how to obtain technical support, consult
your AFS license agreement or other materials from your AFS vendor.
<P><LI><B>Change to</B> <I>IBM AFS Quick Beginnings</I> <B>instructions
for enabling AFS login on AIX machines</B>
<P>If version 4.3.3.0 or higher of the AIX
<B>bos.rte.security</B> fileset is installed (usually true
on a machine using the AIX 4.3.3 kernel), edit the
<B>/usr/lib/security/methods.cfg</B> file instead of the
<B>/etc/security/login.cfg</B> file as documented in <I>IBM AFS
Quick Beginnings</I>.
<P>The change affects Step 3 in the section titled <I>Enabling AFS Login on
AIX Systems</I> in each of two chapters in <I>IBM AFS Quick
Beginnings</I>: <I>Installing the First AFS Machine</I> and
<I>Installing Additional Client Machines</I>. The corrected text
follows.
<P>
<P>Create or edit the <TT>DCE</TT> and <TT>AFS</TT> stanzas in one of two
files on the local disk:
<UL>
<P><LI>The <B>/usr/lib/security/methods.cfg</B> file, if version
4.3.3.0 or higher of the AIX
<B>bos.rte.security</B> fileset is installed on the machine
(usually true on a machine using the AIX 4.3.3 kernel)
<P><LI>The <B>/etc/security/login.cfg</B> file, if an earlier version
of the fileset is installed
</UL>
<P>Edit the stanzas as follows:
<UL>
<P><LI>In the <TT>DCE</TT> stanza, set the <TT>program</TT> attribute as
indicated.
<P>If you use the AFS Authentication Server (<B>kaserver</B>
process):
<PRE> DCE:
program = /usr/vice/etc/afs_dynamic_auth
</PRE>
<P>If you use a Kerberos implementation of AFS authentication:
<PRE> DCE:
program = /usr/vice/etc/afs_dynamic_kerbauth
</PRE>
<P><LI>In the <TT>AFS</TT> stanza, set the <TT>program</TT> attribute as
indicated.
<P>If you use the AFS Authentication Server (<B>kaserver</B>
process):
<PRE> AFS:
program = /usr/vice/etc/afs_dynamic_auth
</PRE>
<P>If you use a Kerberos implementation of AFS authentication:
<PRE> AFS:
program = /usr/vice/etc/afs_dynamic_kerbauth
</PRE>
</UL>
<P><LI><B>Change to</B> <I>IBM AFS Quick Beginnings</I> <B>instructions
for replacing Solaris fsck program</B>
<P>In two sections of <I>IBM AFS Quick Beginnings</I>, there are
instructions for editing the <B>/sbin/mountall</B> script on Solaris
machines as part of replacing the standard <B>fsck</B> program. The
two sections are <I>Configuring the AFS-modified fsck Program on Solaris
Systems</I> in the chapter about the first AFS machine and <I>Getting
Started on Solaris Systems</I> in the chapter about additional server
machines.
<P>If you use Solaris 7 and apply SunSoft Patch 106541, it replaces the
<B>/sbin/mountall</B> script. In the replacement script, the
appearance of one of the sections of code that you must alter is different
than in the original script and as specified in <I>IBM AFS Quick
Beginnings</I>, which is as follows:
<PRE> # For fsck purposes, we make a distinction between ufs and
# other file systems
#
if [ "$fstype" = "ufs" ]; then
ufs_fscklist="$ufs_fscklist $fsckdev"
saveentry $fstype "$OPTIONS" $special $mountp
continue
fi
</PRE>
<P>In the replacement script, the code is instead similar to the
following:
<PRE> # For fsck purposes, we make a distinction between ufs and
# other file systems. Here we check that the file system is
# not mounted using fsck -m before adding it to the list to
# be checked
#
if [ "$fstype" = "ufs" ]; then
/usr/sbin/fsck -m -F $fstype $fsckdev >/dev/null 2>&amp;1
if [ $? != 33 ]; then
ufs_fscklist="$ufs_fscklist $fsckdev"
saveentry $fstype "$OPTIONS" $special $mountp
continue
else
echo "$fsckdev already mounted"
continue
fi
fi
</PRE>
<P>You still need to change the first <TT>if</TT> statement (the one
directly after the comment) to check for both the UFS and AFS file system
types, as specified in <I>IBM AFS Quick Beginnings</I>:
<PRE> if [ "$fstype" = "ufs" -o "$fstype" = "afs" ]; then
</PRE>
<P><LI><B>Correction to</B> <I>IBM AFS Quick Beginnings</I>
<B>instructions for accessing AFS documents</B>
<P>The section of <I>IBM AFS Quick Beginnings</I> titled <I>Storing AFS
Documents in AFS</I> (in the chapter about the first AFS machine)
incorrectly describes the organization of the top-level
<B>Documentation</B> directory on the AFS CD-ROM. It states that
there is a subdirectory for each document format. Instead, there is a
subdirectory for each language in which the documents are available, named
using the following codes:
<DL>
<DD><P><B>de_DE</B> for German
<DD><P><B>en_US</B> for United States English
<DD><P><B>es_ES</B> for Spanish
<DD><P><B>ko_KR</B> for Korean
<DD><P><B>pt_BR</B> for Brazilian Portuguese
<DD><P><B>zh_CN</B> for Simplified Chinese
<DD><P><B>zh_TW</B> for Traditional Chinese
</DL>
<P>In each language directory is a subdirectory for each available document
format. In each format directory is a subdirectory for each
document. For example, the path on the CD-ROM to the English-language
HTML version of the <I>IBM AFS Quick Beginnings</I> is
<B>Documentation/en_US/HTML/QkBegin</B>.
<TABLE><TR><TD ALIGN="LEFT" VALIGN="TOP"><B>Note:</B></TD><TD ALIGN="LEFT" VALIGN="TOP">Not all documents are available in every language, as determined by the IBM
translation center responsible for each language. All documents are
available in English.
</TD></TR></TABLE>
<P>Assuming that you want to install the documentation for one language only,
substitute the following text for Step 5 in the instructions in <I>Storing
AFS Documents in AFS</I>:
<P>
<P>Copy the AFS documents in one or more formats from the CD-ROM into
subdirectories of the <B>/afs/</B><VAR>cellname</VAR><B>/afsdoc</B>
directory. Repeat the commands for each format.
<PRE> # <B>mkdir</B> <VAR>format_name</VAR>
# <B>cd</B> <VAR>format_name</VAR>
# <B>cp -rp /cdrom/Documentation/</B><VAR>language_code</VAR><B>/</B><VAR>format</VAR> <B>.</B>
</PRE>
<P>If you choose to store the HTML version of the documents in AFS, note that
in addition to a subdirectory for each document there are several files with a
<B>.gif</B> extension, which enable readers to move easily between
sections of a document. The file called <B>index.htm</B> is
an introductory HTML page that contains a hyperlink to each of the
documents. For online viewing to work properly, these files must remain
in the top-level HTML directory (the one named, for example,
<B>/afs/</B><VAR>cellname</VAR><B>/afsdoc/HTML</B>).
<P><LI><B>Revised reference page for NetRestrict files</B>
<P>The <I>IBM AFS Administration Guide</I> and <I>IBM AFS Administration
Reference</I> incorrectly state that the value <B>255</B> acts as a
wildcard in IP addresses that appear in the <B>NetRestrict</B> file
(client or server version). Wildcarding does not work and is not
supported. For corrected documentation, see <A HREF="#HDRCLI_NETRESTRICT">NetRestrict (client version)</A> and <A HREF="#HDRSV_NETRESTRICT">NetRestrict (server version)</A>.
<P><LI><B>Revised reference pages for backup commands and configuration
file</B>
<P>The <I>IBM AFS Administration Guide</I> and <I>IBM AFS Administration
Reference</I> do not document the interoperation of the AFS Backup System
and the Tivoli Storage Manager (TSM), because support for TSM was added after
the documents were produced.
<P>For a complete description of the new TSM-related features and
configuration procedures, see <A HREF="#HDRTSM">Support for Backup to TSM</A> and the indicated reference pages:
<DL>
<DD><P><A HREF="#HDRBK_DELETEDUMP">backup deletedump</A>
<DD><P><A HREF="#HDRBK_DUMPINFO">backup dumpinfo</A>
<DD><P><A HREF="#HDRBK_STATUS">backup status</A>
<DD><P><A HREF="#HDRCFG">CFG_<I>tcid</I></A>
</DL>
<P><LI><B>Revised reference page for vos delentry command</B>
<P>The <I>IBM AFS Administration Guide</I> and <I>IBM AFS Administration
Reference</I> incorrectly state that the <B>vos delentry</B> command
accepts the name or volume ID number of any type of volume (read/write,
read-only, or backup). In fact, it accepts only a read/write
volume's name or ID. Because a single VLDB entry represents all
versions of a volume (read/write, read-only, and backup), the command removes
the entire entry even though only the read/write volume is specified.
For complete documentation, see <A HREF="#HDRVOS_DELENTRY">vos delentry</A>.
</UL>
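<P>For reference, the exit-code test at the heart of the replacement
<B>mountall</B> script can be illustrated in isolation. On Solaris,
<B>fsck -m</B> exits with code 33 when a file system is already
mounted; the sketch below substitutes placeholder probe commands for
the real <B>fsck</B> invocation so the branching logic can be run on
any system.

```shell
# Mirror of the branch in the replacement /sbin/mountall: run a probe
# command, then treat exit code 33 as "already mounted" (the fsck -m
# convention).
check_device() {
    "$@" >/dev/null 2>&1
    if [ $? != 33 ]; then
        echo "not mounted: add to fsck list"
    else
        echo "already mounted"
    fi
}

# Placeholder probes standing in for: /usr/sbin/fsck -m -F $fstype $fsckdev
check_device true                # exits 0  -> not mounted
check_device sh -c 'exit 33'     # exits 33 -> already mounted
```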
<HR><H2><A NAME="HDRCMD-CHANGES" HREF="aurns002.htm#ToC_20">Changes to AFS Commands, Files, and Functionality</A></H2>
<P>This section briefly describes commands, command
options, and functionality that are new or changed in AFS 3.6.
Unless otherwise noted, the <I>IBM AFS Administration Guide</I> and
<I>IBM AFS Administration Reference</I> include complete documentation of
these items.
<P><H3><A NAME="Header_21" HREF="aurns002.htm#ToC_21">A New Command</A></H3>
<P>AFS 3.6 includes the new <B>fs flushmount</B>
command. The command's intended use is to discard information
about mount points that has become corrupted in the cache. The next
time an application accesses the mount point, the Cache Manager must fetch the
most current version of it from a File Server. Data cached from files
or directories in the volume is not affected. The only other way to
discard the information is to reinitialize the Cache Manager by rebooting the
machine.
<P>Symptoms of a corrupted mount point include garbled output from the
<B>fs lsmount</B> command, and failed attempts to change directory to or
list the contents of the volume root directory represented by the mount
point.
<P><H3><A NAME="HDRNEW_CMDS" HREF="aurns002.htm#ToC_22">New File or Command Functionality</A></H3>
<P>AFS 3.6 adds the following new options and
functionality to existing commands and files.
<UL>
<P><LI><B>Changes that support XBSA servers</B>
<P>Several <B>backup</B> commands and configuration files include new
features that support backup to XBSA servers such as TSM. See <A HREF="#HDRTSM_NEW">New Command and File Features that Support TSM</A>.
<P><LI><B>New instructions in the CFG_</B><VAR>tcid</VAR> <B>file</B>
<P>There are new instructions in the <B>CFG_</B><VAR>tcid</VAR> file that
apply to all types of backup media: <B>CENTRALLOG</B>,
<B>GROUPID</B>, <B>LASTLOG</B>, <B>MAXPASS</B>, and
<B>STATUS</B>. (There are also new instructions that apply only to
XBSA servers, as documented in <A HREF="#HDRTSM_NEW">New Command and File Features that Support TSM</A>.)
<P>The new instructions are not documented in the <I>IBM AFS Administration
Guide</I> or <I>IBM AFS Administration Reference</I>. See <A HREF="#HDRCFG">CFG_<I>tcid</I></A>. (Note that this is a new way of referring to this
file, called <B>CFG_</B><VAR>device_name</VAR> in the <I>IBM AFS
Administration Guide</I> and <I>IBM AFS Administration
Reference</I>. For a Tape Coordinator that communicates with an XBSA
server, the variable part of the filename is a port offset number rather than
a device name, so the more generic <VAR>tcid</VAR> is a better description of
possible values in this part of the filename.)
<P><LI><B>New -temporary flag to backup addvolset command</B>
<P>The <B>backup addvolset</B> command has a new <B>-temporary</B>
flag. A temporary volume set is not recorded in the Backup Database and
exists only during the lifetime of the interactive session in which it is
created.
<P><LI><B>New options to the backup deletedump command</B>
<P>There are new options to the <B>backup deletedump</B> command:
the <B>-groupid</B> argument specifies the group ID number associated with
the dump records to delete, and the <B>-noexecute</B> flag displays a list
of the records to be deleted rather than actually deleting them. (There
are also new options that apply only to records for data dumped to an XBSA
server, as documented in <A HREF="#HDRTSM_NEW">New Command and File Features that Support TSM</A>.)
<P>The new options are not documented in the <I>IBM AFS Administration
Guide</I> or <I>IBM AFS Administration Reference</I>. See <A HREF="#HDRBK_DELETEDUMP">backup deletedump</A>.
<P><LI><B>New output from the backup dumpinfo command</B>
<P>When both the <B>-id</B> and <B>-verbose</B> options to the
<B>backup dumpinfo</B> command are provided, the output is divided into
several sections. In the first section, headed by the label
<TT>Dump</TT>, the new <TT>Group id</TT> field replaces the <TT>id</TT>
field that previously appeared about halfway down the list of fields (the
first field in the section is still labeled <TT>id</TT>). The
<TT>Group id</TT> field reports the dump's group ID number, which is
recorded in the Backup Database if the <B>GROUPID</B> instruction appears
in the Tape Coordinator's <B> /usr/afs/backup/CFG_</B><VAR>tcid</VAR>
file when the dump is created.
<P>(The command's output also includes a new message that reports whether
the dump data is stored on an XBSA server, as detailed in <A HREF="#HDRTSM_NEW">New Command and File Features that Support TSM</A>.)
<P>The new output is not documented in the <I>IBM AFS Administration
Guide</I> or <I>IBM AFS Administration Reference</I>. See <A HREF="#HDRBK_DUMPINFO">backup dumpinfo</A>.
<P><LI><B>BOS Server sends additional field to notifier programs</B>
<P>The AFS 3.6 BOS Server sends additional information to notifier
programs when an AFS server process exits. The <B>bnode_proc</B>
structure now includes the <B>lastExit</B> field, which reports the exit
code associated with the process's most recent exit. Previously,
the only information about exit codes available to the notifier program was in
the <B>bnode</B> structure's <B>errorCode</B> field, which
records the exit code generated when the process last exited due to an
error. The BOS Server does not clear the <B>errorCode</B> field, so
the value set at the last exit due to error is reported even for exits that
are not due to error.
<P>If your notifier program currently checks the <B>errorCode</B> field
but you really want a notification only when the most recent exit is due to an
error, change the program to check the <B>lastExit</B> field in the
<B>bnode_proc</B> structure instead. An error code appears in the
<B>lastExit</B> field only if the most recent exit really was due to an
error (in which case the same code also appears in the <B>errorCode</B>
field).
<P>The <B>bos create</B> command's reference page in the <I>IBM AFS
Administration Reference</I> describes all of the fields that the BOS Server
can include in the <B>bnode_proc</B> and <B>bnode</B>
structures. As noted there, the BOS Server does not necessarily include
every field in the structures it sends to a notifier program, because some of
them are for internal use. For best results, the notifier program must
correctly handle the absence of a field that it expects to find.
<P><LI><B>Only administrators can use the kas examine command's -showkey
flag</B>
<P>As in AFS 3.5, the AFS 3.6 Authentication Server does not
require that you disable authorization checking on its database server machine
before it returns the octal digits that constitute the encrypted password or
key stored in an Authentication Database entry, which was the requirement with
earlier versions of AFS. Instead, it always returns the octal digits,
as long as the connection between the <B>kas</B> command interpreter and
Authentication Server is encrypted. AFS 3.5 introduced the
<B>-showkey</B> flag to make the <B>kas examine</B> command display
the octal digits.
<P>This change in requirements creates a potential security exposure, however,
in that earlier versions of the <B>kas examine</B> command always display
the octal digits (instead of a checksum) when directed to an AFS 3.5 or
3.6 Authentication Server. To eliminate this exposure, in AFS
3.6 the Authentication Server returns octal digits only for a principal
that has the <TT>ADMIN</TT> flag in its Authentication Database
entry.
<P>The main effect of the new requirement is that only administrators can
include the <B>-showkey</B> flag on the AFS 3.6 <B>kas
examine</B> command. It does not effectively change the privilege
required to display the octal digits when using versions of the <B>kas
examine</B> command before AFS 3.5 Patch 2 (build level
<B>afs3.5 3.17</B>), because it was assumed with earlier
versions that only administrators were able to disable authorization
checking. It also does not affect the automated installation and
configuration tool provided for AFS for Windows, which still can be used only
by administrators.
<P><LI><B>The vos delentry command accepts only read/write volume names</B>
<P>The AFS 3.6 version of the <B>vos delentry</B> command accepts
only read/write volume names or volume ID numbers as values for its
<B>-id</B> or <B>-prefix</B> arguments. The new restriction is
not documented in the <I>IBM AFS Administration Guide</I> or <I>IBM AFS
Administration Reference</I>. See <A HREF="#HDRVOS_DELENTRY">vos delentry</A>.
</UL>
<HR><H2><A NAME="HDRTSM" HREF="aurns002.htm#ToC_23">Support for Backup to TSM</A></H2>
<P>AFS 3.6 introduces support for backing up AFS data to
media managed by the Tivoli Storage Manager (TSM), a third-party backup
program which implements the Open Group's Backup Service application
programming interface (API), also called <I>XBSA</I>. TSM was
formerly called the ADSTAR Distributed Storage Manager, or ADSM. It is
assumed that the administrator is familiar with TSM; explaining TSM or
XBSA concepts or terminology is beyond the scope of this document.
<P>See the following subsections:
<UL>
<P><LI><A HREF="#HDRTSM_NEW">New Command and File Features that Support TSM</A>
<P><LI><A HREF="#HDRTSM_REQ">Product Notes for Use of TSM</A>
<P><LI><A HREF="#HDRTSM_CONFIG">Configuring the Backup System and TSM</A>
</UL>
<P><H3><A NAME="HDRTSM_NEW" HREF="aurns002.htm#ToC_24">New Command and File Features that Support TSM</A></H3>
<P>The AFS 3.6 version of the following commands and
configuration files include new options or instructions to support backup to
TSM.
<UL>
<P><LI><B>New XBSA-related instructions in the CFG_</B><VAR>tcid</VAR>
<B>file</B>
<P>Several new or modified instructions in the <B>CFG_</B><VAR>tcid</VAR>
file support backup of AFS data to XBSA servers such as TSM:
<B>MGMTCLASS</B>, <B>NODE</B>, <B>PASSFILE</B>,
<B>PASSWORD</B>, <B>SERVER</B>, and <B>TYPE</B>. (There are
also new instructions that apply to automated tape devices and backup data
files as well as XBSA servers, as detailed in <A HREF="#HDRNEW_CMDS">New File or Command Functionality</A>.)
<P>The new instructions are not documented in the <I>IBM AFS Administration
Guide</I> or <I>IBM AFS Administration Reference</I>. See <A HREF="#HDRCFG">CFG_<I>tcid</I></A>. (Note that this is a new way of referring to this
file, called <B>CFG_</B><VAR>device_name</VAR> in the <I>IBM AFS
Administration Guide</I> and <I>IBM AFS Administration
Administration Reference</I>. For a Tape Coordinator that communicates with an
XBSA server such as TSM, the variable part of the filename is a port offset number
rather than a device name, so the more generic <VAR>tcid</VAR> is a better
description of possible values in this part of the filename.)
<P><LI><B>New options to the backup deletedump command</B>
<P>The <B>backup deletedump</B> command has new options that enable you to
delete dump records stored on an XBSA server such as TSM, as well as the
corresponding Backup Database records:
<UL>
<P><LI>The <B>-dbonly</B> flag deletes Backup Database records without
attempting to delete the corresponding records stored on the XBSA
server.
<P><LI>The <B>-force</B> flag deletes Backup Database records even if it is
not possible to delete the corresponding records stored on the XBSA
server.
<P><LI>The <B>-port</B> argument identifies the Tape Coordinator that
communicates with the XBSA server on which to delete records.
</UL>
<P>There are also two new options that apply to automated tape devices and
backup data files as well as XBSA servers, as detailed in <A HREF="#HDRNEW_CMDS">New File or Command Functionality</A>.
<P>The new options are not documented in the <I>IBM AFS Administration
Guide</I> or <I>IBM AFS Administration Reference</I>. See <A HREF="#HDRBK_DELETEDUMP">backup deletedump</A>.
<P><LI><B>New output from the backup dumpinfo command</B>
<P>When the <B>-id</B> option is provided to the <B>backup
dumpinfo</B> command, the following line appears in the output if a TSM
server was the backup medium for the dump:
<PRE> Backup Service: TSM: Server: <VAR>hostname</VAR>
</PRE>
<P>where <VAR>hostname</VAR> is the name of the TSM server machine.
(There is also new output for dumps to all types of backup media, as detailed
in <A HREF="#HDRNEW_CMDS">New File or Command Functionality</A>.)
<P>The new output is not documented in the <I>IBM AFS Administration
Guide</I> or <I>IBM AFS Administration Reference</I>. See <A HREF="#HDRBK_DUMPINFO">backup dumpinfo</A>.
<P><LI><B>New output from the backup status command</B>
<P>If the Tape Coordinator is communicating with a TSM server, the following
message appears last in the output from the <B>backup status</B>
command:
<PRE> TSM Tape coordinator
</PRE>
<P>The new output is not documented in the <I>IBM AFS Administration
Guide</I> or <I>IBM AFS Administration Reference</I>. See <A HREF="#HDRBK_STATUS">backup status</A>.
</UL>
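<P>Sites that script against <B>backup dumpinfo</B> output can extract
the TSM server machine name from the new message shown above. The
sketch below operates on a hypothetical copy of that output line; the
host name is a placeholder.

```shell
# Hypothetical 'backup dumpinfo -id' output line; strip everything up
# to and including the "TSM: Server:" label to leave the host name.
echo "Backup Service: TSM: Server: tsm01.example.com" |
    sed -n 's/.*TSM: Server: *//p'
# prints: tsm01.example.com
```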
<P><H3><A NAME="HDRTSM_REQ" HREF="aurns002.htm#ToC_25">Product Notes for Use of TSM</A></H3>
<UL>
<P><LI><B>Supported Tape Coordinator machine types</B>
<P>To communicate with a TSM server, the Tape Coordinator must run on a
machine that uses one of the following operating systems: AIX 4.3
or higher, Solaris 2.6, or Solaris 7.
<P><LI><B>Supported version of TSM API</B>
<P>The AFS 3.6 Tape Coordinator uses version 3.7.1
(Version 3, Release 7) of the TSM client API. Use of other versions of
the API is not supported or tested. For instructions on obtaining the
API, see <A HREF="#HDRTSM_CONFIG">Configuring the Backup System and TSM</A>.
<P><LI><B>CFG_</B><VAR>tcid</VAR> <B>file is required for TSM servers</B>
<P>To communicate with a TSM server, a Tape Coordinator must have a
<B>CFG_</B><VAR>tcid</VAR> file that includes the following fields:
<B>SERVER</B>, <B>TYPE</B>, and <B>PASSWORD</B> or
<B>PASSFILE</B>. For instructions on creating the file, see <A HREF="#HDRTSM_CONFIG">Configuring the Backup System and TSM</A>.
<P><LI><B>No entry in tapeconfig file for TSM servers</B>
<P>Do not create an entry in the <B>/usr/afs/backup/tapeconfig</B> file
for a Tape Coordinator that communicates with an XBSA server such as
TSM. Creating the <B>CFG_</B><VAR>tcid</VAR> file is
sufficient.
<P><LI><B>Acceptable value for the TYPE instruction</B>
<P>In AFS 3.6, there is one acceptable value for the <B>TYPE</B>
instruction in the <B>CFG_</B><VAR>tcid</VAR> file:
<B>tsm</B>.
<P><LI><B>TSM node name must be defined</B>
<P>If the <B>NODE</B> instruction is not included in the
<B>/usr/afs/backup/CFG_</B><VAR>tcid</VAR> file, the TSM
<B>dsm.sys</B> file must define a value for the NODENAME
variable.
<P><LI><B>Unsupported backup commands and options</B>
<P>The following commands are not supported for XBSA servers such as
TSM. In other words, the commands fail with an error message when their
<B>-port</B> argument specifies a Tape Coordinator that communicates with
an XBSA server:
<UL>
<P><LI><B>backup labeltape</B>
<P><LI><B>backup readlabel</B>
<P><LI><B>backup restoredb</B>
<P><LI><B>backup savedb</B>
<P><LI><B>backup scantape</B>
</UL>
<P>In addition, the <B>-append</B> flag to the <B>backup dump</B>
command is ignored when the <B>-port</B> argument specifies a Tape
Coordinator that communicates with an XBSA server (the notion of appended
dumps does not apply to XBSA servers).
</UL>
<P><H3><A NAME="HDRTSM_CONFIG" HREF="aurns002.htm#ToC_26">Configuring the Backup System and TSM</A></H3>
<P>Perform the following steps to configure TSM and the
AFS Backup System for interoperation.
<TABLE><TR><TD ALIGN="LEFT" VALIGN="TOP"><B>Note:</B></TD><TD ALIGN="LEFT" VALIGN="TOP">You possibly need to perform additional TSM configuration procedures
unrelated to AFS. See the TSM documentation.
</TD></TR></TABLE>
<OL TYPE=1>
<P><LI>Become the local superuser <B>root</B>, if you are not already.
<P>
<PRE> % <B>su root</B>
Password: <VAR>root_password</VAR>
</PRE>
<P><LI>Install version 3.7.1 of the TSM client API on the local
disk of the Tape Coordinator machine. If you do not already have the
API, you can use the following instructions to download it using the UNIX File
Transfer Protocol (<B>ftp</B>).
<OL TYPE=a>
<P><LI>Verify that there is enough free space on the local disk to accommodate
the API package:
<UL>
<P><LI>On AIX systems, 4 MB on the disk that houses the <B>/usr/tivoli</B>
directory
<P><LI>On Solaris systems, 13 MB on the disk that houses the
<B>/opt/tivoli</B> directory
</UL>
<P><LI>Connect to the <B>ftp</B> server called
<B>ftp.software.ibm.com</B>, logging in as
<B>anonymous</B> and providing your electronic mail address as the
password.
<P><LI>Switch to binary mode.
<PRE> ftp> <B>bin</B>
</PRE>
<P><LI>Change directory as indicated:
<PRE> ftp> <B>cd storage/tivoli-storage-management-maintenance/client/v3r7</B>
</PRE>
<P><LI>Change to the appropriate directory and retrieve the API file.
<UL>
<P><LI>On an AIX 4.3 system:
<PRE> ftp> <B>cd AIX/v371</B>
ftp> <B>get tivoli.tsm.client.api.aix43.32bit</B>
</PRE>
<P><LI>On a Solaris 2.6 or 7 system:
<PRE> ftp> <B>cd Solaris/v371</B>
ftp> <B>get IP21804.tar.Z</B>
</PRE>
</UL>
<P><LI>Use the appropriate tool to install the TSM API package locally:
<UL>
<P><LI>On AIX machines, use <B>smit</B>, which installs the files in the
<B>/usr/tivoli/tsm/client/api/bin</B> directory
<P><LI>On Solaris machines, use the following command, which installs the files
in the <B>/opt/tivoli/tsm/client/api/bin</B> directory:
<PRE> # <B>uncompress -c IP21804.tar.Z | tar xvf -</B>
</PRE>
</UL>
</OL>
<P><LI>Set the following TSM environment variables as indicated. If you do
not set them, you must use the default values specified in the TSM
documentation.
<DL>
<P><DT><B>DSMI_DIR
</B><DD>Specifies the pathname of the directory that contains the TSM client
system options file, <B>dsm.sys</B>. The directory must have
a subdirectory (which can be a symbolic link) called <B>en_US</B> that
contains the <B>dsmclientV3.cat</B> catalog file.
<P>Do not put a final slash ( <B>/</B> ) on the directory name.
Examples of appropriate values are <B>/opt/tivoli/tsm/client/api/bin</B>
on Solaris machines and <B>/usr/tivoli/tsm/client/api/bin</B> on AIX
machines.
<P><DT><B>DSMI_CONFIG
</B><DD>Specifies the pathname of the directory that contains the TSM client user
options file, <B>dsm.opt</B>. The value can be the same as
for the <B>DSMI_DIR</B> variable. Do not put a final slash (
<B>/</B> ) on the directory name.
<P><DT><B>DSMI_LOG
</B><DD>Specifies the full pathname (including the filename) of the log file for
error messages from the API. An appropriate value is
<B>/usr/afs/backup/butc.TSMAPI.log</B>.
</DL>
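<P>The three variables above are typically set in the environment of the shell
(or initialization script) that starts the Tape Coordinator. The following
sketch uses the Solaris default pathnames mentioned above as example values;
adjust them to match where the TSM API is actually installed on your machine.

```shell
# Example environment for the butc process (a sketch; the pathnames
# are illustrative). Note: no trailing slash on the directory names.
DSMI_DIR=/opt/tivoli/tsm/client/api/bin      # contains dsm.sys and en_US/
DSMI_CONFIG=/opt/tivoli/tsm/client/api/bin   # contains dsm.opt
DSMI_LOG=/usr/afs/backup/butc.TSMAPI.log     # full pathname of API error log
export DSMI_DIR DSMI_CONFIG DSMI_LOG
```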
<P><LI><A NAME="LIWQ5"></A>Verify that the <B>dsm.sys</B> file includes the
following instructions. For a description of the fields, see the TSM
documentation.
<PRE> ServerName <VAR>machine_name</VAR>
CommMethod tcpip
TCPPort <VAR>TSM_port</VAR>
TCPServerAddress <VAR>full_machine_name</VAR>
PasswordAccess prompt
Compression yes
</PRE>
<P>The following is an example of appropriate values:
<PRE> ServerName tsm3
CommMethod tcpip
TCPPort 1500
TCPServerAddress tsm3.abc.com
PasswordAccess prompt
Compression yes
</PRE>
<P><LI>Verify that the <B>dsm.opt</B> file includes the following
instructions. For a description of the fields, see the TSM
documentation.
<PRE> ServerName <VAR>machine_name</VAR>
tapeprompt no
compressalways yes
</PRE>
<P><LI><A NAME="LITSM_BK_ADDHOST"></A>Create a Backup Database entry for each Tape
Coordinator that is to communicate with the TSM server. Multiple Tape
Coordinators can interact with the same TSM server if the server has
sufficient capacity.
<PRE> # <B>backup addhost</B> &lt;<VAR>tape&nbsp;machine&nbsp;name</VAR>> &lt;<VAR>TC&nbsp;port&nbsp;offset</VAR>>
</PRE>
<P>where
<DL>
<P><DT><B><VAR>tape machine name</VAR>
</B><DD>Specifies the fully qualified hostname of the Tape Coordinator
machine.
<P><DT><B><VAR>TC port offset</VAR>
</B><DD>Specifies the Tape Coordinator's port offset number.
Acceptable values are integers in the range from <B>0</B> (zero) through
<B>58510</B>.
</DL>
<P><LI>Create a device configuration file for the Tape Coordinator called
<B>/usr/afs/backup/CFG_</B><VAR>tcid</VAR>, where <VAR>tcid</VAR> is the Tape
Coordinator's port offset number as defined in Step <A HREF="#LITSM_BK_ADDHOST">6</A>. The file must include the following
instructions:
<UL>
<P><LI><B>SERVER</B>, which takes as its argument the fully qualified
hostname of the TSM server machine. It matches the value in the
<VAR>full_machine_name</VAR> field of the <B>dsm.sys</B> file, as
defined in Step <A HREF="#LIWQ5">4</A>.
<P><LI><B>TYPE</B>, which takes as its argument the string <B>tsm</B>
(the only acceptable value in AFS 3.6).
<P><LI>One of <B>PASSWORD</B> or <B>PASSFILE</B>, to define the password
which the Tape Coordinator uses when communicating with the TSM server.
<B>PASSWORD</B> takes as its argument the actual password character
string. <B>PASSFILE</B> takes as its argument the complete pathname
of the file that contains the string on its first line.
</UL>
<P>For more detailed descriptions of the instructions, and of other
instructions you can include in the configuration file, see <A HREF="#HDRCFG">CFG_<I>tcid</I></A>.
</OL>
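<P>As an illustration, a minimal device configuration file for a Tape
Coordinator with port offset 22, stored as
<B>/usr/afs/backup/CFG_22</B>, might look like the following. The server
hostname matches the earlier <B>dsm.sys</B> example; the password file
pathname is hypothetical.

```
SERVER tsm3.abc.com
TYPE tsm
PASSFILE /usr/afs/backup/tsmpass
```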
<HR><H2><A NAME="HDRINSTALL" HREF="aurns002.htm#ToC_27">Upgrading Server and Client Machines to AFS 3.6</A></H2>
<P>This section explains how to upgrade server and client
machines from AFS 3.5 or AFS 3.6 Beta to AFS 3.6.
Before performing an upgrade, please read all of the introductory material in
this section.
<P>If you are installing AFS for the first time, skip this chapter and refer
to the <I>IBM AFS Quick Beginnings</I> document for AFS 3.6.
<P>AFS provides backward compatibility to the previous release only: AFS
3.6 is certified to be compatible with AFS 3.5 but not
necessarily with earlier versions.
<TABLE><TR><TD ALIGN="LEFT" VALIGN="TOP"><B>Note:</B></TD><TD ALIGN="LEFT" VALIGN="TOP">This document does not provide instructions for upgrading from AFS
3.4a or earlier directly to AFS 3.6. A file system
conversion is required on some system types. See the <I>AFS Release
Notes</I> for AFS 3.5 and contact your AFS product support
representative for assistance.
</TD></TR></TABLE>
<P><H3><A NAME="Header_28" HREF="aurns002.htm#ToC_28">Prerequisites for Upgrading</A></H3>
<P>You must meet the following requirements to upgrade successfully to AFS
3.6:
<UL>
<P><LI>You can access the AFS 3.6 binaries by network, or have the CD-ROM
labeled <B>AFS Version 3.6</B> for each system type you need to
upgrade. See <A HREF="#HDRGETBIN">Obtaining the Binary Distribution</A>.
<P><LI>You have access to the <I>IBM AFS Quick Beginnings</I> document for
AFS 3.6, either in hardcopy (for English, IBM document number
SC09-4560-00, part number CT6Q7NA) or online at <A HREF="http://www.transarc.com/Library/documentation/afs_doc.html"><B>http://www.transarc.com/Library/documentation/afs_doc.html</B></A>.
See also <A HREF="#HDRDOC">Accessing the AFS Binary Distribution and Documentation</A>.
<P><LI>The partition that houses the <B>/usr/afs/bin</B> directory on each
server machine has at least 18 MB of disk space for storing the AFS server
binaries.
<P><LI>The partition that houses the <B>/usr</B> directory on each client
machine has at least 4 MB of disk space for storing the AFS client binaries
and kernel library files (stored by convention in the <B>/usr/vice/etc</B>
directory).
<P><LI>You can log into all server and client machines as the local superuser
<B>root</B>.
<P><LI>You are listed in the cell's <B>/usr/afs/etc/UserList</B> file
and can authenticate as a member of the <B>system:administrators</B>
group.
</UL>
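<P>To check the disk-space prerequisites before you begin, you can run a quick
sketch like the following on each machine (shown here for the <B>/usr</B>
partition; on a server machine, substitute the partition that houses
<B>/usr/afs/bin</B>). The <B>df -k</B> field positions assumed here are
common but vary by platform, so treat this as illustrative rather than
definitive.

```shell
# Sketch: confirm at least 18 MB free for the AFS 3.6 server binaries.
# df -k reports sizes in kilobytes; awk field $4 (available space)
# matches common df output layouts but may differ on some systems.
REQUIRED_KB=18432                      # 18 MB
AVAIL_KB=$(df -k /usr | awk 'NR==2 {print $4}')
if [ "${AVAIL_KB:-0}" -ge "$REQUIRED_KB" ]; then
    echo "OK: ${AVAIL_KB} KB available"
else
    echo "WARNING: only ${AVAIL_KB:-unknown} KB available"
fi
```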
<P><H3><A NAME="HDRGETBIN" HREF="aurns002.htm#ToC_29">Obtaining the Binary Distribution</A></H3>
<P>Use one of the following methods to obtain the AFS
distribution of each system type for which you are licensed.
<UL>
<P><LI>Working with your AFS Sales Representative, obtain the AFS 3.6
CD-ROM for each system type.
<P><LI>Access the distribution by network in IBM's Electronic Software
Distribution system.
</UL>
<P><H3><A NAME="HDRSTOREBIN" HREF="aurns002.htm#ToC_30">Storing Binaries in AFS</A></H3>
<P>It is conventional to store many of the programs and
files from the AFS binary distribution in a separate volume for each system
type, mounted in your AFS filespace at
<B>/afs/</B><VAR>cellname</VAR><B>/</B><VAR>sysname</VAR><B>/usr/afsws</B>.
These instructions rename the volume currently mounted at this location and
create a new volume for AFS 3.6 binaries.
<P>Repeat the instructions for each system type.
<OL TYPE=1>
<P><LI>Authenticate as an administrator listed in the
<B>/usr/afs/etc/UserList</B> file.
<P><LI>Issue the <B>vos create</B> command to create a new volume for AFS
3.6 binaries called
<VAR>sysname</VAR><B>.3.6</B>. Set an unlimited quota
on the volume to avoid running out of space as you copy files from the
distribution.
<PRE> % <B>vos create</B> &lt;<VAR>machine&nbsp;name</VAR>> &lt;<VAR>partition&nbsp;name</VAR>> <VAR>sysname</VAR><B>.3.6 -maxquota 0</B>
</PRE>
<P><LI>Issue the <B>fs mkmount</B> command to mount the volume at a temporary
location.
<PRE> % <B>fs mkmount /afs/.</B><VAR>cellname</VAR><B>/temp</B> <VAR>sysname</VAR><B>.3.6</B>
</PRE>
<P><LI>Prepare to access the files using the method you have selected:
<UL>
<P><LI>If copying files from the CD-ROM, mount the CD-ROM for this machine's
system type on the local <B>/cdrom</B> directory. For instructions
on mounting CD-ROMs (either locally or remotely via NFS), consult the
operating system documentation. Then change directory as
indicated.
<PRE> % <B>cd /cdrom/</B><VAR>sysname</VAR>
</PRE>
<P><LI>If accessing the distribution electronically, download the necessary file
or files. If necessary, use commands such as <B>gunzip</B> and
<B>tar xvf</B> to uncompress and unpack the distribution. Place the
contents in a temporary location (<VAR>temp_afs36_dir</VAR>) and change
directory to that location.
<PRE> % <B>cd</B> <VAR>temp_afs36_dir</VAR>
</PRE>
</UL>
<P><LI>Copy files from the distribution into the
<VAR>sysname</VAR><B>.3.6</B> volume.
<PRE> % <B>cp -rp bin /afs/.</B><VAR>cellname</VAR><B>/temp</B>
% <B>cp -rp etc /afs/.</B><VAR>cellname</VAR><B>/temp</B>
% <B>cp -rp include /afs/.</B><VAR>cellname</VAR><B>/temp</B>
% <B>cp -rp lib /afs/.</B><VAR>cellname</VAR><B>/temp</B>
</PRE>
<P><LI><A NAME="LISTOREBIN-CLIENT"></A><B>(Optional)</B> By convention, the contents of
the distribution's <B>root.client</B> directory are not stored
in AFS. However, if you are upgrading client functionality on many
machines, it can be simpler to copy the client files from your local AFS space
than from the CD-ROM or from IBM's Electronic Software Distribution
system. If you wish to store the contents of the
<B>root.client</B> directory in AFS temporarily, copy them
now.
<PRE> % <B>cp -rp root.client /afs/.</B><VAR>cellname</VAR><B>/temp</B>
</PRE>
<P><LI>Issue the <B>vos rename</B> command to change the name of the volume
currently mounted at the
<B>/afs/</B><I>cellname</I><B>/</B><I>sysname</I><B>/usr/afsws</B>
directory. Choose an <VAR>extension</VAR> that reflects the AFS
version and build level of the old volume (for example,
<B>3.5-bld3.32</B>).
<P>If you do not plan to retain the old volume, you can substitute the
<B>vos remove</B> command in this step.
<PRE> % <B>vos rename</B> <VAR>sysname</VAR><B>.usr.afsws</B> <VAR>sysname</VAR><B>.usr.afsws.</B><VAR>extension</VAR>
</PRE>
<P><LI>Issue the <B>vos rename</B> command to change the name of the
<VAR>sysname</VAR><B>.3.6</B> volume to
<VAR>sysname</VAR><B>.usr.afsws</B>.
<PRE> % <B>vos rename</B> <VAR>sysname</VAR><B>.3.6</B> <VAR>sysname</VAR><B>.usr.afsws</B>
</PRE>
<P><LI>Issue the <B>fs rmmount</B> command to remove the temporary mount
point for the <VAR>sysname</VAR><B>.3.6</B> volume.
<PRE> % <B>fs rmmount /afs/.</B><VAR>cellname</VAR><B>/temp</B>
</PRE>
</OL>
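<P>Taken together, the final volume-swap steps above look like the following
dry-run sketch. The cell name, system name, and extension are hypothetical
example values; the commands are only echoed so you can review them before
issuing the real <B>vos</B> and <B>fs</B> commands.

```shell
# Dry-run sketch of the volume swap (the rename and rmmount steps).
# Remove the "echo" prefixes to execute for real; the values below
# are examples only.
CELL=abc.com
SYSNAME=sun4x_56
EXTENSION=3.5-bld3.32
echo vos rename ${SYSNAME}.usr.afsws ${SYSNAME}.usr.afsws.${EXTENSION}
echo vos rename ${SYSNAME}.3.6 ${SYSNAME}.usr.afsws
echo fs rmmount /afs/.${CELL}/temp
```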
<P><H3><A NAME="HDROS-UP" HREF="aurns002.htm#ToC_31">Upgrading the Operating System</A></H3>
<P>AFS 3.6 supports the 64-bit versions of HP-UX
11.0 and Solaris 7. To upgrade from a 32-bit version,
you possibly need to reinstall the operating system completely before
installing AFS 3.6. When performing any operating system
upgrade, you must take several actions to preserve AFS functionality,
including the following:
<UL>
<P><LI>Unmount the AFS server partitions (those mounted on <B>/vicep</B>
directories) on all file server machines, to prevent the standard vendor
version of the <B>fsck</B> program from running on them when you reboot
the machine during installation of the new operating system. On several
operating systems, the standard <B>fsck</B> program does not recognize AFS
volume data and discards it. Also, disable automatic mounting of the
partitions during reboot until you have substituted the AFS <B>vfsck</B>
program for the vendor <B>fsck</B> program.
<P><LI>Create copies of the AFS-modified versions of binaries or files so that
they are not overwritten by the standard versions during the operating system
upgrade, particularly if you are not performing an immediate AFS
upgrade. Examples include the remote commands (<B>ftpd</B>,
<B>inetd</B>, <B>rcp</B>, <B>rsh</B>, and so on) and the
<B>vfsck</B> binary. After you have successfully installed the new
version of the operating system, move the AFS-modified files and commands back
to the directories from which they are accessed during normal use.
</UL>
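<P>The second precaution can be handled with a simple script along the
following lines. The save directory and file list are illustrative only; the
actual set of AFS-modified binaries depends on your system type and
configuration.

```shell
# Sketch: copy AFS-modified binaries aside before an OS upgrade so
# the vendor's installer cannot overwrite them. Adjust SAVE_DIR and
# the file list for your machine; missing files are simply skipped.
SAVE_DIR=/var/tmp/afs-modified-save
mkdir -p "$SAVE_DIR"
for f in /usr/sbin/vfsck /usr/sbin/ftpd /usr/sbin/inetd \
         /usr/bin/rcp /usr/bin/rsh; do
    if [ -f "$f" ]; then
        cp -p "$f" "$SAVE_DIR/"
    fi
done
```

After the operating system upgrade completes, move the saved files back into
place before restarting AFS.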
<P><H3><A NAME="HDRSV-BIN" HREF="aurns002.htm#ToC_32">Distributing Binaries to Server Machines</A></H3>
<P>The instructions in this section explain how to use the
Update Server to distribute server binaries from a binary distribution machine
of each system type.
<P>Repeat the steps on each binary distribution machine in your cell.
If you do not use the Update Server, repeat the steps on every server machine
in your cell. If you are copying files from the AFS product tree, the
server machine must also be configured as an AFS client machine.
<OL TYPE=1>
<P><LI>Become the local superuser <B>root</B>, if you are not already.
<P>
<PRE> % <B>su root</B>
Password: <VAR>root_password</VAR>
</PRE>
<P><LI>Create a temporary subdirectory of the <B>/usr/afs/bin</B> directory
to store the AFS 3.6 server binaries.
<PRE> # <B>mkdir /usr/afs/bin.36</B>
</PRE>
<P><LI>Prepare to access server files using the method you have selected from
those listed in <A HREF="#HDRGETBIN">Obtaining the Binary Distribution</A>:
<UL>
<P><LI>If copying files from the CD-ROM, mount the CD-ROM for this machine's
system type on the local <B>/cdrom</B> directory. For instructions
on mounting CD-ROMs (either locally or remotely via NFS), consult the
operating system documentation. Then change directory as
indicated.
<PRE> # <B>cd /cdrom/</B><VAR>sysname</VAR><B>/root.server/usr/afs/bin</B>
</PRE>
<P><LI>If accessing the distribution electronically, you possibly already
downloaded it in <A HREF="#HDRSTOREBIN">Storing Binaries in AFS</A>. If so, it is still in the <VAR>temp_afs36_dir</VAR>
directory. If not, download it and run any commands necessary to
uncompress or unpack the distribution. Place it in a temporary location
(<VAR>temp_afs36_dir</VAR>), and change directory to the indicated
subdirectory.
<PRE> # <B>cd</B> <VAR>temp_afs36_dir</VAR><B>/root.server/usr/afs/bin</B>
</PRE>
</UL>
<P><LI>Copy the server binaries from the distribution into the
<B>/usr/afs/bin.36</B> directory.
<PRE> # <B>cp -p * /usr/afs/bin.36</B>
</PRE>
<P><LI><A NAME="LISV-BIN-NEWDIRS"></A>Rename the current <B>/usr/afs/bin</B> directory
to <B>/usr/afs/bin.old</B> and the
<B>/usr/afs/bin.36</B> directory to the standard location.
<PRE> # <B>cd /usr/afs</B>
# <B>mv bin bin.old</B>
# <B>mv bin.36 bin</B>
</PRE>
</OL>
<P><H3><A NAME="HDRSV-UP" HREF="aurns002.htm#ToC_33">Upgrading Server Machines</A></H3>
<P>Repeat the following instructions on each server
machine. Perform them first on the database server machine with the
lowest IP address, next on the other database server machines, and finally on
other server machines.
<P>The AFS data stored on a server machine is inaccessible to client machines
during the upgrade process, so it is best to perform the upgrade at a time and
in a manner that disturbs your users least.
<OL TYPE=1>
<P><LI><A NAME="LISVUP-UPCLIENTBIN"></A>If you have just followed the steps in <A HREF="#HDRSV-BIN">Distributing Binaries to Server Machines</A> to install the server binaries on binary distribution
machines, wait the required interval (by default, five minutes) for the local
<B>upclientbin</B> process to retrieve the binaries.
<P>If you do not use binary distribution machines, perform the instructions in
<A HREF="#HDRSV-BIN">Distributing Binaries to Server Machines</A> on this machine.
<P><LI>Become the local superuser <B>root</B>, if you are not already, by
issuing the <B>su</B> command.
<PRE> % <B>su root</B>
Password: <VAR>root_password</VAR>
</PRE>
<P><LI>If the machine also functions as a client machine, prepare to access
client files using the method you have selected from those listed in <A HREF="#HDRGETBIN">Obtaining the Binary Distribution</A>:
<UL>
<P><LI>If you copied the contents of the <B>root.client</B> directory
into AFS (in Step <A HREF="#LISTOREBIN-CLIENT">6</A> of <A HREF="#HDRSTOREBIN">Storing Binaries in AFS</A>), change directory as indicated.
<PRE> # <B>cd /afs/</B><VAR>cellname</VAR><B>/</B><VAR>sysname</VAR><B>/usr/afsws/root.client</B>
</PRE>
<P><LI>If copying files from the CD-ROM, mount the CD-ROM for this machine's
system type on the local <B>/cdrom</B> directory. For instructions
on mounting CD-ROMs (either locally or remotely via NFS), consult the
operating system documentation. Then change directory as
indicated.
<PRE> # <B>cd /cdrom/</B><VAR>sysname</VAR><B>/root.client</B>
</PRE>
<P><LI>If accessing the distribution electronically, you possibly already
downloaded it in <A HREF="#HDRSTOREBIN">Storing Binaries in AFS</A>. If so, it is still in the <VAR>temp_afs36_dir</VAR>
directory. If not, download it and run any commands necessary to
uncompress or unpack the distribution. Place it in a temporary location
(<VAR>temp_afs36_dir</VAR>), and change directory to the indicated
subdirectory.
<PRE> # <B>cd</B> <VAR>temp_afs36_dir</VAR><B>/root.client</B>
</PRE>
</UL>
<P><LI>If the machine also functions as a client machine, copy the AFS 3.6
version of the <B>afsd</B> binary and other files to the
<B>/usr/vice/etc</B> directory.
<TABLE><TR><TD ALIGN="LEFT" VALIGN="TOP"><B>Note:</B></TD><TD ALIGN="LEFT" VALIGN="TOP">Some files in the <B>/usr/vice/etc</B> directory, such as the AFS
initialization file (called <B>afs.rc</B> on many system types), do
not necessarily need to change for a new release. It is a good policy
to compare the contents of the distribution directory and the
<B>/usr/vice/etc</B> directory before performing the copying
operation. If there are files in the <B>/usr/vice/etc</B> directory
that you created for AFS 3.5 or 3.6 Beta and that you want to
retain, either move them to a safe location before performing the following
instructions, or alter the following instructions to copy over only the
appropriate files.
</TD></TR></TABLE>
<PRE> # <B>cp -p usr/vice/etc/* /usr/vice/etc</B>
# <B>cp -rp usr/vice/etc/C /usr/vice/etc</B>
</PRE>
<P>If you have not yet incorporated AFS into the machine's authentication
system, perform the instructions in the section titled <I>Enabling AFS
Login</I> for this system type in the <I>IBM AFS Quick Beginnings</I>
chapter about configuring client machines. If this machine was running
the same operating system revision with AFS 3.5 or AFS 3.6 Beta,
you presumably already incorporated AFS into its authentication system.
<P><LI>AFS performance is most dependable if the AFS release version of the
kernel extensions and server processes is the same. Therefore, it is
best to incorporate the AFS 3.6 kernel extensions into the kernel at
this point.
<P>First issue the following command to shut down the server processes,
preventing them from restarting accidentally before you incorporate the AFS
3.6 extensions into the kernel.
<PRE> # <B>bos shutdown</B> &lt;<VAR>machine&nbsp;name</VAR>> <B>-localauth -wait</B>
</PRE>
<P>Then perform the instructions in <A HREF="#HDRKERNEL">Incorporating AFS into the Kernel and Enabling the AFS Initialization Script</A>, which have you reboot the machine. Assuming that the
machine's AFS initialization script is configured to invoke the
<B>bosserver</B> command as specified in <I>IBM AFS Quick
Beginnings</I>, the BOS Server starts itself and then the other AFS server
processes listed in its local <B>/usr/afs/local/BosConfig</B> file.
<P>There are two circumstances in which you must incorporate the kernel
extensions and reboot now rather than later:
<UL>
<P><LI>You are upgrading the File Server on an HP-UX machine
<P><LI>The machine also serves as a client, you upgraded the client files in the
previous step, and you want the new Cache Manager to become operative right
away
</UL>
<P>In any other circumstances, you can choose to upgrade the kernel extensions
later. Choose one of the following options:
<UL>
<P><LI>Restart all server processes by issuing the <B>bos restart</B> command
with the <B>-bosserver</B> flag.
<PRE> # <B>bos restart</B> &lt;<VAR>machine&nbsp;name</VAR>> <B>-localauth -bosserver</B>
</PRE>
<P><LI>Wait to start using the new binaries until the processes restart
automatically at the binary restart time specified in the
<B>/usr/afs/local/BosConfig</B> file.
</UL>
<P><LI><A NAME="LISV-UP-PRUNE"></A>Once you are satisfied that the machine is functioning
correctly at AFS 3.6, there is no need to retain previous versions of
the server binaries in the <B>/usr/afs/bin</B> directory. (You can
always use the <B>bos install</B> command to reinstall them if it becomes
necessary to downgrade). If you use the Update Server, the
<B>upclientbin</B> process renamed them with a <B>.old</B>
extension in Step <A HREF="#LISVUP-UPCLIENTBIN">1</A>. To reclaim the disk space occupied in the
<B>/usr/afs/bin</B> directory by <B>.bak</B> and
<B>.old</B> files, you can use the following command:
<PRE> # <B>bos prune</B> &lt;<VAR>machine&nbsp;name</VAR>> <B>-bak -old -localauth</B>
</PRE>
<P>Step <A HREF="#LISV-BIN-NEWDIRS">5</A> of <A HREF="#HDRSV-BIN">Distributing Binaries to Server Machines</A> had you move the previous version of the
binaries to the <B>/usr/afs/bin.old</B> directory. You can
also remove that directory on any machine where you created it.
<PRE> # <B>rm -rf /usr/afs/bin.old</B>
</PRE>
</OL>
<P><H3><A NAME="HDRCLI-UP" HREF="aurns002.htm#ToC_34">Upgrading Client Machines</A></H3>
<OL TYPE=1>
<P><LI>Become the local superuser <B>root</B>, if you are not already, by
issuing the <B>su</B> command.
<PRE> % <B>su root</B>
Password: <VAR>root_password</VAR>
</PRE>
<P><LI>Prepare to access client files using the method you have selected from
those listed in <A HREF="#HDRGETBIN">Obtaining the Binary Distribution</A>:
<UL>
<P><LI>If you copied the contents of the <B>root.client</B> directory
into AFS (in Step <A HREF="#LISTOREBIN-CLIENT">6</A> of <A HREF="#HDRSTOREBIN">Storing Binaries in AFS</A>), change directory as indicated.
<PRE> # <B>cd /afs/</B><VAR>cellname</VAR><B>/</B><VAR>sysname</VAR><B>/usr/afsws/root.client</B>
</PRE>
<P><LI>If copying files from the CD-ROM, mount the CD-ROM for this machine's
system type on the local <B>/cdrom</B> directory. For instructions
on mounting CD-ROMs (either locally or remotely via NFS), consult the
operating system documentation. Then change directory as
indicated.
<PRE> # <B>cd /cdrom/</B><VAR>sysname</VAR><B>/root.client</B>
</PRE>
<P><LI>If accessing the distribution electronically, you possibly already
downloaded it in <A HREF="#HDRSTOREBIN">Storing Binaries in AFS</A>. If so, it is still in the <VAR>temp_afs36_dir</VAR>
directory. If not, download it and run any commands necessary to
uncompress or unpack the distribution. Place it in a temporary location
(<VAR>temp_afs36_dir</VAR>), and change directory to the indicated
subdirectory.
<PRE> # <B>cd</B> <VAR>temp_afs36_dir</VAR><B>/root.client</B>
</PRE>
</UL>
<P><LI>Copy the AFS 3.6 version of the <B>afsd</B> binary and other
files to the <B>/usr/vice/etc</B> directory.
<TABLE><TR><TD ALIGN="LEFT" VALIGN="TOP"><B>Note:</B></TD><TD ALIGN="LEFT" VALIGN="TOP">Some files in the <B>/usr/vice/etc</B> directory, such as the AFS
initialization file (called <B>afs.rc</B> on many system types), do
not necessarily need to change for a new release. It is a good policy
to compare the contents of the distribution directory and the
<B>/usr/vice/etc</B> directory before performing the copying
operation. If there are files in the <B>/usr/vice/etc</B> directory
that you created for AFS 3.5 or 3.6 Beta and that you want to
retain, either move them to a safe location before performing the following
instructions, or alter the following instructions to copy over only the
appropriate files.
</TD></TR></TABLE>
<PRE> # <B>cp -p usr/vice/etc/* /usr/vice/etc</B>
# <B>cp -rp usr/vice/etc/C /usr/vice/etc</B>
</PRE>
<P>If you have not yet incorporated AFS into the machine's authentication
system, perform the instructions in the section titled <I>Enabling AFS
Login</I> for this system type in the <I>IBM AFS Quick Beginnings</I>
chapter about configuring client machines. If this machine was running
the same operating system revision with AFS 3.5 or AFS 3.6 Beta,
you presumably already incorporated AFS into its authentication system.
<P><LI>Perform the instructions in <A HREF="#HDRKERNEL">Incorporating AFS into the Kernel and Enabling the AFS Initialization Script</A> to incorporate AFS extensions into the kernel. The
instructions conclude with a reboot of the machine, which starts the new Cache
Manager.
</OL>
<P><H3><A NAME="HDRKERNEL" HREF="aurns002.htm#ToC_35">Incorporating AFS into the Kernel and Enabling the AFS Initialization Script</A></H3>
<P>As part of upgrading a machine to AFS 3.6, you must
incorporate AFS 3.6 extensions into its kernel and verify that the AFS
initialization script is included in the machine's startup
sequence. Proceed to the instructions for your system type:
<UL>
<P><LI><A HREF="#HDRKERN_AIX">Loading AFS into the AIX Kernel</A>
<P><LI><A HREF="#HDRKERN_DUX">Building AFS into the Digital UNIX Kernel</A>
<P><LI><A HREF="#HDRKERN_HP">Building AFS into the HP-UX Kernel</A>
<P><LI><A HREF="#HDRKERN_IRIX">Incorporating AFS into the IRIX Kernel</A>
<P><LI><A HREF="#HDRKERN_LNX">Loading AFS into the Linux Kernel</A>
<P><LI><A HREF="#HDRKERN_SOL">Loading AFS into the Solaris Kernel</A>
</UL>
<P><H3><A NAME="HDRKERN_AIX" HREF="aurns002.htm#ToC_36">Loading AFS into the AIX Kernel</A></H3>
<P>The AIX kernel extension facility is the dynamic kernel
loader provided by IBM Corporation. AIX does not support incorporation
of AFS modifications during a kernel build.
<P>For AFS to function correctly, the kernel extension facility must run each
time the machine reboots, so the AFS initialization script (included in the
AFS distribution) invokes it automatically. In this section you copy
the script to the conventional location and edit it to select the appropriate
options depending on whether NFS is also to run.
<P>After editing the script, you verify that there is an entry in the AIX
<B>inittab</B> file that invokes it, then reboot the machine to
incorporate the new AFS extensions into the kernel and restart the Cache
Manager.
<OL TYPE=1>
<P><LI>Access the AFS distribution by changing directory as indicated.
Substitute <B>rs_aix42</B> for the <VAR>sysname</VAR> variable.
<UL>
<P><LI>If you copied the contents of the <B>root.client</B> directory
into AFS (in Step <A HREF="#LISTOREBIN-CLIENT">6</A> of <A HREF="#HDRSTOREBIN">Storing Binaries in AFS</A>), change directory as indicated.
<PRE> # <B>cd /afs/</B><VAR>cellname</VAR><B>/</B><VAR>sysname</VAR><B>/usr/afsws/root.client</B>
</PRE>
<P><LI>If copying files from the CD-ROM, mount the CD-ROM for this machine's
system type on the local <B>/cdrom</B> directory. For instructions
on mounting CD-ROMs (either locally or remotely via NFS), consult the
operating system documentation. Then change directory as
indicated.
<PRE> # <B>cd /cdrom/</B><VAR>sysname</VAR><B>/root.client</B>
</PRE>
<P><LI>If accessing the distribution electronically, you possibly already
downloaded it in <A HREF="#HDRSTOREBIN">Storing Binaries in AFS</A>. If so, it is still in the <VAR>temp_afs36_dir</VAR>
directory. If not, download it and run any commands necessary to
uncompress or unpack the distribution. Place it in a temporary location
(<VAR>temp_afs36_dir</VAR>), and change directory to the indicated
subdirectory.
<PRE> # <B>cd</B> <VAR>temp_afs36_dir</VAR><B>/root.client</B>
</PRE>
</UL>
<P><LI>Copy the AFS kernel library files to the local
<B>/usr/vice/etc/dkload</B> directory.
<PRE> # <B>cd usr/vice/etc</B>
# <B>cp -rp dkload /usr/vice/etc</B>
</PRE>
<P><LI>If you ran AFS 3.5 on this machine, the appropriate AFS
initialization file possibly already exists as
<B>/etc/rc.afs</B>. Compare it to the version in the
<B>root.client/usr/vice/etc</B> directory of the AFS 3.6
distribution to see if any changes are needed.
<P>If the initialization file is not already in place, copy it now.
<PRE> # <B>cp -p rc.afs /etc/rc.afs</B>
</PRE>
<P><LI>Edit the <B>/etc/rc.afs</B> script, setting the <TT>NFS</TT>
variable if it is not already set.
<UL>
<P><LI>If the machine is not to function as an NFS/AFS Translator, set the NFS
variable as follows:
<PRE> NFS=$NFS_NONE
</PRE>
<P><LI>If the machine is to function as an NFS/AFS Translator and is running AIX
4.2.1 or higher, set the NFS variable as follows. Only
sites that have a license for the NFS/AFS Translator are allowed to run
translator machines. Machines running the base level of AIX 4.2
cannot be translator machines.
<P>NFS must already be loaded into the kernel. It is loaded
automatically on machines running AIX 4.1.1 and later, as long
as the file <B>/etc/exports</B> exists.
<PRE> NFS=$NFS_IAUTH
</PRE>
</UL>
<P><LI>Place the following line in the AIX initialization file,
<B>/etc/inittab</B>, if it is not already present. It invokes the AFS
initialization script and needs to appear just after the line that starts NFS
daemons.
<PRE> rcafs:2:wait:/etc/rc.afs > /dev/console 2>&amp;1 # Start AFS services
</PRE>
<P><LI><B>(Optional)</B> There are now copies of the AFS initialization file
in both the <B>/usr/vice/etc</B> and <B>/etc</B> directories.
If you want to avoid potential confusion by guaranteeing that they are always
the same, create a link between them. You can always retrieve the
original script from the AFS distribution if necessary.
<PRE> # <B>cd /usr/vice/etc</B>
# <B>rm rc.afs</B>
# <B>ln -s /etc/rc.afs</B>
</PRE>
<P><LI>Reboot the machine.
<PRE> # <B>shutdown -r now</B>
</PRE>
<P><LI>If you are upgrading a server machine, log in again as the local superuser
<B>root</B>, then return to Step <A HREF="#LISV-UP-PRUNE">6</A> in <A HREF="#HDRSV-UP">Upgrading Server Machines</A>.
<PRE> login: <B>root</B>
Password: <VAR>root_password</VAR>
</PRE>
</OL>
<P><H3><A NAME="HDRKERN_DUX" HREF="aurns002.htm#ToC_37">Building AFS into the Digital UNIX Kernel</A></H3>
<P>On Digital UNIX machines, you must build AFS
modifications into a new static kernel; Digital UNIX does not support
dynamic loading. If the machine's hardware and software
configuration exactly matches another Digital UNIX machine on which AFS
3.6 is already built into the kernel, you can choose to copy the kernel
from that machine to this one. In general, however, it is better to
build AFS modifications into the kernel on each machine according to the
following instructions.
<P>If the machine was running a version of Digital UNIX 4.0 with a
previous version of AFS, the configuration changes specified in Step <A HREF="#LIDUX-CONFAFS">1</A> through Step <A HREF="#LIDUX-VFSCONF">4</A> are presumably already in place.
<OL TYPE=1>
<P><LI><A NAME="LIDUX-CONFAFS"></A>Create a copy called <B>AFS</B> of the basic kernel
configuration file included in the Digital UNIX distribution as
<B>/usr/sys/conf/</B><VAR>machine_name</VAR>, where <VAR>machine_name</VAR> is
the machine's hostname in all uppercase letters.
<PRE> # <B>cd /usr/sys/conf</B>
# <B>cp</B> <VAR>machine_name</VAR> <B>AFS</B>
</PRE>
<P><LI>Add AFS to the list of options in the configuration file you created in
the previous step, so that the result looks like the following:
<PRE> . .
. .
options UFS
options NFS
options AFS
. .
. .
</PRE>
<P><LI>Add an entry for AFS to two places in the <B>/usr/sys/conf/files</B>
file.
<UL>
<P><LI>Add a line for AFS to the list of <TT>OPTIONS</TT>, so that the result
looks like the following:
<PRE> . . .
. . .
OPTIONS/nfs optional nfs define_dynamic
OPTIONS/afs optional afs define_dynamic
OPTIONS/cdfs optional cdfs define_dynamic
. . .
. . .
</PRE>
<P><LI>Add an entry for AFS to the list of <TT>MODULES</TT>, so that the result
looks like the following:
<PRE> . . . .
. . . .
#
MODULE/nfs_server optional nfs_server Binary
nfs/nfs_server.c module nfs_server optimize -g3
nfs/nfs3_server.c module nfs_server optimize -g3
#
MODULE/afs optional afs Binary
afs/libafs.c module afs
#
</PRE>
</UL>
<P><LI><A NAME="LIDUX-VFSCONF"></A>Add an entry for AFS to two places in the
<B>/usr/sys/vfs/vfs_conf.c</B> file.
<UL>
<P><LI>Add AFS to the list of defined file systems, so that the result looks like
the following:
<PRE> . .
. .
#include &lt;afs.h>
#if defined(AFS) &amp;&amp; AFS
extern struct vfsops afs_vfsops;
#endif
. .
. .
</PRE>
<P><LI>Put a declaration for AFS in the <B>vfssw[]</B> table's
MOUNT_ADDON slot, so that the result looks like the following:
<PRE> . . .
. . .
&amp;fdfs_vfsops, "fdfs", /* 12 = MOUNT_FDFS */
#if defined(AFS)
&amp;afs_vfsops, "afs",
#else
(struct vfsops *)0, "", /* 13 = MOUNT_ADDON */
#endif
#if NFS &amp;&amp; INFS_DYNAMIC
&amp;nfs3_vfsops, "nfsv3", /* 14 = MOUNT_NFS3 */
</PRE>
</UL>
<P><LI>Access the AFS distribution by changing directory as indicated.
Substitute <B>alpha_dux40</B> for the <VAR>sysname</VAR> variable.
<UL>
<P><LI>If you copied the contents of the <B>root.client</B> directory
into AFS (in Step <A HREF="#LISTOREBIN-CLIENT">6</A> of <A HREF="#HDRSTOREBIN">Storing Binaries in AFS</A>), change directory as indicated.
<PRE> # <B>cd /afs/</B><VAR>cellname</VAR><B>/</B><VAR>sysname</VAR><B>/usr/afsws/root.client</B>
</PRE>
<P><LI>If copying files from the CD-ROM, mount the CD-ROM for this machine's
system type on the local <B>/cdrom</B> directory. For instructions
on mounting CD-ROMs (either locally or remotely via NFS), consult the
operating system documentation. Then change directory as
indicated.
<PRE> # <B>cd /cdrom/</B><VAR>sysname</VAR><B>/root.client</B>
</PRE>
<P><LI>If accessing the distribution electronically, you may already have
downloaded it in <A HREF="#HDRSTOREBIN">Storing Binaries in AFS</A>. If so, it is still in the <VAR>temp_afs36_dir</VAR>
directory. If not, download it and run any commands necessary to
uncompress or unpack the distribution. Place it in a temporary location
(<VAR>temp_afs36_dir</VAR>), and change directory to the indicated
subdirectory.
<PRE> # <B>cd</B> <VAR>temp_afs36_dir</VAR><B>/root.client</B>
</PRE>
</UL>
<P><LI>Because you ran AFS 3.5 on this machine, the appropriate AFS
initialization file may already exist as
<B>/sbin/init.d/afs</B>. Compare it to the version in the
<B>root.client/usr/vice/etc</B> directory of the AFS 3.6
distribution to see if any changes are needed.
<P>If the initialization file is not already in place, copy it now.
Note the removal of the <B>.rc</B> extension as you copy.
<PRE> # <B>cp -p usr/vice/etc/afs.rc /sbin/init.d/afs</B>
</PRE>
<P><LI>Copy the AFS kernel module to the local <B>/usr/sys/BINARY</B>
directory.
<P>The AFS 3.6 distribution includes only the
<B>libafs.nonfs.o</B> version of the library, because
Digital UNIX machines are not supported as NFS/AFS Translator machines.
<PRE> # <B>cp -p bin/libafs.nonfs.o /usr/sys/BINARY/afs.mod</B>
</PRE>
<P><LI>Configure and build the kernel. Respond to any prompts by pressing
&lt;<B>Return</B>>. The resulting kernel is in the file
<B>/sys/AFS/vmunix</B>.
<PRE> # <B>doconfig -c AFS</B>
</PRE>
<P><LI>Rename the existing kernel file and copy the new, AFS-modified file to the
standard location.
<PRE> # <B>mv /vmunix /vmunix_orig</B>
# <B>cp -p /sys/AFS/vmunix /vmunix</B>
</PRE>
<P><LI>Verify the existence of the symbolic links specified in the following
commands, which incorporate the AFS initialization script into the Digital
UNIX startup and shutdown sequence. If necessary, issue the commands to
create the links.
<PRE> # <B>ln -s ../init.d/afs /sbin/rc3.d/S67afs</B>
# <B>ln -s ../init.d/afs /sbin/rc0.d/K66afs</B>
</PRE>
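<P>The check in the previous step can be made idempotent with a small helper that creates each link only when it is missing. The sketch below runs against a scratch directory tree rather than <B>/sbin</B>, so its paths are illustrative; the same pattern applies to the analogous link-verification steps for the other system types in this document.

```shell
# Sketch: verify that a symbolic link exists, creating it only if missing.
# Demonstrated in a scratch tree, not the real /sbin hierarchy.
ensure_link() {
  target="$1" link="$2"
  if [ -L "$link" ]; then
    echo "exists: $link -> $(readlink "$link")"
  else
    ln -s "$target" "$link" && echo "created: $link -> $target"
  fi
}

demo=/tmp/rc-demo
rm -rf "$demo" && mkdir -p "$demo/init.d" "$demo/rc3.d" "$demo/rc0.d"
touch "$demo/init.d/afs"
( cd "$demo/rc3.d" && ensure_link ../init.d/afs S67afs )
( cd "$demo/rc0.d" && ensure_link ../init.d/afs K66afs )
# Running the helper again simply reports the link as already present:
( cd "$demo/rc3.d" && ensure_link ../init.d/afs S67afs )
```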
<P><LI><B>(Optional)</B> If the machine is configured as a client, there are
now copies of the AFS initialization file in both the <B>/usr/vice/etc</B>
and <B>/sbin/init.d</B> directories. If you want to avoid
potential confusion by guaranteeing that they are always the same, create a
link between them. You can always retrieve the original script from the
AFS distribution if necessary.
<PRE> # <B>cd /usr/vice/etc</B>
# <B>rm afs.rc</B>
# <B>ln -s /sbin/init.d/afs afs.rc</B>
</PRE>
<P><LI>Reboot the machine.
<PRE> # <B>shutdown -r now</B>
</PRE>
<P><LI>If you are upgrading a server machine, log in again as the local superuser
<B>root</B>, then return to Step <A HREF="#LISV-UP-PRUNE">6</A> in <A HREF="#HDRSV-UP">Upgrading Server Machines</A>.
<PRE> login: <B>root</B>
Password: <VAR>root_password</VAR>
</PRE>
</OL>
<P><H3><A NAME="HDRKERN_HP" HREF="aurns002.htm#ToC_38">Building AFS into the HP-UX Kernel</A></H3>
<P>On HP-UX machines, you must build AFS modifications into a
new kernel; HP-UX does not support dynamic loading. If the
machine's hardware and software configuration exactly matches another
HP-UX machine on which AFS 3.6 is already built into the kernel, you
can choose to copy the kernel from that machine to this one. In
general, however, it is better to build AFS modifications into the kernel on
each machine according to the following instructions.
<OL TYPE=1>
<P><LI>Move the existing kernel-related files to a safe location.
<PRE> # <B>cp -p /stand/vmunix /stand/vmunix.noafs</B>
# <B>cp -p /stand/system /stand/system.noafs</B>
</PRE>
<P><LI>Access the AFS distribution by changing directory as indicated.
Substitute <B>hp_ux110</B> for the <VAR>sysname</VAR> variable.
<UL>
<P><LI>If you copied the contents of the <B>root.client</B> directory
into AFS (in Step <A HREF="#LISTOREBIN-CLIENT">6</A> of <A HREF="#HDRSTOREBIN">Storing Binaries in AFS</A>), change directory as indicated.
<PRE> # <B>cd /afs/</B><VAR>cellname</VAR><B>/</B><VAR>sysname</VAR><B>/usr/afsws/root.client</B>
</PRE>
<P><LI>If copying files from the CD-ROM, mount the CD-ROM for this machine's
system type on the local <B>/cdrom</B> directory. For instructions
on mounting CD-ROMs (either locally or remotely via NFS), consult the
operating system documentation. Then change directory as
indicated.
<PRE> # <B>cd /cdrom/</B><VAR>sysname</VAR><B>/root.client</B>
</PRE>
<P><LI>If accessing the distribution electronically, you may already have
downloaded it in <A HREF="#HDRSTOREBIN">Storing Binaries in AFS</A>. If so, it is still in the <VAR>temp_afs36_dir</VAR>
directory. If not, download it and run any commands necessary to
uncompress or unpack the distribution. Place it in a temporary location
(<VAR>temp_afs36_dir</VAR>), and change directory to the indicated
subdirectory.
<PRE> # <B>cd</B> <VAR>temp_afs36_dir</VAR><B>/root.client</B>
</PRE>
</UL>
<P><LI>Because you ran AFS 3.5 on this machine, the appropriate AFS
initialization file may already exist as
<B>/sbin/init.d/afs</B>. Compare it to the version in the
<B>root.client/usr/vice/etc</B> directory of the AFS 3.6
distribution to see if any changes are needed.
<P>If the initialization file is not already in place, copy it now.
Note the removal of the <B>.rc</B> extension as you copy.
<PRE> # <B>cp -p usr/vice/etc/afs.rc /sbin/init.d/afs</B>
</PRE>
<P><LI>Copy the file <B>afs.driver</B> to the local
<B>/usr/conf/master.d</B> directory, changing its name to
<B>afs</B> as you do so.
<PRE> # <B>cp -p usr/vice/etc/afs.driver /usr/conf/master.d/afs</B>
</PRE>
<P><LI>Copy the AFS kernel module to the local <B>/usr/conf/lib</B>
directory.
<P>HP-UX machines are not supported as NFS/AFS Translator machines, so AFS
3.6 includes only libraries called
<B>libafs.nonfs.a</B> (for the 32-bit version of
HP-UX) and <B>libafs64.nonfs.a</B> (for the 64-bit
version of HP-UX). Change the library's name to
<B>libafs.a</B> as you copy it.
<P>For the 32-bit version of HP-UX:
<PRE> # <B>cp -p bin/libafs.nonfs.a /usr/conf/lib/libafs.a</B>
</PRE>
<P>For the 64-bit version of HP-UX:
<PRE> # <B>cp -p bin/libafs64.nonfs.a /usr/conf/lib/libafs.a</B>
</PRE>
<P><LI>Verify the existence of the symbolic links specified in the following
commands, which incorporate the AFS initialization script into the HP-UX
startup and shutdown sequence. If necessary, issue the commands to
create the links.
<PRE> # <B>ln -s ../init.d/afs /sbin/rc2.d/S460afs</B>
# <B>ln -s ../init.d/afs /sbin/rc2.d/K800afs</B>
</PRE>
<P><LI><B>(Optional)</B> If the machine is configured as a client, there are
now copies of the AFS initialization file in both the <B>/usr/vice/etc</B>
and <B>/sbin/init.d</B> directories. If you want to avoid
potential confusion by guaranteeing that they are always the same, create a
link between them. You can always retrieve the original script from the
AFS distribution if necessary.
<PRE> # <B>cd /usr/vice/etc</B>
# <B>rm afs.rc</B>
# <B>ln -s /sbin/init.d/afs afs.rc</B>
</PRE>
<P><LI>Incorporate the AFS driver into the kernel, either using the
<B>SAM</B> program or a series of individual commands. Both methods
reboot the machine, which loads the new kernel and starts the Cache
Manager.
<UL>
<P><LI>To use the <B>SAM</B> program:
<OL TYPE=a>
<P><LI>Invoke the <B>SAM</B> program, specifying the hostname of the local
machine as <VAR>local_hostname</VAR>. The <B>SAM</B> graphical user
interface pops up.
<PRE> # <B>sam -display</B> <VAR>local_hostname</VAR><B>:0</B>
</PRE>
<P><LI>Choose the <B>Kernel Configuration</B> icon, then the
<B>Drivers</B> icon. From the list of drivers, select
<B>afs</B>.
<P><LI>Open the pull-down <B>Actions</B> menu and choose the <B>Add Driver
to Kernel</B> option.
<P><LI>Open the <B>Actions</B> menu again and choose the <B>Create a New
Kernel</B> option.
<P><LI>Confirm your choices by choosing <B>Yes</B> and <B>OK</B> when
prompted by subsequent pop-up windows. The <B>SAM</B> program
builds the kernel and reboots the system.
<P><LI>Log in again as the superuser <B>root</B>.
<PRE> login: <B>root</B>
Password: <VAR>root_password</VAR>
</PRE>
</OL>
<P><LI>To use individual commands:
<OL TYPE=a>
<P><LI>Edit the file <B>/stand/system</B>, adding an entry for <B>afs</B>
to the <TT>Subsystems</TT> section.
<P><LI>Change to the <B>/stand/build</B> directory and issue the
<B>mk_kernel</B> command to build the kernel.
<PRE> # <B>cd /stand/build</B>
# <B>mk_kernel</B>
</PRE>
<P><LI>Move the new kernel to the standard location (<B>/stand/vmunix</B>),
reboot the machine to start using it, and log in again as the superuser
<B>root</B>.
<PRE> # <B>mv /stand/build/vmunix_test /stand/vmunix</B>
# <B>cd /</B>
# <B>shutdown -r now</B>
login: <B>root</B>
Password: <VAR>root_password</VAR>
</PRE>
</OL>
</UL>
<P><LI>If you are upgrading a server machine, log in again as the local superuser
<B>root</B>, then return to Step <A HREF="#LISV-UP-PRUNE">6</A> in <A HREF="#HDRSV-UP">Upgrading Server Machines</A>.
<PRE> login: <B>root</B>
Password: <VAR>root_password</VAR>
</PRE>
</OL>
<P><H3><A NAME="HDRKERN_IRIX" HREF="aurns002.htm#ToC_39">Incorporating AFS into the IRIX Kernel</A></H3>
<P>To incorporate AFS into the kernel on IRIX machines,
choose one of two methods:
<UL>
<P><LI>Dynamic loading using the <B>ml</B> program distributed by Silicon
Graphics, Incorporated (SGI).
<P><LI>Building a new static kernel. Proceed to <A HREF="#HDRBUILD-IRIX">Building AFS into the IRIX Kernel</A>.
</UL>
<P><H4><A NAME="Header_40">Loading AFS into the IRIX Kernel</A></H4>
<P>The <B>ml</B> program is the dynamic kernel loader provided by SGI
for IRIX systems. If you use it rather than building AFS modifications
into a static kernel, then for AFS to function correctly the <B>ml</B>
program must run each time the machine reboots. Therefore, the AFS
initialization script (included on the AFS CD-ROM) invokes it automatically
when the <B>afsml</B> configuration variable is activated. In this
section you activate the variable and run the script.
<OL TYPE=1>
<P><LI>Issue the <B>uname -m</B> command to determine the machine's CPU
type. The <B>IP</B><VAR>xx</VAR> value in the output must match one
of the supported CPU types listed in <A HREF="#HDRSYSTYPES">Supported System Types</A>.
<PRE> # <B>uname -m</B>
</PRE>
<P><LI>Access the AFS distribution by changing directory as indicated.
Substitute <B>sgi_65</B> for the <VAR>sysname</VAR> variable.
<UL>
<P><LI>If you copied the contents of the <B>root.client</B> directory
into AFS (in Step <A HREF="#LISTOREBIN-CLIENT">6</A> of <A HREF="#HDRSTOREBIN">Storing Binaries in AFS</A>), change directory as indicated.
<PRE> # <B>cd /afs/</B><VAR>cellname</VAR><B>/</B><VAR>sysname</VAR><B>/usr/afsws/root.client</B>
</PRE>
<P><LI>If copying files from the CD-ROM, mount the CD-ROM for this machine's
system type on the local <B>/cdrom</B> directory. For instructions
on mounting CD-ROMs (either locally or remotely via NFS), consult the
operating system documentation. Then change directory as
indicated.
<PRE> # <B>cd /cdrom/</B><VAR>sysname</VAR><B>/root.client</B>
</PRE>
<P><LI>If accessing the distribution electronically, you may already have
downloaded it in <A HREF="#HDRSTOREBIN">Storing Binaries in AFS</A>. If so, it is still in the <VAR>temp_afs36_dir</VAR>
directory. If not, download it and run any commands necessary to
uncompress or unpack the distribution. Place it in a temporary location
(<VAR>temp_afs36_dir</VAR>), and change directory to the indicated
subdirectory.
<PRE> # <B>cd</B> <VAR>temp_afs36_dir</VAR><B>/root.client</B>
</PRE>
</UL>
<P><LI>Copy the appropriate AFS kernel library file to the local
<B>/usr/vice/etc/sgiload</B> directory; the <B>IP</B><VAR>xx</VAR>
portion of the library file name must match the value returned by the
<B>uname -m</B> command. Also choose the file appropriate to
whether the machine's kernel supports NFS server functionality (NFS must
be supported for the machine to act as an NFS/AFS Translator). Single-
and multiprocessor machines use the same library file.
<P>You can choose to copy all of the kernel library files into the
<B>/usr/vice/etc/sgiload</B> directory, but they require a significant
amount of space.
<PRE> # <B>cd usr/vice/etc/sgiload</B>
</PRE>
<P>If the machine is not to act as an NFS/AFS translator:
<PRE> # <B>cp -p libafs.IP<VAR>xx</VAR>.nonfs.o /usr/vice/etc/sgiload</B>
</PRE>
<P>If the machine is to act as an NFS/AFS translator, in which case its kernel
must support NFS server functionality:
<PRE> # <B>cp -p libafs.IP<VAR>xx</VAR>.o /usr/vice/etc/sgiload</B>
</PRE>
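<P>The choice between the two library files can be expressed as a small helper function. This is only a sketch of the naming convention described above, not a command from the AFS distribution; the <TT>IP27</TT> and <TT>IP32</TT> values in the demonstration are illustrative CPU types.

```shell
# Sketch: compute the IRIX kernel library filename from the CPU type
# reported by `uname -m` and whether the machine is to act as an
# NFS/AFS translator.
irix_libafs() {
  cpu="$1" translator="$2"      # e.g. "IP27" and "yes"/"no"
  if [ "$translator" = "yes" ]; then
    echo "libafs.$cpu.o"        # kernel must support NFS serving
  else
    echo "libafs.$cpu.nonfs.o"
  fi
}

irix_libafs IP27 no      # machine will not run the translator
irix_libafs IP27 yes     # machine will run the translator
```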
<P><LI>Proceed to <A HREF="#HDRIRIX-SCRIPT">Enabling the AFS Initialization Script on IRIX Systems</A>.
</OL>
<P><H4><A NAME="HDRBUILD-IRIX">Building AFS into the IRIX Kernel</A></H4>
<P>If you prefer to build a kernel, and the machine's
hardware and software configuration exactly matches another IRIX machine on
which AFS 3.6 is already built into the kernel, you can choose to copy
the kernel from that machine to this one. In general, however, it is
better to build AFS modifications into the kernel on each machine according to
the following instructions.
<OL TYPE=1>
<P><LI>Access the AFS distribution by changing directory as indicated.
Substitute <B>sgi_65</B> for the <VAR>sysname</VAR> variable.
<UL>
<P><LI>If you copied the contents of the <B>root.client</B> directory
into AFS (in Step <A HREF="#LISTOREBIN-CLIENT">6</A> of <A HREF="#HDRSTOREBIN">Storing Binaries in AFS</A>), change directory as indicated.
<PRE> # <B>cd /afs/</B><VAR>cellname</VAR><B>/</B><VAR>sysname</VAR><B>/usr/afsws/root.client</B>
</PRE>
<P><LI>If copying files from the CD-ROM, mount the CD-ROM for this machine's
system type on the local <B>/cdrom</B> directory. For instructions
on mounting CD-ROMs (either locally or remotely via NFS), consult the
operating system documentation. Then change directory as
indicated.
<PRE> # <B>cd /cdrom/</B><VAR>sysname</VAR><B>/root.client</B>
</PRE>
<P><LI>If accessing the distribution electronically, you may already have
downloaded it in <A HREF="#HDRSTOREBIN">Storing Binaries in AFS</A>. If so, it is still in the <VAR>temp_afs36_dir</VAR>
directory. If not, download it and run any commands necessary to
uncompress or unpack the distribution. Place it in a temporary location
(<VAR>temp_afs36_dir</VAR>), and change directory to the indicated
subdirectory.
<PRE> # <B>cd</B> <VAR>temp_afs36_dir</VAR><B>/root.client</B>
</PRE>
</UL>
<P><LI>Issue the <B>uname -m</B> command to determine the machine's CPU
type. The <B>IP</B><VAR>xx</VAR> value in the output must match one
of the supported CPU types listed in <A HREF="#HDRSYSTYPES">Supported System Types</A>.
<PRE> # <B>uname -m</B>
</PRE>
<P><LI>Copy the appropriate AFS kernel library file to the local file
<B>/var/sysgen/boot/afs.a</B>; the <B>IP</B><VAR>xx</VAR>
portion of the library file name must match the value returned by the
<B>uname -m</B> command. Also choose the file appropriate to
whether the machine's kernel supports NFS server functionality (NFS must
be supported for the machine to act as an NFS/AFS Translator). Single-
and multiprocessor machines use the same library file.
<PRE> # <B>cd bin</B>
</PRE>
<P>If the machine is not to act as an NFS/AFS translator:
<PRE> # <B>cp -p libafs.IP<VAR>xx</VAR>.nonfs.a /var/sysgen/boot/afs.a</B>
</PRE>
<P>If the machine is to act as an NFS/AFS translator, in which case its kernel
must support NFS server functionality:
<PRE> # <B>cp -p libafs.IP<VAR>xx</VAR>.a /var/sysgen/boot/afs.a</B>
</PRE>
<P><LI>Copy the kernel initialization file <B>afs.sm</B> to the local
<B>/var/sysgen/system</B> directory, and the kernel master file
<B>afs</B> to the local <B>/var/sysgen/master.d</B>
directory.
<PRE> # <B>cp -p afs.sm /var/sysgen/system</B>
# <B>cp -p afs /var/sysgen/master.d</B>
</PRE>
<P><LI>Copy the existing kernel file, <B>/unix</B>, to a safe location and
compile the new kernel. It is created as
<B>/unix.install</B>, and overwrites the existing <B>/unix</B>
file when the machine reboots.
<PRE> # <B>cp -p /unix /unix_orig</B>
# <B>autoconfig</B>
</PRE>
<P><LI>Proceed to <A HREF="#HDRIRIX-SCRIPT">Enabling the AFS Initialization Script on IRIX Systems</A>.
</OL>
<P><H4><A NAME="HDRIRIX-SCRIPT">Enabling the AFS Initialization Script on IRIX Systems</A></H4>
<OL TYPE=1>
<P><LI>Because you ran AFS 3.5 on this machine, the appropriate AFS
initialization file may already exist as
<B>/etc/init.d/afs</B>. Compare it to the version in the
<B>root.client/usr/vice/etc</B> directory of the AFS 3.6
distribution to see if any changes are needed.
<P>If the initialization file is not already in place, copy it now. If
the machine is configured as a client machine, you already copied the script
to the local <B>/usr/vice/etc</B> directory. Otherwise, change
directory as indicated, substituting <B>sgi_65</B> for the
<VAR>sysname</VAR> variable.
<UL>
<P><LI>If you copied the contents of the <B>root.client</B> directory
into AFS (in Step <A HREF="#LISTOREBIN-CLIENT">6</A> of <A HREF="#HDRSTOREBIN">Storing Binaries in AFS</A>), change directory as indicated.
<PRE> # <B>cd /afs/</B><VAR>cellname</VAR><B>/</B><VAR>sysname</VAR><B>/usr/afsws/root.client</B>
</PRE>
<P><LI>If copying files from the CD-ROM, mount the CD-ROM for this machine's
system type on the local <B>/cdrom</B> directory. For instructions
on mounting CD-ROMs (either locally or remotely via NFS), consult the
operating system documentation. Then change directory as
indicated.
<PRE> # <B>cd /cdrom/</B><VAR>sysname</VAR><B>/root.client</B>
</PRE>
<P><LI>If accessing the distribution electronically, you may already have
downloaded it in <A HREF="#HDRSTOREBIN">Storing Binaries in AFS</A>. If so, it is still in the <VAR>temp_afs36_dir</VAR>
directory. If not, download it and run any commands necessary to
uncompress or unpack the distribution. Place it in a temporary location
(<VAR>temp_afs36_dir</VAR>), and change directory to the indicated
subdirectory.
<PRE> # <B>cd</B> <VAR>temp_afs36_dir</VAR><B>/root.client</B>
</PRE>
</UL>
<P>Now copy the script. Note the removal of the <B>.rc</B>
extension as you copy.
<PRE> # <B>cp -p</B> <VAR>script_location</VAR><B>/afs.rc /etc/init.d/afs</B>
</PRE>
<P><LI>If the <B>afsml</B> configuration variable is not already set
appropriately, issue the <B>chkconfig</B> command.
<P>If you are using the <B>ml</B> program:
<PRE> # <B>/etc/chkconfig -f afsml on</B>
</PRE>
<P>If you built AFS into a static kernel:
<PRE> # <B>/etc/chkconfig -f afsml off</B>
</PRE>
<P>If the machine is to function as an NFS/AFS Translator, the kernel supports
NFS server functionality, and the <B>afsxnfs</B> variable is not already
set appropriately, set it now.
<PRE> # <B>/etc/chkconfig -f afsxnfs on</B>
</PRE>
<P><LI>Verify the existence of the symbolic links specified in the following
commands, which incorporate the AFS initialization script into the IRIX
startup and shutdown sequence. If necessary, issue the commands to
create the links.
<PRE> # <B>ln -s ../init.d/afs /etc/rc2.d/S35afs</B>
# <B>ln -s ../init.d/afs /etc/rc0.d/K35afs</B>
</PRE>
<P><LI><B>(Optional)</B> If the machine is configured as a client, there are
now copies of the AFS initialization file in both the <B>/usr/vice/etc</B>
and <B>/etc/init.d</B> directories. If you want to avoid
potential confusion by guaranteeing that they are always the same, create a
link between them. You can always retrieve the original script from the
AFS distribution if necessary.
<PRE> # <B>cd /usr/vice/etc</B>
# <B>rm afs.rc</B>
# <B>ln -s /etc/init.d/afs afs.rc</B>
</PRE>
<P><LI>Reboot the machine.
<PRE> # <B>shutdown -i6 -g0 -y</B>
</PRE>
<P><LI>If you are upgrading a server machine, log in again as the local superuser
<B>root</B>, then return to Step <A HREF="#LISV-UP-PRUNE">6</A> in <A HREF="#HDRSV-UP">Upgrading Server Machines</A>.
<PRE> login: <B>root</B>
Password: <VAR>root_password</VAR>
</PRE>
</OL>
<P><H3><A NAME="HDRKERN_LNX" HREF="aurns002.htm#ToC_43">Loading AFS into the Linux Kernel</A></H3>
<P>The <B>insmod</B> program is the dynamic kernel
loader for Linux. Linux does not support incorporation of AFS
modifications during a kernel build.
<P>For AFS to function correctly, the <B>insmod</B> program must run each
time the machine reboots, so the AFS initialization script (included on the
AFS CD-ROM) invokes it automatically. The script also includes commands
that select the appropriate AFS library file automatically. In this
section you run the script.
<OL TYPE=1>
<P><LI>Access the AFS distribution by changing directory as indicated.
Substitute <B>i386_linux22</B> for the <VAR>sysname</VAR> variable.
<UL>
<P><LI>If you copied the contents of the <B>root.client</B> directory
into AFS (in Step <A HREF="#LISTOREBIN-CLIENT">6</A> of <A HREF="#HDRSTOREBIN">Storing Binaries in AFS</A>), change directory as indicated.
<PRE> # <B>cd /afs/</B><VAR>cellname</VAR><B>/</B><VAR>sysname</VAR><B>/usr/afsws/root.client</B>
</PRE>
<P><LI>If copying files from the CD-ROM, mount the CD-ROM for this machine's
system type on the local <B>/cdrom</B> directory. For instructions
on mounting CD-ROMs (either locally or remotely via NFS), consult the
operating system documentation. Then change directory as
indicated.
<PRE> # <B>cd /cdrom/</B><VAR>sysname</VAR><B>/root.client</B>
</PRE>
<P><LI>If accessing the distribution electronically, you may already have
downloaded it in <A HREF="#HDRSTOREBIN">Storing Binaries in AFS</A>. If so, it is still in the <VAR>temp_afs36_dir</VAR>
directory. If not, download it and run any commands necessary to
uncompress or unpack the distribution. Place it in a temporary location
(<VAR>temp_afs36_dir</VAR>), and change directory to the indicated
subdirectory.
<PRE> # <B>cd</B> <VAR>temp_afs36_dir</VAR><B>/root.client</B>
</PRE>
</UL>
<P><LI>Copy the AFS kernel library files to the local
<B>/usr/vice/etc/modload</B> directory. The filenames for the
libraries have the format
<B>libafs-</B><VAR>version</VAR><B>.o</B>, where <VAR>version</VAR>
indicates the kernel build level. The string <B>.mp</B> in
the <VAR>version</VAR> indicates that the file is appropriate for use with
symmetric multiprocessor (SMP) kernels.
<PRE> # <B>cd usr/vice/etc</B>
# <B>cp -rp modload /usr/vice/etc</B>
</PRE>
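<P>The selection that the AFS initialization script performs can be sketched as follows. The kernel version strings and the exact placement of the <TT>.mp</TT> string in the filenames are assumptions for the demonstration; the real file names in the <B>modload</B> directory depend on the kernel build levels supported by the distribution.

```shell
# Sketch: pick the kernel module whose name embeds the running kernel
# version, preferring the ".mp" build on SMP machines. Illustrative only;
# the shipped afs.rc script performs this selection itself.
pick_module() {
  dir="$1" version="$2" smp="$3"
  if [ "$smp" = "yes" ] && [ -f "$dir/libafs-$version.mp.o" ]; then
    echo "libafs-$version.mp.o"
  elif [ -f "$dir/libafs-$version.o" ]; then
    echo "libafs-$version.o"
  else
    echo "no module for kernel $version" >&2
    return 1
  fi
}

demo=/tmp/modload-demo
rm -rf "$demo" && mkdir -p "$demo"
touch "$demo/libafs-2.2.14.o" "$demo/libafs-2.2.14.mp.o"
pick_module "$demo" 2.2.14 no    # uniprocessor kernel
pick_module "$demo" 2.2.14 yes   # SMP kernel
```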
<P><LI>The AFS 3.6 distribution includes a new AFS initialization file
that can select automatically from the kernel extensions included in AFS
3.6. Copy it to the <B>/etc/rc.d/init.d</B>
directory, removing the <B>.rc</B> extension as you do.
<PRE> # <B>cp -p afs.rc /etc/rc.d/init.d/afs</B>
</PRE>
<P>The <B>afsd</B> options file may already exist as
<B>/etc/sysconfig/afs</B> from running a previous version of AFS on this
machine. Compare it to the version in the
<B>root.client/usr/vice/etc</B> directory of the AFS 3.6
distribution to see if any changes are needed.
<P>If the options file is not already in place, copy it now. Note the
removal of the <B>.conf</B> extension as you copy.
<PRE> # <B>cp -p afs.conf /etc/sysconfig/afs</B>
</PRE>
<P>If necessary, edit the options file to invoke the desired arguments on the
<B>afsd</B> command in the initialization script. For further
information, see the section titled <I>Configuring the Cache Manager</I>
in the <I>IBM AFS Quick Beginnings</I> chapter about configuring client
machines.
<P><LI>Issue the <B>chkconfig</B> command to activate the <B>afs</B>
configuration variable, if it is not already. Based on the instruction
in the AFS initialization file that begins with the string
<TT>#chkconfig</TT>, the command automatically creates the symbolic links
that incorporate the script into the Linux startup and shutdown
sequence.
<PRE> # <B>/sbin/chkconfig --add afs</B>
</PRE>
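<P>For reference, the <B>chkconfig</B> program reads a header comment of the general form <TT># chkconfig: &lt;levels> &lt;start> &lt;stop></TT> near the top of the script to decide which run-level links to create. The sketch below extracts those fields from a demonstration file; its run levels and priority numbers are illustrative, not the values in the actual <B>afs</B> script.

```shell
# Sketch: parse the chkconfig header comment that drives link creation.
# The levels and priorities here are made up for the demonstration.
cat > /tmp/afs.demo <<'EOF'
#!/bin/sh
# chkconfig: 345 60 20
# description: start and stop the AFS client
EOF

line=$(grep '^# chkconfig:' /tmp/afs.demo)
set -- $line                      # $3=run levels, $4=start, $5=stop priority
echo "levels=$3 start-priority=$4 stop-priority=$5"
echo "would create links such as S${4}afs and K${5}afs"
```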
<P><LI><B>(Optional)</B> If the machine is configured as a client, there are
now copies of the AFS initialization file in both the <B>/usr/vice/etc</B>
and <B>/etc/init.d</B> directories, and copies of the
<B>afsd</B> options file in both the <B>/usr/vice/etc</B> and
<B>/etc/sysconfig</B> directories. If you want to avoid potential
confusion by guaranteeing that the two copies of each file are always the
same, create a link between them. You can always retrieve the original
script or options file from the AFS distribution if necessary.
<PRE> # <B>cd /usr/vice/etc</B>
# <B>rm afs.rc afs.conf</B>
# <B>ln -s /etc/rc.d/init.d/afs afs.rc</B>
# <B>ln -s /etc/sysconfig/afs afs.conf</B>
</PRE>
<P><LI>Reboot the machine.
<PRE> # <B>shutdown -r now</B>
</PRE>
<P><LI>If you are upgrading a server machine, log in again as the local superuser
<B>root</B>, then return to Step <A HREF="#LISV-UP-PRUNE">6</A> in <A HREF="#HDRSV-UP">Upgrading Server Machines</A>.
<PRE> login: <B>root</B>
Password: <VAR>root_password</VAR>
</PRE>
</OL>
<P><H3><A NAME="HDRKERN_SOL" HREF="aurns002.htm#ToC_44">Loading AFS into the Solaris Kernel</A></H3>
<P>The <B>modload</B> program is the dynamic kernel
loader provided by Sun Microsystems for Solaris systems. Solaris does
not support incorporation of AFS modifications during a kernel build.
<P>For AFS to function correctly, the <B>modload</B> program must run each
time the machine reboots, so the AFS initialization script (included on the
AFS CD-ROM) invokes it automatically. In this section you copy the
appropriate AFS library file to the location where the <B>modload</B>
program accesses it and then run the script.
<OL TYPE=1>
<P><LI>Access the AFS distribution by changing directory as indicated.
Substitute <B>sun4x_56</B> or <B>sun4x_57</B> for the <VAR>sysname</VAR>
variable.
<UL>
<P><LI>If you copied the contents of the <B>root.client</B> directory
into AFS (in Step <A HREF="#LISTOREBIN-CLIENT">6</A> of <A HREF="#HDRSTOREBIN">Storing Binaries in AFS</A>), change directory as indicated.
<PRE> # <B>cd /afs/</B><VAR>cellname</VAR><B>/</B><VAR>sysname</VAR><B>/usr/afsws/root.client</B>
</PRE>
<P><LI>If copying files from the CD-ROM, mount the CD-ROM for this machine's
system type on the local <B>/cdrom</B> directory. For instructions
on mounting CD-ROMs (either locally or remotely via NFS), consult the
operating system documentation. Then change directory as
indicated.
<PRE> # <B>cd /cdrom/</B><VAR>sysname</VAR><B>/root.client</B>
</PRE>
<P><LI>If accessing the distribution electronically, you may already have
downloaded it in <A HREF="#HDRSTOREBIN">Storing Binaries in AFS</A>. If so, it is still in the <VAR>temp_afs36_dir</VAR>
directory. If not, download it and run any commands necessary to
uncompress or unpack the distribution. Place it in a temporary location
(<VAR>temp_afs36_dir</VAR>), and change directory to the indicated
subdirectory.
<PRE> # <B>cd</B> <VAR>temp_afs36_dir</VAR><B>/root.client</B>
</PRE>
</UL>
<P><LI>If this machine is running Solaris 2.6 or the 32-bit version
of Solaris 7, and ran that operating system with AFS 3.5, the
appropriate AFS initialization file may already exist as
<B>/etc/init.d/afs</B>. Compare it to the version in the
<B>root.client/usr/vice/etc</B> directory of the AFS 3.6
distribution to see if any changes are needed.
<P>If this machine is running the 64-bit version of Solaris 7, the AFS
initialization file differs from the AFS 3.5 version. Copy it
from the AFS 3.6 distribution.
<P>Note the removal of the <B>.rc</B> extension as you copy.
<PRE> # <B>cd usr/vice/etc</B>
# <B>cp -p afs.rc /etc/init.d/afs</B>
</PRE>
<P><LI>Copy the appropriate AFS kernel library file to the appropriate file in a
subdirectory of the local <B>/kernel/fs</B> directory.
<P>If the machine is running Solaris 2.6 or the 32-bit version of
Solaris 7 and is not to act as an NFS/AFS translator:
<PRE> # <B>cp -p modload/libafs.nonfs.o /kernel/fs/afs</B>
</PRE>
<P>If the machine is running Solaris 2.6 or the 32-bit version of
Solaris 7 and is to act as an NFS/AFS translator, in which case its kernel
must support NFS server functionality and the <B>nfsd</B> process must be
running:
<PRE> # <B>cp -p modload/libafs.o /kernel/fs/afs</B>
</PRE>
<P>If the machine is running the 64-bit version of Solaris 7 and is not
to act as an NFS/AFS translator:
<PRE> # <B>cp -p modload/libafs64.nonfs.o /kernel/fs/sparcv9/afs</B>
</PRE>
<P>If the machine is running the 64-bit version of Solaris 7 and is to
act as an NFS/AFS translator, in which case its kernel must support NFS server
functionality and the <B>nfsd</B> process must be running:
<PRE> # <B>cp -p modload/libafs64.o /kernel/fs/sparcv9/afs</B>
</PRE>
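<P>The four cases above can be summarized in a small helper. The mapping mirrors the commands just shown; the suggestion of <B>isainfo -b</B> as a way to detect the kernel word size on Solaris 7 is an assumption, not part of the AFS instructions.

```shell
# Sketch: map the Solaris kernel word size and translator role to the
# source library and its destination, mirroring the four cases above.
solaris_libafs() {
  bits="$1" translator="$2"     # bits: 32 or 64; translator: yes/no
  case "$bits/$translator" in
    32/no)  echo "modload/libafs.nonfs.o -> /kernel/fs/afs" ;;
    32/yes) echo "modload/libafs.o -> /kernel/fs/afs" ;;
    64/no)  echo "modload/libafs64.nonfs.o -> /kernel/fs/sparcv9/afs" ;;
    64/yes) echo "modload/libafs64.o -> /kernel/fs/sparcv9/afs" ;;
    *)      echo "unsupported combination: $bits/$translator" >&2; return 1 ;;
  esac
}

solaris_libafs 32 no
solaris_libafs 64 yes
```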
<P><LI>Verify the existence of the symbolic links specified in the following
commands, which incorporate the AFS initialization script into the Solaris
startup and shutdown sequence. If necessary, issue the commands to
create the links.
<PRE> # <B>ln -s ../init.d/afs /etc/rc3.d/S99afs</B>
# <B>ln -s ../init.d/afs /etc/rc0.d/K66afs</B>
</PRE>
<P><LI><B>(Optional)</B> If the machine is configured as a client, there are
now copies of the AFS initialization file in both the <B>/usr/vice/etc</B>
and <B>/etc/init.d</B> directories. If you want to avoid
potential confusion by guaranteeing that they are always the same, create a
link between them. You can always retrieve the original script from the
AFS distribution if necessary.
<PRE> # <B>cd /usr/vice/etc</B>
# <B>rm afs.rc</B>
# <B>ln -s /etc/init.d/afs afs.rc</B>
</PRE>
<P><LI>Reboot the machine.
<PRE> # <B>shutdown -i6 -g0 -y</B>
</PRE>
<P><LI>If you are upgrading a server machine, log in again as the local superuser
<B>root</B>, then return to Step <A HREF="#LISV-UP-PRUNE">6</A> in <A HREF="#HDRSV-UP">Upgrading Server Machines</A>.
<PRE> login: <B>root</B>
Password: <VAR>root_password</VAR>
</PRE>
</OL>
<HR><H2><A NAME="HDRDOC_VOL" HREF="aurns002.htm#ToC_45">Storing AFS Documents in AFS</A></H2>
<P>This section explains how to create and mount a volume to
house AFS documents. The recommended mount point for the volume is
<B>/afs/</B><VAR>cellname</VAR><B>/afsdoc</B>. If you ran AFS
3.5, the volume may already exist. You can choose to
overwrite its contents with the AFS 3.6 version of documents, or can
create a new volume for the AFS 3.6 documents and mount it at
<B>/afs/</B><VAR>cellname</VAR><B>/afsdoc</B> instead of the volume of
AFS 3.5 documents. Alter the following instructions as
necessary.
<P>If you wish, you can create a link to the mount point on each client
machine's local disk, called <B>/usr/afsdoc</B>.
Alternatively, you can create a link to the mount point in each user's
home directory. You can also choose to permit users to access only
certain documents (most probably, the <I>IBM AFS User Guide</I>) by
creating different mount points or setting different ACLs on different
document directories.
<P>To create a new volume for storing AFS documents:
<OL TYPE=1>
<P><LI>Issue the <B>vos create</B> command to create a volume for storing the
AFS documentation. Include the <B>-maxquota</B> argument to set an
unlimited quota on the volume.
<P>If you wish, you can set the volume's quota to a finite value after
you complete the copying operations. At that point, use the <B>vos
examine</B> command to determine how much space the volume is
occupying. Then issue the <B>fs setquota</B> command to set a quota
value that is slightly larger.
<PRE> % <B>vos create</B> &lt;<VAR>machine&nbsp;name</VAR>> &lt;<VAR>partition&nbsp;name</VAR>> <B>afsdoc -maxquota 0</B>
</PRE>
<P><LI>Issue the <B>fs mkmount</B> command to mount the new volume. If
your <B>root.cell</B> volume is replicated, you must precede the
<I>cellname</I> with a period to specify the read/write mount point, as
shown. Then issue the <B>vos release</B> command to release a new
replica of the <B>root.cell</B> volume, and the <B>fs
checkvolumes</B> command to force the local Cache Manager to access
them.
<PRE> % <B>fs mkmount -dir /afs/.</B><VAR>cellname</VAR><B>/afsdoc</B> <B>-vol</B> <B>afsdoc</B>
% <B>vos release root.cell</B>
% <B>fs checkvolumes</B>
</PRE>
<P><LI>Issue the <B>fs setacl</B> command to grant the <B>rl</B>
permissions to the <B>system:anyuser</B> group on the new
directory's ACL.
<PRE> % <B>cd /afs/.</B><VAR>cellname</VAR><B>/afsdoc</B>
% <B>fs setacl . system:anyuser rl</B>
</PRE>
<P><LI>Access the documents via one of the sources listed in <A HREF="#HDRDOC">Accessing the AFS Binary Distribution and Documentation</A>. Copy the documents in one or more formats from a
<VAR>source_format</VAR> directory into subdirectories of the
<B>/afs/</B><VAR>cellname</VAR><B>/afsdoc</B> directory. Repeat
the commands for each format. Suggested substitutions for the
<VAR>format_name</VAR> variable are <B>HTML</B> and <B>PDF</B>.
<PRE> # <B>mkdir</B> <VAR>format_name</VAR>
# <B>cd</B> <VAR>format_name</VAR>
# <B>cp -rp /cdrom/Documentation/</B><VAR>language_code</VAR><B>/</B><VAR>source_format</VAR> <B>.</B>
</PRE>
<P>If you copy the HTML version of the documents, note that in addition to a
subdirectory for each document there are several files with a
<B>.gif</B> extension, which enable readers to move easily between
sections of a document. The file called <B>index.htm</B> is
an introductory HTML page that has a hyperlink to the documents. For
HTML viewing to work properly, these files must remain in the top-level HTML
directory (the one named, for example,
<B>/afs/</B><VAR>cellname</VAR><B>/afsdoc/HTML</B>).
<P><LI><B>(Optional)</B> If you believe it is helpful to your users to access
AFS documents via a local disk directory, create <B>/usr/afsdoc</B> on the
local disk as a symbolic link to the directory housing the desired format
(probably HTML or PDF).
<PRE> # <B>ln -s /afs/</B><VAR>cellname</VAR><B>/afsdoc/</B><VAR>format_name</VAR> <B>/usr/afsdoc</B>
</PRE>
<P>An alternative is to create a link in each user's home directory to
the documentation directory in AFS.
</OL>
<HR><H2><A NAME="HDRREFPAGES" HREF="aurns002.htm#ToC_46">Reference Pages</A></H2>
<P>Following are reference pages that include new
information not included in <I>IBM AFS Administration
Reference</I>.
<P>
<H3><A NAME="HDRCFG" HREF="aurns002.htm#ToC_47">CFG_<I>tcid</I></A></H3>
<P><STRONG>Purpose</STRONG>
<P>Defines Tape Coordinator configuration instructions for automated tape
devices, backup data files, or XBSA server programs
<P><STRONG>Description</STRONG>
<P>A <B>CFG_</B><VAR>tcid</VAR> file includes instructions that configure a
Tape Coordinator for more automated operation and for transferring AFS data to
and from a certain type of backup media:
<UL>
<P><LI>An automated tape device, such as a stacker or jukebox. The file is
optional for a Tape Coordinator that writes to such a device, and unnecessary
if the default values for all types of instructions are appropriate for the
device.
<P><LI>A <I>backup data file</I> on a local disk device. The
configuration file is mandatory and must include the <B>FILE</B>
instruction at least.
<P><LI>A third-party backup utility that implements the Open Group's Backup
Service API (XBSA), hereafter referred to as an <I>XBSA server</I>.
The file is mandatory and must include the <B>SERVER</B>, <B>TYPE</B>,
and <B>PASSFILE</B> or <B>PASSWORD</B> instructions. The
General Availability release of AFS 3.6 can communicate with one XBSA
server, the Tivoli Storage Manager (TSM).
</UL>
<P>The configuration file is in ASCII format and must reside in the
<B>/usr/afs/backup</B> directory on the Tape Coordinator machine.
Each Tape Coordinator has its own configuration file (multiple Tape
Coordinators cannot use the same file), and only a single Tape Coordinator in
a cell can write to a given tape device or backup data file. Multiple
Tape Coordinators can interact with the same XBSA server if the server has
sufficient capacity, and in this case the configuration file for each Tape
Coordinator refers to the same XBSA server.
<P>The Tape Coordinator for a tape device or backup data file must also have
an entry in the Backup Database and in the
<B>/usr/afs/backup/tapeconfig</B> file on the Tape Coordinator
machine. The Tape Coordinator for an XBSA server has only an entry in
the Backup Database, not in the <B>tapeconfig</B> file.
<P><B>Naming the Configuration File</B>
<P>For a Tape Coordinator that communicates with an XBSA server, the
<VAR>tcid</VAR> portion of the configuration file's name is the Tape
Coordinator's port offset number as defined in the Backup
Database. An example filename is <B>CFG_22</B>.
<P>For the Tape Coordinator for a tape device or backup data file, there are
two possible types of values for the <VAR>tcid</VAR> portion of the
filename. The Tape Coordinator first attempts to open a file with a
<VAR>tcid</VAR> portion that is the Tape Coordinator's port offset number
as defined in the Backup Database and <B>tapeconfig</B> file. If
there is no such file, the Tape Coordinator attempts to access a file with a
<VAR>tcid</VAR> portion that is based on the tape device's device name or the
backup data file's filename. To enable the Tape Coordinator to
locate the file, construct the <VAR>tcid</VAR> portion of the filename as
follows:
<UL>
<P><LI>For a tape device, strip off the initial <B>/dev/</B> string from the
device name, and replace any other slashes in the name with
underscores. For example, <B>CFG_rmt_4m</B> is the appropriate
filename for a device called <B>/dev/rmt/4m</B>.
<P><LI>For a backup data file, strip off the initial slash (/) and replace any
other slashes in the name with underscores. For example,
<B>CFG_var_tmp_FILE</B> is the appropriate filename for a backup data file
called <B>/var/tmp/FILE</B>.
</UL>
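<P>The two derivations above can be sketched in shell; the device name and backup data filename are the examples from the text:

```shell
#!/bin/sh
# Derive the tcid portion of a CFG_ filename.

# Tape device: strip the leading /dev/, then replace other slashes.
device=/dev/rmt/4m
echo "CFG_$(echo "$device" | sed -e 's|^/dev/||' -e 's|/|_|g')"
# prints CFG_rmt_4m

# Backup data file: strip only the leading slash, then replace slashes.
datafile=/var/tmp/FILE
echo "CFG_$(echo "$datafile" | sed -e 's|^/||' -e 's|/|_|g')"
# prints CFG_var_tmp_FILE
```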
<P><B>Summary of Instructions</B>
<P>The following list briefly describes the instructions that can appear in a
configuration file. Each instruction appears on its own line, in any
order. Unless otherwise noted, the instructions apply to all backup
media (automated tape device, backup data file, and XBSA server). A
more detailed description of each instruction follows the list.
<DL>
<P><DT><B>ASK
</B><DD>Controls whether the Tape Coordinator prompts for guidance when it
encounters error conditions.
<P><DT><B>AUTOQUERY
</B><DD>Controls whether the Tape Coordinator prompts for the first tape.
Does not apply to XBSA servers.
<P><DT><B>BUFFERSIZE
</B><DD>Sets the size of the memory buffer the Tape Coordinator uses when dumping
data to or restoring data from a backup medium.
<P><DT><B>CENTRALLOG
</B><DD>Names a log file in which to record a status message as each dump or
restore operation completes. The Tape Coordinator also writes to its
standard log and error files.
<P><DT><B>FILE
</B><DD>Determines whether the Tape Coordinator uses a backup data file as the
backup medium.
<P><DT><B>GROUPID
</B><DD>Sets an identification number recorded in the Backup Database for all
dumps performed by the Tape Coordinator.
<P><DT><B>LASTLOG
</B><DD>Controls whether the Tape Coordinator creates and writes to a separate log
file during its final pass through the set of volumes to be included in a
dump.
<P><DT><B>MAXPASS
</B><DD>Specifies how many times the Tape Coordinator attempts to access a volume
during a dump operation if the volume is inaccessible on the first attempt
(which is included in the count).
<P><DT><B>MGMTCLASS
</B><DD>Specifies which of an XBSA server's management classes to use, which
often indicates the type of backup medium the XBSA server uses. Applies
only to XBSA servers.
<P><DT><B>MOUNT
</B><DD>Identifies the file that contains routines for inserting tapes into a tape
device or controlling how the Tape Coordinator handles a backup data
file. Does not apply to XBSA servers.
<P><DT><B>NAME_CHECK
</B><DD>Controls whether the Tape Coordinator verifies that a tape or backup data
file has the expected name. Does not apply to XBSA servers.
<P><DT><B>NODE
</B><DD>Specifies which of the XBSA server's nodes to use. Applies
only to XBSA servers.
<P><DT><B>PASSFILE
</B><DD>Names the file that contains the password or security code for the Tape
Coordinator to pass to an XBSA server. Applies only to XBSA
servers.
<P><DT><B>PASSWORD
</B><DD>Specifies the password or security code for the Tape Coordinator to pass
to an XBSA server. Applies only to XBSA servers.
<P><DT><B>SERVER
</B><DD>Names the XBSA server machine with which the Tape Coordinator
communicates. Applies only to XBSA servers.
<P><DT><B>STATUS
</B><DD>Controls how often the Tape Coordinator writes a status message in its
window during an operation.
<P><DT><B>TYPE
</B><DD>Defines which XBSA-compliant program (third-party backup utility) is
running on the XBSA server. Applies only to XBSA servers.
<P><DT><B>UNMOUNT
</B><DD>Identifies the file that contains routines for removing tapes from a tape
device or controlling how the Tape Coordinator handles a backup data
file. Does not apply to XBSA servers.
</DL>
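<P>As a sketch, a configuration file for an automated tape device might combine several of these instructions as follows. Every line is optional for a tape device, and the <B>MOUNT</B> and <B>UNMOUNT</B> script pathnames are hypothetical:

```
ASK NO
AUTOQUERY NO
BUFFERSIZE 256K
NAME_CHECK NO
MOUNT /usr/afs/backup/stacker_mount
UNMOUNT /usr/afs/backup/stacker_unmount
```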
<P><B>The ASK Instruction</B>
<P>The <B>ASK</B> instruction takes a boolean value as its argument, in
the following format:
<PRE> ASK {<B>YES</B> | <B>NO</B>}
</PRE>
<P>When the value is <B>YES</B>, the Tape Coordinator generates a prompt
in its window, requesting a response to the error cases described in the
following list. This is the default behavior if the <B>ASK</B>
instruction does not appear in the <B>CFG_</B><VAR>tcid</VAR> file.
<P>When the value is <B>NO</B>, the Tape Coordinator does not prompt in
error cases, but instead uses the automatic default responses described in the
following list. The Tape Coordinator also logs the error in its
<B>/usr/afs/backup/TE_</B><VAR>tcid</VAR> file. Suppressing the
prompts enables the Tape Coordinator to run unattended, though it still
prompts for insertion of tapes unless the <B>MOUNT</B> instruction is
used.
<P>The error cases controlled by this instruction are the following:
<UL>
<P><LI>The Backup System is unable to dump a volume while running the <B>backup
dump</B> command. With a <B>YES</B> value, the Tape Coordinator
prompts to offer three choices: try to dump the volume again
immediately, omit the volume from the dump but continue the operation, or
terminate the operation. With a <B>NO</B> value, the Tape
Coordinator omits the volume from the dump and continues the operation.
<P><LI>The Backup System is unable to restore a volume while running the
<B>backup diskrestore</B>, <B>backup volrestore</B>, or <B>backup
volsetrestore</B> command. With a <B>YES</B> value, the Tape
Coordinator prompts to offer two choices: omit the volume and continue
restoring the other volumes, or terminate the operation. With a
<B>NO</B> value, it continues the operation without prompting, omitting
the problematic volume but restoring the remaining ones.
<P><LI>The Backup System cannot determine if the dump set includes any more
tapes, while running the <B>backup scantape</B> command (the reference
page for that command discusses possible reasons for this problem).
With a <B>YES</B> value, the Tape Coordinator prompts to ask if there are
more tapes to scan. With a <B>NO</B> value, it proceeds as though
there are more tapes and invokes the routine named by the <B>MOUNT</B>
instruction in the configuration file, or prompts the operator to insert the
next tape.
<P><LI>The Backup System determines that the tape contains an unexpired dump
while running the <B>backup labeltape</B> command. With a
<B>YES</B> value, the Tape Coordinator prompts to offer two choices:
continue or terminate the labeling operation. With a <B>NO</B>
value, it terminates the operation without relabeling the tape.
</UL>
<P><B>The AUTOQUERY Instruction</B>
<P>The <B>AUTOQUERY</B> instruction takes a boolean value as its argument,
in the following format:
<PRE> AUTOQUERY {<B>YES</B> | <B>NO</B>}
</PRE>
<P>When the value is <B>YES</B>, the Tape Coordinator checks for the
<B>MOUNT</B> instruction in the configuration file when it needs to read
the first tape involved in an operation. As described for that
instruction, it then either prompts for the tape or invokes the specified
routine to mount the tape. This is the default behavior if the
<B>AUTOQUERY</B> instruction does not appear in the configuration
file.
<P>When the value is <B>NO</B>, the Tape Coordinator assumes that the
first tape required for an operation is already in the drive. It does
not prompt the operator or invoke the <B>MOUNT</B> routine unless there is
an error in accessing the first tape. This setting is equivalent in
effect to including the <B>-noautoquery</B> flag to the <B>butc</B>
command.
<P>Note that the setting of the <B>AUTOQUERY</B> instruction controls the
Tape Coordinator's behavior only with respect to the first tape required
for an operation. For subsequent tapes, the Tape Coordinator always
checks for the <B>MOUNT</B> instruction. It also refers to the
<B>MOUNT</B> instruction if it encounters an error while attempting to
access the first tape. The instruction does not apply to XBSA
servers.
<P><B>The BUFFERSIZE Instruction</B>
<P>The <B>BUFFERSIZE</B> instruction takes an integer or decimal value,
and optionally units, in the following format:
<PRE> BUFFERSIZE <VAR>size</VAR>[{<B>k</B> | <B>K</B> | <B>m</B> | <B>M</B> | <B>g</B> | <B>G</B> | <B>t</B> | <B>T</B>}]
</PRE>
<P>where <VAR>size</VAR> specifies the amount of memory the Tape Coordinator
allocates to use as a buffer during both dump and restore operations.
If <VAR>size</VAR> is a decimal number, the number of digits after the decimal
point must not translate to fractions of bytes. The default unit is
bytes, but use <B>k</B> or <B>K</B> to specify kilobytes, <B>m</B>
or <B>M</B> for megabytes, <B>g</B> or <B>G</B> for gigabytes, and
<B>t</B> or <B>T</B> for terabytes. There is no space between
the <VAR>size</VAR> value and the units letter.
<P>As the Tape Coordinator receives volume data from the Volume Server during
a dump operation, it gathers the specified amount of data in the buffer before
transferring the entire amount to the backup medium. Similarly, during
a restore operation the Tape Coordinator by default buffers data from the
backup medium before transferring the entire amount to the Volume Server for
restoration into the file system.
<P>The default buffer size is 16 KB, which is usually large enough to promote
tape streaming in a normal network configuration. If the network
connection between the Tape Coordinator machine and file server machines is
slow, it can help to increase the buffer size.
<P>For XBSA servers, the range of acceptable values is <B>1K</B> through
<B>64K</B>. For tape devices and backup data files, the minimum
acceptable value is <B>16K</B>, and if the specified value is not a
multiple of 16 KB, the Tape Coordinator automatically rounds it up to the next
such multiple.
<P><B>The CENTRALLOG Instruction</B>
<P>The <B>CENTRALLOG</B> instruction takes a pathname as its argument, in
the following format:
<PRE> CENTRALLOG <VAR>filename</VAR>
</PRE>
<P>where <VAR>filename</VAR> is the full pathname of a local disk file in which
to record a status message as each dump or restore operation completes.
It is acceptable to have multiple Tape Coordinators write to the same log
file. Each Tape Coordinator also writes to its own standard error and
log files (the <B>TE_</B><VAR>tcid</VAR> and <B>TL_</B><VAR>tcid</VAR>
files in the <B>/usr/afs/backup</B> directory). This instruction is
always optional.
<P>The line for each dump operation has the following format:
<PRE> <VAR>task_ID</VAR> <VAR>start_time</VAR> <VAR>complete_time</VAR> <VAR>duration</VAR> <VAR>volume_set</VAR> \
<VAR>success</VAR> of <VAR>total</VAR> volumes dumped (<VAR>data_dumped</VAR> KB)
</PRE>
<P>The line for each restore operation has the following format:
<PRE> <VAR>task_ID</VAR> <VAR>start_time</VAR> <VAR>complete_time</VAR> <VAR>duration</VAR> <VAR>success</VAR> of <VAR>total</VAR> volumes restored
</PRE>
<P>where
<DL>
<P><DT><B><VAR>task_ID</VAR>
</B><DD>Is the task identification number assigned to the operation by the Tape
Coordinator. The first digits in the number are the Tape
Coordinator's port offset number.
<P><DT><B><VAR>start_time</VAR>
</B><DD>Is the time at which the operation started, in the format
<VAR>month</VAR>/<VAR>day</VAR>/<VAR>year</VAR>
<VAR>hours</VAR>:<VAR>minutes</VAR>:<VAR>seconds</VAR>.
<P><DT><B><VAR>complete_time</VAR>
</B><DD>Is the time at which the operation completed, in the same format as the
<VAR>start_time</VAR> field.
<P><DT><B><VAR>duration</VAR>
</B><DD>Is the amount of time it took to complete the operation, in the format
<VAR>hours</VAR>:<VAR>minutes</VAR>:<VAR>seconds</VAR>.
<P><DT><B><VAR>volume_set</VAR>
</B><DD>Is the name of the volume set being dumped during this operation (for dump
operations only).
<P><DT><B><VAR>success</VAR>
</B><DD>Is the number of volumes successfully dumped or restored.
<P><DT><B><VAR>total</VAR>
</B><DD>Is the total number of volumes the Tape Coordinator attempted to dump or
restore.
<P><DT><B><VAR>data_dumped</VAR>
</B><DD>Is the number of kilobytes of data transferred to the backup medium (for
dump operations only).
</DL>
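<P>Because each completed operation occupies a single record in the central log, standard text tools can summarize it. The following sketch totals the kilobytes dumped; the sample record and its values are hypothetical:

```shell
#!/bin/sh
# Total the kilobytes dumped across all dump records in a central log.
log=$(mktemp)
cat > "$log" <<'EOF'
1220 04/27/2000 21:30:00 04/27/2000 21:40:12 0:10:12 weekly 42 of 42 volumes dumped (51200 KB)
EOF
# The KB figure is the next-to-last field, wrapped in an opening paren.
awk '/volumes dumped/ { sub(/^\(/, "", $(NF-1)); kb += $(NF-1) }
     END { print kb " KB" }' "$log"
# prints 51200 KB
rm -f "$log"
```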
<P><B>The FILE Instruction</B>
<P>The <B>FILE</B> instruction takes a boolean value as its argument, in
the following format:
<PRE> FILE {<B>NO</B> | <B>YES</B>}
</PRE>
<P>When the value is <B>NO</B> and the <B>SERVER</B> instruction does
not appear in the configuration file, the Tape Coordinator uses a tape device
as the backup medium. If the <B>SERVER</B> instruction does appear,
the Tape Coordinator communicates with the XBSA server that it names.
This is the default behavior if the <B>FILE</B> instruction does not
appear in the file.
<P>When the value is <B>YES</B>, the Tape Coordinator uses a backup data
file on the local disk as the backup medium. If the file does not exist
when the Tape Coordinator attempts to write a dump, the Tape Coordinator
creates it. For a restore operation to succeed, the file must exist and
contain volume data previously written to it by a <B>backup dump</B>
operation.
<P>When the value is <B>YES</B>, the backup data file's complete
pathname must appear (instead of a tape drive device name) in the third field
of the corresponding port offset entry in the local
<B>/usr/afs/backup/tapeconfig</B> file. If the field instead refers
to a tape device, dump operations appear to succeed but are
inoperative. It is not possible to restore data that is accidentally
dumped to a tape device while the <B>FILE</B> instruction is set to
<B>YES</B>. (In the same way, if the <B>FILE</B> instruction is
set to <B>NO</B> and there is no <B>SERVER</B> instruction, the
<B>tapeconfig</B> entry must refer to an actual tape device.)
<P>Rather than put an actual file pathname in the third field of the
<B>tapeconfig</B> file, however, the recommended configuration is to
create a symbolic link in the <B>/dev</B> directory that points to the
actual file pathname, and record the symbolic link's name in this
field. This configuration has a couple of advantages:
<UL>
<P><LI>It makes the <VAR>tcid</VAR> portion of the <B>CFG_</B><VAR>tcid</VAR>,
<B>TE_</B><VAR>tcid</VAR>, and <B>TL_</B><VAR>tcid</VAR> names as short as
possible. Because the symbolic link is in the <B>/dev</B> directory
as though it were a tape device, the device configuration file's name is
constructed by stripping off the entire <B>/dev/</B> prefix, instead of
just the initial slash. If, for example, the symbolic link is called
<B>/dev/FILE</B>, the device configuration file name is
<B>CFG_FILE</B>, whereas if the actual pathname <B>/var/tmp/FILE</B>
appears in the <B>tapeconfig</B> file, the file's name must be
<B>CFG_var_tmp_FILE</B>.
<P><LI>It provides for a more graceful, and potentially automated, recovery if
the Tape Coordinator cannot write a complete dump into the backup data file
(because the partition housing the backup data file becomes full, for
example). The Tape Coordinator's reaction to this problem is to
invoke the <B>MOUNT</B> script, or to prompt the operator if the
<B>MOUNT</B> instruction does not appear in the configuration file.
<UL>
<P><LI>If there is a <B>MOUNT</B> routine, the operator can prepare for this
situation by adding a subroutine that changes the symbolic link to point to
another backup data file on a partition where there is space available.
<P><LI>If there is no <B>MOUNT</B> instruction, the prompt enables the
operator to change the symbolic link manually to point to another backup data
file, then press &lt;<B>Return</B>> to signal that the Tape Coordinator
can continue the operation.
</UL>
</UL>
<P>If the third field in the <B>tapeconfig</B> file names the actual file,
there is no way to recover from exhausting the space on the partition that
houses the backup data file. It is not possible to change the
<B>tapeconfig</B> file in the middle of an operation.
<P>When writing to a backup data file, the Tape Coordinator writes data at 16
KB offsets. If a given block of data (such as the marker that signals
the beginning or end of a volume) does not fill the entire 16 KB, the Tape
Coordinator still skips to the next offset before writing the next
block. In the output of a <B>backup dumpinfo</B> command issued
with the <B>-id</B> option, the value in the <TT>Pos</TT> column is the
ordinal of the 16-KB offset at which the volume data begins; unlike for dumps
to tape, it is therefore not generally just one higher than the position
number on the previous line.
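<P>The recommended symbolic link arrangement can be sketched as follows, demonstrated in a scratch directory rather than the real <B>/dev</B> directory (all pathnames are hypothetical):

```shell
#!/bin/sh
# Point a /dev-style symlink at the actual backup data file, so that
# tapeconfig names the link and the config file is simply CFG_FILE.
scratch=$(mktemp -d)
touch "$scratch/data1"                     # stands in for /var/tmp/FILE
ln -s "$scratch/data1" "$scratch/FILE"     # stands in for /dev/FILE
# If the partition fills, a MOUNT routine or the operator can repoint
# the link without editing tapeconfig in mid-operation:
touch "$scratch/data2"
ln -sf "$scratch/data2" "$scratch/FILE"
readlink "$scratch/FILE"                   # now reports the data2 path
```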
<P><B>The GROUPID Instruction</B>
<P>The <B>GROUPID</B> instruction takes an integer as its argument, in the
following format:
<PRE> GROUPID <VAR>integer</VAR>
</PRE>
<P>where <VAR>integer</VAR> is in the range from <B>1</B> through
<B>2147483647</B> (the largest signed 32-bit integer). The value is recorded in
the Backup Database record for each dump created by this Tape
Coordinator. It appears in the <TT>Group id</TT> field in the output
from the <B>backup dumpinfo</B> command when the command's
<B>-verbose</B> and <B>-id</B> options are provided. It can be
specified as the value of the <B>-groupid</B> argument to the <B>backup
deletedump</B> command to delete only records marked with the group
ID. This instruction is always optional.
<P><B>The LASTLOG Instruction</B>
<P>The <B>LASTLOG</B> instruction takes a boolean value as its argument,
in the following format:
<PRE> LASTLOG {<B>YES</B> | <B>NO</B>}
</PRE>
<P>When the value is <B>YES</B>, the Tape Coordinator creates and writes
to a separate log file during the final pass through the volumes to be
included in a dump operation. The log file name is
<B>/usr/afs/backup/TL_</B><VAR>tcid</VAR><B>.lp</B>, where
<VAR>tcid</VAR> is either the Tape Coordinator's port offset number or a
value derived from the device name or backup data filename.
<P>When the value is <B>NO</B>, the Tape Coordinator writes to its
standard log files (the <B>TE_</B><VAR>tcid</VAR> and
<B>TL_</B><VAR>tcid</VAR> files in the <B>/usr/afs/backup</B> directory)
for all passes. This is the behavior if the instruction does not appear
in the file.
<P><B>The MAXPASS Instruction</B>
<P>The <B>MAXPASS</B> instruction takes an integer as its argument, in the
following format:
<PRE> MAXPASS <VAR>integer</VAR>
</PRE>
<P>where <VAR>integer</VAR> specifies how many times the Tape Coordinator
attempts to access a volume during a dump operation if the volume is
inaccessible on the first attempt (which is included in the count).
Acceptable values are in the range from <B>1</B> through
<B>10</B>. The default value is <B>2</B> if this instruction
does not appear in the file.
<P><B>The MGMTCLASS Instruction</B>
<P>The <B>MGMTCLASS</B> instruction takes a character string as its
argument, in the following format:
<PRE> MGMTCLASS <VAR>class_name</VAR>
</PRE>
<P>where <VAR>class_name</VAR> is the XBSA server's management class, which
often indicates the type of backup medium it is using. For a list of
the possible management classes, see the XBSA server documentation.
This instruction applies only to XBSA servers and is always optional;
there is no default value if it is omitted.
<P><B>The MOUNT Instruction</B>
<P>The <B>MOUNT</B> instruction takes a pathname as its argument, in the
following format:
<PRE> MOUNT <VAR>filename</VAR>
</PRE>
<P>where <VAR>filename</VAR> is the full pathname of an executable file on the
local disk that contains a shell script or program (for clarity, the following
discussion refers to scripts only). If the configuration file is for an
automated tape device, the script invokes the routine or command provided by
the device's manufacturer for mounting a tape (inserting it into the tape
reader). If the configuration file is for a backup data file, it can
instruct the Tape Coordinator to switch automatically to another backup data
file when the current one becomes full; for further discussion, see the
preceding description of the <B>FILE</B> instruction. This
instruction does not apply to XBSA servers.
<P>The administrator must write the script, including the appropriate routines
and logic. The AFS distribution does not include any scripts, although
an example appears in the following <B>Examples</B> section. The
command or routines invoked by the script inherit the local identity (UNIX
UID) and AFS tokens of the <B>butc</B> command's issuer.
<P>When the Tape Coordinator needs to mount a tape or access another backup
data file, it checks the configuration file for a <B>MOUNT</B>
instruction. If there is no instruction, the Tape Coordinator prompts
the operator to insert a tape before it attempts to open the tape
device. If there is a <B>MOUNT</B> instruction, the Tape
Coordinator executes the routine in the referenced script.
<P>There is an exception to this sequence: if the <B>AUTOQUERY
NO</B> instruction appears in the configuration file, or the
<B>-noautoquery</B> flag was included on the <B>butc</B> command, then
the Tape Coordinator assumes that the operator has already inserted the first
tape needed for a given operation. It attempts to read the tape
immediately, and only checks for the <B>MOUNT</B> instruction or prompts
the operator if the tape is missing or is not the required one.
<P>The Tape Coordinator passes the following parameters to the script
indicated by the <B>MOUNT</B> instruction, in the indicated order:
<OL TYPE=1>
<P><LI>The tape device or backup data file's pathname, as recorded in the
<B>/usr/afs/backup/tapeconfig</B> file.
<P><LI>The tape operation, which generally matches the <B>backup</B> command
operation code used to initiate the operation (the following list notes the
exceptional cases):
<UL>
<P><LI><B>appenddump</B> (when a <B>backup dump</B> command includes the
<B>-append</B> flag)
<P><LI><B>dump</B> (when a <B>backup dump</B> command does not include
the <B>-append</B> flag)
<P><LI><B>labeltape</B>
<P><LI><B>readlabel</B>
<P><LI><B>restore</B> (for a <B>backup diskrestore</B>, <B>backup
volrestore</B>, or <B>backup volsetrestore</B> command)
<P><LI><B>restoredb</B>
<P><LI><B>savedb</B>
<P><LI><B>scantape</B>
</UL>
<P><LI>The number of times the Tape Coordinator has attempted to open the tape
device or backup data file. If the open attempt returns an error, the
Tape Coordinator increments this value by one and again invokes the
<B>MOUNT</B> instruction.
<P><LI>The tape name. For some operations, the Tape Coordinator passes the
string <TT>none</TT>, because it does not know the tape name (when running
the <B>backup scantape</B> or <B>backup readlabel</B>, for example),
or because the tape does not necessarily have a name (when running the
<B>backup labeltape</B> command, for example).
<P><LI>The tape ID recorded in the Backup Database. As with the tape name,
the Backup System passes the string <TT>none</TT> for operations where it
does not know the tape ID or the tape does not necessarily have an ID.
</OL>
<P>The routine invoked by the <B>MOUNT</B> instruction must return an exit
code to the Tape Coordinator:
<UL>
<P><LI>Code <B>0</B> (zero) indicates that the routine successfully mounted
the tape or opened the backup data file. The Tape Coordinator continues
the backup operation. If the routine invoked by the <B>MOUNT</B>
instruction does not return this exit code, the Tape Coordinator never calls
the <B>UNMOUNT</B> instruction.
<P><LI>Code <B>1</B> (one) indicates that the routine failed to mount the
tape or open the backup data file. The Tape Coordinator terminates the
operation.
<P><LI>Any other code indicates that the routine was not able to access the
correct tape or backup data file. The Tape Coordinator prompts the
operator to insert the correct tape.
</UL>
<P>If the <B>backup</B> command was issued in interactive mode and the
operator issues the <B>(backup) kill</B> command while the
<B>MOUNT</B> routine is running, the Tape Coordinator passes the
termination signal to the routine; the entire operation
terminates.
<P><B>The NAME_CHECK Instruction</B>
<P>The <B>NAME_CHECK</B> instruction takes a boolean value as its
argument, in the following format:
<PRE> NAME_CHECK {<B>YES</B> | <B>NO</B>}
</PRE>
<P>When the value is <B>YES</B> and there is no permanent name on the
label of the tape or backup data file, the Tape Coordinator checks the AFS
tape name on the label when dumping a volume in response to the <B>backup
dump</B> command. The AFS tape name must be <TT>&lt;NULL></TT> or
match the name that the <B>backup dump</B> operation constructs based on
the volume set and dump level names. This is the default behavior if
the <B>NAME_CHECK</B> instruction does not appear in the configuration
file.
<P>When the value is <B>NO</B>, the Tape Coordinator does not check the
AFS tape name before writing to the tape.
<P>The Tape Coordinator always checks that all dumps on the tape are expired,
and refuses to write to a tape that contains unexpired dumps. This
instruction does not apply to XBSA servers.
<P><B>The NODE Instruction</B>
<P>The <B>NODE</B> instruction takes a character string as its argument,
in the following format:
<PRE> NODE <VAR>node_name</VAR>
</PRE>
<P>where <VAR>node_name</VAR> names the node associated with the XBSA server
named by the <B>SERVER</B> instruction. To determine if the XBSA
server uses nodes, see its documentation.  This instruction applies only
to XBSA servers, and there is no default if it is omitted.  If this
instruction is omitted when the XBSA server is TSM, a NODENAME instruction
must instead appear in TSM's <B>dsm.sys</B> configuration file.
<P><B>The PASSFILE Instruction</B>
<P>The <B>PASSFILE</B> instruction takes a pathname as its argument, in
the following format:
<PRE> PASSFILE <VAR>filename</VAR>
</PRE>
<P>where <VAR>filename</VAR> is the full pathname of a file on the local disk
that records the password for the Tape Coordinator to use when communicating
with the XBSA server. The password string must appear on the first line
in the file, and have a newline character only at the end. The mode
bits on the file must enable the Tape Coordinator to read it.
<P>This instruction applies only to XBSA servers, and either it or the
<B>PASSWORD</B> instruction must be provided along with the
<B>SERVER</B> instruction. (If both this instruction and the
<B>PASSWORD</B> instruction are included, the Tape Coordinator uses only
the one that appears first in the file.)
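<P>One way to create a suitable password file, sketched in Bourne shell
(the pathname and password are illustrative only; a real file typically
resides under <B>/usr/afs/backup</B>):

```shell
# The password must be the first line, with a newline character only at
# the end; restrict the mode bits so that the Tape Coordinator's issuer
# can read the file.  (Path and password here are hypothetical.)
PASSFILE=/tmp/butc_pass
printf 'TESTPASS\n' > "$PASSFILE"
chmod 600 "$PASSFILE"
```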
<P><B>The PASSWORD Instruction</B>
<P>The <B>PASSWORD</B> instruction takes a character string as its
argument, in the following format:
<PRE> PASSWORD <VAR>string</VAR>
</PRE>
<P>where <VAR>string</VAR> is the password for the Tape Coordinator to use when
communicating with the XBSA server.
<P>This instruction applies only to XBSA servers, and either it or the
<B>PASSFILE</B> instruction must be provided along with the
<B>SERVER</B> instruction. (If both this instruction and the
<B>PASSFILE</B> instruction are included, the Tape Coordinator uses only
the one that appears first in the file.)
<P><B>The SERVER Instruction</B>
<P>The <B>SERVER</B> instruction takes a character string as its argument,
in the following format:
<PRE> SERVER <VAR>machine_name</VAR>
</PRE>
<P>where <VAR>machine_name</VAR> is the fully qualified hostname of the machine
where an XBSA server is running. This instruction is required for XBSA
servers, and applies only to them.
<P><B>The STATUS Instruction</B>
<P>The <B>STATUS</B> instruction takes an integer as its argument, in the
following format:
<PRE> STATUS <VAR>integer</VAR>
</PRE>
<P>where <VAR>integer</VAR> expresses how often the Tape Coordinator writes a
status message to its window during an operation, in terms of the number of
buffers of data that have been dumped or restored. Acceptable values
range from <B>1</B> through <B>8192</B>. The size of the
buffers is determined by the <B>BUFFERSIZE</B> instruction if it is
included.
<P>As an example, the value <B>512</B> means that the Tape Coordinator
writes a status message after each 512 buffers of data. It also writes
a status message as it completes the dump of each volume.
<P>The message has the following format:
<PRE> <VAR>time_stamp</VAR>: Task <VAR>task_ID</VAR>: <VAR>total</VAR> KB: <VAR>volume</VAR>: <VAR>volume_total</VAR> B
</PRE>
<P>where
<DL>
<P><DT><B><VAR>time_stamp</VAR>
</B><DD>Records the time at which the message is printed, in the format
<VAR>hours</VAR>:<VAR>minutes</VAR>:<VAR>seconds</VAR>.
<P><DT><B><VAR>task_ID</VAR>
</B><DD>Is the task identification number assigned to the operation by the Tape
Coordinator. The first digits in the number are the Tape
Coordinator's port offset number.
<P><DT><B><VAR>total</VAR>
</B><DD>Is the total number of kilobytes transferred to the backup medium during
the current dump operation.
<P><DT><B><VAR>volume</VAR>
</B><DD>Names the volume being dumped as the message is written.
<P><DT><B><VAR>volume_total</VAR>
</B><DD>Is the total number of bytes dumped so far from the volume named in the
<VAR>volume</VAR> field.
</DL>
<P>This instruction is intended for use with XBSA servers. For tape
devices and backup data files, the value in the <VAR>volume_total</VAR> field is
not necessarily as expected. It does not include certain kinds of
Backup System metadata (markers at the beginning and end of each volume, for
example), so summing together the final <VAR>volume_total</VAR> value for each
volume does not necessarily equal the running total in the <VAR>total</VAR>
field.  Also, if the Tape Coordinator is dumping metadata rather than
actual volume data when it reaches the end of a set of <VAR>integer</VAR>
buffers, it does not write a message for that set.
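<P>The interval between status messages is simply the buffer size multiplied
by the <B>STATUS</B> value.  The following shell sketch uses the same values
as the XBSA configuration file in the <B>Examples</B> section
(<B>BUFFERSIZE 16K</B> and <B>STATUS 1024</B>):

```shell
# BUFFERSIZE 16K and STATUS 1024 yield a status message every 16 MB:
# 16 KB/buffer * 1024 buffers = 16384 KB = 16 MB
buffersize_kb=16
status=1024
echo "$(( buffersize_kb * status / 1024 )) MB between status messages"
# prints: 16 MB between status messages
```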
<P><B>The TYPE Instruction</B>
<P>The <B>TYPE</B> instruction takes a character string as its argument,
in the following format:
<PRE> TYPE <VAR>program_name</VAR>
</PRE>
<P>where <VAR>program_name</VAR> names the XBSA server program that is running
on the machine named by the <B>SERVER</B> instruction. This
instruction is mandatory when the <B>SERVER</B> instruction appears in the
file. The acceptable values depend on which XBSA servers are supported
in the current AFS release. In the General Availability release of AFS
3.6, the only acceptable value is <B>tsm</B>.
<P><B>The UNMOUNT Instruction</B>
<P>The <B>UNMOUNT</B> instruction takes a pathname as its argument, in the
following format:
<PRE> UNMOUNT <VAR>filename</VAR>
</PRE>
<P>where <VAR>filename</VAR> is the full pathname of an executable file on the
local disk that contains a shell script or program (for clarity, the following
discussion refers to scripts only). If the configuration file is for an
automated tape device, the script invokes the routine or command provided by
the device's manufacturer for unmounting a tape (removing it from the
tape reader). If the configuration file is for a backup data file, it
can instruct the Tape Coordinator to perform additional actions after closing
the backup data file. This instruction does not apply to XBSA
servers.
<P>The administrator must write the script, including the appropriate routines
and logic. The AFS distribution does not include any scripts, although
an example appears in the following <B>Examples</B> section. The
command or routines invoked by the script inherit the local identity (UNIX
UID) and AFS tokens of the <B>butc</B> command's issuer.
<P>After closing a tape device or backup data file, the Tape Coordinator
checks the configuration file for an <B>UNMOUNT</B> instruction, whether
or not the <B>close</B> operation succeeds. If there is no
<B>UNMOUNT</B> instruction, the Tape Coordinator takes no action, in which
case the operator must take the action necessary to remove the current tape
from the drive before another can be inserted. If there is an
<B>UNMOUNT</B> instruction, the Tape Coordinator executes the referenced
file. It invokes the routine only once, passing in the following
parameters:
<UL>
<P><LI>The tape device pathname (as specified in the
<B>/usr/afs/backup/tapeconfig</B> file)
<P><LI>The tape operation (always <B>unmount</B>)
</UL>
<P><STRONG>Privilege Required</STRONG>
<P>The file is protected by UNIX mode bits. Creating the file requires
the <B>w</B> (<B>write</B>) and <B>x</B> (<B>execute</B>)
permissions on the <B>/usr/afs/backup</B> directory. Editing the
file requires the <B>w</B> (<B>write</B>) permission on the
file.
<P><STRONG>Examples</STRONG>
<P>The following example configuration files demonstrate one way to structure
a configuration file for a stacker or backup data file.  The examples
are not necessarily appropriate for a specific cell; if using them as
models, be sure to adapt them to the cell's needs and equipment.
<P><B>Example</B> <B>CFG_</B><VAR>tcid</VAR> <B>File for
Stackers</B>
<P>In this example, the administrator creates the following entry for a tape
stacker called <B>stacker0.1</B> in the
<B>/usr/afs/backup/tapeconfig</B> file. It has port offset
0.
<PRE> 2G 5K /dev/stacker0.1 0
</PRE>
<P>The administrator includes the following five lines in the
<B>/usr/afs/backup/CFG_stacker0.1</B> file. To review the
meaning of each instruction, see the preceding <B>Description</B>
section.
<PRE> MOUNT /usr/afs/backup/stacker0.1
UNMOUNT /usr/afs/backup/stacker0.1
AUTOQUERY NO
ASK NO
NAME_CHECK NO
</PRE>
<P>Finally, the administrator writes the following executable routine in the
<B>/usr/afs/backup/stacker0.1</B> file referenced by the
<B>MOUNT</B> and <B>UNMOUNT</B> instructions in the
<B>CFG_stacker0.1</B> file.
<PRE> #! /bin/csh -f
set devicefile = $1
set operation = $2
set tries = $3
set tapename = $4
set tapeid = $5
set exit_continue = 0
set exit_abort = 1
set exit_interactive = 2
#--------------------------------------------
if (${tries} > 1) then
echo "Too many tries"
exit ${exit_interactive}
endif
if (${operation} == "unmount") then
echo "UnMount: Will leave tape in drive"
exit ${exit_continue}
endif
if ((${operation} == "dump") |\
(${operation} == "appenddump") |\
(${operation} == "savedb")) then
stackerCmd_NextTape ${devicefile}
if (${status} != 0) exit ${exit_interactive}
echo "Will continue"
exit ${exit_continue}
endif
if ((${operation} == "labeltape") |\
(${operation} == "readlabel")) then
echo "Will continue"
exit ${exit_continue}
endif
echo "Prompt for tape"
exit ${exit_interactive}
</PRE>
<P>This routine uses two of the parameters passed to it by the Backup
System: <TT>tries</TT> and <TT>operation</TT>. It follows the
recommended practice of prompting for a tape if the value of the
<TT>tries</TT> parameter exceeds one, because that implies that the stacker
is out of tapes.
<P>For a <B>backup dump</B> or <B>backup savedb</B> operation, the
routine calls the example <B>stackerCmd_NextTape</B> function provided by
the stacker's manufacturer. Note that the final lines in the file
return the exit code that prompts the operator to insert a tape; these
lines are invoked when either the stacker cannot load a tape or the operation
being performed is not one of those explicitly mentioned in the file (such as
a restore operation).
<P><B>Example CFG_</B><VAR>tcid</VAR> <B>File for Dumping to a Backup Data
File</B>
<P>In this example, the administrator creates the following entry for a backup
data file called <B>HSM_device</B> in the
<B>/usr/afs/backup/tapeconfig</B> file. It has port offset
20.
<PRE> 1G 0K /dev/HSM_device 20
</PRE>
<P>The administrator chooses to name the configuration file
<B>/usr/afs/backup/CFG_20</B>, using the port offset number rather than
deriving the <VAR>tcid</VAR> portion of the name from the backup data
file's name. She includes the following lines in the file.
To review the meaning of each instruction, see the preceding
<B>Description</B> section.
<PRE> MOUNT /usr/afs/backup/file
FILE YES
ASK NO
</PRE>
<P>Finally, the administrator writes the following executable routine in the
<B>/usr/afs/backup/file</B> file referenced by the <B>MOUNT</B>
instruction in the <B>CFG_20</B> file, to control how the Tape
Coordinator handles the file.
<PRE> #! /bin/csh -f
set devicefile = $1
set operation = $2
set tries = $3
set tapename = $4
set tapeid = $5
set exit_continue = 0
set exit_abort = 1
set exit_interactive = 2
#--------------------------------------------
if (${tries} > 1) then
echo "Too many tries"
exit ${exit_interactive}
endif
if (${operation} == "labeltape") then
echo "Won't label a tape/file"
exit ${exit_abort}
endif
if ((${operation} == "dump") |\
(${operation} == "appenddump") |\
(${operation} == "restore") |\
(${operation} == "savedb") |\
(${operation} == "restoredb")) then
/bin/rm -f ${devicefile}
/bin/ln -s /hsm/${tapename}_${tapeid} ${devicefile}
if (${status} != 0) exit ${exit_abort}
endif
exit ${exit_continue}
</PRE>
<P>Like the example routine for a tape stacker, this routine uses the
<TT>tries</TT> and <TT>operation</TT> parameters passed to it by the
Backup System. The <TT>tries</TT> parameter tracks how many times the
Tape Coordinator has attempted to access the file. A value greater than
one indicates that the Tape Coordinator cannot access it, and the routine
returns exit code 2 (<TT>exit_interactive</TT>), which results in a prompt
for the operator to load a tape. The operator can use this opportunity
to change the name of the backup data file specified in the
<B>tapeconfig</B> file.
<P>The primary function of this routine is to establish a link between the
device file and the file to be dumped or restored. When the Tape
Coordinator is executing a <B>backup dump</B>, <B>backup restore</B>,
<B>backup savedb</B>, or <B>backup restoredb</B> operation, the
routine invokes the UNIX <B>ln -s</B> command to create a symbolic link
from the backup data file named in the <B>tapeconfig</B> file to the
actual file to use (this is the recommended method). It uses the value
of the <TT>tapename</TT> and <TT>tapeid</TT> parameters to construct the
file name.
<P><B>Example</B> <B>CFG_</B><VAR>tcid</VAR> <B>File for an XBSA
Server</B>
<P>The following is an example of a configuration file called
<B>/usr/afs/backup/CFG_22</B>, for a Tape Coordinator with port offset 22
that communicates with a Tivoli Storage Manager (TSM) server.  The
combination of <B>BUFFERSIZE</B> and <B>STATUS</B> instructions
results in a status message after each 16 MB of data is dumped.  To
review the meaning of the other instructions, see the preceding
<B>Description</B> section.
<PRE> SERVER tsmserver1.abc.com
TYPE tsm
PASSWORD TESTPASS
NODE testnode
MGMTCLASS standard
MAXPASS 1
GROUPID 1000
CENTRALLOG /usr/afs/backup/centrallog
BUFFERSIZE 16K
STATUS 1024
</PRE>
<P><STRONG>Related Information</STRONG>
<P><B>tapeconfig</B>
<P><B>backup deletedump</B>
<P><B>backup diskrestore</B>
<P><B>backup dump</B>
<P><B>backup dumpinfo</B>
<P><B>backup restoredb</B>
<P><B>backup savedb</B>
<P><B>backup volrestore</B>
<P><B>backup volsetrestore</B>
<P>
<H3><A NAME="HDRCLI_NETRESTRICT" HREF="aurns002.htm#ToC_48">NetRestrict (client version)</A></H3>
<P><STRONG>Purpose</STRONG>
<P>Defines client interfaces not to register with the File Server
<P><STRONG>Description</STRONG>
<P>The <B>NetRestrict</B> file, if present in a client machine's
<B>/usr/vice/etc</B> directory, defines the IP addresses of the interfaces
that the local Cache Manager does not register with a File Server when first
establishing a connection to it. For an explanation of how the File
Server uses the registered interfaces, see the reference page for the client
version of the <B>NetInfo</B> file.
<P>As it initializes, the Cache Manager constructs a list of interfaces to
register, from the <B>/usr/vice/etc/NetInfo</B> file if it exists, or from
the list of interfaces configured with the operating system otherwise.
The Cache Manager then removes from the list any addresses that appear in the
<B>NetRestrict</B> file, if it exists. The Cache Manager records
the resulting list in kernel memory.
<P>The <B>NetRestrict</B> file is in ASCII format. One IP address
appears on each line, in dotted decimal format. The order of the
addresses is not significant.
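<P>For example, a client <B>NetRestrict</B> file that excludes two interfaces
contains nothing but the addresses themselves, one per line (these addresses
are illustrative only):

```
192.12.105.100
192.12.107.33
```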
<P>To display the addresses the Cache Manager is currently registering with
File Servers, use the <B>fs getclientaddrs</B> command.
<P><STRONG>Related Information</STRONG>
<P><B>NetInfo</B> (client version)
<P><B>fs getclientaddrs</B>
<P>
<H3><A NAME="HDRSV_NETRESTRICT" HREF="aurns002.htm#ToC_49">NetRestrict (server version)</A></H3>
<P><STRONG>Purpose</STRONG>
<P>Defines interfaces that File Server does not register in VLDB and Ubik does
not use for database server machines
<P><STRONG>Description</STRONG>
<P>The <B>NetRestrict</B> file, if present in the
<B>/usr/afs/local</B> directory, defines the following:
<UL>
<P><LI>On a file server machine, the local interfaces that the File Server
(<B>fileserver</B> process) does not register in the Volume Location
Database (VLDB) at initialization time
<P><LI>On a database server machine, the local interfaces that the Ubik
synchronization library does not use when communicating with the database
server processes running on other database server machines
</UL>
<P>As it initializes, the File Server constructs a list of interfaces to
register, from the <B>/usr/afs/local/NetInfo</B> file if it exists, or
from the list of interfaces configured with the operating system
otherwise. The File Server then removes from the list any addresses
that appear in the <B>NetRestrict</B> file, if it exists. The File
Server records the resulting list in the <B>/usr/afs/local/sysid</B> file
and registers the interfaces in the VLDB. The database server processes
use a similar procedure when initializing, to determine which interfaces to
use for communication with the peer processes on other database machines in
the cell.
<P>The <B>NetRestrict</B> file is in ASCII format. One IP address
appears on each line, in dotted decimal format. The order of the
addresses is not significant.
<P>To display the File Server interface addresses registered in the VLDB, use
the <B>vos listaddrs</B> command.
<P><STRONG>Related Information</STRONG>
<P><B>NetInfo</B> (server version)
<P><B>sysid</B>
<P><B>vldb.DB0</B> and <B>vldb.DBSYS1</B>
<P><B>fileserver</B>
<P><B>vos listaddrs</B>
<P>
<H3><A NAME="HDRBK_DELETEDUMP" HREF="aurns002.htm#ToC_50">backup deletedump</A></H3>
<P><STRONG>Purpose</STRONG>
<P>Deletes one or more dump records from the Backup Database
<P><STRONG>Synopsis</STRONG>
<PRE><B>backup deletedump</B> [<B>-dumpid</B> &lt;<VAR>dump&nbsp;id</VAR>><SUP>+</SUP>] [<B>-from</B> &lt;<VAR>date&nbsp;time</VAR>><SUP>+</SUP>] [<B>-to</B> &lt;<VAR>date&nbsp;time</VAR>><SUP>+</SUP>]
[<B>-port</B> &lt;<VAR>TC&nbsp;port&nbsp;offset</VAR>>] [<B>-groupid</B> &lt;<VAR>group&nbsp;ID</VAR>>]
[<B>-dbonly</B>] [<B>-force</B>] [<B>-noexecute</B>]
[<B>-localauth</B>] [<B>-cell</B> &lt;<VAR>cell&nbsp;name</VAR>>] [<B>-help</B>]
<B>backup dele</B> [<B>-du</B> &lt;<VAR>dump&nbsp;id</VAR>><SUP>+</SUP>] [<B>-fr</B> &lt;<VAR>date&nbsp;time</VAR>><SUP>+</SUP>] [<B>-t</B> &lt;<VAR>date&nbsp;time</VAR>><SUP>+</SUP>]
[<B>-p</B> &lt;<VAR>TC&nbsp;port&nbsp;offset</VAR>>] [<B>-g</B> &lt;<VAR>group&nbsp;ID</VAR>>] [<B>-db</B>] [<B>-fo</B>] [<B>-n</B>]
[<B>-l</B>] [<B>-c</B> &lt;<VAR>cell&nbsp;name</VAR>>] [<B>-h</B>]
</PRE>
<P><STRONG>Description</STRONG>
<P>The <B>backup deletedump</B> command deletes one or more dump records
from the Backup Database. Using this command is appropriate when dump
records are incorrect (possibly because a dump operation was interrupted or
failed), or when they represent dumps that are expired or otherwise no longer
needed.
<P>To specify the records to delete, use one of the following arguments or
combinations of arguments:
<UL>
<P><LI>The <B>-dumpid</B> argument deletes the record for each specified dump
ID number.
<P><LI>The <B>-groupid</B> argument deletes each record with the specified
group ID number. A group ID number is associated with a record if the
<B>GROUPID</B> instruction appears in the Tape Coordinator's <B>
/usr/afs/backup/CFG_</B><VAR>tcid</VAR> file when the dump is created.
To display a dump set's group ID, include the <B>-verbose</B> and
<B>-id</B> options to the <B>backup dumpinfo</B> command; the
group ID appears in the output's <TT>Group id</TT> field.
<P><LI>The <B>-from</B> and <B>-to</B> arguments delete the records for
all regular dumps created during the time period bracketed by the specified
values. The <B>-from</B> argument can be omitted, in which case the
command deletes records created before the time specified by the
<B>-to</B> argument.
<P><LI>The combination of the <B>-groupid</B>, <B>-to</B>, and optionally
<B>-from</B> arguments deletes the records for all regular dumps created
during the specified time period that are also marked with the specified group
ID number.
</UL>
<P>The command can also delete dump records maintained by an XBSA server at
the same time as the corresponding Backup Database records. (An
<I>XBSA server</I> is a third-party backup utility that implements the
Open Group's Backup Service API [XBSA].) Include the
<B>-port</B> argument to identify the Tape Coordinator that communicates
with the XBSA server. To delete the Backup Database records without
attempting to delete the records at the XBSA server, include the
<B>-dbonly</B> flag. To delete the Backup Database records even if
an attempt to delete the records at the XBSA server fails, include the
<B>-force</B> flag.
<P><STRONG>Cautions</STRONG>
<P>The only way to remove the dump record for an appended dump is to remove
the record for its initial dump, and doing so removes the records for all
dumps appended to the initial dump.
<P>The only way to remove the record for a Backup Database dump (created with
the <B>backup savedb</B> command) is to specify its dump ID number with
the <B>-dumpid</B> argument. Using the <B>-from</B> and
<B>-to</B> arguments never removes database dump records.
<P>Removing a dump's record makes it impossible to restore data from it
or from any dump that refers to the deleted dump as its parent, directly or
indirectly. That is, restore operations must begin with a full dump and
continue with each incremental dump in order. If the records for a
specific dump are removed, it is not possible to restore data from later
incremental dumps. If necessary, use the <B>-dbadd</B> flag to the
<B>backup scantape</B> command to regenerate a dump record so that the
dump can act as a parent again.
<P>If a dump set contains any dumps that were created outside the time range
specified by the <B>-from</B> and <B>-to</B> arguments, the command
does not delete any of the records associated with the dump set, even if some
of them represent dumps created during the time range.
<P><STRONG>Options</STRONG>
<DL>
<P><DT><B>-dumpid
</B><DD>Specifies the dump ID of each dump record to delete. The
corresponding dumps must be initial dumps; it is not possible to delete
appended dump records directly, but only by deleting the record of their
associated initial dump. Using this argument is the only way to delete
records of Backup Database dumps (created with the <B>backup savedb</B>
command).
<P>Provide either this argument, the <B>-to</B> (and optionally
<B>-from</B>) argument, or the <B>-groupid</B> argument.
<P><DT><B>-from
</B><DD>Specifies the beginning of a range of dates; the record for any dump
created during the indicated period of time is deleted.
<P>Omit this argument to indicate the default of midnight (00:00 hours)
on 1 January 1970 (UNIX time zero), or provide a date value in the format
<VAR>mm/dd/yyyy</VAR> [<VAR>hh:MM</VAR>]. The month (<VAR>mm</VAR>),
day (<VAR>dd</VAR>), and year (<VAR>yyyy</VAR>) are required. The hour and
minutes (<VAR>hh</VAR>:<VAR>MM</VAR>) are optional, but if provided must be
in 24-hour format (for example, the value <B>14:36</B> represents
2:36 p.m.). If omitted, the time defaults to
midnight (00:00 hours).
<P>The <B>-to</B> argument must be provided along with this one.
<TABLE><TR><TD ALIGN="LEFT" VALIGN="TOP"><B>Note:</B></TD><TD ALIGN="LEFT" VALIGN="TOP">A plus sign follows this argument in the command's syntax statement
because it accepts a multiword value which does not need to be enclosed in
double quotes or other delimiters, not because it accepts multiple
dates. Provide only one date (and optionally, time) definition.
</TD></TR></TABLE>
<P><DT><B>-to
</B><DD>Specifies the end of a range of dates; the record of any regular dump
created during the range is deleted from the Backup Database.
<P>Provide either the value <B>NOW</B> to indicate the current date and
time, or a date value in the same format as for the <B>-from</B>
argument. Valid values for the year (<VAR>yyyy</VAR>) range from
<B>1970</B> to <B>2037</B>; higher values are not valid because
the latest possible date in the standard UNIX representation is in February
2038. The command interpreter automatically reduces any later date to
the maximum value.
<P>If the time portion (<VAR>hh:MM</VAR>) is omitted, it defaults to 59
seconds after midnight (00:00:59 hours). Similarly, the
<B>backup</B> command interpreter automatically adds 59 seconds to any
time value provided. In both cases, adding 59 seconds compensates for
how the Backup Database and <B>backup dumpinfo</B> command represent dump
creation times in hours and minutes only. For example, the Database
records a creation timestamp of <TT>20:55</TT> for any dump operation
that begins between 20:55:00 and 20:55:59.
Automatically adding 59 seconds to a time thus includes the records for all
dumps created during that minute.
<P>Provide either this argument, the <B>-dumpid</B> argument, or the
<B>-groupid</B> argument, or combine this argument and the
<B>-groupid</B> argument. This argument is required if the
<B>-from</B> argument is provided.
<P><B>Caution:</B> Specifying the value <B>NOW</B> for this
argument when the <B>-from</B> argument is omitted deletes all dump
records from the Backup Database (except for Backup Database dump records
created with the <B>backup savedb</B> command).
<TABLE><TR><TD ALIGN="LEFT" VALIGN="TOP"><B>Note:</B></TD><TD ALIGN="LEFT" VALIGN="TOP">A plus sign follows this argument in the command's syntax statement
because it accepts a multiword value which does not need to be enclosed in
double quotes or other delimiters, not because it accepts multiple
dates. Provide only one date (and optionally, time) definition.
</TD></TR></TABLE>
<P><DT><B>-port
</B><DD>Specifies the port offset number of the Tape Coordinator that communicates
with the XBSA server that maintains the records to delete. It must be
the Tape Coordinator that transferred AFS data to the XBSA server when the
dump was created. The corresponding records in the Backup Database are
also deleted.
<P>This argument is meaningful only when deleting records maintained by an
XBSA server. Do not combine it with the <B>-dbonly</B> flag.
If this argument is omitted when other options pertinent to an XBSA server are
included, the Tape Coordinator with port offset 0 (zero) is used.
<P><DT><B>-groupid
</B><DD>Specifies the group ID number that is associated with the records to
delete. The Tape Coordinator ignores group IDs if this argument is
omitted.
<P>Provide either this argument, the <B>-dumpid</B> argument, or the
<B>-to</B> argument.  This argument can be combined with the <B>-to</B>
argument, but not with the <B>-dumpid</B> argument.
<P><DT><B>-dbonly
</B><DD>Deletes records from the Backup Database without attempting to delete the
corresponding records maintained by an XBSA server. Do not combine this
flag with the <B>-port</B> argument or the <B>-force</B> flag.
<P><DT><B>-force
</B><DD>Deletes the specified records from the Backup Database even when the
attempt to delete the corresponding records maintained by an XBSA server
fails. Do not combine this flag with the <B>-dbonly</B>
flag. To identify the Tape Coordinator when this argument is used,
either provide the <B>-port</B> argument or omit it to specify the Tape
Coordinator with port offset 0 (zero).
<P><DT><B>-noexecute
</B><DD>Displays a list of the dump records to be deleted, without actually
deleting them. Combine it with the options to be included on the actual
command.
<P><DT><B>-localauth
</B><DD>Constructs a server ticket using a key from the local
<B>/usr/afs/etc/KeyFile</B> file. The <B>backup</B> command
interpreter presents it to the Backup Server, Volume Server and VL Server
during mutual authentication. Do not combine this flag with the
<B>-cell</B> argument. For more details, see the introductory
<B>backup</B> reference page.
<P><DT><B>-cell
</B><DD>Names the cell in which to run the command. Do not combine this
argument with the <B>-localauth</B> flag. For more details, see the
introductory <B>backup</B> reference page.
<P><DT><B>-help
</B><DD>Prints the online help for this command. All other valid options
are ignored.
</DL>
<P><STRONG>Output</STRONG>
<P>If the <B>-noexecute</B> flag is not included, the output generated at
the conclusion of processing lists the dump IDs of all deleted dump records,
in the following format:
<PRE> The following dumps were deleted:
<VAR>dump ID 1</VAR>
<VAR>dump ID 2</VAR>
<VAR>etc.</VAR>
</PRE>
<P>If the <B>-noexecute</B> flag is included, the output instead lists the
dump IDs of all dump records to be deleted, in the following format:
<PRE> The following dumps would have been deleted:
<VAR>dump ID 1</VAR>
<VAR>dump ID 2</VAR>
<VAR>etc.</VAR>
</PRE>
<P>The notation <TT>Appended Dump</TT> after a dump ID indicates that the
dump is to be deleted because it is appended to an initial dump that also
appears in the list, even if the appended dump's dump ID or group ID
number was not specified on the command line. For more about deleting
appended dumps, see the preceding <B>Cautions</B> section of this
reference page.
<P><STRONG>Examples</STRONG>
<P>The following command deletes the dump record with dump ID 653777462, and
the records for any appended dumps associated with it:
<PRE> % <B>backup deletedump -dumpid 653777462</B>
The following dumps were deleted:
653777462
</PRE>
<P>The following command deletes the Backup Database record of all dumps
created between midnight on 1 January 1999 and 23:59:59 hours on
31 December 1999:
<PRE> % <B>backup deletedump -from 01/01/1999 -to 12/31/1999</B>
The following dumps were deleted:
598324045
598346873
...
...
653777523
653779648
</PRE>
<P><STRONG>Privilege Required</STRONG>
<P>The issuer must be listed in the <B>/usr/afs/etc/UserList</B> file on
every machine where the Backup Server is running, or must be logged onto a
server machine as the local superuser <B>root</B> if the
<B>-localauth</B> flag is included.
<P><STRONG>Related Information</STRONG>
<P><B>CFG_</B><VAR>tcid</VAR>
<P><B>backup</B>
<P><B>backup dumpinfo</B>
<P><B>backup scantape</B>
<P>
<H3><A NAME="HDRBK_DUMPINFO" HREF="aurns002.htm#ToC_51">backup dumpinfo</A></H3>
<P><STRONG>Purpose</STRONG>
<P>Displays a dump record from the Backup Database
<P><STRONG>Synopsis</STRONG>
<PRE><B>backup dumpinfo</B> [<B>-ndumps</B> &lt;<VAR>no.&nbsp;of&nbsp;dumps</VAR>>] [<B>-id</B> &lt;<VAR>dump&nbsp;id</VAR>>]
[<B>-verbose</B>] [<B>-localauth</B>] [<B>-cell</B> &lt;<VAR>cell&nbsp;name</VAR>>] [<B>-help</B> ]
<B>backup dumpi</B> [<B>-n</B> &lt;<VAR>no.&nbsp;of&nbsp;dumps</VAR>>] [<B>-i</B> &lt;<VAR>dump&nbsp;id</VAR>>]
[<B>-v</B>] [<B>-l</B>] [<B>-c</B> &lt;<VAR>cell&nbsp;name</VAR>>] [<B>-h</B>]
</PRE>
<P><STRONG>Description</STRONG>
<P>The <B>backup dumpinfo</B> command formats and displays the Backup
Database record for the specified dumps. To specify how many of the
most recent dumps to display, starting with the newest one and going back in
time, use the <B>-ndumps</B> argument. To display more detailed
information about a single dump, use the <B>-id</B> argument. To
display the records for the 10 most recent dumps, omit both the
<B>-ndumps</B> and <B>-id</B> arguments.
<P>The <B>-verbose</B> flag produces very detailed information that is
useful mostly for debugging purposes. It can be combined only with the
<B>-id</B> argument.
<P><STRONG>Options</STRONG>
<DL>
<P><DT><B>-ndumps
</B><DD>Displays the Backup Database record for each of the specified number of
dumps that were most recently performed. If the database contains fewer
dumps than are requested, the output includes the records for all existing
dumps. Do not combine this argument with the <B>-id</B> or
<B>-verbose</B> options; omit all options to display the records for
the last 10 dumps.
<P><DT><B>-id
</B><DD>Specifies the dump ID number of a single dump for which to display the
Backup Database record. Precede the <VAR>dump id</VAR> value with the
<B>-id</B> switch; otherwise, the command interpreter interprets it
as the value of the <B>-ndumps</B> argument. Combine this argument
with the <B>-verbose</B> flag if desired, but not with the
<B>-ndumps</B> argument; omit all options to display the records for
the last 10 dumps.
<P><DT><B>-verbose
</B><DD>Provides more detailed information about the dump specified with the
<B>-id</B> argument, which must be provided along with it. Do not
combine this flag with the <B>-ndumps</B> argument.
<P><DT><B>-localauth
</B><DD>Constructs a server ticket using a key from the local
<B>/usr/afs/etc/KeyFile</B> file. The <B>backup</B> command
interpreter presents it to the Backup Server, Volume Server and VL Server
during mutual authentication. Do not combine this flag with the
<B>-cell</B> argument. For more details, see the introductory
<B>backup</B> reference page.
<P><DT><B>-cell
</B><DD>Names the cell in which to run the command. Do not combine this
argument with the <B>-localauth</B> flag. For more details, see the
introductory <B>backup</B> reference page.
<P><DT><B>-help
</B><DD>Prints the online help for this command. All other valid options
are ignored.
</DL>
<P><STRONG>Output</STRONG>
<P>If the <B>-ndumps</B> argument is provided, the output presents the
following information in table form, with a separate line for each dump:
<DL>
<P><DT><B><TT>dumpid</TT>
</B><DD>The dump ID number.
<P><DT><B><TT>parentid</TT>
</B><DD>The dump ID number of the dump's parent dump. A value of
<TT>0</TT> (zero) identifies a full dump.
<P><DT><B><TT>lv</TT>
</B><DD>The depth in the dump hierarchy of the dump level used to create the
dump. A value of <TT>0</TT> (zero) identifies a full dump, in which
case the value in the <TT>parentid</TT> field is also <TT>0</TT>. A
value of <TT>1</TT> or greater indicates an incremental dump made at the
corresponding level in the dump hierarchy.
<P><DT><B><TT>created</TT>
</B><DD>The date and time at which the Backup System started the dump operation
that created the dump.
<P><DT><B><TT>nt</TT>
</B><DD>The number of tapes that contain the data in the dump. A value of
<TT>0</TT> (zero) indicates that the dump operation was terminated or
failed. Use the <B>backup deletedump</B> command to remove such
entries.
<P><DT><B><TT>nvols</TT>
</B><DD>The number of volumes from which the dump includes data. If a
volume spans tapes, it is counted twice. A value of <TT>0</TT> (zero)
indicates that the dump operation was terminated or failed; the value in
the <TT>nt</TT> field is also <TT>0</TT> in this case.
<P><DT><B><TT>dump name</TT>
</B><DD>The dump name in the form
<PRE> <VAR>volume_set_name</VAR>.<VAR>dump_level_name</VAR> (<VAR>initial_dump_ID</VAR>)
</PRE>
<P>
<P>where <VAR>volume_set_name</VAR> is the name of the volume set, and
<VAR>dump_level_name</VAR> is the last element in the dump level pathname at
which the volume set was dumped.
<P>The <VAR>initial_dump_ID</VAR>, if displayed, is the dump ID of the initial
dump in the dump set to which this dump belongs. If there is no value
in parentheses, the dump is the initial dump in a dump set that has no
appended dumps.
</DL>
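<P>The dump name format just described lends itself to simple parsing. The
following Python sketch is illustrative only and is not part of AFS; the
function name is hypothetical, and it assumes the dump level name itself
contains no periods:

```python
import re

def parse_dump_name(dump_name):
    """Split a dump name of the form
    'volume_set_name.dump_level_name (initial_dump_ID)' into its parts.
    The parenthesized initial dump ID is optional; it is absent when the
    dump is the initial dump of a set with no appended dumps."""
    m = re.match(r"^(?P<vset>.+)\.(?P<level>[^.\s]+?)(?:\s+\((?P<init>\d+)\))?$",
                 dump_name)
    if m is None:
        raise ValueError("unrecognized dump name: %r" % dump_name)
    init = m.group("init")
    return m.group("vset"), m.group("level"), int(init) if init else None
```

For example, parsing <TT>user.monday1 (922097346)</TT> yields the volume set
<TT>user</TT>, the dump level name <TT>monday1</TT>, and the initial dump ID
922097346.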
<P>If the <B>-id</B> argument is provided alone, the first line of output
begins with the string <TT>Dump</TT> and reports information for the entire
dump in the following fields:
<DL>
<P><DT><B><TT>id</TT>
</B><DD>The dump ID number.
<P><DT><B><TT>level</TT>
</B><DD>The depth in the dump hierarchy of the dump level used to create the
dump. A value of <TT>0</TT> (zero) identifies a full dump. A
value of <TT>1</TT> (one) or greater indicates an incremental dump made at
the specified level in the dump hierarchy.
<P><DT><B><TT>volumes</TT>
</B><DD>The number of volumes for which the dump includes data.
<P><DT><B><TT>created</TT>
</B><DD>The date and time at which the dump operation began.
</DL>
<P>If an XBSA server was the backup medium for the dump (rather than a tape
device or backup data file), the following line appears next:
<PRE> Backup Service: <VAR>XBSA_program</VAR>: Server: <VAR>hostname</VAR>
</PRE>
<P>where <VAR>XBSA_program</VAR> is the name of the XBSA-compliant program and
<VAR>hostname</VAR> is the name of the machine on which the program runs.
<P>Next the output includes an entry for each tape that houses volume data
from the dump. Following the string <TT>Tape</TT>, the first two
lines of each entry report information about that tape in the following
fields:
<DL>
<P><DT><B><TT>name</TT>
</B><DD>The tape's permanent name if it has one, or its AFS tape name
otherwise, and its tape ID number in parentheses.
<P><DT><B><TT>nVolumes</TT>
</B><DD>The number of volumes for which this tape includes dump data.
<P><DT><B><TT>created</TT>
</B><DD>The date and time at which the Tape Coordinator began writing data to this
tape.
</DL>
<P>Following another blank line, the tape-specific information concludes with
a table that includes a line for each volume dump on the tape. The
information appears in columns with the following headings:
<DL>
<P><DT><B><TT>Pos</TT>
</B><DD>The relative position of each volume in this tape or file. On a
tape, the counter begins at position 2 (the tape label occupies position 1),
and increments by one for each volume. For volumes in a backup data
file, the position numbers start with 1 and do not necessarily increment by
one, because each is the ordinal of the 16 KB offset in the file at which the
volume's data begins. The difference between the position numbers
therefore indicates how many 16 KB blocks each volume's data
occupies. For example, if the second volume is at position 5 and the
third volume in the list is at position 9, that means that the dump of the
second volume occupies 64 KB (four 16-KB blocks) of space in the file.
<P><DT><B><TT>Clone time</TT>
</B><DD>For a backup or read-only volume, the time at which it was cloned from its
read/write source. For a read/write volume, it is the same as the dump
creation date reported on the first line of the output.
<P><DT><B><TT>Nbytes</TT>
</B><DD>The number of bytes of data in the dump of the volume.
<P><DT><B><TT>Volume</TT>
</B><DD>The volume name, complete with <TT>.backup</TT> or
<TT>.readonly</TT> extension if appropriate.
</DL>
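<P>The relationship between <TT>Pos</TT> values and the space a volume dump
occupies in a backup data file can be sketched in Python (a hypothetical
helper for illustration, not an AFS interface):

```python
BLOCK_SIZE = 16 * 1024  # the Backup System writes data at 16 KB offsets

def space_occupied_kb(position, next_position):
    """Given the Pos value of a volume in a backup data file and the Pos
    value of the next volume, return the space the first volume's dump
    occupies, in kilobytes (the difference in ordinals times 16 KB)."""
    if next_position <= position:
        raise ValueError("positions must increase")
    return (next_position - position) * BLOCK_SIZE // 1024
```

With the positions from the example above, 5 and 9, the function returns 64
kilobytes (four 16-KB blocks).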
<P>If both the <B>-id</B> and <B>-verbose</B> options are provided,
the output is divided into several sections:
<UL>
<P><LI>The first section, headed by the underlined string <TT>Dump</TT>,
includes information about the entire dump. The fields labeled
<TT>id</TT>, <TT>level</TT>, <TT>created</TT>, and <TT>nVolumes</TT>
report the same values (though in a different order) as appear on the first
line of output when the <B>-id</B> argument is provided by itself.
Other fields of potential interest to the backup operator are:
<DL>
<P><DT><B><TT>Group id</TT>
</B><DD>The dump's <I>group ID number</I>, which is recorded in the
dump's Backup Database record if the <B>GROUPID</B> instruction
appears in the Tape Coordinator's <B>
/usr/afs/backup/CFG_</B><VAR>tcid</VAR> file when the dump is created.
<P><DT><B><TT>maxTapes</TT>
</B><DD>The number of tapes that contain the dump set to which this dump
belongs.
<P><DT><B><TT>Start Tape Seq</TT>
</B><DD>The ordinal of the tape on which this dump begins in the set of tapes that
contain the dump set.
</DL>
<P><LI>For each tape that contains data from this dump, there follows a section
headed by the underlined string <TT>Tape</TT>. The fields labeled
<TT>name</TT>, <TT>written</TT>, and <TT>nVolumes</TT> report the same
values (though in a different order) as appear on the second and third lines
of output when the <B>-id</B> argument is provided by itself. Other
fields of potential interest to the backup operator are:
<DL>
<P><DT><B><TT>expires</TT>
</B><DD>The date and time when this tape can be recycled, because all dumps it
contains have expired.
<P><DT><B><TT>nMBytes Data</TT> and <TT>nBytes Data</TT>
</B><DD>Summed together, these fields represent the total amount of dumped data
actually from volumes (as opposed to labels, filemarks, and other
markers).
<P><DT><B><TT>KBytes Tape Used</TT>
</B><DD>The number of kilobytes of tape (or disk space, for a backup data file)
used to store the dump data. It is generally larger than the sum of the
values in the <TT>nMBytes Data</TT> and <TT>nBytes Data</TT> fields,
because it includes the space required for the label, file marks, and other
markers, and because the Backup System writes data at 16 KB offsets, even if
the data in a given block doesn't fill the entire 16 KB.
</DL>
<P><LI>For each volume on a given tape, there follows a section headed by the
underlined string <TT>Volume</TT>. The fields labeled
<TT>name</TT>, <TT>position</TT>, <TT>clone</TT>, and <TT>nBytes</TT>
report the same values (though in a different order) as appear in the table
that lists the volumes in each tape when the <B>-id</B> argument is
provided by itself. Other fields of potential interest to the backup
operator are:
<DL>
<P><DT><B><TT>id</TT>
</B><DD>The volume ID.
<P><DT><B><TT>tape</TT>
</B><DD>The name of the tape containing this volume data.
</DL>
</UL>
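<P>The block rounding mentioned for the <TT>KBytes Tape Used</TT> field can be
illustrated with a short Python sketch (a hypothetical helper; it accounts
only for the 16 KB offsets, not for the label and file-mark overhead):

```python
import math

BLOCK_SIZE = 16 * 1024  # dump data is written at 16 KB offsets

def min_blocks(n_bytes):
    """Minimum number of 16 KB blocks that n_bytes of volume data
    occupies; a partially filled block still consumes a full 16 KB
    slot, which is one reason KBytes Tape Used exceeds the summed
    nMBytes Data and nBytes Data values."""
    return math.ceil(n_bytes / BLOCK_SIZE)
```

For the 19092-byte volume dump shown in the verbose example below, this gives
2 blocks (32 KB) before label and file-mark overhead is added.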
<P><STRONG>Examples</STRONG>
<P>The following example displays information about the last five dumps:
<PRE>   % <B>backup dumpinfo -ndumps 5</B>
</PRE>
<P>The following example displays a more detailed record for a single
dump.
<PRE> % <B>backup dumpinfo -id 922097346</B>
Dump: id 922097346, level 0, volumes 1, created Mon Mar 22 05:09:06 1999
Tape: name monday.user.backup (922097346)
nVolumes 1, created 03/22/1999 05:09
Pos Clone time Nbytes Volume
1 03/22/1999 04:43 27787914 user.pat.backup
</PRE>
<P>The following example displays even more detailed information about the
dump displayed in the previous example (dump ID 922097346). This
example includes only one exemplar of each type of section (<TT>Dump</TT>,
<TT>Tape</TT>, and <TT>Volume</TT>):
<PRE> % <B>backup dumpinfo -id 922097346 -verbose</B>
Dump
----
id = 922097346
Initial id = 0
Appended id = 922099568
parent = 0
level = 0
flags = 0x0
volumeSet = user
dump path = /monday1
name = user.monday1
created = Mon Mar 22 05:09:06 1999
nVolumes = 1
Group id = 10
tapeServer =
format= user.monday1.%d
maxTapes = 1
Start Tape Seq = 1
name = pat
instance =
cell =
Tape
----
tape name = monday.user.backup
AFS tape name = user.monday1.1
flags = 0x20
written = Mon Mar 22 05:09:06 1999
expires = NEVER
kBytes Tape Used = 121
nMBytes Data = 0
nBytes Data = 19092
nFiles = 0
nVolumes = 1
seq = 1
tapeid = 0
useCount = 1
dump = 922097346
Volume
------
name = user.pat.backup
flags = 0x18
id = 536871640
server =
partition = 0
nFrags = 1
position = 2
clone = Mon Mar 22 04:43:06 1999
startByte = 0
nBytes = 19092
seq = 0
dump = 922097346
tape = user.monday1.1
</PRE>
<P><STRONG>Privilege Required</STRONG>
<P>The issuer must be listed in the <B>/usr/afs/etc/UserList</B> file on
every machine where the Backup Server is running, or must be logged onto a
server machine as the local superuser <B>root</B> if the
<B>-localauth</B> flag is included.
<P><STRONG>Related Information</STRONG>
<P><B>backup</B>
<P><B>backup deletedump</B>
<P>
<H3><A NAME="HDRBK_STATUS" HREF="aurns002.htm#ToC_52">backup status</A></H3>
<P><STRONG>Purpose</STRONG>
<P>Reports a Tape Coordinator's status
<P><STRONG>Synopsis</STRONG>
<PRE><B>backup status</B> [<B>-portoffset</B> &lt;<VAR>TC&nbsp;port&nbsp;offset</VAR>>]
[<B>-localauth</B>] [<B>-cell</B> &lt;<VAR>cell&nbsp;name</VAR>>] [<B>-help</B>]
<B>backup st</B> [<B>-p</B> &lt;<VAR>TC&nbsp;port&nbsp;offset</VAR>>] [<B>-l</B>] [<B>-c</B> &lt;<VAR>cell&nbsp;name</VAR>>] [<B>-h</B>]
</PRE>
<P><STRONG>Description</STRONG>
<P>The <B>backup status</B> command displays which operation, if any, the
indicated Tape Coordinator is currently executing.
<P><STRONG>Options</STRONG>
<DL>
<P><DT><B>-portoffset
</B><DD>Specifies the port offset number of the Tape Coordinator for which to
report the status.
<P><DT><B>-localauth
</B><DD>Constructs a server ticket using a key from the local
<B>/usr/afs/etc/KeyFile</B> file. The <B>backup</B> command
interpreter presents it to the Backup Server, Volume Server and VL Server
during mutual authentication. Do not combine this flag with the
<B>-cell</B> argument. For more details, see the introductory
<B>backup</B> reference page.
<P><DT><B>-cell
</B><DD>Names the cell in which to run the command. Do not combine this
argument with the <B>-localauth</B> flag. For more details, see the
introductory <B>backup</B> reference page.
<P><DT><B>-help
</B><DD>Prints the online help for this command. All other valid options
are ignored.
</DL>
<P><STRONG>Output</STRONG>
<P>The following message indicates that the Tape Coordinator is not currently
performing an operation:
<PRE> Tape coordinator is idle
</PRE>
<P>Otherwise, the output includes a message of the following format for each
running or pending operation:
<PRE> Task <VAR>task_ID</VAR>: <VAR>operation</VAR>: <VAR>status</VAR>
</PRE>
<P>where
<DL>
<P><DT><B><VAR>task_ID</VAR>
</B><DD>Is a task identification number assigned by the Tape Coordinator.
It begins with the Tape Coordinator's port offset number.
<P><DT><B><VAR>operation</VAR>
</B><DD>Identifies the operation the Tape Coordinator is performing, which is
initiated by the indicated command:
<UL>
<P><LI><TT>Dump</TT> (the <B>backup dump</B> command)
<P><LI><TT>Restore</TT> (the <B>backup diskrestore</B>, <B>backup
volrestore</B>, or <B>backup volsetrestore</B> commands)
<P><LI><TT>Labeltape</TT> (the <B>backup labeltape</B> command)
<P><LI><TT>Scantape</TT> (the <B>backup scantape</B> command)
<P><LI><TT>SaveDb</TT> (the <B>backup savedb</B> command)
<P><LI><TT>RestoreDb</TT> (the <B>backup restoredb</B> command)
</UL>
<P><DT><B><VAR>status</VAR>
</B><DD>Indicates the job's current status in one of the following
messages.
<DL>
<P><DT><B><VAR>number</VAR> <TT>Kbytes transferred, volume</TT> <VAR>volume_name</VAR>
</B><DD>For a running dump operation, indicates the number of kilobytes copied to
tape or a backup data file so far, and the volume currently being
dumped.
<P><DT><B><VAR>number</VAR> <TT>Kbytes, restore.volume</TT>
</B><DD>For a running restore operation, indicates the number of kilobytes copied
into AFS from a tape or a backup data file so far.
<P><DT><B><TT>[abort requested]</TT>
</B><DD>The <B>(backup) kill</B> command was issued, but the termination
signal has yet to reach the Tape Coordinator.
<P><DT><B><TT>[abort sent]</TT>
</B><DD>The operation is canceled by the <B>(backup) kill</B> command.
Once the Backup System removes an operation from the queue or stops it from
running, it no longer appears at all in the output from the command.
<P><DT><B><TT>[butc contact lost]</TT>
</B><DD>The <B>backup</B> command interpreter cannot reach the Tape
Coordinator. The message can mean either that the Tape Coordinator
handling the operation was terminated or failed while the operation was
running, or that the connection to the Tape Coordinator timed out.
<P><DT><B><TT>[done]</TT>
</B><DD>The Tape Coordinator has finished the operation.
<P><DT><B><TT>[drive wait]</TT>
</B><DD>The operation is waiting for the specified tape drive to become
free.
<P><DT><B><TT>[operator wait]</TT>
</B><DD>The Tape Coordinator is waiting for the backup operator to insert a tape
in the drive.
</DL>
</DL>
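<P>A script that monitors <B>backup status</B> output can split each task line
into its three fields. The following Python sketch is illustrative only (the
function name is hypothetical):

```python
import re

def parse_status_line(line):
    """Parse a 'Task <task_ID>: <operation>: <status>' line from the
    backup status output. The task ID is assigned by the Tape
    Coordinator and begins with its port offset number."""
    m = re.match(r"^\s*Task\s+(\d+):\s+(\w+):\s+(.*)$", line)
    if m is None:
        return None  # e.g. the 'Tape coordinator is idle' message
    task_id, operation, status = m.groups()
    return int(task_id), operation, status
```

Applied to the example line <TT>Task 4001: Dump: 1520 Kbytes transferred,
volume user.pat.backup</TT>, it returns the task ID 4001 (port offset 4), the
operation <TT>Dump</TT>, and the status text.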
<P>If the Tape Coordinator is communicating with an XBSA server (a third-party
backup utility that implements the Open Group's Backup Service API
[XBSA]), the following message appears last in the output:
<PRE> <VAR>XBSA_program</VAR> Tape coordinator
</PRE>
<P>where <VAR>XBSA_program</VAR> is the name of the XBSA-compliant
program.
<P><STRONG>Examples</STRONG>
<P>The following example shows that the Tape Coordinator with port offset 4
has so far dumped about 1.5 MB of data for the current dump operation,
and is currently dumping the volume named
<B>user.pat.backup</B>:
<PRE> % <B>backup status -portoffset 4</B>
Task 4001: Dump: 1520 Kbytes transferred, volume user.pat.backup
</PRE>
<P><STRONG>Privilege Required</STRONG>
<P>The issuer must be listed in the <B>/usr/afs/etc/UserList</B> file on
every machine where the Backup Server is running, or must be logged onto a
server machine as the local superuser <B>root</B> if the
<B>-localauth</B> flag is included.
<P><STRONG>Related Information</STRONG>
<P><B>backup</B>
<P><B>butc</B>
<P>
<H3><A NAME="HDRVOS_DELENTRY" HREF="aurns002.htm#ToC_53">vos delentry</A></H3>
<P><STRONG>Purpose</STRONG>
<P>Removes a volume entry from the VLDB.
<P><STRONG>Synopsis</STRONG>
<PRE><B>vos delentry</B> [<B>-id</B> &lt;<VAR>volume&nbsp;name&nbsp;or&nbsp;ID</VAR>><SUP>+</SUP>]
[<B>-prefix</B> &lt;<VAR>prefix&nbsp;of&nbsp;volume&nbsp;whose&nbsp;VLDB&nbsp;entry&nbsp;is&nbsp;to&nbsp;be&nbsp;deleted</VAR>>]
[<B>-server</B> &lt;<VAR>machine&nbsp;name</VAR>>] [<B>-partition</B> &lt;<VAR>partition&nbsp;name</VAR>>]
[<B>-cell</B> &lt;<VAR>cell&nbsp;name</VAR>>] [<B>-noauth</B>] [<B>-localauth</B>] [<B>-verbose</B>] [<B>-help</B>]
<B>vos de</B> [<B>-i</B> &lt;<VAR>volume&nbsp;name&nbsp;or&nbsp;ID</VAR>><SUP>+</SUP>]
[<B>-pr</B> &lt;<VAR>prefix&nbsp;of&nbsp;volume&nbsp;whose&nbsp;VLDB&nbsp;entry&nbsp;is&nbsp;to&nbsp;be&nbsp;deleted</VAR>>]
[<B>-s</B> &lt;<VAR>machine&nbsp;name</VAR>>] [<B>-pa</B> &lt;<VAR>partition&nbsp;name</VAR>>] [<B>-c</B> &lt;<VAR>cell&nbsp;name</VAR>>]
[<B>-n</B>] [<B>-l</B>] [<B>-v</B>] [<B>-h</B>]
</PRE>
<P><STRONG>Description</STRONG>
<P>The <B>vos delentry</B> command removes the Volume Location Database
(VLDB) entry for each specified volume. Specify one or more read/write
volumes; specifying a read-only or backup volume results in an
error. The command has no effect on the actual volumes on file server
machines, if they exist.
<P>This command is useful if a volume removal operation did not update the
VLDB (perhaps because the <B>vos zap</B> command was used), but the system
administrator does not feel it is necessary to use the <B>vos syncserv</B>
and <B>vos syncvldb</B> commands to synchronize an entire file server
machine.
<P>To remove the VLDB entry for a single volume, use the <B>-id</B>
argument. To remove groups of volumes, combine the <B>-prefix</B>,
<B>-server</B>, and <B>-partition</B> arguments. The following
list describes how to remove the VLDB entry for the indicated group of
volumes:
<UL>
<P><LI>Every volume whose name begins with a certain character string (for
example, <B>sys.</B> or <B>user.</B>): use the
<B>-prefix</B> argument.
<P><LI>Every volume for which the VLDB lists a site on a certain file server
machine: specify the file server name with the <B>-server</B>
argument.
<P><LI>Every volume for which the VLDB lists a site on a partition of the same
name (for instance, on the <B>/vicepa</B> partition on any file server
machine): specify the partition name with the <B>-partition</B>
argument.
<P><LI>Every volume for which the VLDB lists a site on a specific partition of a
file server machine: specify both the <B>-server</B> and
<B>-partition</B> arguments.
<P><LI>Every volume whose name begins with a certain prefix and for which the
VLDB lists a site on a file server machine: combine the
<B>-prefix</B> and <B>-server</B> arguments. Combine the
<B>-prefix</B> argument with the <B>-partition</B> argument, or both
the <B>-server</B> and <B>-partition</B> arguments, to remove a more
specific group of volumes.
</UL>
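<P>The selection rules above combine by logical AND. A minimal Python sketch
of the matching logic follows (the dictionary shape for a VLDB entry is
hypothetical, chosen only for illustration):

```python
def entry_matches(entry, prefix=None, server=None, partition=None):
    """Return True if a VLDB entry is selected by the given -prefix,
    -server, and -partition filters; every filter supplied must match.
    entry is assumed to look like:
    {"name": "user.temp", "sites": [("fs3.abc.com", "/vicepa"), ...]}"""
    if prefix is not None and not entry["name"].startswith(prefix):
        return False
    if server is not None or partition is not None:
        # when both -server and -partition are given, a single site
        # must satisfy both (a specific partition of that machine)
        return any((server is None or s == server) and
                   (partition is None or p == partition)
                   for s, p in entry["sites"])
    return True
```

For instance, an entry named <TT>test.foo</TT> with a site on
<TT>fs3.abc.com /vicepa</TT> matches a prefix of <TT>test</TT> combined with
that server, but not a different server alone.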
<P><STRONG>Cautions</STRONG>
<P>A single VLDB entry represents all versions of a volume (read/write,
readonly, and backup). The command removes the entire entry even though
only the read/write volume is specified.
<P>Do not use this command to remove a volume in normal circumstances; it
does not remove a volume from the file server machine, and so is likely to
make the VLDB inconsistent with the state of the volumes on server
machines. Use the <B>vos remove</B> command to remove both the
volume and its VLDB entry.
<P><STRONG>Options</STRONG>
<DL>
<P><DT><B>-id
</B><DD>Specifies the complete name or volume ID number of each read/write volume
for which to remove the VLDB entry. The entire entry is removed.
Provide this argument or some combination of the <B>-prefix</B>,
<B>-server</B>, and <B>-partition</B> arguments.
<P><DT><B>-prefix
</B><DD>Specifies a character string of any length; the VLDB entry for a
volume whose name begins with the string is removed. Include field
separators (such as periods) if appropriate. Combine this argument with
the <B>-server</B> argument, <B>-partition</B> argument, or
both.
<P><DT><B>-server
</B><DD>Identifies a file server machine; if a volume's VLDB entry lists
a site on the machine, the entry is removed. Provide the machine's
IP address or its host name (either fully qualified or using an unambiguous
abbreviation). For details, see the introductory reference page for the
<B>vos</B> command suite.
<P>Combine this argument with the <B>-prefix</B> argument, the
<B>-partition</B> argument, or both.
<P><DT><B>-partition
</B><DD>Identifies a partition; if a volume's VLDB entry lists a site on
the partition, the entry is removed. Provide the partition's
complete name with preceding slash (for example, <B>/vicepa</B>) or use
one of the three acceptable abbreviated forms. For details, see the
introductory reference page for the <B>vos</B> command suite.
<P>Combine this argument with the <B>-prefix</B> argument, the
<B>-server</B> argument, or both.
<P><DT><B>-cell
</B><DD>Names the cell in which to run the command. Do not combine this
argument with the <B>-localauth</B> flag. For more details, see the
introductory <B>vos</B> reference page.
<P><DT><B>-noauth
</B><DD>Assigns the unprivileged identity <B>anonymous</B> to the
issuer. Do not combine this flag with the <B>-localauth</B>
flag. For more details, see the introductory <B>vos</B> reference
page.
<P><DT><B>-localauth
</B><DD>Constructs a server ticket using a key from the local
<B>/usr/afs/etc/KeyFile</B> file. The <B>vos</B> command
interpreter presents it to the Volume Server and Volume Location Server during
mutual authentication. Do not combine this flag with the
<B>-cell</B> argument or <B>-noauth</B> flag. For more details,
see the introductory <B>vos</B> reference page.
<P><DT><B>-verbose
</B><DD>Produces on the standard output stream a detailed trace of the
command's execution. If this argument is omitted, only warnings
and error messages appear.
<P><DT><B>-help
</B><DD>Prints the online help for this command. All other valid options
are ignored.
</DL>
<P><STRONG>Output</STRONG>
<P>The following message confirms the success of the command by indicating how
many VLDB entries were removed.
<PRE> Deleted <VAR>number</VAR> VLDB entries
</PRE>
<P><STRONG>Examples</STRONG>
<P>The following command removes the VLDB entry for the volume
<B>user.temp</B>.
<PRE> % <B>vos delentry user.temp</B>
</PRE>
<P>The following command removes the VLDB entry for every volume whose name
begins with the string <B>test</B> and for which the VLDB lists a site on
the file server machine <B>fs3.abc.com</B>.
<PRE> % <B>vos delentry -prefix test -server fs3.abc.com</B>
</PRE>
<P><STRONG>Privilege Required</STRONG>
<P>The issuer must be listed in the <B>/usr/afs/etc/UserList</B> file on
the machine specified with the <B>-server</B> argument and on each
database server machine. If the <B>-localauth</B> flag is included,
the issuer must instead be logged on to a server machine as the local
superuser <B>root</B>.
<P><STRONG>Related Information</STRONG>
<P><B>vos</B>
<P><B>vos remove</B>
<P><B>vos syncserv</B>
<P><B>vos syncvldb</B>
<P><B>vos zap</B>
<HR><P ALIGN="center"> <A HREF="../index.htm"><IMG SRC="../books.gif" BORDER="0" ALT="[Return to Library]"></A> <A HREF="aurns002.htm#ToC"><IMG SRC="../toc.gif" BORDER="0" ALT="[Contents]"></A> <A HREF="aurns003.htm"><IMG SRC="../prev.gif" BORDER="0" ALT="[Previous Topic]"></A> <A HREF="#Top_Of_Page"><IMG SRC="../top.gif" BORDER="0" ALT="[Top of Topic]"></A> <P>
<!-- Begin Footer Records ========================================== -->
<P><HR><B>
<br>&#169; <A HREF="http://www.ibm.com/">IBM Corporation 2000.</A> All Rights Reserved
</B>
<!-- End Footer Records ============================================ -->
<A NAME="Bot_Of_Page"></A>
</BODY></HTML>