Issues in Cell Configuration and Administration

This chapter discusses many of the issues to consider when configuring and administering a cell, and directs you to detailed related information available elsewhere in this guide. It is assumed you are already familiar with the material in An Overview of OpenAFS Administration.

It is best to read this chapter before installing your cell's first file server machine or performing any other administrative task.
Differences between AFS and UNIX: A Summary

AFS behaves like a standard UNIX file system in most respects, while also making file sharing easy within and between cells. This section describes some differences between AFS and the UNIX file system, referring you to more detailed information as appropriate.

Differences in File and Directory Protection

AFS augments the standard UNIX file protection mechanism in two ways: it associates an access control list (ACL) with each directory, and it enables users to define a large number of their own groups, which can be placed on ACLs.

AFS uses ACLs to protect files and directories, rather than relying exclusively on the mode bits. This has several implications, which are discussed further in the indicated sections:
- AFS ACLs use seven access permissions rather than the three UNIX mode bits. See The AFS ACL Permissions.

- For directories, AFS ignores the UNIX mode bits. For files, AFS uses only the first set of mode bits (the owner bits), and their meaning interacts with permissions on the directory's ACL. See How AFS Interprets the UNIX Mode Bits.

- A directory's ACL protects all of the files in a directory in the same manner. To apply a more restrictive set of AFS permissions to a certain file, place it in a directory with a different ACL. If a directory must contain files with different permissions, use symbolic links to point to files stored in directories with different ACLs.

- Moving a file to a different directory changes its protection. See Differences Between UFS and AFS Data Protection.

- An ACL can include about 20 entries granting different combinations of permissions to different users or groups, rather than only the three UNIX entities represented by the three sets of mode bits. See Differences Between UFS and AFS Data Protection.

- You can designate an AFS file as write-only, as in the UNIX file system, by setting only the w (write) mode bit. You cannot designate an AFS directory as write-only, because AFS ignores the mode bits on a directory. See How AFS Interprets the UNIX Mode Bits.

AFS enables users to create groups and add other users to those groups. Placing these groups on ACLs extends the same permissions to a number of exactly specified users at the same time, which is much more convenient than placing the individuals on the ACLs directly. See Administering the Protection Database.

There are also system-defined groups, system:anyuser and system:authuser, whose presence on an ACL extends access to a wide range of users at once. See The System Groups and Using Groups on ACLs.

Differences in Authentication

Just as the AFS filespace is distinct from each machine's
local file system, AFS authentication is separate from local login. This has two practical implications, which will already be familiar to users and system administrators who use Kerberos for authentication.

- To access AFS files, users must log into the local machine as normal, obtain Kerberos tickets, and then obtain AFS tokens. This process can often be automated through the system authentication configuration so that the user logs into the system as normal and obtains Kerberos tickets and AFS tokens transparently. If you cannot or choose not to configure the system this way, your users must log in and authenticate in separate steps, as detailed in the OpenAFS User Guide.

- Passwords may be stored in two separate places: the Kerberos KDC and, optionally, each machine's local user database (/etc/passwd or equivalent) for the local system. A user's passwords in the two places can differ if desired.

Differences in the Semantics of Standard UNIX Commands

This section summarizes how AFS modifies the functionality of some UNIX commands.
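For reference, the setuid bit that several of these entries mention behaves as follows on a local (non-AFS) file system, where any file owner can set it. This is a portable sketch, not AFS-specific; on files in AFS, the same chmod command succeeds only for members of the system:administrators group.

```shell
# On a local UNIX file system, the file's owner can set the setuid bit.
dir=$(mktemp -d)
touch "$dir/prog"
chmod 4755 "$dir/prog"          # 4 = setuid, 755 = rwxr-xr-x
ls -l "$dir/prog" | cut -c1-10  # prints: -rwsr-xr-x (note the 's')
rm -r "$dir"
```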
The chmod command

Only members of the system:administrators group can use this command to turn on the setuid, setgid, or sticky mode bits on AFS files. For more information, see Determining if a Client Can Run Setuid Programs.

The chown command

Only members of the system:administrators group can issue this command on AFS files.

The chgrp command

Only members of the system:administrators group can issue this command on AFS files and directories.

The groups and id commands

If the user's AFS tokens are associated with a process authentication group (PAG), the output of these commands may include one or two large numbers. These are artificial groups used by the OpenAFS Cache Manager to track the PAG on some platforms. Other platforms may use other methods, such as native kernel support for a PAG or a similar concept, in which case the large GIDs may not appear. To learn about PAGs, see Identifying AFS Tokens by PAG.

The ln command

This command cannot create hard links between files in different AFS directories. See Creating Hard Links.

The sshd daemon and ssh command

In order for a user to have access to files stored in
AFS, that user needs to have Kerberos tickets and an AFS token
on the system from which they're accessing AFS. This has an
implication for users who log in remotely via protocols such
as Secure Shell (SSH): that log-in process must create local
Kerberos tickets and an AFS token on the system, or the user
will have to separately authenticate to Kerberos and AFS
after logging in.

The OpenSSH
project provides an SSH client and server that uses
the GSS-API protocol to pass Kerberos tickets between
machines. With a suitable SSH client, this allows users to
delegate their Kerberos tickets to the remote machine, and
that machine to store those tickets and obtain AFS tokens as
part of the log-in process.

The AFS version of the fsck Command and inode-based fileservers

The fileserver uses either of two formats for storing data
on disk. The inode-based format uses a combination of regular
files and extra fields stored in the inode data structures that
are normally reserved for use by the operating system. The namei
format uses normal file storage and does not use special
structures. The storage format is chosen at compile time, and the two formats are incompatible. The inode format is
only available on certain platforms. The storage format must be
consistent for the fileserver binaries and all vice partitions on
a given file server machine.

This fsck advice applies only to the inode-based fileserver binaries. On servers using namei-based binaries, the vendor-supplied fsck can be used as normal.

If you are using AFS fileserver binaries compiled with the
inode-based format, never run the standard UNIX fsck command on an AFS file server
machine. It does not understand how the File Server organizes volume
data on disk, and so moves all AFS data into the lost+found directory on the partition.

Instead, use the version of the fsck program that is included in the AFS
distribution. The OpenAFS Quick Start Guide
explains how to replace the vendor-supplied fsck program with the AFS version as you
install each server machine.

The AFS version functions like the standard fsck program on data stored on both UFS and
AFS partitions. The appearance of a banner like the following as the
fsck program initializes confirms
that you are running the correct one:
--- AFS (R) version fsck---
where version is the AFS version. For
correct results, it must match the AFS version of the server
binaries in use on the machine.

If you ever accidentally run the standard version of the program, contact your AFS support provider, contact the OpenAFS
mailing lists, or refer to the OpenAFS support web
page for support options. It is sometimes possible to
recover volume data from the lost+found directory. If the data is not
recoverable, then restoring from backup is recommended.

Running the fsck binary supplied by the operating system
vendor on a fileserver using inode-based file storage will result
in data corruption!

Creating Hard Links

AFS does not allow hard links (created with the UNIX ln command) between files that reside in
different directories, because in that case it is unclear which of
the directory's ACLs to associate with the link.AFS also does not allow hard links to directories, in order to
keep the file system organized as a tree.It is possible to create symbolic links (with the UNIX
ln -s command) between elements in
two different AFS directories, or even between an element in AFS and
one in a machine's local UNIX file system. Do not create a symbolic
link in AFS to a file whose name begins with either a number sign
(#) or a percent sign (%), however. The Cache Manager interprets
such links as a mount point to a regular or read/write volume,
respectively.

AFS Implements Save on Close

When an application issues the UNIX close system call on a file, the Cache
Manager performs a synchronous write of the data to the File Server
that maintains the central copy of the file. It does not return
control to the application until the File Server has acknowledged
receipt of the data. For the fsync
system call, control does not return to the application until the
File Server indicates that it has written the data to non-volatile
storage on the file server machine.

When an application issues the UNIX write system call, the Cache Manager writes
modifications to the local AFS client cache only. If the local
machine crashes or an application program exits without issuing the
close system call, it is possible
that the modifications are not recorded in the central copy of the
file maintained by the File Server. The Cache Manager does sometimes
write this type of modified data from the cache to the File Server
without receiving the close or
fsync system call, such as when it
needs to free cache chunks for new data. However, it is not
generally possible to predict when the Cache Manager transfers
modified data to the File Server in this way.

The implication is that if an application's Save option invokes the write system call rather than close or fsync, the changes are not necessarily stored
permanently on the File Server machine. Most application programs
issue the close system call for
save operations, as well as when they finish handling a file and
when they exit.

Setuid Programs

The UNIX setuid bit is ignored by default for programs run
from AFS, but can be enabled by the system administrator on a client
machine. The fs setcell command
determines whether setuid programs that originate in a particular
cell can run on a given client machine. Running setuid binaries from
AFS poses a security risk due to weaknesses in the integrity checks
of the AFS protocol and should normally not be permitted. See Determining if a Client Can Run Setuid
Programs.

Set the UNIX setuid bit only for files whose owner is UID 0
(the local superuser root). This
does not present an automatic security risk: the local superuser has
no special privilege in AFS, but only in the local machine's UNIX
file system and kernel. Setting the UNIX setuid bit for files owned
with a different UID will have unpredictable results, since that
UID will be interpreted as possibly different users on each AFS
client machine.

Any file can be marked with the setuid bit, but only members
of the system:administrators group
can issue the chown system call or
the chown command, or issue the
chmod system call or the chmod command to set the setuid bit.

Choosing a Cell Name

This section explains how to choose a cell name and explains why choosing an appropriate cell name is important.

Your cell name must distinguish your cell from all others in the
AFS global namespace. By convention, the cell name is the second
element in any AFS pathname; therefore, a unique cell name guarantees
that every AFS pathname uniquely identifies a file, even if cells use
the same directory names at lower levels in their local AFS
filespace. For example, both the Example Corporation cell and the Example
Organization cell can have a home directory for the user pat, because the pathnames are distinct:
/afs/example.com/usr/pat and /afs/example.org/usr/pat.

By convention, cell names follow the Domain Name System (DNS)
conventions for domain names. If you are already an Internet site,
then it is simplest and strongly recommended to choose your Internet
domain name as the cell name.

If you are not an Internet site, it is best to choose a unique
DNS-style name, particularly if you plan to connect to the Internet in
the future. There are a few constraints on AFS cell names:
- It can contain as many as 64 characters, but shorter names are better because the cell name frequently is part of machine and file names. If your cell name is long, you can reduce pathname length either by creating a symbolic link to the complete cell name at the second level in your file tree or by using the CellAlias configuration file on a client machine. See The Second (Cellname) Level.

- To guarantee it is suitable for different operating system types, the cell name can contain only lowercase characters, numbers, underscores, dashes, and periods. Do not include command shell metacharacters.

- It can include any number of fields, which are conventionally separated by periods (see the examples below).

How to Set the Cell Name

The cell name is recorded in two files on the local disk of
each file server and client machine. Among other functions, these
files define the machine's cell membership and so affect how
programs and processes run on the machine; see Why Choosing the Appropriate Cell Name is
Important. The procedure for setting the cell name is
different for the two types of machines.

For file server machines, the two files that record the cell
name are the /usr/afs/etc/ThisCell
and /usr/afs/etc/CellServDB
files. As described more explicitly in the OpenAFS Quick
Start Guide, you set the cell name in both by issuing the
bos setcellname command on the
first file server machine you install in your cell. It is not
usually necessary to issue the command again. If you use the Update
Server, it distributes its copy of the ThisCell and CellServDB files to additional server
machines that you install. If you do not use the Update Server, the
OpenAFS Quick Start Guide explains how to copy
the files manually.

For client machines, the two files that record the cell name
are the /usr/vice/etc/ThisCell and
/usr/vice/etc/CellServDB files. You
create these files on a per-client basis, either with a text editor
or by copying them onto the machine from a central source in AFS.
See Maintaining Knowledge of Database
Server Machines for details.

Change the cell name in these files only when you want to
transfer the machine to a different cell (client machines can only
have one default cell at a time and server machines can only belong
to one cell at a time). If the machine is a file server, follow the
complete set of instructions in the OpenAFS Quick Start
Guide for configuring a new cell. If the machine is a
client, all you need to do is change the files appropriately and
reboot the machine. The next section explains further the negative
consequences of changing the name of an existing cell.

To set the default cell name used by most AFS commands without
changing the local /usr/vice/etc/ThisCell file, set the AFSCELL
environment variable in the command shell. It is worth setting this
variable if you need to complete significant administrative work in
a foreign cell.

The fs checkservers and
fs mkmount commands do not use
the AFSCELL variable. The fs
checkservers command always defaults to the cell named
in the ThisCell file, unless the
-cell argument is used. The
fs mkmount command defaults to
the cell in which the parent directory of the new mount point
resides.

Why Choosing the Appropriate Cell Name is Important

Take care to select a cell name that is suitable for long-term
use. Changing a cell name later is complicated. An appropriate cell
name is important because it is the second element in the pathname
of all files in a cell's file tree. Because each cell name is
unique, its presence in an AFS pathname makes the pathname unique in
the AFS global namespace, even if multiple cells use similar
filespace organization at lower levels. For instance, it means that
every cell can have a home directory called /afs/cellname/usr/pat without causing a conflict. The
presence of the cell name in pathnames also means that users in
every cell use the same pathname to access a file, whether the file
resides in their local cell or in a foreign cell.

Another reason to choose the correct cell name early in the
process of installing your cell is that the cell membership defined
in each machine's ThisCell file
affects the performance of many programs and processes running on
the machine. For instance, AFS commands (fs, pts, and
vos commands, for example) by
default execute in the cell of the machine on which they are
issued. The command interpreters check the ThisCell file on the local disk and then
contact the database server machines listed in the CellServDB file or configured in DNS for the
indicated cell. (The bos commands
work differently because the issuer always has to name the machine on which to run the command.)

The ThisCell file also
normally determines the cell for which a user receives an AFS token
when he or she logs in to a machine.

If you change the cell name, you must change the ThisCell and CellServDB files on every server and client
machine. Failure to change them all will cause many commands from
the AFS suites not to work as expected.

Participating in the AFS Global Namespace

Participating in the AFS global namespace makes your cell's
local file tree visible to AFS users in foreign cells and makes other
cells' file trees visible to your local users. It makes file sharing
across cells just as easy as sharing within a cell. This section
outlines the procedures necessary for participating in the global
namespace.
- Participation in the global namespace is not mandatory. Some cells use AFS primarily to facilitate file sharing within the cell, and are not interested in providing their users with access to foreign cells.

- Making your file tree visible does not mean making it vulnerable. You control how foreign users access your cell using the same protection mechanisms that control local users' access. See Granting and Denying Foreign Users Access to Your Cell.

- The two aspects of participation are independent. A cell can make its file tree visible without allowing its users to see foreign cells' file trees, or can enable its users to see other file trees without advertising its own.

- You make your cell visible to others by advertising your database server machines and allowing users at other sites to access your database server and file server machines. See Making Your Cell Visible to Others.

- You control access to foreign cells on a per-client machine basis. In other words, it is possible to make a foreign cell accessible from one client machine in your cell but not another. See Making Other Cells Visible in Your Cell.

What the Global Namespace Looks Like

The AFS global namespace appears the same to all AFS cells
that participate in it, because they all agree to follow a small set
of conventions in constructing pathnames.

The first convention is that all AFS pathnames begin with the
string /afs to indicate that they
belong to the AFS global namespace.

The second convention is that the cell name is the second
element in an AFS pathname; it indicates where the file resides
(that is, the cell in which a file server machine houses the
file). As noted, the presence of a cell name in pathnames makes the
global namespace possible, because it guarantees that all AFS
pathnames are unique even if cells use the same directory names at
lower levels in their AFS filespace.

What appears at the third and lower levels in an AFS pathname
depends on how a cell has chosen to arrange its filespace. There
are some suggested conventional directories at the third level; see
The Third Level.

Making Your Cell Visible to Others

You make your cell visible to others by advertising your cell name and database server machines. Just like the Cache Manager on client machines in the local cell, the Cache Manager on machines in foreign cells uses
information to reach your cell's Volume Location (VL) Servers when
they need volume and file location information. For authenticated
access, foreign clients must be configured with the necessary
Kerberos version 5 domain-to-realm mappings and Key Distribution
Center (KDC) location information for both the local and remote
Kerberos version 5 realms.

There are two places you can make this information available:

- In the global CellServDB file maintained by the AFS
Registrar. This file lists the name and database server
machines of every cell that has agreed to make this
information available to other cells. This file is available
at http://grand.central.org/csdb.html.

To add or change your cell's listing in this file,
follow the instructions at http://grand.central.org/csdb.html.
It is a good policy to check the file for changes on a
regular schedule. An updated copy of this file is included
with new releases of OpenAFS.

- A file called CellServDB.local in the /afs/cellname/service/etc directory of your cell's
filespace. List only your cell's database server
machines.

Update the files whenever you change the identity of your
cell's database server machines. Also update the copies of the
CellServDB files on all of your
server machines (in the /usr/afs/etc directory) and client machines
(in the /usr/vice/etc
directory). For instructions, see Maintaining the Server CellServDB File and
Maintaining Knowledge of Database Server
Machines.

Once you have advertised your database server machines, it can
be difficult to make your cell invisible again. You can remove the
CellServDB.local file and ask the
AFS Registrar to remove your entry from the global CellServDB file, but other cells probably
have an entry for your cell in their local CellServDB files already. To make those
entries invalid, you must change the names or IP addresses of your
database server machines.

Your cell does not have to be invisible to be inaccessible,
however. To make your cell completely inaccessible to foreign users,
remove the system:anyuser group
from all ACLs at the top three levels of your filespace; see Granting and Denying Foreign Users Access to Your
Cell.

Making Other Cells Visible in Your Cell

To make a foreign cell's filespace visible on a client machine in your cell that is not configured for Freelance Mode or Dynamic Root Mode, perform the following three steps:
1. Mount the cell's root.cell volume at the second level in your cell's filespace, just below the /afs directory. Use the fs mkmount command with the -cell argument as instructed in To create a cellular mount point.

2. Mount AFS at the /afs directory on the client machine. The afsd program, which initializes the Cache Manager, performs the mount automatically at the directory named in the first field of the local /usr/vice/etc/cacheinfo file or by the command's -mountdir argument. Mounting AFS at an alternate location makes it impossible to reach the filespace of any cell that mounts its root.afs and root.cell volumes at the conventional locations. See Displaying and Setting the Cache Size and Location.

3. Create an entry for the cell in the list of database server machines which the Cache Manager maintains in kernel memory.

The /usr/vice/etc/CellServDB file on every client machine's local disk lists the database server machines for the local and foreign cells. The afsd program reads the contents of the CellServDB file into kernel memory as it initializes the Cache Manager. You can also use the fs newcell command to add or alter entries in kernel memory directly between reboots of the machine. See Maintaining Knowledge of Database Server Machines.

Non-Windows client machines may enable Dynamic Root Mode by using the -dynroot option to afsd. When this option is enabled, all cells
listed in the CellServDB file will
appear in the /afs directory. The
contents of the root.afs volume
will be ignored. Windows client machines may enable Freelance Mode during client installation or
by setting the FreelanceClient
setting under Service Parameters in
the Windows Registry as mentioned in the Release
Notes. When this option is enabled, the root.afs volume is ignored and a mount point for each cell is automatically created in the \\AFS directory when the folder \\AFS\cellname is accessed and the foreign cell's Volume Location servers can be reached.
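As an illustration, a client-side /usr/vice/etc/CellServDB entry for a foreign cell takes the following form. The cell name, addresses, and hostnames here are hypothetical placeholders, not a real cell's values:

```
>example.org            #Example Organization cell
192.0.2.10              #db1.example.org
192.0.2.11              #db2.example.org
```

The line beginning with > names the cell and carries a descriptive comment after the number sign; on the server lines, the text after the number sign is the database server machine's hostname, which is a required part of the file format rather than a discarded comment.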
Note that making a foreign cell visible to client
machines does not guarantee that your users can access its
filespace. The ACLs in the foreign cell must also grant them the
necessary permissions.

Granting and Denying Foreign Users Access to Your Cell

Making your cell visible in the AFS global namespace does not
take away your control over the way in which users from foreign
cells access your file tree.

By default, foreign users access your cell as the user
anonymous, which means they have
only the permissions granted to the system:anyuser group on each directory's
ACL. Normally these permissions are limited to the l (lookup)
and r (read) permissions.

There are three ways to grant wider access to foreign users:

- Grant additional permissions to the system:anyuser group on certain
ACLs. Keep in mind, however, that all users can then access
that directory in the indicated way (not just specific foreign
users you have in mind).

- Enable automatic registration for users in the foreign
cell. This may be done by creating a cross-realm trust in the
Kerberos Database. Then add a
PTS group named system:authuser@FOREIGN.REALM
and give it a group quota greater than the number of foreign
users expected to be registered. After the cross-realm trust
and the PTS group are created, the aklog
command will automatically register foreign users as
needed. Consult the documentation for your Kerberos Server for instructions on how
to establish a cross-realm trust.

- Create a local authentication account for specific
foreign users, by creating entries in the Protection Database,
the Kerberos Database, and the local password file.

Configuring Your AFS Filespace

This section summarizes the issues to consider when configuring
your AFS filespace. For a discussion of creating volumes that
correspond most efficiently to the filespace's directory structure,
see Creating Volumes to Simplify
Administration.

For Windows users: Windows
uses a backslash (\) rather than a
forward slash (/) to separate the
elements in a pathname. The hierarchical organization of the
filespace is, however, the same as on a UNIX machine.

AFS pathnames must follow a few conventions so the AFS global
namespace looks the same from any AFS client machine. There are
corresponding conventions to follow in building your file tree, not
just because pathnames reflect the structure of a file tree, but also
because the AFS Cache Manager expects a certain configuration.

The Top /afs Level

The first convention is that the top level in your file tree
be called the /afs directory. If
you name it something else, then you must use the -mountdir argument with the afsd program to get Cache Managers to mount
AFS properly. You cannot participate in the AFS global namespace in
that case.

The Second (Cellname) Level

The second convention is that just below the /afs directory you place directories
corresponding to each cell whose file tree is visible and accessible
from the local cell. Minimally, there must be a directory for the
local cell. Each such directory is a mount point to the indicated
cell's root.cell volume. For
example, in the Example Corporation cell, /afs/example.com is a mount point for the cell's
own root.cell volume and example.org is a mount point for the Example
Organization cell's root.cell
volume. The fs lsmount command
displays the mount points.
% fs lsmount /afs/example.com
'/afs/example.com' is a mount point for volume '#root.cell'
% fs lsmount /afs/example.org
'/afs/example.org' is a mount point for volume '#example.org:root.cell'
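The '#root.cell' targets shown above reflect how a mount point is stored: it resembles a symbolic link whose target begins with a number sign (for a regular mount point) or a percent sign (for a read/write mount point), which is why, as noted in Creating Hard Links, you should not create ordinary symbolic links with such targets. This portable sketch on a local file system (no AFS required) shows the two kinds of link target side by side:

```shell
# A symbolic link whose target begins with '#' or '%' is what the
# Cache Manager would interpret as a volume mount point, so avoid
# creating such links yourself.
dir=$(mktemp -d) && cd "$dir"
ln -s '#root.cell' looks_like_a_mount_point
ln -s 'README'     ordinary_link
readlink looks_like_a_mount_point  # prints: #root.cell
readlink ordinary_link             # prints: README
```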
To reduce the amount of typing necessary in pathnames, you can
create a symbolic link with an abbreviated name to the mount point
of each cell your users frequently access (particularly the home
cell). In the Example Corporation cell, for instance, /afs/example is a symbolic link to the /afs/example.com mount point, as the fs lsmount command reveals.
% fs lsmount /afs/example
'/afs/example' is a symbolic link, leading to a mount point for volume
'#root.cell'

The Third Level

You can organize the third level of your cell's file tree any
way you wish. The following list describes directories that appear
at this level in the conventional configuration:
common

This directory contains programs and files needed by
users working on machines of all system types, such as text
editors, online documentation files, and so on. Its
/etc subdirectory is a
logical place to keep the central update sources for files
used on all of your cell's client machines, such as the
ThisCell and CellServDB files.

public

A directory accessible to anyone who can access your
filespace, because its ACL grants the l (lookup) and r (read) permissions to the system:anyuser group. It is useful if
you want to enable your users to make selected information
available to everyone, but do not want to grant foreign
users access to the contents of the usr directory which houses user home
directories (and is also at this level). It is conventional
to create a subdirectory for each of your cell's
users.

service: This directory contains files and subdirectories that
help cells coordinate resource sharing. For a list of the
proposed standard files and subdirectories to create, call
or write to AFS Product Support.As an example, files that other cells expect to find
in this directory's etc
subdirectory can include the following: CellServDB.export, a list of
database server machines for many cellsCellServDB.local, a list of the
cell's own database server machinespasswd, a copy
of the local password file (/etc/passwd or equivalent) kept
on the local disk of the cell's client machinesgroup, a copy
of the local groups file (/etc/group or equivalent) kept
on the local disk of the cell's client machines

sys_type: A separate directory for storing the server and client
binaries for each system type you use in the cell.
Configuration is simplest if you use the system type names
assigned in the AFS distribution, particularly if you wish
to use the @sys variable in
pathnames (see Using the @sys
Variable in Pathnames). The OpenAFS Release
Notes lists the conventional name for each
supported system type.Within each such directory, create directories named
bin, etc, usr, and so on, to store the programs
normally kept in the /bin,
/etc and /usr directories on a local
disk. Then create symbolic links from the local directories
on client machines into AFS; see Configuring the Local Disk. Even if
you do not choose to use symbolic links in this way, it can
be convenient to have central copies of system binaries in
AFS. If binaries are accidentally removed from a machine,
you can recopy them onto the local disk from AFS rather than
having to recover them from tape.

usr: This directory contains home directories for your
local users. As discussed in the previous entry for the
public directory, it is
often practical to protect this directory so that only
locally authenticated users can access it. This keeps the
contents of your users' home directories as secure as
possible.If your cell is quite large, directory lookup can be
slowed if you put all home directories in a single usr directory. For suggestions on
distributing user home directories among multiple grouping
directories, see Grouping Home
Directories.

Creating Volumes to Simplify Administration

This section discusses how to create volumes in ways that make
administering your system easier.At the top levels of your file tree (at least through the third
level), each directory generally corresponds to a separate
volume. Some cells also configure the subdirectories of some third
level directories as separate volumes. Common examples are the
/afs/cellname/common and /afs/cellname/usr directories.You do not have to create a separate volume for every directory
level in a tree, but the advantage is that each volume tends to be
smaller and easier to move for load balancing. The overhead for a
mount point is no greater than for a standard directory, nor does the
volume structure itself require much disk space. Most cells find that
below the fourth level in the tree, using a separate volume for each
directory is no longer efficient. For instance, while each user's home
directory (at the fourth level in the tree) corresponds to a separate
volume, all of the subdirectories in the home directory normally
reside in the same volume.Keep in mind that only one volume can be mounted at a given
directory location in the tree. In contrast, a volume can be mounted
at several locations, though this is not recommended because it
distorts the hierarchical nature of the file tree, potentially causing
confusion.

Assigning Volume Names

You can name your volumes anything you choose, subject to a
few restrictions:
Read/write volume names can be up to 22 characters in
length. The maximum length for any volume name is 31 characters,
and limiting read/write names to 22 leaves room for the Volume Server
to add the 9-character .readonly extension on read-only
volumes.Do not add the .readonly and .backup extensions to volume names
yourself, even if they are appropriate. The Volume Server adds
them automatically as it creates a read-only or backup version
of a volume.There must be volumes named root.afs and root.cell, mounted respectively at the
top (/afs) level in the
filespace and just below that level, at the cell's name (for
example, at /afs/example.com in
the Example Corporation cell).Deviating from these names only creates confusion and
extra work. Changing the name of the root.afs volume, for instance, means
that you must use the -rootvol argument to the afsd program on every client machine,
to name the alternate volume.Similarly, changing the root.cell volume name prevents users in
foreign cells from accessing your filespace, if the mount
point for your cell in their filespace refers to the
conventional root.cell
name. Of course, this is one way to make your cell invisible
to other cells.It is best to assign volume names that indicate the type of
data they contain, and to use similar names for volumes with similar
contents. It is also helpful if the volume name is similar to (or at
least has elements in common with) the name of the directory at
which it is mounted. Understanding the pattern then enables you
to guess accurately what a volume contains and where it is
mounted.Many cells find that the most effective volume naming scheme
puts a common prefix on the names of all related volumes. Table 1 describes the recommended
prefixing scheme.
Suggested volume prefixes

  Prefix      Contents                                         Example Name   Example Mount Point
  common.     popular programs and files                       common.etc     /afs/cellname/common/etc
  src.        source code                                      src.afs        /afs/cellname/src/afs
  proj.       project data                                     proj.portafs   /afs/cellname/proj/portafs
  test.       testing or other temporary data                  test.smith     /afs/cellname/usr/smith/test
  user.       user home directory data                         user.terry     /afs/cellname/usr/terry
  sys_type.   programs compiled for an operating system type   rs_aix42.bin   /afs/cellname/rs_aix42/bin
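As an illustration of the prefixing scheme, a volume for AIX 4.2 binaries might be created and mounted as follows. This is a sketch: fs1.example.com and /vicepa are hypothetical server and partition names.

```shell
# Create the volume on a file server partition, following the
# sys_type. prefix convention from the table above.
vos create -server fs1.example.com -partition /vicepa -name rs_aix42.bin

# Mount it at the matching point in the filespace, so the volume name
# parallels the mount point name.
fs mkmount -dir /afs/example.com/rs_aix42/bin -vol rs_aix42.bin
```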
Table 2 is a more
specific example for a cell's rs_aix42 system volumes and
directories:
Example volume-prefixing scheme

  Example Name         Example Mount Point
  rs_aix42.bin         /afs/cellname/rs_aix42/bin
  rs_aix42.etc         /afs/cellname/rs_aix42/etc
  rs_aix42.usr         /afs/cellname/rs_aix42/usr
  rs_aix42.usr.afsws   /afs/cellname/rs_aix42/usr/afsws
  rs_aix42.usr.lib     /afs/cellname/rs_aix42/usr/lib
  rs_aix42.usr.bin     /afs/cellname/rs_aix42/usr/bin
  rs_aix42.usr.etc     /afs/cellname/rs_aix42/usr/etc
  rs_aix42.usr.inc     /afs/cellname/rs_aix42/usr/inc
  rs_aix42.usr.man     /afs/cellname/rs_aix42/usr/man
  rs_aix42.usr.sys     /afs/cellname/rs_aix42/usr/sys
  rs_aix42.usr.local   /afs/cellname/rs_aix42/usr/local
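Prefixed names also make group operations simple. For instance, this sketch creates a backup version of every volume whose name starts with the user. prefix:

```shell
# Create a backup version of every volume whose name starts with
# "user." -- typically run before dumping user volumes with the
# AFS Backup System.
vos backupsys -prefix user.

# The -dryrun flag previews the affected volumes without actually
# creating backup versions.
vos backupsys -prefix user. -dryrun
```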
There are several advantages to this scheme:
The volume name is similar to the mount point name in
the filespace. In all of the entries in Table 2, for example, the
only difference between the volume and mount point name is
that the former uses periods as separators and the latter uses
slashes. Another advantage is that the volume name indicates
the contents, or at least suggests the directory on which to
issue the ls command to learn
the contents.It makes it easy to manipulate groups of related volumes
at one time. In particular, the vos
backupsys command's -prefix argument enables you to create
a backup version of every volume whose name starts with the
same string of characters. Making a backup version of each
volume is one of the first steps in backing up a volume with
the AFS Backup System, and doing it for many volumes with one
command saves you a good deal of typing. For instructions for
creating backup volumes, see Creating
Backup Volumes. For information on the AFS Backup
System, see Configuring the AFS
Backup System and Backing Up
and Restoring AFS Data.It makes it easy to group related volumes together on a
partition. Grouping related volumes together has several
advantages of its own, discussed in Grouping Related Volumes on a
Partition.

Grouping Related Volumes on a Partition

If your cell is large enough to make it practical, consider
grouping related volumes together on a partition. In general, you
need at least three file server machines for volume grouping to be
effective. Grouping has several advantages, which are most obvious
when the file server machine becomes inaccessible:
If you keep a hardcopy record of the volumes on a
partition, you know which volumes are unavailable. You can
keep such a record without grouping related volumes, but a
list composed of unrelated volumes is much harder to maintain.
Note that the record must be on paper, because the outage can
prevent you from accessing an online copy or from issuing the
vos listvol command, which
gives you the same information.The effect of an outage is more localized. For example,
if all of the binaries for a given system type are on one
partition, then only users of that system type are
affected. If a partition houses binary volumes from several
system types, then an outage can affect more people,
particularly if the binaries that remain available are
interdependent with those that are not available.The advantages of grouping related volumes on a partition do
not necessarily extend to the grouping of all related volumes on one
file server machine. For instance, it is probably unwise in a cell
with two file server machines to put all system volumes on one
machine and all user volumes on the other. An outage of either
machine probably affects everyone.Admittedly, the need to move volumes for load balancing
purposes can limit the practicality of grouping related volumes.
You need to weigh the complementary advantages case by case.

When to Replicate Volumes

As discussed in Replication,
replication refers to making a copy, or clone, of a read/write
source volume and then placing the copy on one or more additional
file server machines. Replicating a volume can increase the
availability of the contents. If one file server machine housing the
volume becomes inaccessible, users can still access the copy of the
volume stored on a different machine. No one machine is likely to
become overburdened with requests for a popular file, either,
because the file is available from several machines.However, replication is not appropriate for all cells. If a
cell does not have much disk space, replication can be unduly
expensive, because each clone not on the same partition as the
read/write source takes up as much disk space as its source volume
did at the time the clone was made. Also, if you have only one file
server machine, replication uses up disk space without increasing
availability.Replication is also not appropriate for volumes that change
frequently. You must issue the vos
release command every time you need to update a read-only
volume to reflect changes in its read/write source.For both of these reasons, replication is appropriate only for
popular volumes whose contents do not change very often, such as
system binaries and other volumes mounted at the upper levels of
your filespace. User volumes usually exist only in a read/write
version since they change so often.If you are replicating any volumes, you must replicate the
root.afs and root.cell volumes, preferably at two or three
sites each (even if your cell only has two or three file server
machines). The Cache Manager needs to pass through the directories
corresponding to the root.afs and
root.cell volumes as it interprets
any pathname. The unavailability of these volumes makes all other
volumes unavailable too, even if the file server machines storing
the other volumes are still functioning.Another reason to replicate the root.afs volume is that it can lessen the
load on the File Server machine. The Cache Manager prefers to
access a read-only version of the root.afs volume if it is replicated, which
puts the Cache Manager onto the read-only path
through the AFS filespace. While on the read-only path, the Cache
Manager attempts to access a read-only copy of replicated
volumes. The File Server needs to track only one callback per Cache
Manager for all of the data in a read-only volume, rather than the
one callback per file it must track for read/write volumes. Fewer
callbacks translate into a smaller load on the File Server.If the root.afs volume is not
replicated, the Cache Manager follows a read/write path through the
filespace, accessing the read/write version of each volume. The File
Server distributes and tracks a separate callback for each file in a
read/write volume, imposing a greater load on it.For more on read/write and read-only paths, see The Rules of Mount Point Traversal.It also makes sense to replicate system binary volumes in many
cases, as well as the volume corresponding to the /afs/cellname/usr directory and the volumes corresponding
to the /afs/cellname/common directory and its
subdirectories.It is a good idea to place a replica on the same partition as
the read/write source. In this case, the read-only volume is a clone
(like a backup volume): it is a copy of the source volume's vnode
index, rather than a full copy of the volume contents. Only if the
read/write volume moves to another partition or changes
substantially does the read-only volume consume significant disk
space. Read-only volumes kept on other servers' partitions always
consume the full amount of disk space that the read/write source
consumed when the read-only volume was created.You cannot have a replica volume on a different partition of
the same server hosting the read/write volume. "Cheap" read-only
volumes must be on the same partition as the read/write; all other
read-only volumes must be on different servers.

The Default Quota and ACL on a New Volume

Every AFS volume has associated with it a quota that limits
the amount of disk space the volume is allowed to use. To set and
change quota, use the commands described in Setting and Displaying Volume Quota and Current
Size.By default, every new volume is assigned a space quota of 5000
kilobyte blocks (that is, about 5 MB) unless you include the -maxquota argument to the vos create command. Also by default, the ACL
on the root directory of every new volume grants all permissions to
the members of the system:administrators group. To learn how to
change these values when creating an account with individual
commands, see To create one user account
with individual commands. When using uss commands to create accounts, you can
specify alternate ACL and quota values in the template file's
V instruction; see Creating a Volume with the V
Instruction.

Configuring Server Machines

This section discusses some issues to consider when configuring
server machines, which store AFS data, transfer it to client machines
on request, and house the AFS administrative databases. To learn about
client machines, see Configuring Client
Machines.If your cell has more than one AFS server machine, you can
configure them to perform specialized functions. A machine can assume
one or more of the roles described in the following list. For more
details, see The Four Roles for File Server
Machines.
A simple file server machine runs
only the processes that store and deliver AFS files to client
machines. You can run as many simple file server machines as you
need to satisfy your cell's performance and disk space
requirements.A database server machine runs the
four database server processes that maintain AFS's replicated
administrative databases: the Authentication, Backup,
Protection, and Volume Location (VL) Server processes.A binary distribution machine
distributes the AFS server binaries for its system type to all
other server machines of that system type.The single system control machine
distributes common server configuration files to all other
server machines in the cell.
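Which roles a machine actually performs can be inferred from the processes its BOS Server manages. A sketch, with fs1.example.com as a hypothetical machine name:

```shell
# List the status of all AFS server processes on the machine. A
# database server machine shows entries such as ptserver and vlserver
# in addition to the fs (file server) entry; -long adds detail from
# each process's BosConfig entry.
bos status -server fs1.example.com -long
```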
The OpenAFS Quick Beginnings explains how
to configure your cell's first file server machine to assume all four
roles. The OpenAFS Quick Beginnings chapter on
installing additional server machines also explains how to configure
them to perform one or more roles.

Replicating the OpenAFS Administrative Databases

The AFS administrative databases are housed on database server
machines and store information that is crucial for correct cell
functioning. Both server processes and Cache Managers access the
information frequently:
Every time a Cache Manager fetches a file from a
directory that it has not previously accessed, it must look up
the file's location in the Volume Location Database
(VLDB).Every time a user obtains an AFS token from the
Authentication Server, the server looks up the user's password
in the Authentication Database.The first time that a user accesses a volume housed on a
specific file server machine, the File Server contacts the
Protection Server for a list of the user's group memberships
as recorded in the Protection Database.Every time you back up a volume using the AFS Backup
System, the Backup Server creates records for it in the Backup
Database.Maintaining your cell is simplest if the first machine has the
lowest IP address of any machine you plan to use as a database
server machine. If you later decide to use a machine with a lower IP
address as a database server machine, you must update the CellServDB file on all clients before
introducing the new machine.If your cell has more than one server machine, it is best to
run more than one as a database server machine (but more than three
are rarely necessary). Replicating the administrative databases in
this way yields the same benefits as replicating volumes: increased
availability and reliability. If one database server machine or
process stops functioning, the information in the database is still
available from others. The load of requests for database information
is spread across multiple machines, preventing any one from becoming
overloaded.Unlike replicated volumes, however, replicated databases do
change frequently. Consistent system performance demands that all
copies of the database always be identical, so it is not acceptable
to record changes in only some of them. To synchronize the copies of
a database, the database server processes use AFS's distributed
database technology, Ubik. See Replicating
the OpenAFS Administrative Databases.If your cell has only one file server machine, it must also
serve as a database server machine. If your cell has two file server
machines, it is not always advantageous to run both as database
server machines. If a server, process, or network failure interrupts
communications between the database server processes on the two
machines, it can become impossible to update the information in the
database because neither of them can alone elect itself as the
synchronization site.

AFS Files on the Local Disk

It is generally simplest to store the binaries for all AFS
server processes in the /usr/afs/bin directory on every file server
machine, even if some processes do not actively run on the
machine. This makes it easier to reconfigure a machine to fill a new
role.For security reasons, the /usr/afs directory on a file server machine
and all of its subdirectories and files must be owned by the local
superuser root and have only the
first w (write) mode bit turned on. Some files even
have only the first r (read) mode bit turned on (for example, the
/usr/afs/etc/KeyFile file, which
lists the AFS server encryption keys). Each time the BOS Server
starts, it checks that the mode bits on certain files and
directories match the expected values. For a list, see the
OpenAFS Quick Beginnings section about
protecting sensitive AFS directories, or the discussion of the
output from the bos status command
in To display the status of server
processes and their BosConfig entries.For a description of the contents of all AFS directories on a
file server machine's local disk, see Administering Server Machines.

Configuring Partitions to Store AFS Data

The partitions that house AFS volumes on a file server machine
must be mounted at directories named /vicepindex, where index is one or two lowercase
letters. By convention, the first AFS partition created is mounted
at the /vicepa directory, the
second at the /vicepb directory,
and so on through the /vicepz
directory. The names then continue with /vicepaa through /vicepaz, /vicepba through /vicepbz, and so on, up to the maximum
supported number of server partitions, which is specified in the
OpenAFS Release Notes.Each /vicepx directory must
correspond to an entire partition or logical volume, and must be a
subdirectory of the root directory (/). It is not acceptable to
configure part of (for example) the /usr partition as an AFS server partition and
mount it on a directory called /usr/vicepa.Also, do not store non-AFS files on AFS server partitions. The
File Server and Volume Server expect to have available all of the
space on the partition. Sharing space also creates competition
between AFS and the local UNIX file system for access to the
partition, particularly if the UNIX files are frequently
used.

Monitoring, Rebooting and Automatic Process Restarts

AFS provides several tools for monitoring the File Server,
including the scout and afsmonitor programs. You can configure them
to alert you when certain threshold values are exceeded, for example
when a server partition is more than 95% full. See Monitoring and Auditing AFS
Performance.Rebooting a file server machine requires shutting down the AFS
processes and so inevitably causes a service outage. Reboot file
server machines as infrequently as possible. For instructions, see
Rebooting a Server Machine.The BOS Server checks each morning at 5:00 a.m. for any newly
installed binary files in the /usr/afs/bin directory. It compares the
timestamp on each binary file to the time at which the corresponding
process last restarted. If the timestamp on the binary is later, the
BOS Server restarts the corresponding process to start using
it.The BOS server also supports performing a weekly restart of
all AFS server processes, including itself. This functionality is
disabled on new installs, but historically it was set to 4:00 a.m. on
Sunday. Administrators may find that installations predating OpenAFS
1.6.0 have weekly restarts enabled.The default times are in the early morning hours when the
outage that results from restarting a process is likely to disturb
the fewest number of people. You can display the restart times for
each machine with the bos
getrestart command, and set them with the bos setrestart command. The latter command
enables you to disable automatic restarts entirely, by setting the
time to never. See Setting the BOS Server's Restart
Times.

Configuring Client Machines

This section summarizes issues to consider as you install and
configure client machines in your cell.

Configuring the Local Disk

You can often free up significant amounts of local disk space
on AFS client machines by storing standard UNIX files in AFS and
creating symbolic links to them from the local disk. The @sys pathname variable can be useful in links
to system-specific files; see Using the @sys
Variable in Pathnames.There are two types of files that must actually reside on the
local disk: boot sequence files needed before the afsd program is invoked, and files that can
be helpful during file server machine outages.During a reboot, AFS is inaccessible until the afsd program executes and initializes the
Cache Manager. (In the conventional configuration, the AFS
initialization file is included in the machine's initialization
sequence and invokes the afsd
program.) Files needed during reboot prior to that point must reside
on the local disk. They include the following, but this list is not
necessarily exhaustive.
Standard UNIX utilities including the following or their
equivalents:
Machine initialization files (stored in the
/etc or /sbin directory on many system
types)The fstab
fileThe mount command
binaryThe umount
command binaryAll subdirectories and files in the /usr/vice directory, including the
following:
The /usr/vice/cache directoryThe /usr/vice/etc/afsd command
binaryThe /usr/vice/etc/cacheinfo fileThe /usr/vice/etc/CellServDB
fileThe /usr/vice/etc/ThisCell fileFor more information on these files, see Configuration and Cache-Related Files on
the Local Disk.The other type of files and programs to retain on the local
disk are those you need when diagnosing and fixing problems caused
by a file server outage, because the outage can make inaccessible
the copies stored in AFS. Examples include the binaries for a text
editor (such as ed or vi) and for the fs and bos
commands. Store copies of AFS command binaries in the /usr/vice/etc directory as well as including
them in the /usr/afsws directory,
which is normally a link into AFS. Then place the /usr/afsws directory before the /usr/vice/etc directory in users'
PATH environment variable definition. When AFS is
functioning normally, users access the copy in the /usr/afsws directory, which is more likely to
be current than a local copy.

Enabling Access to Foreign Cells

As detailed in Making Other Cells
Visible in Your Cell, you enable the Cache Manager to access
a cell's AFS filespace by storing a list of the cell's database
server machines in the local /usr/vice/etc/CellServDB file. The Cache
Manager reads the list into kernel memory at reboot for faster
retrieval. You can change the list in kernel memory between reboots
by using the fs newcell command.Because each client machine maintains its own copy of the
CellServDB file, you can in theory
enable access to different foreign cells on different client
machines. This is not usually practical, however, especially if
users do not always work on the same machine.

Using the @sys Variable in Pathnames

When creating symbolic links into AFS on the local disk, it is
often practical to use the @sys variable in pathnames. The Cache
Manager automatically substitutes the local machine's AFS system
name (CPU/operating system type) for the @sys variable. This means
you can place the same links on machines of various system types and
still have each machine access the binaries for its system type. For
example, the Cache Manager on a machine running AIX 4.2 converts
/afs/example.com/@sys to /afs/example.com/rs_aix42, whereas a machine
running Solaris 10 converts it to /afs/example.com/sun4x_510.If you want to use the @sys variable, it is simplest to use
the conventional AFS system type names as specified in the OpenAFS
Release Notes. The Cache Manager records the local machine's system
type name in kernel memory during initialization. If you do not use
the conventional names, you must use the fs
sysname command to change the value in kernel memory from
its default just after Cache Manager initialization, on every client
machine of the relevant system type. The fs
sysname command also displays the current value; see
Displaying and Setting the System Type
Name.In pathnames in the AFS filespace itself, use the @sys
variable carefully and sparingly, because it can lead to unexpected
results. It is generally best to restrict its use to only one level
in the filespace. The third level is a common choice, because that
is where many cells store the binaries for different machine
types.Multiple instances of the @sys variable in a pathname are
especially dangerous to people who must explicitly change
directories (with the cd command,
for example) into directories that store binaries for system types
other than the machine on which they are working, such as
administrators or developers who maintain those directories. After
changing directories, it is recommended that such people verify they
are in the desired directory.

Setting Server Preferences

The Cache Manager stores a table of preferences for file
server machines in kernel memory. A preference rank pairs a file
server machine interface's IP address with an integer in the range
from 1 to 65,534. When it needs to access a file, the Cache Manager
compares the ranks for the interfaces of all machines that house the
file, and first attempts to access the file via the interface with
the best rank. As it initializes, the Cache Manager sets default
ranks that bias it to access files via interfaces that are close to
it in terms of network topology. You can adjust the preference ranks
to improve performance if you wish.The Cache Manager also uses similar preferences for Volume
Location (VL) Server machines. Use the fs
getserverprefs command to display preference ranks and
the fs setserverprefs command to
set them. See Maintaining Server Preference
Ranks.

Configuring AFS User Accounts

This section discusses some of the issues to consider when
configuring AFS user accounts. Because AFS is separate from the UNIX
file system, a user's AFS account is separate from her UNIX
account.The preferred method for creating a user account is with the
uss suite of commands. With a single
command, you can create all the components of one or many accounts,
after you have prepared a template file that guides the account
creation. See Creating and Deleting User
Accounts with the uss Command Suite.Alternatively, you can issue the individual commands that create
each component of an account. For instructions, along with
instructions for removing user accounts and changing user passwords,
user volume quotas and usernames, see Administering User Accounts.When users leave your system, it is often good policy to remove
their accounts. Instructions appear in Deleting Individual Accounts with the uss delete
Command and Removing a User
Account.An AFS user account consists of the following components, which
are described in greater detail in The
Components of an AFS User Account.
A Protection Database entryAn Authentication Database entryA volumeA home directory at which the volume is mountedOwnership of the home directory and full permissions on
its ACLAn entry in the local password file (/etc/passwd or equivalent) of each
machine the user needs to log intoOptionally, standard files and subdirectories that make
the account more usefulBy creating some components but not others, you can create
accounts at different levels of functionality, using either uss commands as described in Creating and Deleting User Accounts with the uss
Command Suite or individual commands as described in Administering User Accounts. The levels of
functionality include the following:
An authentication-only account enables the user to obtain
AFS tokens and so to access protected AFS data and to issue
privileged commands. It consists only of entries in the
Authentication and Protection Databases. This type of account is
suitable for administrative accounts and for users from foreign
cells who need to access protected data. Local users generally
also need a volume and home directory.A basic user account includes a volume for the user, in
addition to Authentication and Protection Database entries. The
volume is mounted in the AFS filespace as the user's home
directory, and provides a repository for the user's personal
files.A full account adds configuration files for basic
functions such as logging in, printing, and mail delivery to a
basic account, making it more convenient and useful. For a
discussion of some useful types of configuration files, see
Creating Standard Files in New AFS
Accounts.If your users have UNIX user accounts that predate the
introduction of AFS in the cell, you probably want to convert them
into AFS accounts. There are three main issues to consider:
Making UNIX and AFS UIDs matchSetting the password field in the local password file
appropriatelyMoving files from the UNIX file system into AFSFor further discussion, see Converting
Existing UNIX Accounts with uss or Converting Existing UNIX Accounts.Choosing Usernames and Naming Other Account
ComponentsThis section suggests schemes for choosing usernames, AFS
UIDs, user volume names and mount point names, and also outlines
some restrictions on your choices.UsernamesAFS imposes very few restrictions on the form of
usernames. It is best to keep usernames short, both because many
utilities and applications can handle usernames of no more than
eight characters and because by convention many components of an
AFS account incorporate the name. These include the entries in the
Protection and Authentication Databases, the volume, and the mount
point. Depending on your electronic mail delivery system, the
username can become part of the user's mailing address. The
username is also the string that the user types when logging in to
a client machine.Some common choices for usernames are last names, first names,
initials, or a combination, with numbers sometimes added. It is
also best to avoid using the following characters, many of which
have special meanings to the command shell.
The comma (,)The colon (:), because
AFS reserves it as a field separator in protection group
names; see The Two Types of
User-Defined GroupsThe semicolon (;)The "at-sign" (@); this
character is reserved for Internet mailing addressesSpacesThe newline characterThe period (.); it is
conventional to use this character only in the special
username that an administrator adopts while performing
privileged tasks, such as pat.adminAFS UIDs and UNIX UIDsAFS associates a unique identification number, the AFS UID,
with every username, recording the mapping in the user's
Protection Database entry. The AFS UID functions within AFS much
as the UNIX UID does in the local file system: the AFS server
processes and the Cache Manager use it internally to identify a
user, rather than the username.Every AFS user also must have a UNIX UID recorded in the local
password file (/etc/passwd or
equivalent) of each client machine they log onto. Both
administration and a user's AFS access are simplest if the AFS UID
and UNIX UID match. One important consequence of matching UIDs is
that the owner reported by the ls
-l command matches the AFS username.It is usually best to allow the Protection Server to allocate
the AFS UID as it creates the Protection Database entry. However,
both the pts createuser command and
the uss commands that create user
accounts enable you to assign AFS UIDs explicitly. This is
appropriate in two cases:
You wish to group together the AFS UIDs of related
usersYou are converting an existing UNIX account into an AFS
account and want to make the AFS UID match the existing UNIX
UIDAfter the Protection Server initializes for the first time on
a cell's first file server machine, it starts assigning AFS UIDs at
a default value. To change the default before creating any user
accounts, or at any time, use the pts
setmax command to reset the max user id
counter. To display the counter, use the pts listmax command. See Displaying and Setting the AFS UID and GID
Counters.AFS reserves one AFS UID, 32766, for the user anonymous. The AFS server processes assign
this identity and AFS UID to any user who does not possess a token
for the local cell. Do not assign this AFS UID to any other user or
hardcode its current value into any programs or a file's owner
field, because it is subject to change in future releases.User Volume NamesLike any volume name, a user volume's base (read/write) name
cannot exceed 22 characters in length or include the .readonly or .backup extension. See Creating Volumes to Simplify
Administration. By convention, user volume names have the
format user.username. Using the
user. prefix not only makes it
easy to identify the volume's contents, but also to create a
backup version of all user volumes by issuing a single vos backupsys command.Mount Point NamesBy convention, the mount point for a user's volume is named
after the username. Many cells follow the convention of mounting
user volumes in the /afs/cellname/usr directory, as discussed in The Third Level. Very large cells
sometimes find that mounting all user volumes in the same
directory slows directory lookup, however; for suggested
alternatives, see the following section.Grouping Home DirectoriesMounting user volumes in the /afs/cellname/usr directory is an AFS-appropriate
variation on the standard UNIX practice of putting user home
directories under the /usr
subdirectory. However, cells with more than a few hundred users
sometimes find that mounting all user volumes in a single directory
results in slow directory lookup. The solution is to distribute user
volume mount points into several directories; there are a number of
alternative methods to accomplish this.
Distribute user home directories into multiple
directories that reflect organizational divisions, such as
academic or corporate departments. For example, a company can
create group directories called usr/marketing, usr/research, and usr/finance. A good feature of this
scheme is that knowing a user's department is enough to find
the user's home directory. Also, it makes it easy to set the
ACL to limit access to members of the department only. A
potential drawback arises if departments are of sufficiently
unequal size that users in large departments experience slower
lookup than users in small departments. This scheme is also
not appropriate in cells where users frequently change between
divisions.Distribute home directories into alphabetic
subdirectories of the usr
directory (the usr/a
subdirectory, the usr/b
subdirectory, and so on), based on the first letter of the
username. If the cell is very large, create subdirectories
under each letter that correspond to the second letter in the
user name. This scheme has the same advantages and
disadvantages as a department-based scheme. Anyone who knows
the user's username can find the user's home directory, but
users with names that begin with popular letters sometimes
experience slower lookup.Distribute home directories randomly but evenly into
more than one grouping directory. One cell that uses this
scheme has over twenty such directories called the usr1 directory, the usr2 directory, and so on. This scheme
is especially appropriate in cells where the other two schemes
do not seem feasible. It eliminates the potential problem of
differences in lookup speed, because all directories are about
the same size. Its disadvantage is that there is no way to
guess which directory a given user's volume is mounted in, but
a solution is to create a symbolic link in the regular
usr directory that references
the actual mount point. For example, if user smith's volume is mounted at the
/afs/bigcell.example.com/usr17/smith
directory, then the /afs/bigcell.example.com/usr/smith
directory is a symbolic link to the ../usr17/smith directory. This way, if
someone does not know which directory the user smith is in, he or she can access it
through the link called usr/smith; people who do know the
appropriate directory save lookup time by specifying
it.For instructions on how to implement the various schemes when
using the uss program to create
user accounts, see Evenly Distributing User
Home Directories with the G Instruction and Creating a Volume with the V
Instruction.Making a Backup Version of User Volumes AvailableMounting the backup version of a user's volume is a simple way
to enable users themselves to restore data they have accidentally
removed or deleted. It is conventional to mount the backup version
at a subdirectory of the user's home directory (called perhaps the
OldFiles subdirectory), but other
schemes are possible. Once per day you create a new backup version
to capture the changes made that day, overwriting the previous day's
backup version with the new one. Users can always retrieve the
previous day's copy of a file without your assistance, freeing you
to deal with more pressing tasks.Users sometimes want to delete the mount point to their backup
volume, because they erroneously believe that the backup volume's
contents count against their quota. Remind them that the backup
volume is separate, so the only space it uses in the user volume is
the amount needed for the mount point.For further discussion of backup volumes, see Backing Up AFS Data and Creating Backup Volumes.Creating Standard Files in New AFS AccountsFrom your experience as a UNIX administrator, you are probably
familiar with the use of login and shell initialization files (such
as the .login and .cshrc files) to make an account easier to
use.It is often practical to add some AFS-specific directories to
the definition of the user's PATH environment
variable, including the following:
The path to a bin
subdirectory in the user's home directory for binaries the
user has created (that is, /afs/cellname/usr/username/bin)The /usr/afsws/bin
path, which conventionally includes programs like fs, klog, kpasswd, pts, tokens, and unlogThe /usr/afsws/etc
path, if the user is an administrator; it usually houses the
AFS command suites that require privilege (the backup, butc, kas, uss, vos commands), and others.If you are not using an AFS-modified login utility, it can be
helpful to users to invoke the klog
command in their .login file so
that they obtain AFS tokens as part of logging in. In the following
example command sequence, the first line echoes the string
klog to the standard output stream,
so that the user understands the purpose of the
Password: prompt that appears when
the second line is executed. The -setpag flag associates the new tokens with a
process authentication group (PAG), which is discussed further in
Identifying AFS Tokens by PAG.
echo -n "klog "
klog -setpag
The following sequence of commands has a similar effect,
except that the pagsh command forks
a new shell with which the PAG and tokens are associated.
pagsh
echo -n "klog "
klog
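If desired, the invocation can be made conditional so that a user who already holds tokens is not prompted again. The following is a minimal sketch only, written in Bourne-shell syntax for illustration (an actual .login file uses csh syntax), and it assumes the tokens and klog binaries are on the user's PATH:

```shell
# Illustrative fragment for a login initialization file: prompt for
# an AFS password only when no token is currently held.
if ! tokens 2>/dev/null | grep -q 'tokens for afs@'; then
    echo -n "klog "
    klog -setpag
fi
```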
If you use an AFS-modified login utility, this sequence is not
necessary, because such utilities both log a user in locally and
obtain AFS tokens.Using AFS Protection GroupsAFS enables users to define their own groups of other users or
machines. The groups are placed on ACLs to grant the same permissions
to many users without listing each user individually. For group
creation instructions, see Administering the
Protection Database.Groups have AFS ID numbers, just as users do, but an AFS group
ID (GID) is a negative integer whereas a user's AFS UID is a positive
integer. By default, the Protection Server allocates a new group's AFS
GID automatically, but members of the system:administrators group can assign a GID
when issuing the pts creategroup
command. Before explicitly assigning a GID, it is best to verify that
it is not already in use.A group cannot belong to another group, but it can own another
group or even itself as long as it (the owning group) has at least one
member. The current owner of a group can transfer ownership of the
group to another user or group, even without the new owner's
permission. At that point the former owner loses administrative
control over the group.By default, each user can create 20 groups. A system
administrator can increase or decrease this group creation quota with
the pts setfields command.Each Protection Database entry (group or user) is protected by a
set of five privacy flags, which limit who can administer the entry and
what they can do. The default privacy flags are fairly restrictive,
especially for user entries. See Setting the
Privacy Flags on Database Entries.The Three System GroupsAs the Protection Server initializes for the first time on
cell's first database server machine, it automatically creates three
group entries: the system:anyuser,
system:authuser, and system:administrators groups.The first two system groups are unlike any other groups in the
Protection Database in that they do not have a stable membership:
The system:anyuser
group includes everyone who can access a cell's AFS filespace:
users who have tokens for the local cell, users who have
logged in on a local AFS client machine but not obtained
tokens (such as the local superuser root), and users who have connected to
a local machine from outside the cell. Placing the system:anyuser group on an ACL grants
access to the widest possible range of users. It is the only
way to extend access to users from foreign AFS cells that do
not have local accounts.The system:authuser
group includes everyone who has a valid token obtained from
the cell's AFS authentication service.Because the groups do not have a stable membership, the
pts membership command produces no
output for them. Similarly, they do not appear in the list of groups
to which a user belongs.The system:administrators
group does have a stable membership, consisting of the cell's
privileged administrators. Members of this group can issue any
pts command, and are the only ones
who can issue several other restricted commands (such as the
chown command on AFS files). By
default, they also implicitly have the a (administer) and l (lookup)
permissions on every ACL in the filespace. For information about
changing this default, see Administering
the system:administrators Group.For a discussion of how to use system groups effectively on
ACLs, see Using Groups on
ACLs.The Two Types of User-Defined GroupsAll users can create regular groups. A regular group name has
two fields separated by a colon, the first of which must indicate
the group's ownership. The Protection Server refuses to create or
change the name of a group if the result does not accurately
indicate the ownership.Members of the system:administrators group can create
prefix-less groups whose names do not have the first field that
indicates ownership. For suggestions on using the two types of
groups effectively, see Using Groups
Effectively.Login and Authentication in AFSAs explained in Differences in
Authentication, AFS authentication is separate from UNIX
authentication because the two file systems are separate. The
separation has two practical implications:
To access AFS files, users must both log into the local
file system and authenticate with the AFS authentication
service. (Logging into the local file system is necessary
because the only way to access the AFS filespace is through a
Cache Manager, which resides in the local machine's
kernel.)Passwords are stored in two separate places: in the
Kerberos Database for AFS and in each machine's local
password file (the /etc/passwd
file or equivalent) for the local file system.When a user successfully authenticates, the AFS authentication
service passes a token to the user's Cache Manager. The token is a
small collection of data that certifies that the user has correctly
provided the password associated with a particular AFS identity. The
Cache Manager presents the token to AFS server processes along with
service requests, as proof that the user is genuine. To learn about
the mutual authentication procedure they use to establish identity,
see A More Detailed Look at Mutual
Authentication.The Cache Manager stores tokens in the user's credential
structure in kernel memory. To distinguish one user's credential
structure from another's, the Cache Manager identifies each one either
by the user's UNIX UID or by a process authentication group (PAG),
which is an identification number guaranteed to be unique in the
cell. For further discussion, see Identifying
AFS Tokens by PAG.A user can have only one token per cell in each separately
identified credential structure. To obtain a second token for the same
cell, the user must either log into a different machine or obtain
another credential structure with a different identifier than any
existing credential structure, which is most easily accomplished by
issuing the pagsh command (see Identifying AFS Tokens by PAG). In a single
credential structure, a user can have one token for each of many cells
at the same time. As this implies, authentication status on one
machine or PAG is independent of authentication status on another
machine or PAG, which can be very useful to a user or system
administrator.The AFS distribution includes library files that enable each
system type's login utility to authenticate users with AFS and log
them into the local file system in one step. If you do not configure
an AFS-modified login utility on a client machine, its users must
issue the klog command to
authenticate with AFS after logging in.The AFS-modified libraries do not necessarily support all
features available in an operating system's proprietary login
utility. In some cases, it is not possible to support a utility at
all. For more information about the supported utilities in each AFS
version, see the OpenAFS Release Notes.Identifying AFS Tokens by PAGAs noted, the Cache Manager identifies user credential
structures either by UNIX UID or by PAG. Using a PAG is preferable
because it is guaranteed to be unique: the Cache Manager allocates it
based on a counter that increments with each use. In contrast,
multiple users on a machine can share or assume the same UNIX UID,
which creates potential security problems. The following are two
common such situations:
The local superuser root can always assume any other user's
UNIX UID simply by issuing the su command, without providing the
user's password. If the credential structure is associated
with the user's UNIX UID, then assuming the UID means
inheriting the AFS tokens.Two users working on different NFS client machines can
have the same UNIX UID in their respective local file
systems. If they both access the same NFS/AFS Translator
machine, and the Cache Manager there identifies them by their
UNIX UID, they become indistinguishable. To eliminate this
problem, the Cache Manager on a translator machine
automatically generates a PAG for each user and uses it,
rather than the UNIX UID, to tell users apart.Yet another advantage of PAGs over UIDs is that processes
spawned by the user inherit the PAG and so share the token; thus
they gain access to AFS as the authenticated user. In many
environments, for example, printer and other daemons run under
identities (such as the local superuser root) that the AFS server processes recognize
only as the anonymous user. Unless
PAGs are used, such daemons cannot access files for which the
system:anyuser group does not have
the necessary ACL permissions.Once a user has a PAG, any new tokens the user obtains are
associated with the PAG. The PAG expires two hours after any
associated tokens expire or are discarded. If the user issues the
klog command before the PAG
expires, the new token is associated with the existing PAG (the PAG
is said to be recycled in this case).AFS-modified login utilities automatically generate a PAG, as
described in the following section. If you use a standard login
utility, your users must issue the pagsh command before the klog command, or include the latter command's
-setpag flag. For instructions, see
Using Two-Step Login and
Authentication.Users can also use either command at any time to create a new
PAG. The difference between the two commands is that the klog command replaces the PAG associated with
the current command shell and tokens. The pagsh command initializes a new command shell
before creating a new PAG. If the user already had a PAG, any
running processes or jobs continue to use the tokens associated with
the old PAG whereas any new jobs or processes use the new PAG and
its associated tokens. When you exit the new shell (by pressing
<Ctrl-d>, for example), you
return to the original PAG and shell. By default, the pagsh command initializes a Bourne shell, but
you can include the -c argument to
initialize a C shell (the /bin/csh
program on many system types) or Korn shell (the /bin/ksh program) instead.Using an AFS-modified login UtilityAs previously mentioned, an AFS-modified login utility
simultaneously obtains an AFS token and logs the user into the local
file system. This section outlines the login and authentication
process and its interaction with the value in the password field of
the local password file.An AFS-modified login utility performs a sequence of steps
similar to the following; details can vary for different operating
systems:
It checks the user's entry in the local password file
(the /etc/passwd file or
equivalent).If no entry exists, or if an asterisk
(*) appears in the entry's
password field, the login attempt fails. If the entry exists,
the attempt proceeds to the next step.The utility obtains a PAG.The utility converts the password
provided by the user into an encryption key and encrypts a
packet of data with the key. It sends the packet to the AFS
authentication service (the AFS Authentication Server in the
conventional configuration).The authentication service decrypts the packet and,
depending on the success of the decryption, judges the
password to be correct or incorrect. (For more details, see
A More Detailed Look at Mutual
Authentication.)
If the authentication service judges the password
incorrect, the user does not receive an AFS token. The
PAG is retained, ready to be associated with any tokens
obtained later. The attempt proceeds to Step 6.If the authentication service judges the password
correct, it issues a token to the user as proof of AFS
authentication. The login utility logs the user into the
local UNIX file system. Some login utilities echo the
following banner to the screen to alert the user to
authentication with AFS. Step 6 is skipped.
AFS(R) version Login
If no AFS token was granted in
Step 4, the login utility
attempts to log the user into the local file system, by
comparing the password provided to the local password file.
If the password is incorrect or any value other
than an encrypted 13-character string appears in the
password field, the login attempt fails.If the password is correct, the user is logged
into the local file system only.As indicated, when you use an AFS-modified login utility, the
password field in the local password file is no longer the primary
gate for access to your system. If the user provides the correct AFS
password, then the program never consults the local password
file. However, you can still use the password field to control
access, in the following way:
To prevent both local login and AFS authentication,
place an asterisk (*) in the
field. This is useful mainly in emergencies, when you want to
prevent a certain user from logging into the machine.To prevent login to the local file system if the user
does not provide the correct AFS password, place a character
string of any length other than the standard thirteen
characters in the field. This is appropriate if you want to
permit only people with local AFS accounts to log in on your
machines. A single X or other
character is the most easily recognizable way to do
this.To enable a user to log into the local file system even
after providing an incorrect AFS password, record a standard
UNIX encrypted password in the field by issuing the standard
UNIX password-setting command (passwd or equivalent).Systems that use a Pluggable Authentication Module (PAM) for
login and AFS authentication do not necessarily consult the local
password file at all, in which case they do not use the password
field to control authentication and login attempts. Instead,
instructions in the PAM configuration file (on many system types,
/etc/pam.conf) fill the same
function. See the instructions in the OpenAFS Quick Beginnings for
installing AFS-modified login utilities.Using Two-Step Login and AuthenticationIn cells that do not use an AFS-modified login utility, users
must issue separate commands to log in and authenticate, as detailed
in the OpenAFS User Guide:
They use the standard login program to log in to the local
file system, providing the password listed in the local
password file (the /etc/passwd file or equivalent).They must issue the klog command to authenticate with the
AFS authentication service, including its -setpag flag to associate the new
tokens with a process authentication group (PAG).As mentioned in Creating Standard
Files in New AFS Accounts, you can invoke the klog -setpag command in a user's .login file (or equivalent) so that the user
does not have to remember to issue the command after logging in. The
user still must type a password twice, once at the prompt generated
by the login utility and once at the klog command's prompt. This implies that the
two passwords can differ, but it is less confusing if they do
not.Another effect of not using an AFS-modified login utility is
that the AFS servers recognize the standard login program as the anonymous user. If the login program needs to access any AFS files
(such as the .login file in a
user's home directory), then the ACL that protects the file must
include an entry granting the l
(lookup) and r (read)
permissions to the system:anyuser
group.When you do not use an AFS-modified login utility, an actual
(scrambled) password must appear in the local password file for each
user. Use the /bin/passwd command to
insert or change these passwords. It is simpler if the password in
the local password file matches the AFS password, but it is not
required.Obtaining, Displaying, and Discarding TokensOnce logged in, a user can obtain a token at any time with the
klog command. If a valid token
already exists, the new one overwrites it. If a PAG already exists,
the new token is associated with it.By default, the klog command
authenticates the issuer using the identity currently logged in to
the local file system. To authenticate as a different identity, use
the -principal argument. To obtain
a token for a foreign cell, use the -cell argument (it can be combined with the
-principal argument). See the
OpenAFS User Guide and the entry for the klog command in the OpenAFS Administration
Reference.To discard either all tokens or the token for a particular
cell, issue the unlog command. The
command affects only the tokens associated with the current command
shell. See the OpenAFS User Guide and the entry for the unlog command in the OpenAFS Administration
Reference.To display the tokens associated with the current command
shell, issue the tokens
command. The following examples illustrate its output in various
situations.If the issuer is not authenticated in any cell:
% tokens
Tokens held by the Cache Manager:
--End of list--
The following shows the output for a user with AFS UID 1000 in
the Example Corporation cell:
% tokens
Tokens held by the Cache Manager:
User's (AFS ID 1000) tokens for afs@example.com [Expires Jun 2 10:00]
--End of list--
The following shows the output for a user who is authenticated
in Example Corporation cell, the Example Organization cell and the
Example Network cell. The user has different AFS UIDs in the three
cells. Tokens for the last cell are expired:
% tokens
Tokens held by the Cache Manager:
User's (AFS ID 1000) tokens for afs@example.com [Expires Jun 2 10:00]
User's (AFS ID 4286) tokens for afs@example.org [Expires Jun 3 1:34]
User's (AFS ID 22) tokens for afs@example.net [>>Expired<<]
--End of list--
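Lines in this format lend themselves to scripting. The following sketch is illustrative only (the helper name and the canned sample line are not part of AFS): it checks whether a listing reports an expired token for a particular cell. On a live client, the output of the tokens command would be piped in instead of the canned sample.

```shell
#!/bin/sh
# Illustrative helper: read `tokens` output on stdin and succeed if
# the line for the named cell is marked >>Expired<<.
cell_token_expired() {
    grep "tokens for afs@$1" | grep -q '>>Expired<<'
}

# Canned sample standing in for real `tokens` output:
sample="User's (AFS ID 22) tokens for afs@example.net [>>Expired<<]"
printf '%s\n' "$sample" | cell_token_expired example.net && echo "expired"
```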
The Kerberos version of the tokens command (the tokens.krb command) also reports information
on the ticket-granting ticket, including the ticket's owner, the
ticket-granting service, and the expiration date, as in the
following example. Also see Support for
Kerberos Authentication.
% tokens.krb
Tokens held by the Cache Manager:
User's (AFS ID 1000) tokens for afs@example.com [Expires Jun 2 10:00]
User smith's tokens for krbtgt.EXAMPLE.COM@example.com [Expires Jun 2 10:00]
--End of list--
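A script can apply the same idea to detect whether any tokens are held at all, for example before attempting an operation that requires authentication. Again this is only a sketch with an illustrative helper name; canned output (the empty listing shown earlier) stands in for a live tokens command:

```shell
#!/bin/sh
# Illustrative helper: succeed if the `tokens` listing on stdin
# contains at least one token line.
has_afs_tokens() {
    grep -q 'tokens for afs@'
}

# The empty listing printed when the issuer is unauthenticated:
printf 'Tokens held by the Cache Manager:\n--End of list--\n' |
    has_afs_tokens || echo "no tokens held"
```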
Setting Default Token Lifetimes for UsersThe maximum lifetime of a user token is the smallest of the
ticket lifetimes recorded in the following three Authentication
Database entries. The kas examine
command reports the lifetime as Max ticket
lifetime. Administrators who have the
ADMIN flag on their Authentication
Database entry can use the -lifetime argument to the kas setfields command to set an entry's
ticket lifetime.
The afs entry, which
corresponds to the AFS server processes. The default is 100
hours.The krbtgt.cellname
entry, which corresponds to the ticket-granting ticket used
internally in generating the token. The default is 720 hours
(30 days).The entry for the user of the AFS-modified login utility
or issuer of the klog
command. The default is 25 hours for user entries created
using the AFS 3.1 or later version of the Authentication
Server, and 100 hours for user entries created using the AFS
3.0 version of the Authentication Server. A user can use the
kas examine command to
display his or her own Authentication Database entry.An AFS-modified login utility always grants a token with a
lifetime calculated from the previously described three
values. When issuing the klog
command, a user can request a lifetime shorter than the default by
using the -lifetime argument. For
further information, see the OpenAFS User Guide and the klog reference page in the OpenAFS
Administration Reference.Changing PasswordsRegular AFS users can change their own passwords by using
either the kpasswd or kas setpassword command. The commands prompt
for the current password and then twice for the new password, to
screen out typing errors.Administrators who have the
ADMIN flag on their Authentication
Database entries can change any user's password, either by using the
kpasswd command (which requires
knowing the current password) or the kas
setpassword command.If your cell does not use an AFS-modified login utility,
remember also to change the local password, using the operating
system's password-changing command. For more instructions on
changing passwords, see Changing AFS
Passwords.Imposing Restrictions on Passwords and Authentication
AttemptsYou can help to make your cell more secure by imposing
restrictions on user passwords and authentication attempts. To
impose the restrictions as you create an account, use the A instruction in the uss template file as described in Increasing Account Security with the A
Instruction. To set or change the values on an existing
account, use the kas setfields
command as described in Improving Password
and Authentication Security.passwordexpirationpasswordlifetimekas commandssetfieldscommandskas setfieldsAuthentication Databasepassword lifetime, settingpasswordrestricting reuseBy default, AFS passwords never expire. Limiting password
lifetime can help improve security by decreasing the time the
password is subject to cracking attempts. You can choose a lifetime from 1 to 254 days after the password was last changed. The lifetime automatically applies to each new password as it is set. When the
user changes passwords, you can also insist that the new password is
not similar to any of the 20 passwords previously used.passwordconsequences of multiple failed authentication
attemptskas commandssetfieldscommandskas setfieldsauthenticationconsequences of multiple failuresUnscrupulous users can try to gain access to your AFS cell by
guessing an authorized user's password. To protect against this type
of attack, you can limit the number of times that a user can
consecutively fail to provide the correct password. When the limit
is exceeded, the authentication service refuses further
authentication attempts for a specified period of time (the lockout
time). To reenable authentication attempts before the lockout time
expires, an administrator must issue the kas
unlock command.passwordchecking quality ofkpasswd commandcommandskpasswdkas commandssetpasswordkpwvalid programIn addition to the settings on users' authentication accounts, you can improve security by automatically checking the quality of new user passwords. The kpasswd and kas setpassword commands pass the proposed password to a program or script called kpwvalid, if it exists. The kpwvalid program performs quality checks and returns a code to indicate whether the password is acceptable. You can create your own program or modify the sample program included in
the AFS distribution. See the kpwvalid reference page in the OpenAFS
Administration Reference.There are several types of quality checks that can improve
password quality.
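The checks listed next might be sketched as a minimal kpwvalid-style filter (a hypothetical illustration only — the sample program shipped in the AFS distribution differs, and the word list here is a placeholder):

```python
# Hypothetical kpwvalid-style quality filter: return 0 if the
# proposed password is acceptable, nonzero otherwise.
MIN_LENGTH = 8
DICTIONARY = {"password", "secret", "afs"}    # placeholder word list

def check_password(proposed: str) -> int:
    if len(proposed) < MIN_LENGTH:
        return 1                              # too short
    if proposed.lower() in DICTIONARY:
        return 2                              # a plain word
    has_letter = any(c.isalpha() for c in proposed)
    has_digit = any(c.isdigit() for c in proposed)
    if not (has_letter and has_digit):
        return 3                              # needs letters and digits
    return 0                                  # acceptable

print(check_password("elephant9x"))           # 0: passes all checks
```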
The password is a minimum length.
The password is not a word.
The password contains both numbers and letters.
Support for Kerberos AuthenticationKerberossupport for in AFScommandsklog.krbcommandspagsh.krbcommandstokens.krbklog.krb commandpagsh.krb commandtokens.krb commandIf your site is using standard Kerberos authentication rather
than the AFS Authentication Server, use the modified versions of the
klog, pagsh, and tokens commands that support Kerberos
authentication. The binaries for the modified version of these
commands have the same name as the standard binaries with the
addition of a .krb
extension.Use either the Kerberos version or the standard command
throughout the cell; do not mix the two versions. AFS Product
Support can provide instructions on installing the Kerberos version
of these commands. For information on the differences between
the two versions of these commands, see the OpenAFS Administration
Reference.Security and Authorization in AFSAFS incorporates several features to ensure that only authorized
users gain access to data. This section summarizes the most important
of them and suggests methods for improving security in your
cell.Some Important Security FeaturessecurityAFS featuresAFSsecurity featuresACLs on DirectoriesFiles in AFS are protected by the access control list (ACL)
associated with their parent directory. The ACL defines which
users or groups can access the data in the directory, and in what
way. See Managing Access Control
Lists.Mutual Authentication Between Client and ServerWhen an AFS client and server process communicate, each
requires the other to prove its identity during mutual
authentication, which involves the exchange of encrypted
information that only valid parties can decrypt and respond
to. For a detailed description of the mutual authentication
process, see A More Detailed Look at
Mutual Authentication.AFS server processes mutually authenticate both with one
another and with processes that represent human users. After mutual
authentication is complete, the server and client have established
an authenticated connection, across which they can communicate
repeatedly without having to authenticate again until the connection
expires or one of the parties closes it. Authenticated connections
have varying lifetimes.TokensIn order to access AFS files, users must prove their
identities to the AFS authentication service by providing the
correct AFS password. If the password is correct, the
authentication service sends the user a token as evidence of
authenticated status. See Login and
Authentication in AFS.Servers assign the user identity anonymous to users and processes that do not
have a valid token. The anonymous
identity has only the access granted to the system:anyuser group on ACLs.Authorization CheckingMutual authentication establishes that two parties
communicating with one another are actually who they claim to be.
For many functions, AFS server processes also check that the
client whose identity they have verified is also authorized to
make the request. Different requests require different kinds of
privilege. See Three Types of
Privilege.Encrypted Network Communicationsnetworkencrypted communication in AFSencrypted network communicationsecurityencrypted network communicationThe AFS server processes encrypt particularly sensitive
information before sending it back to clients. Even if an
unauthorized party is able to eavesdrop on an authenticated
connection, they cannot decipher encrypted data without the proper
key.The following AFS commands encrypt data because they involve
server encryption keys and passwords:
The bos addkey command, which adds a server encryption key to the /usr/afs/etc/KeyFile file.
The bos listkeys command, which lists the server encryption keys from the /usr/afs/etc/KeyFile file.
The kpasswd command, which changes a password in the Authentication Database.
Most commands in the kas command suite.
In addition, the Update Server
encrypts sensitive information (such as the contents of KeyFile) when distributing it. Other commands
in the bos suite and the commands
in the fs, pts and vos
suites do not encrypt data before transmitting it.Three Types of PrivilegeAFS uses three separate types of privilege for the reasons
discussed in The Reason for Separate
Privileges.
Membership in the system:administrators group. Members
are entitled to issue any pts
command and those fs commands
that set volume quota. By default, they also implicitly have
the a (administer) and l (lookup) permissions on every ACL in the file tree even if the ACL does not include an entry for them.
The ADMIN flag on the Authentication Database entry. An administrator with this flag can issue any kas command.
Inclusion in the /usr/afs/etc/UserList file. An
administrator whose username appears in this file can issue
any bos, vos, or backup command (although some backup commands require additional
privilege as described in Granting
Administrative Privilege to Backup Operators).Authorization Checking versus AuthenticationAFS distinguishes between authentication and authorization
checking. Authentication refers to the process of proving
identity. Authorization checking refers to the process of verifying
that an authenticated identity is allowed to perform a certain
action.AFS implements authentication at the level of
connections. Each time two parties establish a new connection, they
mutually authenticate. In general, each issue of an AFS command
establishes a new connection between AFS server process and
client.AFS implements authorization checking at the level of server
machines. If authorization checking is enabled on a server machine,
then all of the server processes running on it provide services only
to authorized users. If authorization checking is disabled on a
server machine, then all of the server processes perform any action
for anyone. Obviously, disabling authorization checking is an
extreme security exposure. For more information, see Managing Authentication and Authorization
Requirements.Improving Security in Your Cellsecuritysuggestions for improvingYou can improve the level of security in your cell by
configuring user accounts, server machines, and system administrator
accounts in the indicated way.User AccountsUse an AFS-modified login utility, or include the
-setpag flag to the
klog command, to associate
the credential structure that houses tokens with a PAG
rather than a UNIX UID. This prevents users from inheriting
someone else's tokens by assuming their UNIX identity. For
further discussion, see Identifying
AFS Tokens by PAG.Encourage users to issue the unlog command to destroy their tokens
before logging out. This forestalls attempts to access
tokens left behind in kernel memory. Consider including the
unlog command in every
user's .logout file or
equivalent.Server MachinesDisable authorization checking only in emergencies or
for very brief periods of time. It is best to work at the
console of the affected machine during this time, to prevent
anyone else from accessing the machine through the
keyboard.Change the AFS server encryption key on a frequent and
regular schedule. Make it difficult to guess (a long string
including nonalphabetic characters, for instance). Unlike
user passwords, the password from which the AFS key is
derived can be longer than eight characters, because it is
never used during login. The kas
setpassword command accepts a password hundreds
of characters long. For instructions, see Managing Server Encryption
Keys.As much as possible, limit the number of people who
can log in at a server machine's console or remotely.
Imposing this limit is an extra security precaution rather
than a necessity. The machine cannot serve as an AFS client
in this case.Particularly limit access to the local superuser
root account on a server
machine. The local superuser root has free access to important
administrative subdirectories of the /usr/afs directory, as described in
AFS Files on the Local
Disk.root superuserlimiting loginsAs in any computing environment, server machines must
be located in a secured area. Any other security measures
are effectively worthless if unauthorized people can access
the computer hardware.System AdministratorsLimit the number of system administrators in your
cell. Limit the use of system administrator accounts on
publicly accessible workstations. Such machines are not
secure, so unscrupulous users can install programs that try
to steal tokens or passwords. If administrators must use
publicly accessible workstations at times, they must issue
the unlog command before
leaving the machine.Create an administrative account for each
administrator separate from the personal account, and assign
AFS privileges only to the administrative account. The
administrators must authenticate to the administrative
accounts to perform duties that require privilege, which
provides a useful audit trail as well.Administrators must not leave a machine unattended
while they have valid tokens. Issue the unlog command before leaving.Use the -lifetime
argument to the kas
setfields command to set the token lifetime for
administrative accounts to a fairly short amount of time.
The default lifetime for AFS tokens is 25 hours, but 30 or
60 minutes is possibly a more reasonable lifetime for
administrative tokens. The tokens for administrators who
initiate AFS Backup System operations must last somewhat
longer, because it can take several hours to complete some
dump operations, depending on the speed of the tape device
and the network connecting it to the file server machines
that house the volumes it is accessing.Limit administrators' use of the telnet program. It sends unencrypted
passwords across the network. Similarly, limit use of other
remote programs such as rsh
and rcp, which send
unencrypted tokens across the network.A More Detailed Look at Mutual Authenticationmutual authenticationdistributed file systemsecurity issuesshared secretserver encryption keydefinedAs in any file system, security is a prime concern in AFS. A
file system that makes file sharing easy is not useful if it makes
file sharing mandatory, so AFS incorporates several features that
prevent unauthorized users from accessing data. Security in a
networked environment is difficult because almost all procedures
require transmission of information across wires that almost anyone
can tap into. Also, many machines on networks are powerful enough
that unscrupulous users can monitor transactions or even intercept
transmissions and fake the identity of one of the
participants.The most effective precaution against eavesdropping and
information theft or fakery is for servers and clients to accept the
claimed identity of the other party only with sufficient proof. In
other words, the nature of the network forces all parties on the
network to assume that the other party in a transaction is not
genuine until proven so. Mutual authentication is the means through
which parties prove their genuineness.Because the measures needed to prevent fakery must be quite
sophisticated, the implementation of mutual authentication
procedures is complex. The underlying concept is simple, however:
parties prove their identities by demonstrating knowledge of a
shared secret. A shared secret is a piece of information known only
to the parties who are mutually authenticating (they can sometimes
learn it in the first place from a trusted third party or some other
source). The party who originates the transaction presents the
shared secret and refuses to accept the other party as valid until
it shows that it knows the secret too.The most common form of shared secret in AFS transactions is
the encryption key, also referred to simply as a key. The two
parties use their shared key to encrypt the packets of information
they send and to decrypt the ones they receive. Encryption using
keys actually serves two related purposes. First, it protects
messages as they cross the network, preventing anyone who does not
know the key from eavesdropping. Second, the ability to encrypt and decrypt messages successfully indicates that the parties are using the same key (it is their shared secret). If they are using different
keys, messages remain scrambled and unintelligible after
decryption.The following sections describe AFS's mutual authentication
procedures in more detail. Feel free to skip these sections if you
are not interested in the mutual authentication process.Simple Mutual AuthenticationSimple mutual authentication involves only one encryption
key and two parties, generally a client and server. The client
contacts the server by sending a challenge message encrypted with
a key known only to the two of them. The server decrypts the
message using its key, which is the same as the client's if they
really do share the same secret. The server responds to the
challenge and uses its key to encrypt its response. The client
uses its key to decrypt the server's response, and if it is
correct, then the client can be sure that the server is genuine:
only someone who knows the same key as the client can decrypt the
challenge and answer it correctly. On its side, the server
concludes that the client is genuine because the challenge message
made sense when the server decrypted it.AFS uses simple mutual authentication to verify user
identities during the first part of the login procedure. In that
case, the key is based on the user's password.Complex Mutual AuthenticationComplex mutual authentication involves three encryption keys
and three parties. All secure AFS transactions (except the first
part of the login process) employ complex mutual
authentication.ticket-granterserver encryption keytokensdata inWhen a client wishes to communicate with a server, it first
contacts a third party called a ticket-granter. The ticket-granter
and the client mutually authenticate using the simple
procedure. When they finish, the ticket-granter gives the client a
server ticket (or simply ticket) as proof that it (the
ticket-granter) has preverified the identity of the client. The
ticket-granter encrypts the ticket with the first of the three
keys, called the server encryption key because it is known only to
the ticket-granter and the server the client wants to contact. The
client does not know this key.The ticket-granter sends several other pieces of information
along with the ticket. They enable the client to use the ticket
effectively despite being unable to decrypt the ticket
itself. Along with the ticket, the items constitute a token:
A session key, which is the second encryption key
involved in mutual authentication. The ticket-granter
invents the session key at random as the shared secret
between client and server. For reasons explained further
below, the ticket-granter also puts a copy of the session
key inside the ticket. The client and server use the session
key to encrypt messages they send to one another during
their transactions. The ticket-granter invents a different
session key for each connection between a client and a
server (there can be several transactions during a single
connection).session key
The name of the server for which the ticket is valid (and so which server encryption key encrypts the ticket itself).
A ticket lifetime indicator. The default lifetime of
AFS server tickets is 100 hours. If the client wants to
contact the server again after the ticket expires, it must
contact the ticket-granter to get a new ticket.The ticket-granter seals the entire token with the third key
involved in complex mutual authentication--the key known only to
it (the ticket-granter) and the client. In some cases, this third
key is derived from the password of the human user whom the client
represents.Now that the client has a valid server ticket, it is ready
to contact the server. It sends the server two things:
The server ticket. This is encrypted with the server
encryption key.
Its request message, encrypted with the session
key. Encrypting the message protects it as it crosses the
network, since only the server/client pair for whom the
ticket-granter invented the session key know it.At this point, the server does not know the session key,
because the ticket-granter just created it. However, the
ticket-granter put a copy of the session key inside the
ticket. The server uses the server encryption key to decrypt the
ticket and learns the session key. It then uses the session key to
decrypt the client's request message. It generates a response and
sends it to the client. It encrypts the response with the session
key to protect it as it crosses the network.This step is the heart of mutual authentication between
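The exchange just described might be sketched in miniature as follows (toy code: the cipher, names, and data are illustrative stand-ins, not AFS's actual Kerberos-style implementation):

```python
# Toy sketch of the ticket exchange described above. The "encryption"
# is a XOR keystream -- purely illustrative, NOT real cryptography.
import hashlib, json, os

def toy_crypt(key: bytes, data: bytes) -> bytes:
    # Symmetric: applying it twice with the same key restores the data.
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

server_key = b"known only to ticket-granter and server"

# Ticket-granter: invent a session key and seal it inside the ticket,
# which the client cannot read.
session_key = os.urandom(16)
ticket = toy_crypt(server_key, json.dumps(
    {"user": "smith", "session_key": session_key.hex()}).encode())

# Client: send the opaque ticket plus a request sealed with the
# session key received alongside the ticket.
request = toy_crypt(session_key, b"fetch /afs/example.com/usr/smith")

# Server: recover the session key from the ticket, then the request.
contents = json.loads(toy_crypt(server_key, ticket))
recovered_key = bytes.fromhex(contents["session_key"])
plaintext = toy_crypt(recovered_key, request)
print(plaintext.decode())   # fetch /afs/example.com/usr/smith
```

Note that the server never talks to the ticket-granter directly; the shared server key is what lets it open the ticket.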
client and server, because it proves to both parties that they
know the same secret:
The server concludes that the client is authorized to
make a request because the request message makes sense when
the server decrypts it using the session key. If the client
uses a different session key than the one the server finds
inside the ticket, then the request message remains
unintelligible even after decryption. The two copies of the
session key (the one inside the ticket and the one the
client used) can only be the same if they both came from the
ticket-granter. The client cannot fake knowledge of the
session key because it cannot look inside the ticket, sealed
as it is with the server encryption key known only to the
server and the ticket-granter. The server trusts the
ticket-granter to give tokens only to clients with whom it
(the ticket-granter) has authenticated, so the server
decides the client is legitimate.(Note that there is no direct communication between
the ticket-granter and the server, even though their
relationship is central to ticket-based mutual
authentication. They interact only indirectly, via the
client's possession of a ticket sealed with their shared
secret.)The client concludes that the server is genuine and
trusts the response it gets back from the server, because
the response makes sense after the client decrypts it using
the session key. This indicates that the server encrypted
the response with the same session key as the client
knows. The only way for the server to learn that matching
session key is to decrypt the ticket first. The server can
only decrypt the ticket because it shares the secret of the
server encryption key with the ticket-granter. The client
trusts the ticket-granter to give out tickets only for
legitimate servers, so the client accepts a server that can
decrypt the ticket as genuine, and accepts its
response.Backing Up AFS DataAFS provides two related facilities that help the administrator
back up AFS data: backup volumes and the AFS Backup System.Backup VolumesThe first facility is the backup volume, which you create by
cloning a read/write volume. The backup volume is read-only and so
preserves the state of the read/write volume at the time the clone
is made.Backup volumes can ease administration if you mount them in
the file system and make their contents available to users. For
example, it often makes sense to mount the backup version of each
user volume as a subdirectory of the user's home directory. A
conventional name for this mount point is OldFiles. Create a new version of the backup
volume (that is, reclone the read/write) once a day to capture any
changes that were made since the previous backup. If a user
accidentally removes or changes data, the user can restore it from
the backup volume, rather than having to ask you to restore
it.The OpenAFS User Guide does not mention backup volumes, so regular users do not know about them if you decide not to use them. Conversely, if you do make backup versions of user volumes, you need to tell your users how the backup works and where you have mounted it.Users are often concerned that the data in a backup volume counts against their volume quota, and some even want to remove the OldFiles mount point for that reason. In fact it does not, because the backup volume is a separate volume. The only space the backup uses in the user's volume is the amount needed
for the mount point, which is about the same as the amount needed
for a standard directory element.Backup volumes are discussed in detail in Creating Backup Volumes.The AFS Backup SystemBackup volumes can reduce restoration requests, but they
reside on disk and so do not protect data from loss due to hardware
failure. Like any file system, AFS is vulnerable to this sort of
data loss.To protect your cell's users from permanent loss of data, you
are strongly urged to back up your file system to tape on a regular
and frequent schedule. The AFS Backup System is available to ease
the administration and performance of backups. For detailed
information about the AFS Backup System, see Configuring the AFS Backup System and
Backing Up and Restoring AFS
Data.Accessing AFS through NFSUsers of NFS client machines can access the AFS filespace by
mounting the /afs directory of an AFS
client machine that is running the NFS/AFS Translator. This is a
particular advantage in cells already running NFS that want to access
AFS using client machines for which AFS is not available. See Appendix A, Managing the NFS/AFS
Translator.
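As a configuration sketch, an NFS client might then mount the translator machine's /afs directory like this (the host name is hypothetical):

```shell
# On the NFS client; "translator.example.com" is a hypothetical AFS
# client machine running the NFS/AFS Translator.
mount -o hard,intr translator.example.com:/afs /afs
```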