This chapter discusses many of the issues to consider when configuring and administering a cell, and directs you to detailed related information available elsewhere in this guide. It is assumed you are already familiar with the material in An Overview of AFS Administration.
It is best to read this chapter before installing your cell's first file server machine or performing any other administrative task.
AFS is intended to behave like a standard UNIX file system in most respects, while also making file sharing easy within and between cells. This section lists the important respects in which AFS and UNIX do differ, and (in most cases) also refers you to a more detailed discussion.
AFS augments the standard UNIX file protection mechanism in two ways: it associates an access control list (ACL) with each directory, and it enables users to define a large number of their own groups, which can appear simultaneously on an ACL.
ACLs
AFS uses ACLs to protect files and directories, rather than relying exclusively on the mode bits. This has several implications, which are discussed further in the indicated sections:
Groups
Another difference between UNIX and AFS protection is that AFS enables users to define the groups of other users. Placing these groups on ACLs extends the same permissions to a number of exactly specified users at the same time, which is much more convenient than placing the individuals on the ACLs directly. See Administering the Protection Database.
There are also system-defined groups, system:anyuser and system:authuser, whose presence on an ACL extends access to a wide range of users at once. See The System Groups and Using Groups on ACLs.
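For example, a user might place one of their own groups on an ACL together with a system group. In the following sketch the group pat:friends and the directory path are hypothetical names used only for illustration:

% fs setacl -dir /afs/abc.com/usr/pat/project -acl pat:friends rl system:authuser rl

This single command grants the read (r) and lookup (l) permissions both to every member of pat:friends and to all authenticated users.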
Just as the AFS filespace is distinct from each machine's local file system, AFS authentication is separate from local login. This has two practical implications, which are discussed further in Using an AFS-modified login Utility.
AFS provides a modified login utility for each system type that accomplishes both local login and AFS authentication in one step, based on a single password. If you choose not to use the AFS-modified login utility, your users must log in and authenticate in separate steps, as detailed in the AFS User's Guide.
This section summarizes how AFS modifies the functionality of the following UNIX commands.
Never run the standard UNIX fsck command on an AFS file server machine. Because it does not understand the way that the File Server alters the disk format, it removes all AFS data from AFS server partitions, placing it in the lost+found directory on the partition.
Instead, use the version of the fsck program that is included in the AFS distribution. The AFS Installation Guide explains how to replace the vendor-supplied fsck program with the AFS version as you install each server machine.
The AFS version functions like the standard fsck program on data stored on both UFS and AFS partitions. The appearance of a banner like the following as the fsck program initializes confirms that you are running the correct one:
--- AFS (R) version fsck---
where version is the AFS version. For correct results, it must match the AFS version of the server binaries in use on the machine.
If you ever accidentally run the standard version of the program, contact AFS Product Support immediately. It is sometimes possible to recover volume data from the lost+found directory.
AFS does not allow hard links (created with the UNIX ln command) between files that reside in different directories, because in that case it is unclear which directory's ACL to associate with the link.
AFS also does not allow hard links to directories, in order to keep the file system organized as a tree.
It is possible to create symbolic links (with the UNIX ln -s command) between elements in two different AFS directories, or even between an element in AFS and one in a machine's local UNIX file system. However, you must not create a symbolic link to a file whose name begins with either a number sign (#) or a percent sign (%), because the Cache Manager interprets such a link as a mount point to a regular or ReadWrite volume, respectively.
Upon issue of the UNIX close or fsync system call, the Cache Manager sends file modifications to the File Server for permanent storage in the central copy of the file kept there and in the file server's non-volatile storage.
Upon issue of the UNIX write system call, modifications are stored in the Cache Manager's local cached copy only. If the local machine crashes or an application program exits without issuing the close system call, changes are saved in the locally cached copy only.
Most application programs issue the close system call upon completion or automatically upon exit.
Set the setuid bit only for the local superuser root; this does not present an automatic security risk, because that user has no special privilege in AFS, only in the local machine's UNIX file system and kernel.
Any file can be marked with setuid, but only members of the system:administrators group can issue the chown system call or the /etc/chown command.
The fs setcell command determines whether setuid programs that originate in a foreign cell can run on a given client machine. See Determining if a Client Can Run Setuid Programs.
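As an illustrative sketch (stateu.edu stands in for any foreign cell), an administrator on a client machine can prevent setuid programs from that cell from running with setuid permission:

% fs setcell -cell stateu.edu -nosuid

The -suid flag reverses the setting.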
This section explains how to choose a cell name and explains why choosing an appropriate cell name is important.
Your cell name must distinguish your cell from all others in the AFS global namespace. The cell name is the second element in any AFS pathname; therefore, a unique cell name guarantees that every AFS pathname uniquely identifies a file, even if cells use the same directory names at lower levels in their local trees. For example, both the ABC Corporation cell and the State University cell can have a home directory for the user pat, because the pathnames are distinct: /afs/abc.com/usr/pat and /afs/stateu.edu/usr/pat.
By convention, cell names follow the ARPA Internet Domain System conventions for site names. If you are already an Internet site, then it is simplest to choose your Internet domain name as the cell name.
If you are not an Internet site, it is best to choose a unique Internet-style name, particularly if you plan to connect to the Internet in the future. AFS Product Support is available for help in selecting an appropriate name. There are few constraints on the form of an Internet domain name that can be used as the name of an AFS cell:
For businesses and other commercial organizations. Example: abc.com for the ABC Corporation cell.
For educational institutions such as universities. Example: stateu.edu for the State University cell.
For United States government institutions.
For United States military installations.
Other suffixes are available if none of these are appropriate. You can learn about suffixes by calling the Defense Data Network [Internet] Network Information Center in the United States at (800) 235-3155. The NIC can also provide you with the forms necessary for registering your cell name as an Internet domain name. The advantage of registering your name is that it prevents another Internet site from adopting the name later.
The cell name is recorded in two files on the local disk of each file server and client machine. Among other functions, these files define the machine's cell membership and so affect how programs and processes run on the machine; see Why Choosing the Appropriate Cell Name is Important. The procedure for setting the cell name is different for the two types of machines.
For file server machines, the two files that record the cell name are the /usr/afs/etc/ThisCell and /usr/afs/etc/CellServDB files. As described more explicitly in the AFS Installation Guide, you set the cell name in both by issuing the bos setcellname command on the first file server machine you install in your cell. It is not usually necessary to issue the command again. As you install additional file server machines, the Update Server distributes its copy of the ThisCell and CellServDB files to each machine.
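As a sketch only (the machine name fs1.abc.com is hypothetical), the command issued on the first file server machine resembles the following; during initial installation it normally runs unauthenticated, which is why the installation instructions include the -noauth flag:

% bos setcellname fs1.abc.com abc.com -noauth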
For client machines, the two files that record the cell name are the /usr/vice/etc/ThisCell and /usr/vice/etc/CellServDB files. You create these files on a per-client basis, either with a text editor or by copying them onto the machine from a central source in AFS (of course, only the first option is available on the very first client you install). See Maintaining Knowledge of Database Server Machines for details.
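For illustration, a client's ThisCell file contains only the local cell name, while each CellServDB entry pairs a cell name with the IP addresses of its database server machines. The addresses and host names below are hypothetical:

abc.com

>abc.com            #ABC Corporation (local cell)
192.12.105.3        #db1.abc.com
192.12.105.4        #db2.abc.com
>stateu.edu         #State University
138.255.66.10       #db1.stateu.edu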
Change the cell name in these files only when you want to transfer the machine to a different cell (it can only belong to one cell at a time). If the machine is a file server, follow the complete set of instructions in the AFS Installation Guide for configuring a new cell. If the machine is a client, all you need to do is change the files appropriately and reboot the machine. The next section explains further the negative consequences of changing the name of an existing cell.
You can set the default cell name used by most AFS commands without changing the local /usr/vice/etc/ThisCell file by setting the AFSCELL environment variable to the cell name that you want to work in. When this variable is set, it overrides the value found in the ThisCell file for most AFS commands. It is worth setting this variable if you intend to do a significant amount of work in a foreign cell for a period of time.
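For example, to work in the State University cell for a while, a user on a machine that belongs to abc.com can set the variable in the shell (C shell and Bourne shell forms shown); unsetting it restores the local cell as the default:

% setenv AFSCELL stateu.edu
$ AFSCELL=stateu.edu; export AFSCELL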
Note: The fs checkservers and fs mkmount commands do not use the AFSCELL variable. The fs checkservers command always defaults to the cell named in the ThisCell file, unless the -cell argument is used. The fs mkmount command defaults to the cell in which the parent directory of the new mount point resides.
Note: Take care to select a cell name that is suitable for long-term use. Changing a cell name later is complicated.
An appropriate cell name is important because it is the second element in the pathname of all files in a cell's file tree. Because each cell name is unique, its presence in an AFS pathname makes the pathname unique throughout the AFS global namespace, no matter how similar cells' file trees are at lower levels. For instance, it means that every cell can have a home directory called /afs/cellname/usr/pat without causing a conflict. The presence of the cell name in pathnames also means that users in every cell use the same pathname to access a file, whether the file resides in their local cell or in a foreign cell.
Another reason to choose the correct cell name early in the process of installing your cell is that the cell membership defined in each machine's ThisCell file affects the performance of many programs and processes running on the machine. For instance, AFS commands (fs, kas, pts and vos commands) by default execute in the cell of the machine on which they are issued. The command interpreters check the ThisCell file on the local disk and then contact the database server machines listed in the CellServDB file for the indicated cell (the bos commands work differently because the issuer must always name the machine on which to run the command).
The ThisCell file also determines which cell a user is authenticated in (receives a token for) by default when he or she logs in on that machine. The cell name also plays a role in security. AFS converts passwords into encryption keys before storing them in the Authentication Database (for a description of how AFS's security system uses such keys, see A More Detailed Look at Mutual Authentication). Before converting the password into an encryption key, the Authentication Server combines it with the cell name found in the ThisCell file. An AFS-modified login utility uses the same algorithm to convert the user's password into an encryption key (it learns the cell name by looking in the local /usr/vice/etc/ThisCell file).
This method of converting passwords into encryption keys means that the same password results in different keys in different cells. Obtaining a user's token from one cell does not enable unauthorized access to that user's account in another cell, even if the user uses the same password in both cells.
Changing the cell name requires changing the ThisCell and CellServDB files on every file server and client machine. Failure to change them all can prevent login: the encryption keys produced by the login utility do not match the keys stored in the Authentication Database. In addition, many commands from the AFS suites do not work as expected.
Participating in the AFS global namespace makes your cell's local file tree visible to AFS users in foreign cells and makes other cells' file trees visible to your local users. It makes file sharing across cells just as easy as sharing within a cell. This section outlines the procedures necessary for participating in the global namespace.
There are several general points to note. Further details appear below.
The AFS global namespace appears the same to all AFS cells that participate in it, because they all agree to follow a small set of conventions in constructing pathnames.
The first convention is that all AFS pathnames begin with /afs to indicate that they belong to the AFS global namespace.
The second convention is that a cell name appears as the second element in every pathname; it indicates where the file resides (that is, the cell in which a file server machine houses the file). The presence of a cell name in pathnames is what makes the global namespace possible: because the cell name is guaranteed to be unique, its presence in the pathname guarantees that all AFS pathnames are unique. This further implies that the pathname uniquely identifies the file no matter which cell the file is viewed from. This remains true even if cells use the same directory names at lower levels in their local trees.
What appears at the third and lower levels in an AFS pathname depends on how a cell has chosen to arrange its filespace. There are some suggested conventional directories at the third level; see The Third Level.
You make your cell visible to others by advertising your cell name and the names and IP addresses of your database server machines. Just like client machines in the local cell, the Cache Managers on machines in foreign cells use the information to reach your cell's VL Servers, which in turn provide volume and file location information. Similarly, knowing how to reach a cell's Authentication Servers enables client-side authentication programs running in foreign cells to obtain a token, provided the user has an Authentication Database entry in the cell.
There are two places you can make this information available:
To add or change your cell's listing in this file, have the official support contact at your site call or write to AFS Product Support. Changes to the file are frequent enough that AFS Product Support does not announce each one. It is a good policy to check the file for changes on a regular schedule.
Update the files whenever you change the identity of your cell's database server machines. Also update the copies of the CellServDB files on all of your file server machines (the /usr/afs/etc/CellServDB file) and client machines (the /usr/vice/etc/CellServDB file). For instructions, see Maintaining the Server CellServDB File and Maintaining Knowledge of Database Server Machines.
Once you have advertised your database server machines, it is difficult to make your cell invisible again. You can remove the CellServDB.local file and ask AFS Product Support to remove your entry from the global CellServDB file, but other cells probably have an entry for your cell in their local CellServDB files already. To make those entries invalid, you must change the names or IP addresses of your database server machines, which of course requires changing them in all of your local CellServDB files too.
Your cell does not have to be invisible to be inaccessible, however. To make your cell completely inaccessible to foreign users, remove the system:anyuser group from all ACLs at the top three levels of your filespace; see Granting and Denying Foreign Users Access to Your Cell.
To make a foreign cell's filespace visible on a client machine in your cell, perform the following three steps:
The /usr/vice/etc/CellServDB file on every client machine's local disk lists the database server machines for the local and foreign cells. The afsd program reads the contents of the CellServDB file into kernel memory as it initializes the Cache Manager. You can also use the fs newcell command to add or alter entries in kernel memory directly between reboots of the machine.
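For example, the following sketch makes the State University cell accessible without a reboot and then mounts its root.cell volume just below the /afs directory. The database server names are hypothetical, and if the local root.afs volume is replicated, the mount point must instead be created through a read/write mount point for root.afs, followed by a vos release of that volume:

% fs newcell -name stateu.edu -servers db1.stateu.edu db2.stateu.edu db3.stateu.edu
% fs mkmount -dir /afs/stateu.edu -vol root.cell -cell stateu.edu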
Note that making a foreign cell visible to client machines does not guarantee that your users can access its filespace. The ACLs in the foreign cell must also grant them the necessary permissions.
For more information on maintaining a client machine's knowledge of foreign cells, see Maintaining Knowledge of Database Server Machines .
As mentioned in Making Your Cell Visible to Others, making your cell visible in the AFS global namespace does not take away your control over the way in which users from foreign cells access your file tree.
By default, foreign users access your cell as user anonymous, which means they have only the permissions you grant to the system:anyuser group on each directory's ACL. Normally these permissions are limited to the lookup (l) and read (r) permissions.
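For instance, the following sketch grants those limited permissions on a public directory and removes all access for unauthenticated foreign users from the usr directory; the pathnames are illustrative:

% fs setacl -dir /afs/abc.com/public -acl system:anyuser rl
% fs setacl -dir /afs/abc.com/usr -acl system:anyuser none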
There are two ways to grant wider access to foreign users; you can:
This section summarizes the issues to consider when configuring your AFS filespace. For a discussion of creating volumes that correspond most efficiently to the filespace's directory structure, see Creating Volumes.
AFS pathnames must follow a few conventions so the AFS global namespace looks the same from any AFS client machine. There are corresponding conventions to follow in building your file tree, not just because pathnames reflect the structure of a file tree, but also because the AFS Cache Manager expects a certain configuration.
The first convention is that the top level in your file tree be called the /afs directory. If you name it something else, then you must use the -mountdir argument with the afsd program to get Cache Managers to mount AFS properly. You cannot participate in the AFS global namespace in that case.
The second convention is that just below the /afs directory you place directories corresponding to each cell whose file tree is visible and accessible from the local cell. Minimally, there must be a directory for the local cell. Each such directory is a mount point to the indicated cell's root.cell volume. For example, in the ABC Corporation cell, /afs/abc.com is a mount point for the cell's own root.cell volume and /afs/stateu.edu is a mount point for the State University cell's root.cell volume. The fs lsmount command displays the mount points.
% fs lsmount /afs/abc.com
'/afs/abc.com' is a mount point for volume '#root.cell'
% fs lsmount /afs/stateu.edu
'/afs/stateu.edu' is a mount point for volume '#stateu.edu:root.cell'
To reduce the amount of typing necessary in pathnames, it is practical to create symbolic links with abbreviated names to the mount points of those cells your users access often (particularly the home cell). In the ABC Corporation cell, for instance, /afs/abc is a symbolic link to the /afs/abc.com mount point, as the fs lsmount command reveals.
% fs lsmount /afs/abc
'/afs/abc' is a symbolic link, leading to a mount point for volume '#root.cell'
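A sketch of how such links are created follows, assuming the issuer has the necessary permissions on the /afs directory's ACL and that the root.afs volume is not replicated (if it is, make the change through a read/write mount point for root.afs and then issue the vos release command):

% cd /afs
% ln -s abc.com abc
% ln -s stateu.edu stateu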
You can organize the third level of your cell's file tree any way you wish. The following list describes directories that appear at this level in the conventional configuration:
A directory accessible to anyone who can access your filespace, because its ACL grants the l (lookup) and r (read) permissions to the system:anyuser group. It is useful if you want to enable your users to make selected information available to everyone, but do not want to grant foreign users access to the contents of the usr directory that houses user home directories (also at this level). It is conventional to create a subdirectory for each of your cell's users.
This directory contains files and subdirectories that help cells coordinate resource sharing. For a list of the proposed standard files and subdirectories to create, call or write to AFS Product Support.
As an example, files that other cells expect to find in this directory's etc subdirectory can include the following:
A separate directory for storing the server and client binaries for each system type you use in the cell. Configuration is simplest if you use the system type names assigned in the AFS distribution, particularly if you wish to use the @sys variable in pathnames (see Using the @sys Variable in Pathnames). The AFS Release Notes lists the conventional name for each supported system type.
Within each such directory, create directories named bin, etc, usr, and so on to store what is normally found in the /bin, /etc and /usr directories on a local disk. Then create symbolic links from the local directories on client machines into AFS; see Configuring the Local Disk. Even if you do not choose to use symbolic links in this way, it can be convenient to have central copies of system binaries in AFS. If binaries are accidentally removed from a machine, you can recopy them onto the local disk from AFS rather than having to recover them from tape.
Contains the home directory of each user in your cell and any foreign users granted a local account. As discussed in the previous entry for the public directory, it is often practical to protect this directory so that only locally authenticated users can access it. This keeps the contents of your users' home directories as secure as possible.
If your cell is quite large, directory lookup can be slowed if you put all home directories in a single usr directory. For suggestions on distributing user home directories among multiple grouping directories, see Grouping Home Directories.
This section discusses how to create volumes in ways that make administering your system easier. In many places, it refers you to Managing Volumes for more detailed information.
At the top levels of your file tree (at least through the third level), each directory generally corresponds to a separate volume. Some cells also configure the subdirectories of some third level directories as separate volumes. Common examples are the /afs/cellname/common and /afs/cellname/usr directories.
It is not required to create a separate volume for every directory level in a tree. However, if you do, each volume tends to be smaller and thus easier to move for load balancing. The overhead for a mount point is no greater than for a standard directory, nor does the volume structure itself require much disk space. Most cells find that below the fourth level in the tree, using a separate volume for each directory is no longer efficient. For instance, while each user's home directory (at the fourth level in the tree) corresponds to a separate volume, all of the subdirectories in the home directory normally reside in the same volume.
Keep in mind that only one volume can be mounted at a given directory location in the tree. In contrast, a volume can be mounted at several locations, though this is not recommended because it distorts the strict hierarchy of the file tree, potentially causing confusion.
The AFS implementation of volumes imposes an absolute length limit of 31 characters on volume names. Because there must be room for the addition of a .readonly extension if you replicate a volume, the Volume Server does not allow you to create ReadWrite volumes with names over 22 characters in length.
Do not add the extensions .readonly and .backup to volume names yourself, even if they are appropriate. The extensions are reserved for ReadOnly and Backup versions of a volume, respectively. The Volume Server adds them automatically when necessary.
There are two volumes that every cell must include in its file system, and which must be named as follows:
Deviating from these required names only creates confusion and extra work. Changing the root.afs volume, for instance, prevents the Cache Manager from finding the volume that it mounts at the /afs level by default. You must add the -rootvol argument to the afsd program to name the alternate volume.
Similarly, changing the root.cell volume makes it difficult for other cells to make your file tree visible to their users as discussed in Making Other Cells Visible in Your Cell. If you change the name from root.cell, then attempts to access your filespace fail from cells that have mounted your root.cell volume in their filespace: the mount point refers to a non-existent volume (this is one way to make your cell invisible). You must also mount the volume with the changed name in your own cell's filespace, rather than mounting the root.cell volume.
You can name your volumes anything you choose, subject to the two simple restrictions mentioned in Restrictions on Volume Names. Adopting a consistent naming scheme for volumes can greatly ease administration, however. Two important qualities in a volume name are that it reflect the contents of the volume and that it be similar to the names of volumes with similar contents. It is also helpful if the volume name is similar to (or at least has elements in common with) the name of the directory at which it is mounted. The advantage of these qualities is that once someone understands the general pattern, they can accurately guess what a volume contains and where it is mounted, without having to issue a number of commands to figure those things out.
Many cells find that the most effective volume naming scheme puts a common prefix on the names of all related volumes. Table 1 describes the recommended prefixing scheme.
Table 1. Suggested volume prefixes
Prefix | Volume Type | Example Name | Example Mount Point |
---|---|---|---|
common. | common volumes | common.etc | /afs/cellname/common/etc |
src. | source volumes | src.afs | /afs/cellname/src/afs |
proj. | project volumes | proj.portafs | /afs/cellname/proj/portafs |
test. | test volumes | test.smith | /afs/cellname/usr/smith/test |
user. | user volumes | user.terry | /afs/cellname/usr/terry |
sys_type. | system volumes | rs_aix42.bin | /afs/cellname/rs_aix42/bin |
Table 2 is a more specific example for a cell's
rs_aix42 system volumes and directories:
Table 2. Example volume-prefixing scheme
Example Name | Example Mount Point |
---|---|
rs_aix42.bin | /afs/cellname/rs_aix42/bin |
rs_aix42.etc | /afs/cellname/rs_aix42/etc |
rs_aix42.usr | /afs/cellname/rs_aix42/usr |
rs_aix42.usr.afsws | /afs/cellname/rs_aix42/usr/afsws |
rs_aix42.usr.lib | /afs/cellname/rs_aix42/usr/lib |
rs_aix42.usr.bin | /afs/cellname/rs_aix42/usr/bin |
rs_aix42.usr.etc | /afs/cellname/rs_aix42/usr/etc |
rs_aix42.usr.inc | /afs/cellname/rs_aix42/usr/inc |
rs_aix42.usr.man | /afs/cellname/rs_aix42/usr/man |
rs_aix42.usr.sys | /afs/cellname/rs_aix42/usr/sys |
rs_aix42.usr.local | /afs/cellname/rs_aix42/usr/local |
There are several advantages to this scheme:
If your cell is large enough to make it practical, consider grouping related volumes together on the same partition of a file server machine. In general, you need at least three file server machines for volume grouping to be effective. Grouping has several advantages, which are most obvious when the file server machine goes down:
The advantages of grouping related volumes on a partition do not necessarily extend to the grouping of all related volumes on one file server machine. For instance, it is probably unwise in a cell with two file server machines to put all system volumes on one machine and all user volumes on the other. An outage of either machine probably affects everyone.
Admittedly, the need to move volumes for load balancing purposes can limit the practicality of grouping related volumes. The system administrator has to weigh the opposing advantages case by case.
As discussed in Replication, replication refers to making a copy, or clone, of a ReadWrite source volume and then placing the copy on one or more additional file server machines in a cell. One benefit of replicating a volume is that it increases the availability of the contents. If one file server machine housing the volume fails, users can still access the volume on a different machine. No one machine is likely to become overburdened with requests for a popular file, either, because the file is available from several machines.
However, replication is not appropriate for all cells. If a cell does not have much disk space, replication might be unduly expensive in terms of space, because each clone not on the same partition as the ReadWrite source takes up as much disk space as its source volume did at the time the clone was made. Also, if you have only one file server machine, replication uses up disk space without increasing availability.
Replication is also not appropriate for volumes whose contents change frequently. Clones are ReadOnly versions of the volume; they cannot change once they are copied from a source volume, even if the ReadWrite source volume does change. To keep a ReadOnly clone in sync with its changing ReadWrite source, you have to issue commands to make a new ReadOnly clone and distribute it to the different file server machines that house a clone of that volume.
For both of these reasons, replication is appropriate only for popular volumes whose contents do not change very often, such as system binaries and other "high-level" volumes. User volumes usually exist only in a ReadWrite version since they change so often.
If possible, replicate your cell's root.afs and root.cell volumes at two or three sites, even if your cell only has two or three file server machines. The unavailability of these volumes (perhaps due to a server failure) makes all other volumes unavailable too, even if the file server machines storing the other volumes are still functioning. The Cache Manager needs to pass through the directories corresponding to the root.afs and root.cell volumes as it interprets any pathname.
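A minimal sketch of defining replication sites and releasing the clones follows; the machine and partition names are hypothetical:

% vos addsite fs1.abc.com /vicepa root.afs
% vos addsite fs2.abc.com /vicepa root.afs
% vos release root.afs
% vos addsite fs1.abc.com /vicepa root.cell
% vos addsite fs2.abc.com /vicepa root.cell
% vos release root.cell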
Another reason to replicate the root.afs volume is that it can lessen the load on the File Server machine. The Cache Manager has a built-in bias to access a ReadOnly version of the root.afs volume if it is available. By accessing the ReadOnly version of the root.afs volume, the Cache Manager gets on a "ReadOnly path." While on such a ReadOnly path, the Cache Manager requests files from the ReadOnly version(s) of a volume. And when distributing files from ReadOnly volumes, the fileserver process needs to maintain only one callback per volume, rather than one callback per file for ReadWrite volumes. Fewer callbacks translate into a smaller load on the File Server machine.
If the root.afs volume is not replicated in your cell, the Cache Manager always follows a ReadWrite path through your cell's filespace. While on the ReadWrite path, the Cache Manager always requests files from the ReadWrite version of a volume. When a File Server machine must distribute files from a ReadWrite volume, a large number of callbacks must be maintained (one for each copy of a file distributed). This puts a greater load on the File Server machine containing the ReadWrite volume.
For more on ReadWrite and ReadOnly paths, see The Rules of Mount Point Traversal.
Other volumes to consider replicating are system binary volumes, the volume corresponding to the /afs/cellname/usr directory, and the volume(s) corresponding to the /afs/cellname/common directory and its subdirectories.
Where to Place ReadOnly Volumes
It is a good idea to release a ReadOnly clone to the same partition as the ReadWrite source because only the ReadWrite volume takes up the full amount of disk space. The ReadOnly clone on the same partition as its source is actually like a backup volume -- it is a copy of the source volume's vnode index.
This setup increases the availability of the volume content without requiring the storage costs of a non-clone ReadOnly volume (one that does not share the same partition as its ReadWrite source). Only if the ReadWrite volume moves to another partition or it changes substantially does the ReadOnly clone take up the full amount of disk space.
If your cell is sufficiently large, it can be practical to dedicate a small set of file server machines to storing only ReadOnly volumes. Only one callback is required per ReadOnly volume, while a callback is required for each distributed file in a ReadWrite volume. Thus, by keeping very popular programs on the ReadOnly file server machines, you lessen the load on file server machines containing many ReadWrite volumes.
In general, smaller volumes are easier to administer and manipulate than larger ones. Once a volume is more than about 80 MB, it can be difficult to move for load balancing purposes, because many partitions do not have that much empty space available.
Every AFS volume has associated with it a quota that limits the amount of disk space the volume is allowed to use. As a system administrator, you can set and change volume quota using the commands described in Setting and Listing Volume Quota and Current Size.
By default, every new volume is assigned a space quota of 5000 kilobyte blocks unless you include the -maxquota argument to the vos create command. Also by default, the ACL on the root directory of every new volume grants all permissions to the members of the system:administrators group. To learn how to change these values when creating an account with individual commands, see To create one user account with individual commands. When using uss commands to create accounts, you can specify alternate ACL and quota values in the template file's V instruction; see Creating a Volume with the V Instruction.
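For example, the following sketch creates a user volume with a 10,000 KB quota instead of the default, and then raises the quota later once the volume is mounted; the machine, partition, and pathnames are hypothetical:

% vos create fs1.abc.com /vicepb user.terry -maxquota 10000
% fs setquota -dir /afs/abc.com/usr/terry -max 15000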
This section discusses some issues to consider when configuring file server machines, which store your cell's AFS data and transfer it to client machines on request. File server machines also run the programs that maintain administrative databases. How AFS divides machines into two classes, file servers and clients, is discussed in Servers and Clients.
To learn about client machines, see Configuring Client Machines.
AFS is available on a number of popular platforms. They are listed in the AFS Release Notes.
If your cell has more than one AFS server machine, you can configure them to perform specialized functions. A machine can assume up to four possible roles, depending on the processes it runs:
For a more detailed description of the four roles, and the server processes that define each one, see The Four Roles for File Server Machines. The AFS Installation Guide includes instructions for configuring each type of machine.
The AFS Installation Guide directs you to configure your cell's first file server machine to assume all four roles.
It is simplest if the first machine you install has the lowest IP address of any machine you plan to use as a database server machine. If you later decide to use a machine with a lower IP address as a database server machine, you must update the CellServDB file on all clients before introducing the new machine. See the AFS Installation Guide.
When installing additional file server machines of an existing system type, it is often convenient to configure the AFS client software first. You can then load the AFS server binaries onto the machine via AFS rather than from the distribution media. The AFS Installation Guide provides complete instructions for installing additional file server machines.
The AFS administrative databases kept on database server machines store information that is crucial for correct cell functioning, and that both server processes and Cache Managers access frequently. The following are examples:
If your cell has more than one server machine, it is best to run more than one database server machine, but more than three are rarely necessary. Replicating the administrative databases in this way yields the same benefits as replicating volumes: increased availability and reliability of information. If one database server machine or process goes down, the information in the database is still available from others. The load of requests for database information is spread across multiple machines, preventing any one from becoming overloaded.
Unlike replicated volumes, however, replicated databases do change frequently. Consistent system performance demands that all copies of the database always be identical, so it is not possible to record changes in only some of them. To synchronize the copies of a database, the database server processes use AFS's distributed database technology, Ubik. For instructions on the configuration required for optimum Ubik functioning, see Replicating the Administrative Databases.
If your cell has only one file server machine, it must also serve as a database server machine.
If your cell has two file server machines, it is not always advantageous to run both as database server machines. Replicating the databases is beneficial as long as both machines and all database server processes are functioning normally. However, if a server, process, or network failure interrupts communications between the database server processes on the two machines, it can become impossible to update the information in the database because neither of them can alone elect itself as the synchronization site.
For security reasons, grant access to the directories and files under the /usr/afs directory on a file server machine only to their owner, the local superuser root. The /usr/afs/etc/KeyFile file lists the AFS server encryption keys, so you do not want anyone attempting to read it except through the bos listkeys command. Similarly, the /usr/afs/etc/UserList file controls privilege for vos, bos, and backup commands, so you do not want anyone attempting to alter it except through the proper bos commands.
The suggested protections are:
Directory/File | Protections | Equivalent |
---|---|---|
/usr/afs | drwxr-xr-x | 755 |
/usr/afs/backup | drwx------ | 700 |
/usr/afs/bin | drwxr-xr-x | 755 |
/usr/afs/db | drwx------ | 700 |
/usr/afs/etc | drwxr-xr-x | 755 |
/usr/afs/etc/KeyFile | -rw------- | 600 |
/usr/afs/etc/UserList | -rw------- | 600 |
/usr/afs/local | drwx------ | 700 |
/usr/afs/logs | drwxr-xr-x | 755 |
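A sketch of applying these protections as the local superuser root (after verifying that root owns the directories and files) follows:

# chmod 755 /usr/afs /usr/afs/bin /usr/afs/etc /usr/afs/logs
# chmod 700 /usr/afs/backup /usr/afs/db /usr/afs/local
# chmod 600 /usr/afs/etc/KeyFile /usr/afs/etc/UserList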
It is recommended that you store the binaries for all AFS server processes in the /usr/afs/bin directory on every file server machine, even if some processes run on a limited number of machines. This makes it easier to reconfigure a machine to fill a new role.
For a description of the contents of all AFS directories on a file server machine, see Administering Server Machines.
The partitions that house AFS volumes on a file server machine must be mounted at directories named
/vicepindex
where index is one or two lowercase letters. By convention, the first AFS partition created is mounted at the /vicepa directory, the second at the /vicepb directory, and so on.
Note: Each /vicepx directory must correspond to an entire partition or local volume, and must be a subdirectory of the root directory ( / ). It is not acceptable to store AFS files in part of (for example) the /usr partition and create a directory called something like /usr/vicepa.
It is best not to store non-AFS files on the partitions corresponding to /vicepx directories. The File Server and Volume Server expect to have available all of the space on the partition, and sharing space also creates competition between AFS and the local UNIX file system for access to the partition, particularly if the UNIX files are frequently used.
By default, the BOS Server on each file server machine stops and immediately restarts all AFS server processes on the machine (including itself) once a week, at 4:00 a.m. on Sunday. This reduces the potential for the core leaks that can develop as any process runs for an extended time.
The BOS Server also checks each morning at 5:00 a.m. for any newly installed binary files in the /usr/afs/bin directory. It checks for files with time stamps later than the time at which the corresponding process last restarted. If it finds any new binaries, it restarts the corresponding process to start using them.
Restarting processes causes a service outage, so the default times are in the early morning hours when an outage is likely to disturb the fewest number of people. You can list the restart times on a per-machine basis with the bos getrestart command, and set them with the bos setrestart command. The latter command enables you to disable automatic restarts entirely, by setting the time to never.
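For example (the machine name and time are hypothetical), the following sketch displays a machine's restart times, moves the weekly general restart to Sunday at 3:00 a.m., and then disables it entirely:

% bos getrestart fs1.abc.com
% bos setrestart -server fs1.abc.com -time "sun 3:00" -general
% bos setrestart -server fs1.abc.com -time never -general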
For more information on automatic restarting, see Setting the BOS Server's Restart Times.
Rebooting a file server machine requires shutting down the AFS processes first and so inevitably causes a service outage. Reboot file server machines as infrequently as possible.
For instructions on rebooting file server machines, see Rebooting a Server Machine.
AFS comes with three main monitoring tools that run on client machines:
You can configure the scout and afsmonitor programs to alert you when certain threshold values are exceeded, for example when a server partition is more than 95% full.
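For instance (host names hypothetical), you might start the scout program against a set of file server machines, or the afsmonitor program against both File Servers and Cache Managers:

% scout -server fs1.abc.com fs2.abc.com
% afsmonitor -fshosts fs1.abc.com fs2.abc.com -cmhosts client1.abc.com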
See Monitoring and Auditing AFS Performance.
This section summarizes issues to consider as you install and configure client machines in your cell.
You can save disk space on the machine's local disk by storing in AFS those files that are normally kept on the machine's local disk. You then create a symbolic link on the local disk that refers to the AFS file. The @sys pathname variable can be useful in links to system-specific files; see Using the @sys Variable in Pathnames.
Two basic classes of files actually must reside on the local disk: boot sequence files needed before AFS is accessible during reboot (that is, before the afsd program is invoked), and files that can be helpful during file server machine outages.
Files Needed During Reboot
During a reboot, AFS is inaccessible until the afsd program executes and reinitializes the Cache Manager (normally the afsd program is invoked from the machine's initialization file, /etc/rc or its equivalent). Any files needed during reboot prior to that point must reside on the local disk.
Note: The following list is not necessarily exhaustive.
For more information on these files, see Configuration and Cache-Related Files on the Local Disk.
Diagnostic and Recovery Files
Certain commands in the fs and bos command suites can help users diagnose and recover from problems caused by a file server outage. It is useful to have local disk copies of the binaries for those suites, since a file server outage that requires their use can also make them inaccessible.
It is practical to store the bos and fs binary files in the /usr/vice/etc directory as well as the /usr/afsws directory, which is normally a link into AFS. Then set the PATH environment variable so that the /usr/afsws directory appears before the /usr/vice/etc directory. That way, the user accesses the copy in the /usr/afsws directory when the file servers are accessible; being in AFS, it is more likely to be current than a local copy.
It is also practical to keep the binaries for a text editor (such as ed or vi) on the local disk for use during outages.
The package program automates the configuration of the local disk on client machines, which can save a tremendous amount of time. It works by updating the contents of the local disk to match definitions found in a package configuration file. See Configuring Client Machines with the package program.
As detailed in Making Other Cells Visible in Your Cell, you enable the Cache Manager to access a cell's file tree by providing it a list of the cell's database server machines. The list is kept in the /usr/vice/etc/CellServDB file, and loads into kernel memory at reboot for easy retrieval by the Cache Manager. You can change the list of a cell's database server machines in kernel memory between reboots by using the fs newcell command.
The fact that access to cells relies on information on the local disk and in kernel memory means that the "view" of the AFS global namespace can differ from client to client in your cell. By including a cell in one machine's kernel list but not another's, you make that cell's file tree accessible to the former machine but not to the latter machine.
For the sake of consistency, make the same cells accessible from every client. This is particularly recommended in cells where users work on different machines each day. It is often practical to store a source version of the CellServDB file in AFS and use the package program periodically to update each client's version with the source copy. See Making Other Cells Visible in Your Cell and Maintaining Knowledge of Database Server Machines.
When constructing links into AFS on the local disk, it is practical to use the @sys variable in some pathnames. When the Cache Manager encounters the @sys variable in a pathname, it substitutes the local machine's CPU/operating system type for the @sys variable. For example, the Cache Manager on an IBM RS/6000 running AIX 4.2 performs the following translation
/afs/abc.com/@sys --> /afs/abc.com/rs_aix42
whereas a Sun Microsystems machine running Solaris 2.6 interprets the same pathname as
/afs/abc.com/@sys --> /afs/abc.com/sun4x_56
The Cache Manager learns the local machine's system type from a location in kernel memory. The fs sysname command displays the current value, and enables authorized users to change it. See Displaying and Setting the System Type Name.
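For example, on the machine from the previous translation the output resembles the following; an authorized user can reset the value with the -newsys argument:

% fs sysname
Current sysname is 'rs_aix42'
% fs sysname -newsys rs_aix42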
To use the @sys variable, you must use the standard names assigned in the AFS distribution when naming the directories where you store binaries, or use the fs sysname command to change the value from its default on every client machine of the relevant system type. The AFS Release Notes list the assigned system type names.
The advantage of the @sys variable is that you can place the same links on machines of different system types and still have each machine access the files appropriate to its system type. In pathnames in the AFS filespace itself, use the @sys variable carefully and sparingly, because it can lead to unexpected results as the Cache Manager traverses a pathname. Its use is most appropriate when it is difficult to predict the specific system type name. Specifying an actual system type in most pathnames is recommended. If you use the @sys variable, restrict its use to only one level in the file tree; the third level is a common choice, because that is where most cells store the binaries for different machine types.
Multiple instances of the @sys variable in a pathname are especially dangerous to people who must explicitly change directories with the cd command into directories storing binaries for system types different from the local machine (such as administrators or developers who maintain those directories). After changing directories, it is recommended that such people verify they are in the desired directory.
The Cache Manager stores a table of preferences for file server machines in the kernel of the client machine. A preference is a file server machine's IP address and an associated rank (an integer in the range from 1 to 65,534).
A file server machine's rank determines the Cache Manager's preference for selecting ReadOnly replicas that reside on that machine. The Cache Manager's preferences enable you to bias it to access replicas from file server machines situated "close" to the client machine instead of those replicas on "distant" machines.
The fs getserverprefs command can be used to display a Cache Manager's preferences. The fs setserverprefs command can be used to set a Cache Manager's preferences for one or more file server machines. See Setting Server Preference Ranks for more information about server preferences and the associated commands.
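For instance (the host name and rank are hypothetical), the following sketch displays the current preferences and then assigns a distant file server machine a higher rank, so that the Cache Manager favors replicas on lower-ranked, closer machines:

% fs getserverprefs
% fs setserverprefs -servers fs3.stateu.edu 60000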
This section discusses some of the issues to consider when configuring AFS user accounts. Because AFS is separate from the UNIX file system, an AFS account is separate from any UNIX account the user already has. An AFS user account has several components, which are listed in The Components of an AFS User Account. You can create them in one of two ways.
The preferred method for creating a user account is with the uss suite. With a single command, you can create all the components of one or many accounts, after you have prepared a template file that guides the account creation. See Creating and Deleting User Accounts with the uss Command Suite.
The other method is to issue the individual commands to create each component of the account. For instructions, along with instructions for removing user accounts and changing user passwords, storage quotas and usernames, see Administering User Accounts.
Below is a list of the components of an AFS user account. These components are described in greater detail in Overview of the uss Command Suite and The Components of an AFS User Account. The components include the following.
You can create accounts at different levels of functionality. The following list describes three common levels:
For instructions on creating accounts at each of these levels, using the uss command suite or individual commands, see Creating and Deleting User Accounts with the uss Command Suite and Administering User Accounts.
This section suggests schemes for choosing usernames, AFS UIDs, user volume names and mount point names, and also outlines some restrictions on your choices.
You can choose any naming scheme you wish for usernames. By convention, many components of the account share this name, including the entries in the Protection and Authentication Databases, the volume, and the mount point. Preferably, the name gives some indication of the user's identity (which often implies that you cannot allow users their own free choice of usernames). Depending on your electronic mail delivery system, the username can become part of the user's mailing address. The username is also the string that the user types when logging in to a client machine.
Some common choices for usernames are last names, first names, initials, first name with initial of last name, first initial with last name, or initials combined with sequential or randomly generated numbers.
Many utilities and applications can accommodate usernames of no more than eight characters. It is also best to avoid using the following characters, many of which have special meanings to the command shell.
AFS associates a unique identification number with every username: the AFS UID. The user's Protection Database entry records the mapping. The AFS UID functions within AFS much as the UNIX UID does in the local file system: the AFS server processes and the Cache Manager use it internally to identify a user, rather than the username.
Every AFS user also must have a UNIX UID recorded in the local password file (/etc/passwd or equivalent) of each client machine they log onto. Both administration and a user's AFS access are simplest if the AFS UID and UNIX UID match. One important consequence of matching UIDs is that the owner reported by the ls -l command matches the AFS username.
It is usually best to allow the Protection Server to allocate the AFS UID as it creates the Protection Database entry. However, both the pts createuser command and the uss commands that create user accounts enable you to assign AFS UIDs explicitly. This is appropriate in two cases:
After the Protection Server initializes for the first time on a cell's first file server machine, it starts assigning AFS UIDs at a default value. To change the default before creating any user accounts, or at any time, use the pts setmax command to reset the max user id counter. To display the counter, use the pts listmax command. See Displaying and Setting the AFS UID and GID Counters.
AFS reserves one AFS UID, 32766, for the user anonymous. The AFS server processes assign this identity and AFS UID to any user who does not possess a token. Do not assign this AFS UID to any other user or hardcode its current value into any programs or a file's owner field, because it is subject to change in future releases.
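As a sketch (the username and UID values are hypothetical), the following displays the counters, raises the max user id counter, and then creates one user with an explicitly assigned AFS UID that matches an existing UNIX UID:

% pts listmax
% pts setmax -user 1000
% pts createuser -name pat -id 1045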
Like any volume name, a user volume's base (ReadWrite) name cannot exceed 22 characters in length or include the .readonly or .backup extension. See Restrictions on Volume Names.
By convention, user volume names have the format user.username. Using the user. prefix not only makes it easy to identify the volume's contents, but also to back up all user volumes together; see Using Prefixes on Related Volumes.
By convention, the mount point for a user's volume is the username. Many cells follow the convention of mounting user volumes in the /afs/cellname/usr directory, as discussed in The Third Level. Very large cells sometimes find that mounting all user volumes in the same directory slows directory lookup, however; for suggested alternatives, see Grouping Home Directories.
Many cells have UNIX user accounts that predate the introduction of AFS in the cell, and wish to convert these account into AFS accounts. There are three main issues to address when performing such conversions:
Because the uss command enables you to assign explicit AFS UIDs, it is possible to use that program when converting accounts; see Converting Existing UNIX Accounts with uss. Manual conversion of accounts is explained in Converting Existing UNIX Accounts.
The suggested directory for mounting user volumes is the /afs/cellname/usr directory, an AFS-appropriate variation on the standard UNIX practice of putting user home directories under the /usr subdirectory. However, cells with a large number of users (that is, more than several hundred) sometimes find that mounting all user volumes in a single directory slows directory lookup. The solution is to distribute user volume mount points into several directories; there are a number of alternative methods to accomplish this.
For instructions on how to implement the various schemes when using the uss program to create user accounts, see Evenly Distributing User Home Directories with the G Instruction and Creating a Volume with the V Instruction.
AFS provides a simple mechanism that enables users themselves to restore data they have accidentally removed or deleted: the backup version of a volume. You create a backup version of the user's volume and mount it at a subdirectory in their home directory (called perhaps the OldFiles subdirectory). At the end of every day you create a new backup version to capture the changes made that day, and overwrite the previous day's backup version with the new one. Users can always retrieve the previous day's copy of a file without your assistance, leaving you with the time to deal with more pressing tasks.
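A sketch of the procedure, using hypothetical names: create the backup version, mount it once in the user's home directory, and thereafter recreate it each night (the vos backupsys command recreates the backup version of every volume whose name begins with a given prefix):

% vos backup user.pat
% fs mkmount /afs/abc.com/usr/pat/OldFiles user.pat.backup
% vos backupsys -prefix user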
Note that the data in a Backup Volume does not count against the user volume's quota, because it is a separate volume. The only space the Backup Volume uses in the user volume is the amount needed for the mount point.
For further discussion of Backup Volumes, see Backing Up AFS Data and Creating Backup Volumes.
From your experience as a UNIX administrator, you are probably familiar with the use of login and shell initialization files (such as the .login and .cshrc files) to make an account easier to use.
It is practical to add some AFS-specific directories to the definition of the user's PATH environment variable, including the following:
If you are not using an AFS-modified login utility, it can be helpful to users to invoke the klog command in their .login file so that they obtain AFS tokens as part of logging in. In the following example command sequence, the first line echoes the string klog to the standard output stream, so that the user understands the purpose of the Password: prompt that appears when the second line is executed. The -setpag flag associates the new tokens with a process authentication group (PAG), which is discussed further in Using an AFS-modified login Utility.
echo -n "klog " klog -setpag
The following sequence of commands has a similar effect, except that the pagsh command forks a new shell with which the PAG and tokens are associated.
pagsh
echo -n "klog "
klog
If you use an AFS-modified login utility, this sequence is not necessary, because such utilities both log a user in locally and obtain AFS tokens.
When users leave your system, you can remove their accounts to free up space in your file tree. There is no uss command for removing user accounts, so you must remove account components individually. Instructions appear in Removing a User Account.
AFS enables users to define their own groups of other users. The groups are placed on ACLs to grant the same permissions to many users without listing each user individually. The creation and use of groups is discussed in Administering the Protection Database. This section summarizes some of the issues relevant to groups.
Groups have AFS UIDs, just as users do, but a group AFS UID is a negative integer whereas a user AFS UID is a positive integer. By default, the Protection Server allocates the AFS UID for a new group, but the pts creategroup command enables members of the system:administrators group to assign AFS UIDs explicitly if desired. As with user UIDs, it is preferable to let the Protection Server allocate group UIDs itself. Before explicitly assigning a group UID, verify that it is not already in use.
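A brief sketch, using a hypothetical group name and AFS UID: the pts examine command reports an error if the proposed AFS UID is not yet assigned, and the pts creategroup command's -id argument then assigns it explicitly.
   pts examine -nameorid -286
   pts creategroup -name smith:testers -id -286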
As the Protection Server initializes for the first time on a cell's first database server machine, it automatically creates and assigns AFS UIDs to three group entries: the system:anyuser, system:authuser, and system:administrators groups. It then sets the max group id counter it uses when allocating group AFS UIDs to a value beyond (that is, more negative than) these AFS UIDs, so that automatically allocated UIDs cannot conflict with them.
The first two system groups are unlike any other groups in the Protection Database in that they do not have a stable membership:
An implication of the lack of stable membership is that it is not possible to list the system:anyuser and system:authuser groups' current members. Similarly, these groups do not appear when the pts membership command is used to list the groups to which a user belongs.
The system:administrators group does have a stable membership, consisting of the cell's privileged administrators. Members of this group can issue any pts command, and are the only ones who can issue several other restricted commands (such as the chown command on AFS files). By default, they also implicitly have the administer (a) and lookup (l) permissions on every ACL in the file tree even if the ACL does not include an entry for them. For information about changing this default, see Administering the system:administrators Group.
For instructions on effectively using the system groups on ACLs, see Using Groups on ACLs.
All users can create "regular" groups. The name of these groups has two fields separated by a colon, the first of which must accurately reflect the group's ownership. The Protection Server refuses to create or change the name of a group if the result does not accurately indicate the ownership. The syntax is as follows:
owner_name:group_name
Members of the system:administrators group can create prefix-less groups whose names do not have the first, owner_name field. For suggestions on using the two types of groups effectively, see Using Groups Effectively.
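For example (with hypothetical names), the user pat can create a regular group whose owner_name field is pat, while only a member of the system:administrators group can create a prefix-less group:
   pts creategroup pat:graphics
   pts creategroup staff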
The practical limit on group size is 5000 members, because the pts membership command that lists group membership cannot display any more than that.
Groups cannot be members of other groups, but groups can own other groups. A group must already have at least one member in order to own another group.
By default, each user can create 20 groups. A system administrator can increase or decrease this group creation quota with the pts setfields command.
Each Protection Database entry (group or user) is protected by a set of five "privacy flags" that limit who can administer the entry and what they can do. The default privacy flags are fairly restrictive, especially for user entries. See Setting the Privacy Flags on Database Entries.
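Both of these settings are controlled with the pts setfields command. The following sketch uses hypothetical names and illustrative values; see the referenced sections for the meaning of each position in the privacy flag string.
   pts setfields pat -groupquota 40
   pts setfields pat:graphics -access SOMar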
The current owner of a group can transfer ownership of the group to another user without the new owner's permission. At that point the former owner loses administrative control over the group. No matter how many times ownership is transferred, a group's existence is always counted against the group creation quota of its creator. When the group is removed from the Protection Database, its creator's quota increases by one.
As explained in Differences in Authentication, AFS authentication is separate from UNIX authentication because the two file systems are separate. The separation has two practical implications:
The AFS distribution includes library or binary files that modify each system type's login utility to authenticate users with AFS and log them into the local file system in one step. If you do not configure an AFS-modified login utility on a client machine, its users must issue the klog command to authenticate with AFS after logging in.
Note: | The AFS-modified libraries and binaries do not necessarily support all features available in the proprietary login utilities available with each operating system. In some cases, it is not possible to support a utility at all. For more information about the supported utilities in each AFS version, see the AFS Release Notes. |
Several users can log into an AFS client machine at the same time and obtain separate AFS tokens. To make sure that each user accesses AFS with the proper authorization, the Cache Manager needs a way to track which token belongs to which user. (An AFS token is a small collection of data that the AFS Authentication Server grants to users who have proved their identity by providing the correct AFS password. AFS server processes require their clients to present a token when requesting service, and use it to establish that the user is genuine. To review this mutual authentication procedure, see A More Detailed Look at Mutual Authentication.)
The Cache Manager stores tokens in a separate credential structure in kernel memory for each user who is currently logged into the machine. The Cache Manager can associate a user's credential structure either with the user's UNIX UID or with a process authentication group (PAG). Using a PAG is preferable because it is guaranteed to be unique: the Cache Manager allocates it based on a counter that increments with each use. In contrast, multiple users on a machine can share or assume the same UNIX UID, which creates potential security problems. Two common situations are the following:
Yet another advantage of PAGs over UIDs is that processes spawned by the user inherit the PAG and so share the token; thus they gain access to AFS as the authenticated user. In many environments, for example, printer and other daemons run under identities (such as the local superuser root) that the AFS server processes recognize only as the anonymous user. Unless PAGs are used, such daemons cannot access files for which the system:anyuser group does not have the necessary ACL permissions.
Once a user has a PAG, any new tokens the user obtains are associated with the PAG. The PAG expires two hours after any associated tokens expire or are discarded. If the user issues the klog command before the PAG expires, the new token is associated with the existing PAG, and the PAG is "recycled."
AFS-modified login utilities automatically generate a PAG, as described in Using an AFS-modified login Utility. If you use a standard login utility, then your users must include the -setpag flag to the klog command to generate a PAG. For instructions, see Not Using an AFS-modified login Utility.
As previously mentioned, an AFS-modified login utility simultaneously obtains an AFS token and logs the user into the local file system. This section outlines the login and authentication process and its interaction with the value in the password field of the local password file.
An AFS-modified login utility performs a sequence of steps similar to the following; details can vary for different operating systems:
AFS(R) version Login
As indicated, when you use an AFS-modified login utility, the password field in the local password file is no longer the primary gate for access to your system. If the user provides the correct AFS password, then the program never consults the local password file. However, you can still use the password field to control access, in the following way:
Systems that use a Pluggable Authentication Module (PAM) for login and AFS authentication do not necessarily consult the local password file at all, in which case they do not use the password field to control authentication and login attempts. Instead, instructions in the PAM configuration file (on many system types, /etc/pam.conf) fill the same function. See the instructions in the AFS Installation Guide for installing AFS-modified login utilities.
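As an illustration only (the module path, file name, and available options vary by system type and AFS version, so follow the AFS Installation Guide rather than this sketch), entries for the login service in a Solaris-style /etc/pam.conf might resemble the following:
   login  auth  sufficient  /usr/lib/security/pam_afs.so  try_first_pass ignore_root
   login  auth  required    /usr/lib/security/pam_unix.so.1  try_first_pass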
In cells that do not use an AFS-modified login utility, users must issue separate commands to login and authenticate:
The AFS User's Guide provides complete instructions.
As mentioned in Creating Standard Files in New AFS Accounts, you can invoke the klog -setpag command in a user's .login file (or equivalent) so that the user does not have to remember to issue the command after logging in. The user still must type a password twice, once at the prompt generated by the login utility and once at the klog command's prompt. This implies that the two passwords can differ, but it is less confusing if they do not.
Another effect of not using an AFS-modified login utility is that the AFS servers recognize the standard login program as the anonymous user. If the login program needs to access any AFS files (such as the .login file in a user's home directory), then the ACL that protects the file must include an entry granting the l (lookup) and r (read) permissions to the system:anyuser group.
When you do not use an AFS-modified login utility, an actual (scrambled) password must appear in the local password file for each user. Use the /bin/passwd command to insert or change these passwords. It is simpler if the password in the local password file matches the AFS password, but that is not required.
Unscrupulous users can try to gain access to your AFS cell by guessing an authorized user's password. To protect against this type of attack, you can limit the number of times that a user can consecutively fail to provide the correct password during an authentication attempt. When the limit is exceeded, the Authentication Server refuses further authentication attempts for a specified period of time (the lockout time). To reenable authentication attempts before the lockout time expires, an administrator must issue the kas unlock command.
Use the kas setfields command to set the limit on the number of failed authentication attempts and the lockout time, as described in Improving Password and Authentication Security.
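For example (hypothetical user name and illustrative values), the following command limits the user smith to six consecutive failed authentication attempts and imposes a lockout time of 25 minutes:
   kas setfields smith -attempts 6 -locktime 25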
A regular AFS user can change his or her password with either the kpasswd or the kas setpassword command. To change the password, the user must prove his or her identity by typing the current password, and is then prompted to enter the new password twice (to screen out typing errors).
A system administrator can change any user's password; an administrator typically uses the kas setpassword command for this purpose.
Note: | Neither the kpasswd command nor the kas setpassword command affects the local password file. Use the /bin/passwd command instead. |
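For illustration (hypothetical user name): a user changes his or her own password with the kpasswd command, which prompts for the current password and then for the new password twice, while an administrator sets another user's password with the kas setpassword command.
   kpasswd
   kas setpassword -name smith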
By default, passwords never expire: a user can go on using the same password without ever having to change it. For security reasons, however, it is practical to require users to change their passwords after a given period of time.
The kas setfields command enables you to specify a lifetime for a user's password beyond which the password becomes invalid. The password lifetime can range between 1 and 254 days.
Once a user's password expires, the user cannot authenticate. However, for up to thirty days after the expiration of a password, a user can still use that password to change passwords. Beyond that, a system administrator must change the user's password.
For instructions on setting password lifetimes, see Improving Password and Authentication Security. Consult the AFS Command Reference Manual for detailed information on the kas setfields and kas unlock commands.
Using the same password for long periods of time makes a user's account ever more vulnerable to unauthorized entry. You can prevent users from reusing recently used passwords with the kas setfields command.
For instructions on restricting password reuse, see Improving Password and Authentication Security.
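Both the password lifetime and the reuse restriction are set with the kas setfields command. In the following sketch (hypothetical user name, illustrative lifetime), the password for the user smith expires after 90 days and recently used passwords cannot be reused:
   kas setfields smith -pwexpires 90 -reuse no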
From a security point of view, some passwords are better than others. You can automatically check the quality of every new user password created with either the kpasswd or the kas setpassword command. The quality check is made by a program called kpwvalid which must reside in the same directory as the kpasswd and kas setpassword programs.
You can create your own kpwvalid program or use the version provided in the AFS distribution. See the kpwvalid reference page in the AFS Command Reference Manual.
If you create a custom kpwvalid program or shell script, the types of quality checks made are left to your discretion. Following are some suggested checks:
For instructions on other ways to improve user account security, see Improving Password and Authentication Security.
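The following is a sketch only, written as a Bourne shell script; it assumes, for illustration, that the proposed password arrives as the last line on the program's standard input and that a nonzero exit status rejects it. Consult the kpwvalid reference page for the actual calling convention before writing your own.
   #!/bin/sh
   # Illustrative kpwvalid sketch: reject passwords shorter than 8 characters.
   while read line; do
       password="$line"
   done
   if [ `echo "$password" | wc -c` -le 8 ]; then
       echo "Password must be at least 8 characters long."
       exit 1
   fi
   exit 0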
To create a new process authentication group (PAG) with which to associate your credential structure, issue the pagsh command before the klog command, or include the -setpag flag on the klog command.
The difference between the two commands is that the pagsh command initializes a new command shell along with a new PAG. If you already had a PAG, then any processes or jobs that are already running continue to use the tokens associated with the old PAG. Any jobs or processes that start after the new PAG is created use the new PAG and its associated tokens. When you exit the new shell (by pressing <Ctrl-d>, for example), then you return to the original PAG and shell. By default, the pagsh command initializes a Bourne shell, but you can include the -c argument to initialize a C shell (the /bin/csh file on many system types) or Korn shell (the /bin/ksh file) instead.
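For example, to create a new PAG together with a C shell rather than the default Bourne shell:
   pagsh -c /bin/csh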
For further discussion, see Using PAGs to Identify AFS Tokens and the pagsh and klog commands reference pages in the AFS Command Reference Manual.
A user can have only one token at a time per machine per PAG for a given cell. To obtain a second token for the same cell, the user must either log into a different machine or create a new PAG. It is, however, possible for one machine and PAG to hold tokens for many different cells (one token per cell). As this implies, authentication status on one machine or PAG is independent of authentication status on another machine or PAG, which can be very useful to a user or system administrator.
Once logged in, a user can obtain a token at any time with the klog command. If a valid token already exists, the new one overwrites it. If a PAG already exists, the new token is associated with it.
By default, the klog command authenticates the user using his or her login name. The -principal argument enables the issuer to adopt a different identity (if that identity's password is known). The -cell argument enables authentication in other cells and can be combined with the -principal argument.
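For example (hypothetical identity and cell), a user who knows the appropriate password can obtain tokens as the user admin in the stateu.edu cell:
   klog -principal admin -cell stateu.edu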
For further information, see the AFS User's Guide and the entry for the klog command in the AFS Command Reference Manual.
The tokens command lists all of the tokens currently held by the Cache Manager on the local machine. The following examples illustrate its output in various situations.
The following shows the output when the issuer is not authenticated in any cell.
   % tokens
   Tokens held by the Cache Manager:
      --End of list--
The following shows the output for a user with AFS UID 1000 in the ABC Corporation cell:
   % tokens
   Tokens held by the Cache Manager:
   User's (AFS ID 1000) tokens for afs@abc.com [Expires Jun 2 10:00]
      --End of list--
The following shows the output for a user who is authenticated in ABC Corporation cell, the State University cell and the DEF Company cell. The user has different AFS UIDs in the three cells. Tokens for the last cell are expired:
   % tokens
   Tokens held by the Cache Manager:
   User's (AFS ID 1000) tokens for afs@abc.com [Expires Jun 2 10:00]
   User's (AFS ID 4286) tokens for afs@stateu.edu [Expires Jun 3 1:34]
   User's (AFS ID 22) tokens for afs@def.com [>>Expired<<]
      --End of list--
Note: | If you issue the Kerberos version of the tokens command (the tokens.krb command), the output also includes information on the ticket-granting ticket, including the ticket's owner, the ticket-granting service, and the expiration date. Following is an example:
   % tokens.krb
   Tokens held by the Cache Manager:
   User's (AFS ID 1000) tokens for afs@abc.com [Expires Jun 2 10:00]
   User smith's tokens for krbtgt.ABC.COM@abc.com [Expires Jun 2 10:00]
      --End of list-- |
Also see the AFS User's Guide and the tokens reference page in the AFS Command Reference Manual. For more information on using Kerberos authentication, see section Support for Kerberos Authentication.
The unlog command discards tokens currently held for the issuer by the local machine's Cache Manager. You can discard all tokens or only the tokens for specified cells. For further information, see the AFS User's Guide and the entry for the unlog command in the AFS Command Reference Manual.
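For example, to discard only the tokens for the stateu.edu cell while retaining tokens for other cells:
   unlog -cell stateu.edu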
Note: | Since tokens are granted on a per-machine basis, destroying your tokens on one machine has no effect on tokens on another machine. |
The maximum lifetime of a user token is the smallest of the Max ticket lifetime values recorded in the following three Authentication Database entries. Administrators who have the ADMIN flag on their Authentication Database entry can use the kas examine command to display the entries, and the kas setfields command to set the maximum ticket lifetime.
Note: | An AFS-modified login utility always grants a token with a lifetime calculated from the previously described three values. When issuing the klog command, a user can request a lifetime shorter than the default by using the -lifetime argument. For further information, see the AFS User's Guide and the klog reference page in the AFS Command Reference Manual. |
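A sketch with a hypothetical entry name and an illustrative value: the kas examine command displays an entry's Max ticket lifetime, and the kas setfields command changes it (here to 100 hours, expressed as hours and minutes).
   kas examine smith
   kas setfields smith -lifetime 100:00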
If your site is using standard Kerberos authentication rather than the AFS Authentication Server, use the modified versions of the klog, pagsh, and tokens commands that support Kerberos authentication. The binaries for the modified version of these commands have the same name as the standard binaries with the addition of a .krb extension.
Use either the Kerberos or the standard versions of these commands on all machines in the cell; do not mix the two versions. AFS Product Support can provide instructions on installing the Kerberos versions of these commands. For information on the differences between the two versions, see the AFS Command Reference Manual.
AFS incorporates several features to ensure that only authorized users gain access to data. This section summarizes the most important of them and suggests methods for improving security in your cell.
ACLs on Directories
Files in AFS are protected by the access control list (ACL) associated with their parent directory. The ACL defines which users or groups can access the data in the directory, and in what way. To learn more, see Managing Access Control Lists.
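For example (hypothetical path and group name), the fs listacl command displays a directory's ACL and the fs setacl command grants the group pat:graphics the read and lookup permissions:
   fs listacl /afs/abc.com/usr/pat
   fs setacl -dir /afs/abc.com/usr/pat -acl pat:graphics rl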
Mutual Authentication Between Client and Server
When an AFS client and server process communicate, each requires the other to prove its identity during mutual authentication, which involves the exchange of encrypted information that only valid parties can decrypt and respond to. For a detailed description of the mutual authentication process, see A More Detailed Look at Mutual Authentication.
AFS server processes mutually authenticate both with one another and with processes that represent human users. After mutual authentication is complete, the server and client have established an authenticated connection, across which they can communicate repeatedly without having to authenticate again until the connection expires or one of the parties closes it. Authenticated connections have varying lifetimes.
Tokens
In order to access AFS files, users must prove their identities to the AFS Authentication Server by providing the correct AFS password. If the password is correct, the Authentication Server sends the user a token as evidence of authenticated status. For most functions, the AFS server processes require that their clients present a token with a request for service. The token contains encryption keys used in the mutual authentication process.
Servers assign the user identity anonymous to users and processes that do not have a valid token. The anonymous identity has only the access granted to the system:anyuser group on ACLs.
Authorization Checking
Mutual authentication establishes that two parties communicating with one another are actually who they claim to be. For many functions, AFS server processes also check that the client whose identity they have verified is also authorized to make the request. Different requests require different kinds of privilege. See Three Types of Privilege.
Encrypted Network Communications
The AFS server processes encrypt particularly sensitive information before sending it back to clients. Even if an unauthorized party is able to eavesdrop on an authenticated connection, they cannot decipher encrypted data without the proper key.
AFS commands that involve server encryption keys and passwords use data encryption. These include the following:
In addition, the United States edition of the Update Server encrypts sensitive information (such as the contents of KeyFile) when distributing it.
The remaining bos commands and the commands in the fs, pts and vos suites do not encrypt data before transmitting it.
AFS uses three separate types of privilege:
For a discussion of why AFS uses three types of privilege, see The Reason for Separate Privileges.
AFS distinguishes between authentication and authorization checking. Authentication refers to the process of proving identity. Authorization checking refers to the process of verifying that an authenticated identity is allowed to perform a certain action.
AFS implements authentication at the level of connections. Each time two parties establish a new connection, they mutually authenticate. In general, each issue of an AFS command establishes a new connection between AFS server process and client.
AFS implements authorization checking at the level of file server machines. If authorization checking is enabled on a file server machine, then all of the server processes running on it provide services only to authorized users. If authorization checking is disabled on a file server machine, then all of the server processes perform any action for anyone. Obviously, disabling authorization checking is very dangerous.
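Authorization checking is controlled per file server machine with the bos setauth command; the following example uses a hypothetical machine name (turning checking off is almost never appropriate outside of initial installation):
   bos setauth -server fs1.abc.com -authrequired on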
For more information, see Managing Authentication and Authorization Requirements.
You can improve the level of security in your cell by configuring user accounts, file server machines, and system administrator accounts in the indicated way.
Users
File Server Machines
System Administrators
As in any file system, security is a prime concern in AFS. A file system that makes file sharing easy is not useful if it makes file sharing mandatory, so AFS incorporates several features that prevent unauthorized users from accessing data. Security in a networked environment is difficult because almost all procedures require transmission of information across wires that almost anyone can tap into. Also, many machines on networks are powerful enough to allow unscrupulous users to monitor others' transactions, or worse, break into them and fake the identity of one of the participants.
The most effective precaution against eavesdropping and information theft or fakery is to have servers and clients mutually authenticate, or prove their identities to one another, before trusting that the other party in the transaction is really what it claims to be. In other words, the nature of the network forces all parties on the network to be "paranoid" or "mutually suspicious," assuming that their partner in a transaction is not genuine until proven so. Mutual authentication is the means through which parties prove their genuineness.
Because the measures needed to prevent fakery must be quite sophisticated, the implementation of mutual authentication procedures is complex. The underlying concept is simple, however: parties prove their identities to one another by demonstrating that they possess a shared secret. A shared secret is a piece of information known only to the parties who are mutually authenticating (they can sometimes learn it in the first place from a trusted third party or some other source). In establishing a connection, one of the processes "presents" the shared secret for the other party to identify, and refuses to accept the other party as valid until it shows that it knows the secret too.
The most common form of shared secret in AFS transactions is the encryption key, also referred to simply as a key. The two parties in a transaction use their shared key to encrypt the "packets" of information they send and to decrypt the ones they receive. Keys actually serve two related purposes in mutual authentication. First, they protect messages as they cross the network by preventing anyone who does not know the key from eavesdropping on the transaction, and using the knowledge to masquerade as client or server. Second, knowledge of the key serves as the main proof of identity (the shared secret). The client and server's ability to encrypt and decrypt messages with it indicates that they share the same key. If they are using two different keys, messages remain scrambled and unintelligible after decryption.
The following sections describe the mutual authentication procedures that AFS server and client processes use to prove their identities to one another. Feel free to skip these sections if you are not interested in the mutual authentication process.
Simple mutual authentication involves only one encryption key and two parties, generally a client and server. The client contacts the server by sending a "challenge" message encrypted with a key known only to the two of them. The server decrypts the message using its key, which is the same as the client's if they really do share the same secret. The server responds to the challenge and uses its key to encrypt its response. The client uses its key to decrypt the server's response, and if it is correct, then the client can be sure that the server is genuine: only someone who knows the same key as the client can decrypt the challenge and answer it correctly. On its side, the server concludes that the client is genuine because the challenge message made sense when the server decrypted it. If the client uses a different key than the server, the challenge message remains scrambled and unintelligible even after decryption.
AFS uses simple mutual authentication to verify user identities during the first part of the login procedure. In that case, the key is based on the user's password.
Complex mutual authentication involves three encryption keys and three parties. All secure AFS transactions (except the first part of the login process) employ complex mutual authentication.
When a client wishes to communicate with a server, it first contacts a third party called a ticket-granter. The ticket-granter and the client mutually authenticate using the simple procedure. When they finish, the ticket-granter gives the client a server ticket (or simply ticket) as proof that it (the ticket-granter) has preverified the identity of the client. The ticket-granter encrypts the ticket with the first of the three keys, called the server encryption key (because it is known only to the ticket-granter and the server the client wants to contact). The server encryption key is a shared secret between the ticket-granter and the server; the client does not know the secret.
The ticket-granter sends several other items to the client with the ticket. These items tell the client important facts about the ticket, and are necessary because the client itself cannot read the ticket. Together with the ticket, all of these items make up a token:
The ticket-granter seals the entire token with the third key involved in complex mutual authentication, the key known only to it (the ticket-granter) and the client. In some cases, this third key is derived from the password of the human user whom the client represents.
Now that the client has a valid server ticket, it is ready to contact the server. It sends the server two things:
At this point, the server does not know the session key, because the ticket-granter just created it. However, the ticket-granter put a copy of the session key inside the ticket. The server decrypts the ticket (using the server encryption key) and learns the session key. It then uses the session key to decrypt the client's request message. It generates a response and sends it to the client. It encrypts the response with the session key to protect it as it crosses the network.
This step is the heart of mutual authentication between client and server, because it proves to both parties that they know the same secret:
(Note that there is no direct communication between the ticket-granter and the server, even though their relationship is central to ticket-based mutual authentication. They interact only indirectly, via the client's possession of a ticket sealed with their shared secret.)
AFS provides two related facilities that help the administrator back up AFS data: Backup volumes and the AFS Backup System.
The first facility is the Backup volume, which you create by cloning a ReadWrite volume. The Backup volume is read-only and so preserves the state of the ReadWrite volume at the time the clone is made.
Backup volumes can ease administration if you mount them in the file system and make their contents available to users. For example, it often makes sense to mount the Backup version of each user volume as a subdirectory of the user's home directory. A conventional name for this mount point is OldFiles. Create a new version of the Backup volume (that is, reclone the ReadWrite) once a day to capture any changes that were made since the previous backup. If a user accidentally removes or changes data, the user can restore it from the Backup volume, rather than having to ask you to restore it.
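One common scheme, sketched here with hypothetical names and paths, uses the vos backupsys command in a nightly cron job on a file server machine to reclone every volume whose name begins with user.; the -localauth flag lets the unattended job authenticate with the server encryption key.
   0 3 * * *  /usr/afs/bin/vos backupsys -prefix user. -localauth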
The AFS User's Guide does not mention Backup volumes, so regular users do not know about them unless you tell them. This implies that if you do make Backup versions of user volumes, you need to tell your users how the Backup volume works and where you have mounted it.
Users are often concerned that the data in a Backup volume counts against their volume quota, and some even want to remove the OldFiles mount point for that reason. It does not count, because the Backup volume is a separate volume. The only space it uses in the user's volume is the amount needed for the mount point, which is about the same as the amount needed for a standard directory element.
Backup volumes are discussed in detail in Creating Backup Volumes.
Backup volumes can reduce restoration requests, but they reside on disk and so do not protect data from loss due to hardware failure. Like any file system, AFS is vulnerable to this sort of data loss.
To protect your cell's users from permanent loss of data, you are strongly urged to back up your file system to tape on a regular and frequent schedule. The AFS Backup System is available to ease the administration and performance of backups. For detailed information about the AFS Backup System, see Configuring the AFS Backup System and Backing Up and Restoring AFS Data.
The AFS distribution includes modified versions of several standard UNIX commands, daemons and programs that provide remote services, including the following:
These modifications enable the commands to handle AFS authentication information (tokens). This enables issuers to be recognized on the remote machine as an authenticated AFS user.
Replacing the standard versions of these programs in your file tree with the AFS-modified versions is optional. It is likely that AFS's transparent access reduces the need for some of the programs anyway, especially those involved in transferring files from machine to machine, like the ftpd and rcp programs.
Of all the commands, the AFS-modified version of the login utility is most likely to increase the usability of AFS in your cell. It authenticates users with AFS and logs them into the local UNIX file system in one step. If you do not use an AFS-modified login utility, your users must issue additional AFS authentication commands after logging in. However, the AFS-modified login utility does not necessarily include all of the features available in some proprietary versions of the utility. For more information, see Using an AFS-modified login Utility.
If you decide to use the AFS versions of these commands, be aware that several of them are interdependent. For example, the passing of AFS authentication information works correctly with the rcp command only if you are using the AFS version of both the rcp and inetd commands.
To learn more about the added functionality provided by the AFS versions of these commands, see the chapter on Modified UNIX Commands in the AFS Command Reference Manual. Restrictions on and requirements for their use also appear in that chapter.
The AFS distribution tape includes these modified remote service commands in the /usr/afsws/bin and /usr/afsws/etc directories.
Users of NFS client machines can access AFS files by mounting the /afs directory of an AFS client machine that is running the NFS/AFS Translator. This is a particular advantage for cells already running NFS that want to access AFS from client machines of unsupported system types.
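From the NFS client's point of view this is an ordinary NFS mount of the translator machine's /afs directory. A sketch with a hypothetical translator machine name (the available mount options depend on the NFS implementation):
   mount -o hard,intr translator.abc.com:/afs /afs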
The NFS/AFS Translator is a separately licensed product. Contact your AFS Sales representative.