Installation Guide




Installing the First AFS Machine

This chapter describes how to install the first AFS machine in your cell, setting it up as both a file server machine and a client machine. After completing all procedures in this chapter, you can remove the client functionality if you wish, as described in Removing Client Functionality.

To install additional file server machines after completing this chapter, see Installing Additional Server Machines.

To install additional client machines after completing this chapter, see Installing Additional Client Machines.


Requirements and Configuration Decisions

The instructions in this chapter assume that you and the machine you are installing meet the following requirements.

During the installation, you must make the following decisions about how to configure your cell and the first machine. The chapter in the AFS System Administrator's Guide about issues in cell administration and configuration provides more detailed guidelines.


How to Use This Chapter

This chapter is divided into three large sections corresponding to the three parts of installing the first AFS machine in your cell. Perform all of the steps in the order they appear. Each functional section begins with a summary of the procedures to perform. The sections are as follows:


Overview: Installing File Server Functionality

In the first phase of installing your cell's first AFS machine, you install file server machine functionality by performing the following procedures:

  1. Choose which machine to install as the first AFS machine

  2. Create AFS-related directories on the local disk

  3. Incorporate AFS modifications into the machine's kernel

  4. Configure partitions for storing volumes

  5. Replace the standard fsck program with an AFS-modified version on some system types

  6. Start the Basic OverSeer (BOS) Server

  7. Define the cell name and the machine's cell membership

  8. Start the database server processes: Authentication Server, Backup Server, Protection Server, and Volume Location (VL) Server

  9. Configure initial security mechanisms

  10. Start the fs process, which incorporates three component processes: the File Server, Volume Server, and Salvager

  11. Start the server portion of the Update Server

  12. Start the controller process (called runntp) for the Network Time Protocol Daemon, which synchronizes clocks

Choosing the First AFS Machine

The first AFS machine you install must have sufficient disk space to store AFS volumes. To take best advantage of AFS's capabilities, store client-side binaries as well as user files in volumes. When you later install additional file server machines in your cell, you can distribute these volumes among the different machines as you see fit.

These instructions configure the first AFS machine as a database server machine, the binary distribution machine for its system type, and (if you are using the United States edition of AFS) the cell's system control machine. For a description of these roles, see the AFS System Administrator's Guide.

Installation of additional machines is simplest if the first machine has the lowest IP address of any database server machine you currently plan to install. If you later install database server functionality on a machine with a lower IP address, you must first update the /usr/vice/etc/CellServDB file on all of your cell's client machines. For more details, see Installing Database Server Functionality.


Creating AFS Directories

Create the /usr/afs and /usr/vice/etc directories on the local disk, to house server and client files respectively. Subsequent instructions copy files from the AFS distribution CD-ROM into them, at the appropriate point for each system type.

      
   # mkdir /usr/afs
      
   # mkdir /usr/vice
      
   # mkdir /usr/vice/etc
   
   # mkdir /cdrom 
     

Three of the initial procedures in installing a file server machine vary a good deal from platform to platform: incorporating AFS into the kernel, configuring partitions for storing AFS volumes, and (on some system types) replacing the standard fsck program. For convenience, the following sections group together all three of the procedures for a system type. Most of the remaining procedures are the same on every system type, but differences are noted as appropriate.

To continue, proceed to the Getting Started section for your system type: AIX, Digital UNIX, HP-UX, IRIX, Linux, or Solaris.


Getting Started on AIX Systems

Begin by running the AFS initialization script to call the AIX kernel extension facility, which dynamically loads AFS modifications into the kernel. Then use the SMIT program to create partitions for storing AFS volumes, and replace the AIX fsck program with a version that correctly handles AFS volumes.

Loading AFS into the Kernel on AIX Systems

The AIX kernel extension facility is the dynamic kernel loader provided by IBM Corporation for AIX. AIX does not support building AFS modifications into a static kernel.

For AFS to function correctly, the kernel extension facility must run each time the machine reboots. The simplest way to guarantee this is to invoke the facility in the machine's AFS initialization file. In the following instructions you edit the rc.afs initialization script provided in the AFS distribution, selecting the appropriate options depending on whether NFS is also to run.

After editing the script, you run it to incorporate AFS into the kernel. In later sections you verify that the script correctly initializes all AFS components, then create an entry in the AIX inittab file so that the script runs automatically at reboot.

  1. Mount the AFS CD-ROM labeled AFS for AIX, International Edition on the local /cdrom directory. For instructions on mounting CD-ROMs (either locally or remotely via NFS), see your AIX documentation.

  2. Copy the AFS kernel library files from the CD-ROM to the local /usr/vice/etc/dkload directory, and the AFS initialization script to the /etc directory.
       # cd  /cdrom/rs_aix42/root.client/usr/vice/etc
       
       # cp -rp  dkload  /usr/vice/etc
       
       # cp -p  rc.afs  /etc/rc.afs
        
    

  3. Edit the /etc/rc.afs script, setting the NFS variable as indicated.
    Note: For the machine to function as an NFS/AFS Translator, NFS must already be loaded into the kernel. It is loaded automatically on systems running AIX 4.1.1 and later, as long as the file /etc/exports exists.

  4. Invoke the /etc/rc.afs script to load AFS modifications into the kernel. You can ignore any error messages about the inability to start the BOS Server or the AFS client.
       #    /etc/rc.afs   
    
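     If you want to confirm that the AFS kernel extension actually loaded, the AIX genkex command, which lists the kernel extensions currently loaded, provides a quick check. This is a verification step only, not part of the AFS distribution, and the exact name displayed for the extension can vary.

        # genkex | grep afs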

Configuring Server Partitions on AIX Systems

Every AFS file server machine must have at least one partition or logical volume for storing AFS volumes, each mounted at a directory named /vicepxx, where xx is one or two lowercase letters. By convention, the first AFS server partition is mounted on the /vicepa directory, the second on the /vicepb directory, and so on. The directories must reside in the file server machine's root directory, not in one of its existing subdirectories (for example, /usr/vicepa is not an acceptable directory location).

The AFS Release Notes for each AFS version specify the maximum number of server partitions on each file server machine. For instructions on configuring or removing AFS server partitions on an existing file server machine, see the chapter in the AFS System Administrator's Guide about maintaining server machines.
Note: Not all file system types that an operating system supports are necessarily supported as AFS server partitions. For possible restrictions, see the AFS Release Notes.

To configure server partitions on an AIX system, perform the following procedures:

  1. Create a directory called /vicepxx for each AFS server partition you are configuring (there must be at least one). Repeat the command for each partition.
       
       # mkdir /vicepxx
       
    

  2. Use the SMIT program to create a journaling file system on each partition to be configured as an AFS server partition.

  3. Mount each partition at one of the /vicepxx directories. Choose one of the following three methods:

    Also configure the partitions so that they are mounted automatically at each reboot. For more information, refer to the AIX documentation.

  4. Add the following line to the /etc/vfs file. It enables the Cache Manager to unmount AFS correctly during shutdown.
       
         afs     4     none     none
       
    

Replacing the fsck Program on AIX Systems

Never run the operating system vendor's fsck program on an AFS file server machine of this system type. It does not recognize the structures that the File Server uses to organize volume data on AFS server partitions, and so removes all of the data. In this step, you replace the operating system vendor's fsck program with a modified version that properly checks both AFS and standard UFS partitions. To repeat:

NEVER run the standard vendor-supplied fsck program on an AFS file server machine of this system type. It discards AFS volumes.

You can tell you are running the correct AFS version when it displays a banner like the following:

   
   [AFS (R) 3.5 fsck]

On AIX systems, you do not replace the fsck binary itself, but rather the program helper file included in the AIX distribution as /sbin/helpers/v3fshelper.

  1. Move the AIX fsck program helper to a safe location and install the version from the AFS distribution in its place. The AFS CD-ROM must still be mounted at the /cdrom directory.
       
       # cd /sbin/helpers
       
       # mv v3fshelper v3fshelper.noafs
       
       # cp -p /cdrom/rs_aix42/root.server/etc/v3fshelper v3fshelper
       
     
    

  2. Proceed to Starting the BOS Server.

Getting Started on Digital UNIX Systems

Begin by building AFS modifications into a new static kernel; Digital UNIX does not support dynamic loading. Then create partitions for storing AFS volumes, and replace the Digital UNIX fsck program with a version that correctly handles AFS volumes.

Building AFS into the Kernel on Digital UNIX Systems

Use the following instructions to build AFS modifications into the kernel on a Digital UNIX system.

  1. Create a copy called AFS of the basic kernel configuration file included in the Digital UNIX distribution as /usr/sys/conf/machine_name, where machine_name is the machine's hostname in all uppercase letters.
       # cd /usr/sys/conf
       
       # cp machine_name AFS
       
    

  2. Add AFS to the list of options in the configuration file you created in the previous step, so that the result looks like the following:
              .                   .
              .                   .
           options               UFS
           options               NFS
           options               AFS
              .                   .
              .                   .
       
    

  3. Add an entry for AFS to two places in the /usr/sys/conf/files file.

  4. Add an entry for AFS to two places in the /usr/sys/vfs/vfs_conf.c file.

  5. Mount the AFS CD-ROM labeled AFS for Digital UNIX, International Edition on the local /cdrom directory. For instructions on mounting CD-ROMs (either locally or remotely via NFS), see your Digital UNIX documentation.

  6. Copy the AFS initialization file from the distribution directory to the local directory for initialization files on Digital UNIX machines, /sbin/init.d by convention. Note the removal of the .rc extension as you copy the file.
       
       # cd /cdrom/alpha_dux40/root.client
       
       # cp usr/vice/etc/afs.rc  /sbin/init.d/afs
       
    

  7. Copy the AFS kernel module from the distribution directory to the local /usr/sys/BINARY directory.

    If the machine's kernel supports NFS server functionality:

      
       # cp bin/libafs.o /usr/sys/BINARY/afs.mod
       
    

    If the machine's kernel does not support NFS server functionality:

      
       # cp bin/libafs.nonfs.o /usr/sys/BINARY/afs.mod
       
    

  8. Configure and build the kernel. Respond to any prompts by pressing <Return>. The resulting kernel resides in the file /sys/AFS/vmunix.
       
       # doconfig -c AFS
       
    

  9. Rename the existing kernel file and copy the new, AFS-modified file to the standard location.
       
       # mv /vmunix /vmunix_save
       
       # cp /sys/AFS/vmunix /vmunix
       
    

  10. Reboot the machine to start using the new kernel.
       
       # shutdown -r now
       
    

Configuring Server Partitions on Digital UNIX Systems

Every AFS file server machine must have at least one partition or logical volume for storing AFS volumes, each mounted at a directory named /vicepxx, where xx is one or two lowercase letters. By convention, the first AFS server partition is mounted on the /vicepa directory, the second on the /vicepb directory, and so on. The directories must reside in the file server machine's root directory, not in one of its existing subdirectories (for example, /usr/vicepa is not an acceptable directory location).

The AFS Release Notes for each AFS version specify the maximum number of server partitions on each file server machine. For instructions on configuring or removing AFS server partitions on an existing file server machine, see the chapter in the AFS System Administrator's Guide about maintaining server machines.
Note: Not all file system types that an operating system supports are necessarily supported as AFS server partitions. For possible restrictions, see the AFS Release Notes.

  1. Create a directory called /vicepxx for each AFS server partition you are configuring (there must be at least one). Repeat the command for each partition.
       
       # mkdir /vicepxx
       
    

  2. Add a line with the following format to the file systems registry file, /etc/fstab, for each directory just created. The entry maps the directory name to the disk partition to be mounted on it.
       
        /dev/disk /vicepxx ufs rw 0 2
    

    For example,

       
       /dev/rz3a /vicepa ufs rw 0 2
       
    

  3. Create a file system on each partition that is to be mounted at a /vicep directory. The following command is probably appropriate, but consult the Digital UNIX documentation for more information.
       
       # newfs -v /dev/disk
       
    

  4. Mount each partition by issuing either the mount -a command to mount all partitions at once or the mount command to mount each partition in turn.
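     For example, assuming the /vicepa entry shown above has been added to /etc/fstab and a file system has been created on that partition, the following commands mount it and confirm that it is available (a verification step only):

        # mount /vicepa

        # df -k /vicepa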

Replacing the fsck Program on Digital UNIX Systems

Never run the operating system vendor's fsck program on an AFS file server machine of this system type. It does not recognize the structures that the File Server uses to organize volume data on AFS server partitions, and so removes all of the data. In this step, you replace the operating system vendor's fsck program with a modified version that properly checks both AFS and standard UFS partitions. To repeat:

NEVER run the standard vendor-supplied fsck program on an AFS file server machine of this system type. It discards AFS volumes.

You can tell you are running the correct AFS version when it displays a banner like the following:

   
   [AFS (R) 3.5 fsck]

On Digital UNIX systems, the files /sbin/fsck and /usr/sbin/fsck are driver programs. Rather than replacing either of them, you replace the actual binary included in the Digital UNIX distribution as /sbin/ufs_fsck and /usr/sbin/ufs_fsck.

  1. Move the Digital UNIX fsck binaries to a safe location, install the version from the AFS distribution (the vfsck binary), and link the Digital UNIX program names to it. The AFS CD-ROM must still be mounted at the /cdrom directory.
       
       # mv /sbin/ufs_fsck /sbin/ufs_fsck.noafs
       
       # mv /usr/sbin/ufs_fsck /usr/sbin/ufs_fsck.noafs
       
       # cd /cdrom/alpha_dux40/root.server/etc
       
       # cp vfsck /sbin/vfsck
       
       # cp vfsck /usr/sbin/vfsck
       
       # ln -s /sbin/vfsck /sbin/ufs_fsck
       
       # ln -s /usr/sbin/vfsck /usr/sbin/ufs_fsck
       
    

  2. Proceed to Starting the BOS Server.

Getting Started on HP-UX Systems

Begin by building AFS modifications into a new kernel; HP-UX does not support dynamic loading. Then create partitions for storing AFS volumes, and replace the HP-UX fsck program with a version that correctly handles AFS volumes.

Building AFS into the Kernel on HP-UX Systems

Use the following instructions to build AFS modifications into the kernel on an HP-UX system.

  1. Copy the existing kernel-related files to a safe location.
       
       # cp /stand/vmunix /stand/vmunix.noafs
       
       # cp /stand/system /stand/system.noafs
       
    

  2. Mount the AFS CD-ROM labeled AFS for HP-UX, International Edition on the local /cdrom directory. For instructions on mounting CD-ROMs (either locally or remotely via NFS), see your HP-UX documentation.

  3. Copy the AFS initialization file from the AFS CD-ROM to the local directory for initialization files on HP-UX machines, /sbin/init.d by convention. Note the removal of the .rc extension as you copy the file.
       
       # cd /cdrom/hp_ux110/root.client
       
       # cp usr/vice/etc/afs.rc  /sbin/init.d/afs
       
    

  4. Copy the file afs.driver from the AFS CD-ROM to the local /usr/conf/master.d directory, changing its name to afs as you do so.
         
       # cp  usr/vice/etc/afs.driver  /usr/conf/master.d/afs
       
    

  5. Copy the AFS kernel module from the AFS CD-ROM to the local /usr/conf/lib directory.

    If the machine's kernel supports NFS server functionality:

       
       # cp bin/libafs.a /usr/conf/lib
       
    

    If the machine's kernel does not support NFS server functionality:

       
       # cp bin/libafs.nonfs.a /usr/conf/lib
       
    

  6. Incorporate the AFS driver into the kernel, either using the SAM program or a series of individual commands.

  7. Proceed to Configuring Server Partitions on HP-UX Systems.

Configuring Server Partitions on HP-UX Systems

Every AFS file server machine must have at least one partition or logical volume for storing AFS volumes, each mounted at a directory named /vicepxx, where xx is one or two lowercase letters. By convention, the first AFS server partition is mounted on the /vicepa directory, the second on the /vicepb directory, and so on. The directories must reside in the file server machine's root directory, not in one of its existing subdirectories (for example, /usr/vicepa is not an acceptable directory location).

The AFS Release Notes for each AFS version specify the maximum number of server partitions on each file server machine. For instructions on configuring or removing AFS server partitions on an existing file server machine, see the chapter in the AFS System Administrator's Guide about maintaining server machines.
Note: Not all file system types that an operating system supports are necessarily supported as AFS server partitions. For possible restrictions, see the AFS Release Notes.

  1. Create a directory called /vicepxx for each AFS server partition you are configuring (there must be at least one). Repeat the command for each partition.
       
       # mkdir /vicepxx
       
    

  2. Use the SAM program to create a file system on each partition. For instructions, consult the HP-UX documentation.

  3. On some HP-UX systems that use logical volumes, the SAM program automatically mounts the partitions. If it has not, mount each partition by issuing either the mount -a command to mount all partitions at once or the mount command to mount each partition in turn.

Replacing the fsck Program on HP-UX Systems

Never run the operating system vendor's fsck program on an AFS file server machine of this system type. It does not recognize the structures that the File Server uses to organize volume data on AFS server partitions, and so removes all of the data. In this step, you replace the operating system vendor's fsck program with a modified version that properly checks both AFS and standard UFS partitions. To repeat:

NEVER run the standard vendor-supplied fsck program on an AFS file server machine of this system type. It discards AFS volumes.

You can tell you are running the correct AFS version when it displays a banner like the following:

   
   [AFS (R) 3.5 fsck]

On HP-UX systems, there are several configuration files to install in addition to the AFS-modified fsck program (the vfsck binary).

  1. Create the command configuration file /sbin/lib/mfsconfig.d/afs. Use a text editor to place the indicated two lines in it:
       
       format_revision 1
       fsck            0        m,P,p,d,f,b:c:y,n,Y,N,q,
       
    

  2. Create an AFS-specific command directory called /sbin/fs/afs.
       
       # mkdir /sbin/fs/afs
       
    

  3. Copy the AFS-modified version of the fsck program (the vfsck binary) and related files from the distribution directory to the new AFS-specific command directory. Change the vfsck binary's name to fsck.
       
       # cd  /cdrom/hp_ux110/root.server/etc
       
       # cp -p  *  /sbin/fs/afs
       
       # cd  /sbin/fs/afs
       
       # mv  vfsck  fsck
       
    

  4. Set the mode bits appropriately on all of the files in the /sbin/fs/afs directory.
       
       # cd  /sbin/fs/afs
       
       # chmod  755  *
       
    

  5. Edit the /etc/fstab file, changing the file system type for each AFS server (/vicep) partition from hfs to afs. This ensures that the AFS-modified fsck program runs on the appropriate partitions.

    The sixth line in the following example of an edited file shows an AFS server partition, /vicepa.

       
       /dev/vg00/lvol1 / hfs defaults 0 1
       /dev/vg00/lvol4 /opt hfs defaults 0 2
       /dev/vg00/lvol5 /tmp hfs defaults 0 2
       /dev/vg00/lvol6 /usr hfs defaults 0 2
       /dev/vg00/lvol8 /var hfs defaults 0 2
       /dev/vg00/lvol9 /vicepa afs defaults 0 2
       /dev/vg00/lvol7 /usr/vice/cache hfs defaults 0 2
       
    

  6. Proceed to Starting the BOS Server.

Getting Started on IRIX Systems

To incorporate AFS into the kernel on IRIX systems, choose one of two methods: dynamically load the AFS modifications with the ml program, as described in Loading AFS into the Kernel on IRIX Systems, or build them into a static kernel, as described in Building AFS into the Kernel on IRIX Systems.

Then create partitions for storing AFS volumes. You do not need to replace the IRIX fsck program because SGI has already modified it to handle AFS volumes properly.

Loading AFS into the Kernel on IRIX Systems

The ml program is the dynamic kernel loader provided by SGI for IRIX systems.

If you choose to use the ml program rather than to build AFS modifications into a static kernel, then for AFS to function correctly the ml program must run each time the machine reboots. The simplest way to guarantee this is to invoke the program in the machine's AFS initialization script, which is included in the AFS distribution. In this section you activate the configuration variables that trigger the appropriate commands in the script.

In later sections you verify that the script correctly initializes all AFS components, then create the links that incorporate AFS into the IRIX startup and shutdown sequence.

  1. Mount the AFS CD-ROM labeled AFS for IRIX, International Edition on the local /cdrom directory. For instructions on mounting CD-ROMs (either locally or remotely via NFS), see your IRIX documentation.

  2. Issue the uname -m command to determine the machine's CPU type. The IPxx value in the output must match one of the supported CPU types listed in the AFS Release Notes for the current version of AFS.
       
       # uname -m
       
    

  3. Copy the appropriate AFS kernel library file from the CD-ROM to the local /usr/vice/etc/sgiload directory; the IPxx portion of the library file name must match the value returned by the uname -m command. Also choose the file appropriate to whether the machine's kernel supports NFS server functionality (NFS must be supported for the machine to act as an NFS/AFS Translator). Single- and multiprocessor machines use the same library file.

    You can choose to copy all of the kernel library files into the /usr/vice/etc/sgiload directory, but they require a significant amount of space.

     
       # mkdir /usr/vice/etc/sgiload
      
       # cd  /cdrom/sgi_65/root.client/usr/vice/etc
    

    If the machine's kernel supports NFS server functionality:

       
       # cp -p   sgiload/libafs.IPxx.o   /usr/vice/etc/sgiload   
       
    

    If the machine's kernel does not support NFS server functionality:

       
       # cp -p  sgiload/libafs.nonfs.IPxx.o  /usr/vice/etc/sgiload
       
    

  4. Copy the AFS initialization file from the CD-ROM to the local directory for initialization files on IRIX machines, /etc/init.d by convention. Note the removal of the .rc extension as you copy the file.
       
       # cp -p   afs.rc  /etc/init.d/afs 
       
    

  5. Issue the chkconfig command to activate the afsml configuration variable.
       
       # /etc/chkconfig -f afsml on
       
    

    If the machine is to function as an NFS/AFS Translator and the kernel supports NFS server functionality, activate the afsxnfs variable.

       
       # /etc/chkconfig -f afsxnfs on
       
    

  6. Invoke the /etc/init.d/afs script to load AFS extensions into the kernel. The script invokes the ml command, automatically determining which kernel library file to use based on this machine's CPU type and the activation state of the afsxnfs variable.

    You can ignore any error messages about the inability to start the BOS Server or Cache Manager.

       
        # /etc/init.d/afs  start
       
    

  7. Proceed to Configuring Server Partitions on IRIX Systems.

Building AFS into the Kernel on IRIX Systems

Use the following instructions to build AFS modifications into the kernel on an IRIX system.

  1. Mount the AFS CD-ROM labeled AFS for IRIX, International Edition on the /cdrom directory. For instructions on mounting CD-ROMs (either locally or remotely via NFS), see your IRIX documentation.

  2. Copy the AFS initialization file from the CD-ROM to the local directory for initialization files on IRIX machines, /etc/init.d by convention. Note the removal of the .rc extension as you copy the file.
       
       # cd  /cdrom/sgi_65/root.client
       
       # cp -p   usr/vice/etc/afs.rc  /etc/init.d/afs
       
    

  3. Copy the kernel initialization file afs.sm to the local /var/sysgen/system directory, and the kernel master file afs to the local /var/sysgen/master.d directory.
       
       # cp -p  bin/afs.sm  /var/sysgen/system
       
       # cp -p  bin/afs  /var/sysgen/master.d
       
    

  4. Issue the uname -m command to determine the machine's CPU type. The IPxx value in the output must match one of the supported CPU types listed in the AFS Release Notes for the current version of AFS.
       
       # uname -m
        
    

  5. Copy the appropriate AFS kernel library file from the CD-ROM to the local file /var/sysgen/boot/afs.a; the IPxx portion of the library file name must match the value returned by the uname -m command. Also choose the file appropriate to whether the machine's kernel supports NFS server functionality (NFS must be supported for the machine to act as an NFS/AFS Translator). Single- and multiprocessor machines use the same library file.

    If the machine's kernel supports NFS server functionality:

       
       # cp -p   bin/libafs.IPxx.a   /var/sysgen/boot/afs.a   
       
    

    If the machine's kernel does not support NFS server functionality:

       
       # cp -p  bin/libafs.nonfs.IPxx.a   /var/sysgen/boot/afs.a
       
    

  6. Issue the chkconfig command to deactivate the afsml configuration variable.
       
       # /etc/chkconfig -f afsml off
       
    

    If the machine is to function as an NFS/AFS Translator and the kernel supports NFS server functionality, activate the afsxnfs variable.

        
       # /etc/chkconfig -f afsxnfs on
       
    

  7. Copy the existing kernel file, /unix, to a safe location and compile the new kernel. It is created as /unix.install, and overwrites the existing /unix file when the machine reboots in the next step.
       
       # cp /unix /unix_orig
       
       # autoconfig
       
    

  8. Reboot the machine to start using the new kernel.

       
       # shutdown -i6 -g0 -y
       
    

Configuring Server Partitions on IRIX Systems

Every AFS file server machine must have at least one partition or logical volume for storing AFS volumes, each mounted at a directory named /vicepxx, where xx is one or two lowercase letters. By convention, the first AFS server partition is mounted on the /vicepa directory, the second on the /vicepb directory, and so on. The directories must reside in the file server machine's root directory, not in one of its existing subdirectories (for example, /usr/vicepa is not an acceptable directory location).

The AFS Release Notes for each AFS version specify the maximum number of server partitions on each file server machine. For instructions on configuring or removing AFS server partitions on an existing file server machine, see the chapter in the AFS System Administrator's Guide about maintaining server machines.
Note: Not all file system types that an operating system supports are necessarily supported as AFS server partitions. For possible restrictions, see the AFS Release Notes.

AFS supports use of both EFS and XFS partitions for housing AFS volumes. SGI encourages use of XFS partitions.

  1. Create a directory called /vicepxx for each AFS server partition you are configuring (there must be at least one). Repeat the command for each partition.
       
       # mkdir /vicepxx
       
    

  2. Add a line with the following format to the file systems registry file, /etc/fstab, for each partition (or logical volume created with the XLV volume manager) to be mounted on one of the directories created in the previous step.

    For an XFS partition or logical volume:

       
       /dev/dsk/disk /vicepxx xfs rw,raw=/dev/rdsk/disk 0 0
       
    

    For an EFS partition:

       
       /dev/dsk/disk /vicepxx efs rw,raw=/dev/rdsk/disk 0 0
       
    

    The following are examples of an entry for each file system type:

       
       /dev/dsk/dks0d2s6 /vicepa  xfs rw,raw=/dev/rdsk/dks0d2s6  0 0
       /dev/dsk/dks0d3s1 /vicepa  efs rw,raw=/dev/rdsk/dks0d3s1  0 0
       
    

  3. Create a file system on each partition that is to be mounted on a /vicep directory. The following commands are probably appropriate, but consult the IRIX documentation for more information.

    For XFS file systems, include the indicated options to configure the partition or logical volume with inodes large enough to accommodate special AFS-specific information:

       
       # mkfs -t xfs -i size=512 -l size=4000b device
       
    

    For EFS file systems:

       
       # mkfs -t efs device
       
    

    In both cases, device is a raw device name like /dev/rdsk/dks0d0s0 for a single disk partition or /dev/rxlv/xlv0 for a logical volume.

  4. Mount each partition by issuing either the mount -a command to mount all partitions at once or the mount command to mount each partition in turn.

  5. Proceed to Starting the BOS Server.

Getting Started on Linux Systems

Begin by running the AFS initialization script to call the insmod program, which dynamically loads AFS modifications into the kernel. Then create partitions for storing AFS volumes. You do not need to replace the Linux fsck program.

Loading AFS into the Kernel on Linux Systems

The insmod program is the dynamic kernel loader for Linux. Linux does not support building AFS modifications into a static kernel.

For AFS to function correctly, the insmod program must run each time the machine reboots. The simplest way to guarantee this is to invoke the program in the machine's AFS initialization file. As distributed, the initialization file includes commands that select the appropriate AFS library file and run the insmod program automatically. In this section you run the script to load AFS modifications into the kernel.

In later sections you verify that the script correctly initializes all AFS components, then activate a configuration variable, which results in the script being incorporated into the Linux startup and shutdown sequence.

  1. Mount the AFS CD-ROM labeled AFS for Linux, International Edition on the local /cdrom directory. For instructions on mounting CD-ROMs (either locally or remotely via NFS), see your Linux documentation.

  2. Copy the AFS kernel library files from the CD-ROM to the local /usr/vice/etc/modload directory. The filenames for the libraries have the format libafs-version.o, where version indicates the kernel build level. The string .mp in the version indicates that the file is appropriate for machines running a multiprocessor kernel.
       
       # cd  /cdrom/i386_linux22/root.client/usr/vice/etc
      
       # cp -rp  modload  /usr/vice/etc
       
    

  3. Copy the AFS initialization file from the CD-ROM to the local directory for initialization files on Linux machines, /etc/rc.d/init.d by convention. Note the removal of the .rc extension as you copy the file.
       
       # cp -p   afs.rc  /etc/rc.d/init.d/afs 
        
    

  4. Run the AFS initialization script to load AFS extensions into the kernel. The script invokes the insmod command, automatically determining which kernel library file to use based on the Linux kernel version installed on this machine.

    You can ignore any error messages about the inability to start the BOS Server or Cache Manager.

       
       # /etc/rc.d/init.d/afs  start
       
    
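     If you want to confirm that the AFS module actually loaded before continuing, a quick check with the lsmod command is usually sufficient; the exact module name varies with the kernel build level. This is a verification step only, not part of the AFS distribution.

        # lsmod | grep afs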

Configuring Server Partitions on Linux Systems

Every AFS file server machine must have at least one partition or logical volume for storing AFS volumes, each mounted at a directory named /vicepxx, where xx is one or two lowercase letters. By convention, the first AFS server partition is mounted on the /vicepa directory, the second on the /vicepb directory, and so on. The directories must reside in the file server machine's root directory, not in one of its existing subdirectories (for example, /usr/vicepa is not an acceptable directory location).

The AFS Release Notes for each AFS version specify the maximum number of server partitions on each file server machine. For instructions on configuring or removing AFS server partitions on an existing file server machine, see the chapter in the AFS System Administrator's Guide about maintaining server machines.
Note: Not all file system types that an operating system supports are necessarily supported as AFS server partitions. For possible restrictions, see the AFS Release Notes.

  1. Create a directory called /vicepxx for each AFS server partition you are configuring (there must be at least one). Repeat the command for each partition.
       
       # mkdir /vicepxx
       
    

  2. Add a line with the following format to the file systems registry file, /etc/fstab, for each directory just created. The entry maps the directory name to the disk partition to be mounted on it.
       
        /dev/disk /vicepxx ext2 defaults 0 2
       
    

    For example,

       
       /dev/sda8 /vicepa ext2 defaults 0 2
       
    

  3. Create a file system on each partition that is to be mounted at a /vicep directory. The following command is probably appropriate, but consult the Linux documentation for more information.
       
       # mkfs -v /dev/disk
       
    

  4. Mount each partition by issuing either the mount -a command to mount all partitions at once or the mount command to mount each partition in turn.

  5. Proceed to Starting the BOS Server.

Getting Started on Solaris Systems

Begin by running the AFS initialization script to call the modload program distributed by Sun Microsystems, which dynamically loads AFS modifications into the kernel. Then create partitions for storing AFS volumes, and replace the Solaris fsck program with a version that correctly handles AFS volumes.

Loading AFS into the Kernel on Solaris Systems

The modload program is the dynamic kernel loader provided by Sun Microsystems for Solaris systems. Solaris does not support building AFS modifications into a static kernel.

For AFS to function correctly, the modload program must run each time the machine reboots. The simplest way to guarantee this is to invoke the program in the machine's AFS initialization file. In this section you copy an AFS library file to the location where the modload program can access it, /kernel/fs/afs. Select the appropriate library file based on whether NFS is also running.

In later sections you verify that the script correctly initializes all AFS components, then create the links that incorporate AFS into the Solaris startup and shutdown sequence.

  1. Mount the AFS CD-ROM labeled AFS for Solaris, International Edition on the /cdrom directory. For instructions on mounting CD-ROMs (either locally or remotely via NFS), see your Solaris documentation.

  2. Copy the AFS initialization file from the CD-ROM to the local directory for initialization files on Solaris machines, /etc/init.d by convention. Note the removal of the .rc extension as you copy the file.
       
       # cd  /cdrom/sun4x_56/root.client/usr/vice/etc
       
       # cp -p  afs.rc  /etc/init.d/afs
       
    

  3. Copy the appropriate AFS kernel library file from the CD-ROM to the local file /kernel/fs/afs.

    If the machine's kernel supports NFS server functionality and the nfsd process is running:

       
       # cp -p modload/libafs.o /kernel/fs/afs
       
    

    If the machine's kernel does not support NFS server functionality or if the nfsd process is not running:

       
       # cp -p modload/libafs.nonfs.o /kernel/fs/afs
       
    

  4. Invoke the AFS initialization script to load AFS modifications into the kernel. It automatically creates an entry for AFS in slot 105 of the local /etc/name_to_sysnum file if necessary, reboots the machine to start using the new version of the file, and runs the modload command. You can ignore any error messages about the inability to start the BOS Server or the AFS client.
          
       # /etc/init.d/afs start
       
    
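     To confirm that the afs module is now loaded, you can list the loaded kernel modules with the Solaris modinfo command. This is a verification step only, not part of the AFS distribution.

        # modinfo | grep afs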

Configuring Server Partitions on Solaris Systems

Every AFS file server machine must have at least one partition or logical volume for storing AFS volumes, each mounted at a directory named /vicepxx, where xx is one or two lowercase letters. By convention, the first AFS server partition is mounted on the /vicepa directory, the second on the /vicepb directory, and so on. The directories must reside in the file server machine's root directory, not in one of its existing subdirectories (for example, /usr/vicepa is not an acceptable directory location).

The AFS Release Notes for each AFS version specify the maximum number of server partitions on each file server machine. For instructions on configuring or removing AFS server partitions on an existing file server machine, see the chapter in the AFS System Administrator's Guide about maintaining server machines.
Note: Not all file system types that an operating system supports are necessarily supported as AFS server partitions. For possible restrictions, see the AFS Release Notes.

  1. Create a directory called /vicepxx for each AFS server partition you are configuring (there must be at least one). Repeat the command for each partition.
       
       # mkdir /vicepxx
       
    

  2. Add a line with the following format to the file systems registry file, /etc/vfstab, for each partition to be mounted on a directory created in the previous step.
       
       /dev/dsk/disk   /dev/rdsk/disk   /vicepxx   ufs   boot_order  yes
      
    

    The following is an example.

      
       /dev/dsk/c0t6d0s1 /dev/rdsk/c0t6d0s1 /vicepa ufs 3 yes
      
    

  3. Create a file system on each partition that is to be mounted at a /vicep directory. The following command is probably appropriate, but consult the Solaris documentation for more information.
      
       # newfs -v /dev/rdsk/disk
      
    

  4. Issue the mountall command to mount all partitions at once.

Replacing the fsck Program on Solaris Systems

Never run the operating system vendor's fsck program on an AFS file server machine of this system type. It does not recognize the structures that the File Server uses to organize volume data on AFS server partitions, and so removes all of the data. In this step, you replace the operating system vendor's fsck program with a modified version that properly checks both AFS and standard UFS partitions. To repeat:

NEVER run the standard vendor-supplied fsck program on an AFS file server machine of this system type. It discards AFS volumes.

You can tell you are running the correct AFS version when it displays a banner like the following:

   
   [AFS (R) 3.5 fsck]

  1. Create the /usr/lib/fs/afs directory to house AFS library files.
      
       # mkdir /usr/lib/fs/afs
      
    

  2. Copy the AFS-modified fsck program (vfsck) from the CD-ROM distribution directory to the newly created directory.
      
       # cd /cdrom/sun4x_56/root.server/etc
      
       # cp vfsck /usr/lib/fs/afs/fsck
      
    

  3. Working in the /usr/lib/fs/afs directory, create the following links to Solaris libraries:
      
       # cd /usr/lib/fs/afs	
       # ln -s /usr/lib/fs/ufs/clri	
       # ln -s /usr/lib/fs/ufs/df
       # ln -s /usr/lib/fs/ufs/edquota
       # ln -s /usr/lib/fs/ufs/ff
       # ln -s /usr/lib/fs/ufs/fsdb	
       # ln -s /usr/lib/fs/ufs/fsirand
       # ln -s /usr/lib/fs/ufs/fstyp
       # ln -s /usr/lib/fs/ufs/labelit
       # ln -s /usr/lib/fs/ufs/lockfs
       # ln -s /usr/lib/fs/ufs/mkfs	
       # ln -s /usr/lib/fs/ufs/mount
       # ln -s /usr/lib/fs/ufs/ncheck
       # ln -s /usr/lib/fs/ufs/newfs
       # ln -s /usr/lib/fs/ufs/quot
       # ln -s /usr/lib/fs/ufs/quota
       # ln -s /usr/lib/fs/ufs/quotaoff
       # ln -s /usr/lib/fs/ufs/quotaon
       # ln -s /usr/lib/fs/ufs/repquota
       # ln -s /usr/lib/fs/ufs/tunefs
       # ln -s /usr/lib/fs/ufs/ufsdump
       # ln -s /usr/lib/fs/ufs/ufsrestore
       # ln -s /usr/lib/fs/ufs/volcopy
       
    

  4. Append the following line to the end of the file /etc/dfs/fstypes.
      
       afs AFS Utilities
      
    

  5. Edit the /sbin/mountall file, making two changes.

  6. Proceed to Starting the BOS Server.

Starting the BOS Server

You are now ready to start the AFS server processes on this machine. Begin by copying the AFS server binaries from the CD-ROM to the conventional local disk location, the /usr/afs/bin directory. The following instructions also copy other server files into other subdirectories of /usr/afs.

Then issue the bosserver command to initialize the Basic OverSeer (BOS) Server, which monitors and controls other AFS server processes on its file server machine. Include the -noauth flag to disable authorization checking. Because you have not yet configured your cell's AFS authentication and authorization mechanisms, the BOS Server cannot perform authorization checking as it does during normal operation. In no-authorization mode, it does not verify the identity or privilege of the issuer of a bos command, and so performs any operation for anyone.

Disabling authorization checking gravely compromises cell security. You must complete all subsequent steps in one uninterrupted pass and must not leave the machine unattended until you restart the BOS Server with authorization checking enabled, in Verifying the AFS Initialization Script.

As it initializes for the first time, the BOS Server creates the following directories and files, setting the owner to the local superuser root and the mode bits to limit the ability to write (and in some cases, read) them. For explanations of the contents and function of these directories and files, see the chapter in the AFS System Administrator's Guide about administering server machines. For further discussion of the mode bit settings, see Protecting Sensitive AFS Directories.

The BOS Server also creates symbolic links called /usr/vice/etc/ThisCell and /usr/vice/etc/CellServDB to the corresponding files in the /usr/afs/etc directory. The AFS command interpreters consult the CellServDB and ThisCell files in the /usr/vice/etc directory because they generally run on client machines. On machines that are AFS servers only (as this machine currently is), the files reside only in the /usr/afs/etc directory; the links enable the command interpreters to retrieve the information they need. The subsequent instructions for installing the client functionality replace the links with actual files (in Creating the Client CellServDB File).

  1. On the local /cdrom directory, mount the AFS CD-ROM for this machine's system type that is labeled International Edition, if it is not already mounted. For instructions on mounting CD-ROMs (either locally or remotely via NFS), consult the operating system documentation.

  2. Copy files from the CD-ROM to the local /usr/afs directory.
       
       # cd /cdrom/sysname/root.server/usr/afs
       
       # cp -rp  *  /usr/afs
       
    

  3. If you use the United States edition of AFS, mount at the /cdrom directory the AFS CD-ROM that is labeled Encryption Files, Domestic Edition.

  4. Copy files from the CD-ROM to the local /usr/afs/bin directory.
       
       # cd /cdrom/sysname/root.server/usr/afs/bin
       
       # cp -p  *  /usr/afs/bin
       
    

  5. Issue the bosserver command. Include the -noauth flag to disable authorization checking.
       # /usr/afs/bin/bosserver -noauth &
       
    

  6. Verify that the BOS Server created the symbolic links /usr/vice/etc/ThisCell and /usr/vice/etc/CellServDB to the corresponding files in the /usr/afs/etc directory.
       
       # ls -l  /usr/vice/etc
       
    

    If /usr/vice/etc/ThisCell and /usr/vice/etc/CellServDB do not exist, or are not links, issue the following commands.

       
       # ln -s /usr/afs/etc/ThisCell /usr/vice/etc/ThisCell
       
       # ln -s /usr/afs/etc/CellServDB /usr/vice/etc/CellServDB 
        
    

Defining Cell Name and Membership for Server Processes

Now assign your cell's name. The chapter in the AFS System Administrator's Guide about cell configuration and administration issues discusses the important considerations, why changing the name is difficult, and the restrictions on name format. Two of the most important restrictions are that the name cannot include uppercase letters or more than 64 characters.

Use the bos setcellname command to assign the cell name. It creates two files in the /usr/afs/etc directory: ThisCell, which records the cell to which this machine belongs, and CellServDB, which lists the cell's database server machines.

Note: In the following and every instruction in this guide, for the machine name argument substitute the fully-qualified hostname (such as fs1.abc.com) of the machine you are installing. For the cell name argument substitute your cell's complete name (such as abc.com).

  1. Issue the bos setcellname command to set the cell name.
       # cd /usr/afs/bin
          
       # ./bos setcellname <machine name> <cell name> -noauth
       
    

     Because you are not authenticated and authorization checking is disabled, the bos command interpreter may produce error messages about being unable to obtain tickets and running unauthenticated. You can safely ignore the messages.

  2. Issue the bos listhosts command to verify that the machine you are installing is now registered as the cell's first database server machine.
       
       # ./bos listhosts <machine name> -noauth
       Cell name is cell name
           Host 1 is machine name
       
    
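     As an additional quick check, you can display the /usr/afs/etc/ThisCell file, which records the cell name you just assigned:

        # cat /usr/afs/etc/ThisCell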

Starting the Database Server Processes

Next use the bos create command to create an entry for each of the four database server processes in the /usr/afs/local/BosConfig file and start them running. The four processes run on database server machines only: the Authentication Server (kaserver), the Backup Server (buserver), the Protection Server (ptserver), and the Volume Location (VL) Server (vlserver).

Note: AFS's authentication and authorization software is based on algorithms and other procedures known as Kerberos, as originally developed by Project Athena at the Massachusetts Institute of Technology. Some cells choose to replace the AFS Authentication Server and other security-related protocols with Kerberos as obtained directly from Project Athena or other sources. If you wish to do this, contact the AFS Product Support group now to learn about necessary modifications to the installation.

As you start each server, messages appear on the console indicating that AFS's distributed database technology, Ubik, is electing a quorum. This is necessary even when there is only one database server machine. As you start each server process, wait to issue the next command until a message indicates the election is complete or that the server process is ready to process requests.

The remaining instructions in this chapter include the -cell argument on all applicable commands. Provide the cell name you assigned in Defining Cell Name and Membership for Server Processes. If a command appears on multiple lines, it is only for legibility.

  1. Issue the bos create command to start the Authentication Server. The current working directory is still /usr/afs/bin.
       
       # ./bos create <machine name> kaserver simple /usr/afs/bin/kaserver  \
                      -cell <cell name>  -noauth
       
    

    You can safely ignore the messages that tell you to add Kerberos to the /etc/services file; AFS uses a default value that makes the addition unnecessary. You can also ignore messages about the failure of authentication.

  2. Issue the bos create command to start the Backup Server.
       
       # ./bos create <machine name> buserver simple /usr/afs/bin/buserver  \
                      -cell <cell name>  -noauth
       
    

  3. Issue the bos create command to start the Protection Server.
       
       # ./bos create <machine name> ptserver simple /usr/afs/bin/ptserver  \
                      -cell <cell name>  -noauth
       
    

  4. Issue the bos create command to start the VL Server.
       
       # ./bos create <machine name> vlserver simple /usr/afs/bin/vlserver  \
                      -cell <cell name>  -noauth
       
    
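     If you want to confirm that each database server process came up, you can check it with the bos status command, using the same -noauth convention as above. This is a quick verification only, not a required step; the following example checks the Protection Server.

        # ./bos status <machine name> ptserver -noauth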

Initializing Cell Security

Now initialize the cell's security mechanisms. Begin by creating two initial entries in the Authentication Database: one called admin for the cell's first privileged administrator, and one called afs for the AFS server processes.

Then enable the new admin user to issue privileged bos and vos commands, and define the initial server encryption key in the local /usr/afs/etc/KeyFile file. Conclude by adding the admin user to the system:administrators group, which is one of the system groups that the Protection Server automatically creates in the Protection Database as it initializes. Belonging to this group enables the admin user to issue privileged pts and fs commands.

The following instructions do not configure all of the security mechanisms related to the AFS Backup System. See the chapter in the AFS System Administrator's Guide about configuring the Backup System.

  1. Enter kas interactive mode. Because the machine is in no-authorization checking mode, use the -noauth flag to suppress the Authentication Server's usual prompt for a password.
       
       # kas  -cell <cell name> -noauth 
       ka>
      
    

  2. Issue the kas create command to create Authentication Database entries called admin and afs.

    Do not provide passwords on the command line. Instead provide them as afs_passwd and admin_passwd in response to the kas command interpreter's prompts as shown, so that they do not echo visibly on the screen.

    You need to enter the afs_passwd string only in this step and in Step 7, so provide a value that is as long and complex as possible, preferably including both uppercase and lowercase letters, numerals, and punctuation characters. Also, make the admin_passwd as long and complex as possible, but keep in mind that administrators need to enter it often. Both passwords must be at least six characters long.

       
       ka> create afs 
       initial_password:  afs_passwd
       Verifying, please re-enter initial_password: afs_passwd
        
       ka> create admin
       initial_password: admin_passwd
       Verifying, please re-enter initial_password: admin_passwd
       
    

  3. Issue the kas examine command to display a checksum for the server encryption key in the afs entry. In Step 8 you issue the bos listkeys command, and need to verify that the checksum in its output matches the checksum in this command's output.
       
       ka> examine afs
       User data for afs
        key (0) cksum is checksum . . .
       
    

  4. Issue the kas setfields command to turn on the ADMIN flag in the admin entry. This enables the admin user to issue privileged kas commands. Then issue the kas examine command to verify that the ADMIN flag appears in parentheses on the first line of the output, as shown in the example.
       
       ka> setfields admin -flags admin
       
       ka> examine admin 
       User data for admin (ADMIN) . . .
         
    

  5. Issue the kas quit command to leave kas interactive mode.
       
       ka> quit
       
    

  6. Issue the bos adduser command to add the admin user to the local /usr/afs/etc/UserList file. This enables the admin user to issue privileged bos and vos commands.
       
       # ./bos adduser <machine name> admin -cell <cell name> -noauth
       
    

  7. Issue the bos addkey command to define the AFS server encryption key in the local /usr/afs/etc/KeyFile file.

    Do not provide the password on the command line. Instead provide it as afs_passwd in response to the bos command interpreter's prompts, as shown. Provide the same string as in Step 2.

       
       # ./bos addkey <machine name> -kvno 0 -cell <cell name>  -noauth
       Input key: afs_passwd
       Retype input key: afs_passwd
       
    

  8. Issue the bos listkeys command to verify that the checksum of the new key in the KeyFile file matches the checksum in the Authentication Database afs entry, which you displayed in Step 3.
       
       # ./bos listkeys <machine name> -cell <cell name> -noauth
       key 0 has cksum checksum
        
    

    You can safely ignore any error messages indicating that bos failed to get tickets or that authentication failed.

    If the keys are different, issue the following commands, making sure that the afs_passwd string is the same in each case. The checksum strings reported by the kas examine and bos listkeys commands must match; if they do not, repeat these instructions until they do, using the -kvno argument to increment the key version number each time.

       
       # ./kas  -cell <cell name> -noauth 
           
       ka> setpassword afs -kvno 1 
       new_password: afs_passwd
       Verifying, please re-enter initial_password: afs_passwd
       
       ka> examine afs
       User data for afs
        key (1) cksum is checksum . . .
      
       ka> quit
      
       # ./bos addkey <machine name> -kvno 1 -cell <cell name> -noauth 
       Input key: afs_passwd
       Retype input key: afs_passwd
       
       # ./bos listkeys <machine name> -cell <cell name> -noauth
       key 1 has cksum checksum
       
    

  9. Issue the pts createuser command to create a Protection Database entry for the admin user.

    By default, the Protection Server assigns AFS UID 1 to the admin user, because it is the first user entry you are creating. If the local password file (/etc/passwd or equivalent) already has an entry for admin that assigns it a UNIX UID other than 1, it is best to use the -id argument to the pts createuser command to make the new AFS UID match the existing UNIX UID. Otherwise, it is best to accept the default.

       
       # ./pts createuser -name admin -cell <cell name> [-id <AFS UID>]  -noauth
       User admin has id AFS UID
       
    

  10. Issue the pts adduser command to make the admin user a member of the system:administrators group, and the pts membership command to verify the new membership.
       
       # ./pts adduser admin system:administrators -cell <cell name> -noauth
       
       # ./pts membership admin -cell  <cell name> -noauth
       Groups admin (id: 1) is a member of:
         system:administrators
       
    

  11. Issue the bos restart command with the -all flag to restart the database server processes, so that they start using the new server encryption key. As when you started each process for the first time, wait to continue until a message indicates for each process that Ubik election is complete or that the server process is ready to process requests.
       
       # ./bos restart <machine name> -all -cell <cell name> -noauth
       
    

Starting the File Server, Volume Server, and Salvager

Start the fs process, which consists of the File Server, Volume Server, and Salvager (fileserver, volserver and salvager processes).

  1. Issue the bos create command to start the fs process. The command appears here on multiple lines only for legibility.
       
       # ./bos create  <machine name> fs fs /usr/afs/bin/fileserver   \
             /usr/afs/bin/volserver /usr/afs/bin/salvager  \
             -cell <cell name>  -noauth
       
    

    In some cases, a message about VLDB initialization appears, along with one or more instances of an error message similar to the following:

        
       FSYNC_clientInit temporary failure (will retry)
       
    

    This message appears when the volserver process tries to start before the fileserver process has completed its initialization. Wait a few minutes after the last such message before continuing, to guarantee that both processes have started successfully.

    To verify that the fs process has started successfully, check that the output from the bos status command mentions two proc starts.

      
       # ./bos status <machine name> fs -long -noauth
       
    

  2. Your next action depends on whether you have ever run AFS file server machines in the cell:
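
    If you have never run AFS file server machines in this cell, you must also create the cell's root.afs volume before continuing; the section Configuring the Top Levels of the AFS Filespace later in this chapter assumes that the volume already exists. The following command is a hedged sketch only; substitute one of this machine's AFS server partitions (such as /vicepa) for the partition name.

       
       # ./vos create <machine name> <partition name> root.afs  \
             -cell <cell name>  -noauth
       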

Starting the Server Portion of the Update Server

Start the server portion of the Update Server (the upserver process), to distribute the contents of directories on this machine to other server machines in the cell. It becomes active when you configure the client portion of the Update Server on additional server machines.

Distributing the contents of its /usr/afs/bin directory to other server machines of its system type makes this machine a binary distribution machine. The other server machines of its system type run the upclientbin process (an instance of the client portion of the Update Server) to retrieve the binaries from its /usr/afs/bin directory.

Distributing the contents of its /usr/afs/etc directory makes this machine the cell's system control machine. The other server machines in the cell run the upclientetc process (an instance of the client portion of the Update Server) to retrieve the configuration files from the /usr/afs/etc directory.

The binaries in the /usr/afs/bin directory are not sensitive, so it is not necessary to encrypt them before transfer across the network. With both editions of AFS, use the -clear argument to the upserver initialization command to specify that the Update Server distributes the contents of the /usr/afs/bin directory in unencrypted form unless an upclientbin process requests encrypted transfer.

In both the United States and international editions of AFS, the server and client portions of the Update Server always mutually authenticate with one another, regardless of whether you use the -clear or -crypt arguments. This protects their communications from eavesdropping to some degree.

For more information on the upclient and upserver processes, see their reference pages in the AFS Command Reference Manual. The commands appear on multiple lines here only for legibility.

  1. Issue the bos create command to start the upserver process.
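
    The form of the command differs between the two editions of AFS. The following is a hedged sketch of the United States edition form, in which the upserver process distributes the /usr/afs/etc directory in encrypted form and the /usr/afs/bin directory in unencrypted form; with the international edition, which has no system control machine, omit the -crypt /usr/afs/etc portion.

       
       # ./bos create <machine name> upserver simple  \
             "/usr/afs/bin/upserver -crypt /usr/afs/etc  \
             -clear /usr/afs/bin" -cell <cell name>  -noauth
       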

Starting the Controller for NTPD

In this section you start the runntp process, which controls the Network Time Protocol Daemon (NTPD). This daemon runs on all of your cell's server machines, and keeps their clocks synchronized. Keeping clocks synchronized is crucial to several functions, and in particular to the correct operation of AFS's distributed database technology, Ubik. The chapter in the AFS System Administrator's Guide about administering server machines explains how time skew can disturb Ubik's performance and cause service outages in your cell.
Note:Do not run the runntp process if NTPD or another time synchronization protocol is already running on the machine. Attempting to run multiple instances of NTPD causes an error. Running NTPD together with another time synchronization protocol is unnecessary and can cause instability in the clock setting.

Some versions of some operating systems run a time synchronization program by default. For correct NTPD functioning, it is best to disable the default program. See the AFS Release Notes for details.

If your cell has reliable network connectivity to machines outside your cell, then it is conventional to configure the first AFS machine to refer to a time source outside the cell. When you later install the runntp program on other server machines in the cell, it configures NTPD to choose a time source at random from among the local database server machines listed in the /usr/afs/etc/CellServDB file. Time synchronization therefore works in a chained manner: this database server machine refers to a time source outside the cell, the database server machines refer to the machine among them that has access to the most accurate time (NTPD itself includes code for determining this), and each non-database server machine refers to a local database server machine chosen at random from the /usr/afs/etc/CellServDB file. If you ever decide to remove database server functionality from this machine, it is best to transfer responsibility for consulting an external time source to a remaining database server machine.

If your cell does not have network connectivity to external machines, or if the connectivity is not reliable, include the -localclock flag on the runntp command as indicated in the following instructions. The flag tells NTPD to rely on the machine's internal clock when all external time sources are inaccessible.

Choosing an appropriate external time source is important, but involves more considerations than can be discussed here. If you need help in selecting a source, contact the AFS Product Support group. The AFS Command Reference Manual provides more information on the runntp command's arguments.

As the runntp process initializes NTPD, trace messages sometimes appear on the console. You can ignore them, but they can be informative if you understand how NTPD works.

  1. Issue the bos create command to start the runntp process. For host, substitute the fully-qualified hostname or IP address of one or more machines outside the cell that are to serve as time sources. Separate each name with a space.
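
    The following is a hedged sketch of the command, using two hypothetical external time sources named host1 and host2; substitute your own host names. If your cell lacks reliable external connectivity, add the -localclock flag before the host names, or use the flag alone if the machine has no external connectivity at all.

       
       # ./bos create <machine name> runntp simple  \
             "/usr/afs/bin/runntp host1 host2" -cell <cell name>  -noauth
       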

Overview: Installing Client Functionality

The machine you are installing is now an AFS file server machine, database server machine, system control machine (if you are using the United States edition of AFS), and binary distribution machine. Now make it a client machine by completing the following tasks:

  1. Define the machine's cell membership for client processes

  2. Create the client version of the CellServDB file

  3. Define cache location and size

  4. Create the /afs directory and start the Cache Manager

  5. If the machine is to remain an AFS client machine, modify the machine's authentication system to authenticate users with AFS at login time; on Solaris systems, also alter the file system clean-up script

Copying Client Files to the Local Disk

Before installing and configuring the AFS client, copy the necessary files from the AFS CD-ROM to the local /usr/vice/etc directory.

  1. On the local /cdrom directory, mount the AFS CD-ROM for this machine's system type that is labeled International Edition, if it is not already. For instructions on mounting CD-ROMs (either locally or remotely via NFS), consult the operating system documentation.

  2. Copy files from the CD-ROM to the local /usr/vice/etc directory.
    Note:This step places a copy of the AFS initialization script (and related files, if applicable) into the /usr/vice/etc directory. In the preceding instructions for incorporating AFS into the kernel, you copied the script directly to the operating system's conventional location for initialization files. Later, you link the two files to avoid the potential confusion of having the two files differ; instructions appear in Activating the AFS Initialization Script.

    On some system types that use a kernel dynamic loader program, you previously copied AFS library files into a subdirectory of the /usr/vice/etc directory. On other system types, you copied the appropriate AFS library file directly to the directory where the operating system accesses it. The following instruction does not copy (or recopy) the AFS library files into the dynamic-loader subdirectory, because on some system types the library files consume a large amount of space. If you want to copy the library files as well, add the -r flag to the first cp command and skip the second cp command.

       
       # cd /cdrom/sysname/root.client/usr/vice/etc
       
       # cp -p  *  /usr/vice/etc
      
       # cp -rp  C  /usr/vice/etc
       
    

Defining Cell Membership for Client Processes

Every AFS client machine has a copy of the /usr/vice/etc/ThisCell file on its local disk to define the machine's cell membership for the AFS client programs that run on it. The ThisCell file you created in the /usr/afs/etc directory (in Defining Cell Name and Membership for Server Processes) is used only by server processes.

Among other functions, the ThisCell file on a client machine determines the following:

Perform the following steps.

  1. Change to the /usr/vice/etc directory and remove the symbolic link created in Starting the BOS Server.
          
       # cd /usr/vice/etc
       
       # rm ThisCell
       
    

  2. Create the ThisCell file as a copy of the /usr/afs/etc/ThisCell file. Defining the same local cell for both server and client processes leads to the most consistent AFS performance.
       
       # cp /usr/afs/etc/ThisCell   ThisCell
       
    

Creating the Client CellServDB File

The /usr/vice/etc/CellServDB file on a client machine's local disk lists the database server machines in each cell that the local Cache Manager can contact. If there is no entry in the file for a cell, or if the list of database server machines is wrong, then users working on this machine cannot access the cell. The chapter in the AFS System Administrator's Guide about administering client machines explains how to maintain the file after creating it.

As the afsd program initializes the Cache Manager, it copies the contents of the CellServDB file into kernel memory. The Cache Manager always consults the list in kernel memory rather than the CellServDB file itself. Between reboots of the machine, you can use the fs newcell command to update the list in kernel memory directly; see the chapter in the AFS System Administrator's Guide about administering client machines.

The AFS distribution includes a sample CellServDB file called CellServDB.sample, which you have copied to the /usr/vice/etc directory. It includes an entry for all AFS cells that agreed to share their database server machine information at the time the CD-ROM was created. The AFS Product Support group also maintains a copy of the file, updating it as necessary. If you are interested in participating in the global AFS namespace, it is a good policy to consult the file occasionally for updates. Ask the AFS Product Support group for a pointer to its location.

Because all of the entries in the sample file use the correct format, it is a good basis for this machine's CellServDB file. You can add or remove cell entries as you see fit. To enable the Cache Manager actually to reach the cells, you must also follow the instructions in Enabling Access to Foreign Cells.

In this section, you add an entry for the local cell to the local CellServDB file. The current working directory is still /usr/vice/etc.

  1. Remove the symbolic link created in Starting the BOS Server and rename the CellServDB.sample file to CellServDB.
       
       # rm   CellServDB
      
       # mv  CellServDB.sample  CellServDB
          
    

  2. Add an entry for the local cell to the CellServDB file. One easy method is to use the cat command to append the contents of the server /usr/afs/etc/CellServDB file to the client version.
       
        # cat  /usr/afs/etc/CellServDB >>  CellServDB
       
    

    Then open the file in a text editor to verify that there are no blank lines, and that all entries have the required format, which is described just following. The ordering of cells is not significant, but it can be convenient to have the client machine's home cell at the top; move it there now if you wish.

  3. If the file includes cells that you do not wish users of this machine to access, remove their entries.

    To make the remaining cells accessible from this machine, see the instructions in Enabling Access to Foreign Cells.

The following example shows entries for two cells, each of which has three database server machines:

   
   >abc.com       #ABC Corporation (home cell)
   192.12.105.3      #db1.abc.com
   192.12.105.4      #db2.abc.com
   192.12.105.55     #db3.abc.com
   >stateu.edu    #State University cell
   138.255.68.93     #serverA.stateu.edu
   138.255.68.72     #serverB.stateu.edu
   138.255.33.154    #serverC.stateu.edu
   

Configuring the Cache

The Cache Manager uses a cache on the local disk or in machine memory to store local copies of files fetched from file server machines. To set basic cache configuration parameters, the afsd program reads the local /usr/vice/etc/cacheinfo file as it initializes the Cache Manager. The file has three fields:

  1. The first field names the local directory on which to mount the AFS filespace. The conventional location is /afs.

  2. The second field defines the local disk directory to use for the disk cache. The conventional location is the /usr/vice/cache directory, but you can specify an alternate directory if another partition has more space available. There must always be a value in this field, but the Cache Manager ignores it if the machine uses a memory cache.

  3. The third field defines cache size as a number of kilobyte (1024 byte) blocks. See the following discussion.

The values you provide must meet the following requirements.

Within these hard limits, the factors that determine appropriate cache size include the number of users working on the machine, the size of the files with which they usually work, and (for a memory cache) the number of processes that usually run on the machine. The higher the demand from these factors, the larger the cache needs to be to maintain good performance.

Disk caches smaller than 10 MB do not generally perform well. Machines serving multiple users usually perform better with a cache of at least 60 to 70 MB. The point at which enlarging the cache further does not really improve performance depends on the factors mentioned previously and is difficult to predict.

Memory caches smaller than 1 MB are nonfunctional, and the performance of caches smaller than 5 MB is usually unsatisfactory. Suitable upper limits are similar to those for disk caches but are probably determined more by the demands on memory from other sources on the machine (number of users and processes). Machines running only a few processes possibly can use a smaller memory cache.

Configuring a Disk Cache

Use the following instructions to configure a disk cache.
Note:Not all file system types that an operating system supports are necessarily supported for use as the cache partition. For possible restrictions, see the AFS Release Notes.

To configure the disk cache, perform the following procedures:

  1. Create the local directory to use for caching. The following instruction shows the conventional location, /usr/vice/cache.
       
       # mkdir /usr/vice/cache
    

  2. Create the cacheinfo file to define the configuration parameters discussed previously. The following instruction shows the standard mount location, /afs, and the standard cache location, /usr/vice/cache.
       
       # echo "/afs:/usr/vice/cache:#blocks" > /usr/vice/etc/cacheinfo
    

    The following example defines the disk cache size as 50,000 KB:

       # echo "/afs:/usr/vice/cache:50000" > /usr/vice/etc/cacheinfo
    

Configuring a Memory Cache

To configure a memory cache, create the cacheinfo file to define the configuration parameters discussed previously. The following instruction shows the standard mount location, /afs, and the standard cache location, /usr/vice/cache (though the exact value of the latter is irrelevant for a memory cache).

   
   # echo "/afs:/usr/vice/cache:#blocks" > /usr/vice/etc/cacheinfo

The following example allocates 25,000 KB of memory for the cache.

   # echo "/afs:/usr/vice/cache:25000" > /usr/vice/etc/cacheinfo

Configuring the Cache Manager

By convention, the Cache Manager mounts the AFS filespace on the local /afs directory. In this section you create that directory.

The afsd program sets several cache configuration parameters as it initializes, and starts daemons that improve performance. You can use the afsd command's arguments to override the parameters' default values and to change the number of some of the daemons. Depending on the machine's cache size, its amount of RAM, and how many people work on it, you can sometimes improve Cache Manager performance by overriding default values. For a discussion of all of the afsd command's arguments, see its reference page in the AFS Command Reference Manual.

The afsd command line in the AFS initialization script on each system type includes an OPTIONS variable. You can use it to set nondefault values for the command's arguments, in one of the following ways:

Perform the following procedures.

  1. Create the local directory on which to mount the AFS filespace, by convention /afs. If the directory already exists, verify that it is empty.
       
       # mkdir /afs
       
    

  2. On Linux systems, copy the afsd options file from the /usr/vice/etc directory to the /etc/sysconfig directory. Note the removal of the .conf extension as you copy the file.
       # cp /usr/vice/etc/afs.conf /etc/sysconfig/afs
       
    

  3. Edit the machine's AFS initialization script or afsd options file to set appropriate values for afsd command parameters. The script resides in the indicated location on each system type:

    Use one of the methods described in the introduction to this section to add the following flags to the afsd command line. If you intend for the machine to remain an AFS client, also set any performance-related arguments you wish.
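
    As a hedged illustration of the mechanism only, on a Linux system the options file you copied to /etc/sysconfig/afs in Step 2 is a shell-style file that can set the OPTIONS variable to an explicit string of afsd arguments. The particular arguments and values shown here are hypothetical placeholders, not recommendations for your machine.

       
       OPTIONS="-stat 2000 -daemons 4 -volumes 70"
       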


Enabling AFS Login

Note:If you plan to remove the client functionality from this machine, skip this section and proceed to Overview: Completing the Installation of the First AFS Machine.

The AFS distribution includes files that you can incorporate into a client machine's authentication system so that users obtain an AFS token at the same time they log into the local file system. AFS is simpler and more convenient for your users if you install the AFS modifications on all client machines. Otherwise, they must use a two-step login procedure (login to the local file system and then issue the klog command). For further discussion of AFS authentication, see the chapter in the AFS System Administrator's Guide about cell configuration and administration issues.

Proceed to the appropriate section for this machine:

Enabling AFS Login on AIX Systems

Follow the instructions in this section to incorporate AFS modifications into the AIX secondary authentication system.

  1. Verify that the afs_dynamic_auth program is installed in the local /usr/vice/etc directory. If not, copy it from the /cdrom/rs_aix42/root.client/usr/vice/etc directory on the CD-ROM labeled AFS for AIX, International Edition.
       
       # ls /usr/vice/etc
       
    

  2. Edit the local /etc/security/user file, making changes to the indicated stanzas:

  3. Edit the local /etc/security/login.cfg file, creating or editing the indicated stanzas:

  4. Proceed to Overview: Completing the Installation of the First AFS Machine.

Enabling AFS Login on Digital UNIX Systems

On Digital UNIX systems, the AFS initialization script automatically incorporates the AFS authentication library file into the Security Integration Architecture (SIA) matrix on the machine. Incorporating the library means that users with AFS accounts obtain a token at login. In this section you copy the library file to the appropriate location.

The SIA integrates most of the authentication mechanisms on a Digital UNIX machine, including login, the Common Desktop Environment (CDE), and remote services. For more information on SIA, see the Digital UNIX reference page for matrix.conf, or consult the section on security in your Digital UNIX documentation.
Note:If the machine runs both the DCE and AFS client software, AFS must start after DCE. Consult the AFS initialization script for suggested symbolic links to create for correct ordering. Also, the system startup script order must initialize SIA before any long-running process that uses authentication.

  1. Mount the AFS CD-ROM labeled AFS for Digital UNIX, International Edition on the local /cdrom directory, if it is not already.

  2. Copy the appropriate AFS authentication library file from the distribution directory on the CD-ROM to the local /usr/shlib directory.
       
       # cd /cdrom/alpha_dux40/lib/afs
       
    

    If you use the AFS Authentication Server (kaserver process) in the cell:

       
       # cp  libafssiad.so  /usr/shlib
       
    

    If you use a Kerberos implementation of AFS authentication, rename the library file as you copy it:

       
       # cp  libafssiad.krb.so  /usr/shlib/libafssiad.so
       
    

  3. Proceed to Overview: Completing the Installation of the First AFS Machine.

Enabling AFS Login on HP-UX Systems

In this section you incorporate AFS into the operating system's Pluggable Authentication Module (PAM) scheme. PAM integrates all authentication mechanisms on the machine, including login, to provide the security infrastructure for authenticated access to and from the machine.

Explaining PAM is beyond the scope of this document. It is assumed that you understand the syntax and meanings of settings in the PAM configuration file (for example, the meaning of the required and optional attributes in the third field of a service's entry, how the other entry works, and so on).

If you want to use AFS authentication for a service, its entries in the PAM configuration file must meet the following requirements:

The AFS-related entries in the PAM configuration file make use of one or more of the following three attributes. This list describes their meaning with respect to the AFS PAM module.

try_first_pass
If the AFS PAM module is not the first one listed for a service (and it must not be), then it tries to use the password that was provided to the module listed first (usually the standard operating system PAM module). If the password is the user's correct AFS password, AFS authentication succeeds. This is a standard PAM attribute; see the operating system's PAM documentation for further discussion.

ignore_root
The AFS module ignores not only the local superuser root, but also any user with UID 0 (zero). This attribute is specific to the AFS PAM module.

setenv_password_expires
The AFS PAM module sets the environment variable PASSWORD_EXPIRES to the expiration date of the user's AFS password, which is recorded in the Authentication Database. This attribute is specific to the AFS PAM module.
Note:On some platforms you possibly need to install operating system patches in order for some authentication programs (such as the Common Desktop Environment [CDE]) to interact correctly with PAM. For details, see the AFS Release Notes.

  1. Mount the AFS CD-ROM labeled AFS for HP-UX, International Edition on the /cdrom directory, if it is not already.

  2. Copy the AFS authentication library file from the CD-ROM to the /usr/lib/security directory. Create a symbolic link to the library file that does not mention the version; this eliminates the need to edit the PAM configuration file if you later update the library file.
      
       # cd /usr/lib/security
    

    If you use the AFS Authentication Server (kaserver process) in the cell:

      
       # cp /cdrom/hp_ux110/lib/pam_afs.so.1  .
      
       # ln -s  pam_afs.so.1  pam_afs.so
       
    

    If you use a Kerberos implementation of AFS authentication:

      
       # cp /cdrom/hp_ux110/lib/pam_afs.krb.so.1   .
      
       # ln -s pam_afs.krb.so.1 pam_afs.so
       
    

  3. Edit the existing entries in the Authentication management section of the HP-UX PAM configuration file, /etc/pam.conf by convention. These entries have the value auth in their second field, and many of them refer to the HP-UX PAM module (the /usr/lib/security/pam_unix.so.1 file) in the fourth field.

    The pam.conf file in the HP-UX distribution usually includes entries for the login, rlogin, and rsh services. Change the third field of each entry to read optional.

    The HP-UX version of the pam.conf file does not usually have entries for the ftp or telnet services. If you want to use AFS authentication for them, you must create entries for them that refer to the HP-UX PAM module and have the value optional in the third field.

    If you make the required changes for the services mentioned previously, the result is as follows:

      
       login   auth  optional   /usr/lib/security/pam_unix.so.1
       rlogin  auth  optional   /usr/lib/security/pam_unix.so.1
       rsh     auth  optional   /usr/lib/security/pam_unix.so.1
       telnet  auth  optional   /usr/lib/security/pam_unix.so.1
       ftp     auth  optional   /usr/lib/security/pam_unix.so.1
       
    

  4. Insert AFS-related entries into the Authentication management section of the pam.conf file. Place each entry immediately below the entry for the same service that refers to the HP-UX PAM module. The following example AFS entries appear on two lines only for legibility.
      
       login   auth  optional  /usr/lib/security/pam_afs.so \
              try_first_pass  ignore_root  setenv_password_expires
      
       rlogin  auth  optional  /usr/lib/security/pam_afs.so \
              try_first_pass  ignore_root  setenv_password_expires
      
       rsh     auth  optional  /usr/lib/security/pam_afs.so \
              try_first_pass  ignore_root		
      
       telnet  auth  optional  /usr/lib/security/pam_afs.so \
              try_first_pass  ignore_root  setenv_password_expires
      
       ftp     auth  optional  /usr/lib/security/pam_afs.so \
              try_first_pass  ignore_root
       
    

  5. To enable users to obtain an AFS token as they log in via the Common Desktop Environment (CDE), add or edit the following four entries in the Authentication management section of the pam.conf file. Create each entry as a single line; the AFS-related entries appear on two lines here only for legibility.
      
       dtlogin   auth  optional  /usr/lib/security/pam_unix.so.1
       dtlogin   auth  optional  /usr/lib/security/pam_afs.so \
               try_first_pass  ignore_root
       dtaction  auth  optional  /usr/lib/security/pam_unix.so.1
       dtaction  auth  optional  /usr/lib/security/pam_afs.so \
               try_first_pass  ignore_root
       
    

  6. Proceed to Overview: Completing the Installation of the First AFS Machine.

Enabling AFS Login on IRIX Systems

The standard IRIX command-line login binary and xdm graphical login binary authenticate users with AFS automatically when AFS is incorporated into the machine's kernel. However, some IRIX distributions include another login utility as the default, and it does not necessarily incorporate the required AFS modifications. For AFS users to obtain AFS tokens at login in that case, you must disable the default utility. For further discussion, see the AFS Release Notes.

If an AFS-modified login utility is being used, the other requirement is that two files included in the AFS distribution reside in the /usr/vice/etc directory: afsauthlib.so and afskauthlib.so. Issue the ls command to verify. If the files do not exist, copy them from the /cdrom/sgi_65/root.client/usr/vice/etc directory on the CD-ROM labeled AFS for IRIX, International Edition.

  
   # ls /usr/vice/etc
   

After taking any necessary action, proceed to Overview: Completing the Installation of the First AFS Machine.

Enabling AFS Login on Linux Systems

In this section you incorporate AFS into the operating system's Pluggable Authentication Module (PAM) scheme. PAM integrates all authentication mechanisms on the machine, including login, to provide the security infrastructure for authenticated access to and from the machine.

Explaining PAM is beyond the scope of this document. It is assumed that you understand the syntax and meanings of settings in the PAM configuration file (for example, the meaning of the required and optional attributes in the third field of a service's entry, how the other entry works, and so on).

If you want to use AFS authentication for a service, its entries in the PAM configuration file must meet the following requirements:

The AFS-related entries in the PAM configuration file make use of one or more of the following three attributes. This list describes their meaning with respect to the AFS PAM module.

try_first_pass
If the AFS PAM module is not the first one listed for a service (and it must not be), then it tries to use the password that was provided to the module listed first (usually the standard operating system PAM module). If the password is the user's correct AFS password, AFS authentication succeeds. This is a standard PAM attribute; see the operating system's PAM documentation for further discussion.

ignore_root
The AFS module ignores not only the local superuser root, but also any user with UID 0 (zero). This attribute is specific to the AFS PAM module.

setenv_password_expires
The AFS PAM module sets the environment variable PASSWORD_EXPIRES to the expiration date of the user's AFS password, which is recorded in the Authentication Database. This attribute is specific to the AFS PAM module.
Note:On some platforms you possibly need to install operating system patches in order for some authentication programs (such as the Common Desktop Environment [CDE]) to interact correctly with PAM. For details, see the AFS Release Notes.

  1. Mount the AFS CD-ROM labeled AFS for Linux, International Edition on the /cdrom directory, if it is not already.

  2. Copy the AFS authentication library file from the CD-ROM to the directory where Linux expects to find it, which depends on which Linux distribution you are using. Create a symbolic link to the library file that does not mention the version; this eliminates the need to edit the PAM configuration file if you later update the library file.

    If you are using a Linux distribution from Red Hat Software:

       # cd /lib/security
       
    

    If you are using another Linux distribution:

       # cd /usr/lib/security
       
    

    Then, if you use the AFS Authentication Server (kaserver process) in the cell:

       # cp /cdrom/i386_linux22/lib/pam_afs.so.1  .
       # ln -s pam_afs.so.1 pam_afs.so
       
    

    Or if you use a Kerberos implementation of AFS authentication:

       # cp /cdrom/i386_linux22/lib/pam_afs.krb.so.1   .
       # ln -s pam_afs.krb.so.1 pam_afs.so
       
    

  3. Insert an AFS-related entry into the auth section of the PAM configuration file for each service with which you want to use AFS authentication. (Linux uses a separate configuration file for each service, whereas some other operating systems use a single file for all services.) By convention, the configuration files reside in the /etc/pam.d directory.

    Place the AFS entry immediately below any existing entries that define conditions under which you want the service to fail for a user who does not meet each entry's requirements. Place the AFS entry above any entries that you want to be executed even if AFS authentication fails.

    If using the Red Hat distribution:

       auth  sufficient  /lib/security/pam_afs.so   try_first_pass  ignore_root
       
    

    If using another distribution:

       auth  sufficient  /usr/lib/security/pam_afs.so  try_first_pass  ignore_root
       
    

    The following example adds AFS to the login configuration file on a machine using the Red Hat distribution (/etc/pam.d/login).

       #%PAM-1.0
       auth      required   /lib/security/pam_securetty.so
       auth      required   /lib/security/pam_nologin.so
       auth      sufficient /lib/security/pam_afs.so try_first_pass ignore_root
       auth      required   /lib/security/pam_pwdb.so shadow nullok
       account   required   /lib/security/pam_pwdb.so
       password  required   /lib/security/pam_cracklib.so
       password  required   /lib/security/pam_pwdb.so shadow nullok use_authtok
       session   required   /lib/security/pam_pwdb.so
       
    

  4. Proceed to Overview: Completing the Installation of the First AFS Machine.

Enabling AFS Login and Editing the File Systems Clean-up Script on Solaris Systems

In this section you incorporate AFS into the operating system's Pluggable Authentication Module (PAM) scheme. PAM integrates all authentication mechanisms on the machine, including login, to provide the security infrastructure for authenticated access to and from the machine.

Explaining PAM is beyond the scope of this document. It is assumed that you understand the syntax and meanings of settings in the PAM configuration file (for example, the meaning of the required and optional attributes in the third field of a service's entry, how the other entry works, and so on).

If you want to use AFS authentication for a service, its entries in the PAM configuration file must meet the following requirements:

The AFS-related entries in the PAM configuration file make use of one or more of the following three attributes. This list describes their meaning with respect to the AFS PAM module.

try_first_pass
If the AFS PAM module is not the first one listed for a service (and it must not be), then it tries to use the password that was provided to the module listed first (usually the standard operating system PAM module). If the password is the user's correct AFS password, AFS authentication succeeds. This is a standard PAM attribute; see the operating system's PAM documentation for further discussion.

ignore_root
The AFS module ignores not only the local superuser root, but also any user with UID 0 (zero). This attribute is specific to the AFS PAM module.

setenv_password_expires
The AFS PAM module sets the environment variable PASSWORD_EXPIRES to the expiration date of the user's AFS password, which is recorded in the Authentication Database. This attribute is specific to the AFS PAM module.
Note:On some platforms you possibly need to install operating system patches in order for some authentication programs (such as the Common Desktop Environment [CDE]) to interact correctly with PAM. For details, see the AFS Release Notes.

  1. Mount the AFS CD-ROM labeled AFS for Solaris, International Edition on the /cdrom directory, if it is not already.

  2. Copy the AFS authentication library file from the distribution directory on the CD-ROM to the /usr/lib/security directory. Create a symbolic link to the library file that does not mention the version; this eliminates the need to edit the PAM configuration file if you later update the library file.
      
       # cd /usr/lib/security
       
    
       
    

    If you use the AFS Authentication Server (kaserver process) in the cell:

      
       # cp /cdrom/sun4x_56/lib/pam_afs.so.1  .
      
       # ln -s pam_afs.so.1 pam_afs.so
       
    

    If you use a Kerberos implementation of AFS authentication:

         
       # cp /cdrom/sun4x_56/lib/pam_afs.krb.so.1   .
      
       # ln -s pam_afs.krb.so.1 pam_afs.so
       
    

  3. Edit the existing entries in the Authentication management section of the Solaris PAM configuration file, /etc/pam.conf by convention. These entries have the value auth in their second field, and many of them refer to the Solaris PAM module (the /usr/lib/security/pam_unix.so.1 file) in the fourth field.

    The pam.conf file in the Solaris distribution usually includes entries for the login, rlogin, and rsh services. Change the third field of each entry to read optional.

    The Solaris version of the pam.conf file does not usually have entries for the ftp or telnet services. If you want to use AFS authentication for them, you must create entries for them that refer to the Solaris PAM module and have the value optional in the third field.

    If you make the required changes for the services mentioned previously, the result is as follows:

      
       login   auth  optional   /usr/lib/security/pam_unix.so.1
       rlogin  auth  optional   /usr/lib/security/pam_unix.so.1
       rsh     auth  optional   /usr/lib/security/pam_unix.so.1
       telnet  auth  optional   /usr/lib/security/pam_unix.so.1
       ftp     auth  optional   /usr/lib/security/pam_unix.so.1
       
    

  4. Insert AFS-related entries into the Authentication management section of the pam.conf file. Place each entry immediately below the standard entry for the same service. The following example entries appear on two lines only for legibility.
      
       login   auth  optional  /usr/lib/security/pam_afs.so \
              try_first_pass  ignore_root  setenv_password_expires
       rlogin  auth  optional  /usr/lib/security/pam_afs.so \
              try_first_pass  ignore_root  setenv_password_expires
       rsh     auth  optional  /usr/lib/security/pam_afs.so \
              try_first_pass  ignore_root		
       telnet  auth  optional  /usr/lib/security/pam_afs.so \
              try_first_pass  ignore_root  setenv_password_expires
       ftp     auth  optional  /usr/lib/security/pam_afs.so \
              try_first_pass  ignore_root
       
    

  5. To enable users to obtain an AFS token as they log in via the Common Desktop Environment (CDE), add or edit the following four entries in the Authentication management section of the pam.conf file. Create each entry as a single line; the AFS-related entries appear on two lines here only for legibility.
      
       dtlogin   auth  optional  /usr/lib/security/pam_unix.so.1
       dtlogin   auth  optional  /usr/lib/security/pam_afs.so \
               try_first_pass  ignore_root
       dtsession  auth  optional  /usr/lib/security/pam_unix.so.1
       dtsession  auth  optional  /usr/lib/security/pam_afs.so \
               try_first_pass  ignore_root
       
    

  6. Alter the script that locates and removes unneeded files from the file system. If the script is included in your version of the Solaris distribution, the conventional local disk location for it is /usr/lib/fs/nfs/nfsfind.

    Modify the pathname specified in the file to exclude the /afs directory. Otherwise, the command traverses the AFS filespace of every cell that is accessible from the machine, which can take many hours. The following alterations are possibilities, but you must verify that they are appropriate for your cell.

    The first possible alteration is to add the -local flag to the existing command in the /usr/lib/fs/nfs/nfsfind file, so that it looks like the following:

      
       find $dir -local -name .nfs\* -mtime +7 -mount -exec rm -f {} \;
       
    

    Another alternative is to exclude any directories whose names begin with the lowercase letter a or a non-alphabetic character.

      
       find /[A-Zb-z]*  remainder of existing command
       
    

    Do not use the following command, which still searches under the /afs directory, looking for a subdirectory of type 4.2.

      
       find / -fstype 4.2     /* do not use */
       
    

Overview: Completing the Installation of the First AFS Machine

The machine is now configured as an AFS file server and client machine. In this final phase of the installation, you initialize the Cache Manager and then create the upper levels of your AFS filespace, among other procedures. The procedures are:

  1. Verify that the initialization script works correctly, and incorporate it into the operating system's startup and shutdown sequence

  2. Create and mount top-level volumes

  3. Create and mount volumes to store system binaries in AFS

  4. Enable access to foreign cells

  5. Institute additional security measures

  6. Remove client functionality if desired

Verifying the AFS Initialization Script

At this point you run the machine's AFS initialization script to verify that it correctly invokes all of the necessary programs and AFS processes, and that they start correctly. The following are the relevant commands:

On system types that use a dynamic loader program, you must reboot the machine before running the initialization script, so that it can freshly load AFS modifications into the kernel.

If there are problems during the initialization, attempt to resolve them. The AFS Product Support group can provide assistance if necessary.

  1. Issue the bos shutdown command to shut down the AFS server processes other than the BOS Server. Include the -wait flag to delay return of the command shell prompt until all processes shut down completely.
          
       # /usr/afs/bin/bos shutdown <machine name> -wait
       
    

  2. Issue the ps command to learn the BOS Server's process ID number (PID), and then the kill command to stop the bosserver process.
       
       # ps appropriate_ps_options | grep bosserver
       
       # kill bosserver_PID
       
    

  3. Issue the appropriate commands to run the AFS initialization script for this system type.

    On AIX systems:

    1. Reboot the machine and log in again as the local superuser root.
         
         # shutdown -r now
         login: root
         Password: root_password
         
      

    2. Run the AFS initialization script.
         
         # /etc/rc.afs
         
      

    On Digital UNIX systems:

    1. Run the AFS initialization script.
         
         # /sbin/init.d/afs  start
         
      

    On HP-UX systems:

    1. Run the AFS initialization script.
         
         # /sbin/init.d/afs  start
         
      

    On IRIX systems:

    1. If you have configured the machine to use the ml dynamic loader program, reboot the machine and log in again as the local superuser root.
         # shutdown -i6 -g0 -y
         login: root
         Password: root_password
         
      

    2. Issue the chkconfig command to activate the afsserver and afsclient configuration variables.
         # /etc/chkconfig -f afsserver on
       
         # /etc/chkconfig -f afsclient on 
         
      

    3. Run the AFS initialization script.
         
         # /etc/init.d/afs  start
         
      

    On Linux systems:

    1. Reboot the machine and log in again as the local superuser root.
        
         # shutdown -r now
         login: root
         Password: root_password
         
      

    2. Run the AFS initialization script.
         
         # /etc/rc.d/init.d/afs  start
         
      

    On Solaris systems:

    1. Reboot the machine and log in again as the local superuser root.
         # shutdown -i6 -g0 -y
         login: root
         Password: root_password
         
      

    2. Run the AFS initialization script.
         
         # /etc/init.d/afs  start
         
      

  4. Wait for the message that confirms that Cache Manager initialization is complete.

    On machines that use a disk cache, it can take a while to initialize the Cache Manager for the first time, because the afsd program must create all of the Vn files in the cache directory. Subsequent Cache Manager initializations do not take nearly as long, because the Vn files already exist.

    As a basic test of correct AFS functioning, issue the klog command to authenticate as the admin user. Provide the password (admin_passwd) you defined in Initializing Cell Security.

       
       # /usr/afs/bin/klog admin
       Password:  admin_passwd
       
    

  5. Issue the tokens command to verify that the klog command worked correctly. If it did, the output looks similar to the following example for the abc.com cell, where admin's AFS UID is 1. If the output does not seem correct, resolve the problem. Changes to the AFS initialization script are possibly necessary. The AFS Product Support group can provide assistance as necessary.
       
       # /usr/afs/bin/tokens
       Tokens held by the Cache Manager:
      
       User's (AFS ID 1) tokens for afs@abc.com [Expires May 22 11:52]
           --End of list--
       
    

  6. Issue the bos status command to verify that all server processes are running normally. The output for each process reads Currently running normally.
       
       # /usr/afs/bin/bos status <machine name>
       
    

  7. Change directory to the local file system root (/) and issue the fs checkvolumes command.
       
       # cd /
       
       # /usr/afs/bin/fs checkvolumes
       
    

Activating the AFS Initialization Script

Now that you have confirmed that the AFS initialization script works correctly, take the action necessary to have it run automatically at each reboot. The instructions differ for each system type.

On AIX systems:

  1. Edit the AIX initialization file, /etc/inittab, adding the following line to invoke the AFS initialization script. Place it just after the line that starts NFS daemons.
       
       rcafs:2:wait:/etc/rc.afs > /dev/console 2>&1 # Start AFS services
       
    

  2. (Optional) There are now copies of the AFS initialization file in both the /usr/vice/etc and /etc directories. If you want to avoid potential confusion by guaranteeing that they are always the same, create a link between them. You can always retrieve the original script from the AFS CD-ROM if necessary.
       
       # cd  /usr/vice/etc
       
       # rm  rc.afs
      
       # ln -s  /etc/rc.afs
       
    

On Digital UNIX systems:

  1. Change to the /sbin/init.d directory and issue the ln -s command to create symbolic links that incorporate the AFS initialization script into the Digital UNIX startup and shutdown sequence.
       
       # cd  /sbin/init.d
       
       # ln -s  ../init.d/afs  /sbin/rc3.d/S67afs
       
       # ln -s  ../init.d/afs  /sbin/rc0.d/K66afs
       
    

  2. (Optional) There are now copies of the AFS initialization file in both the /usr/vice/etc and /sbin/init.d directories. If you want to avoid potential confusion by guaranteeing that they are always the same, create a link between them. You can always retrieve the original script from the AFS CD-ROM if necessary.
       
       # cd /usr/vice/etc
       
       # rm afs.rc
      
       # ln -s  /sbin/init.d/afs  afs.rc
       
    

On HP-UX systems:

  1. Change to the /sbin/init.d directory and issue the ln -s command to create symbolic links that incorporate the AFS initialization script into the HP-UX startup and shutdown sequence.
       
       # cd /sbin/init.d
       
       # ln -s ../init.d/afs /sbin/rc2.d/S460afs
      
       # ln -s ../init.d/afs /sbin/rc2.d/K800afs
       
    

  2. (Optional) There are now copies of the AFS initialization file in both the /usr/vice/etc and /sbin/init.d directories. If you want to avoid potential confusion by guaranteeing that they are always the same, create a link between them. You can always retrieve the original script from the AFS CD-ROM if necessary.
       
       # cd /usr/vice/etc
       
       # rm afs.rc
      
       # ln -s  /sbin/init.d/afs  afs.rc
       
    

On IRIX systems:

  1. Change to the /etc/init.d directory and issue the ln -s command to create symbolic links that incorporate the AFS initialization script into the IRIX startup and shutdown sequence.
       
       # cd /etc/init.d
       
       # ln -s ../init.d/afs /etc/rc2.d/S35afs
      
       # ln -s ../init.d/afs /etc/rc0.d/K35afs
       
    

  2. (Optional) There are now copies of the AFS initialization file in both the /usr/vice/etc and /etc/init.d directories. If you want to avoid potential confusion by guaranteeing that they are always the same, create a link between them. You can always retrieve the original script from the AFS CD-ROM if necessary.
       
       # cd /usr/vice/etc
       
       # rm afs.rc
      
       # ln -s  /etc/init.d/afs  afs.rc
       
    

On Linux systems:

  1. Issue the chkconfig command to activate the afs configuration variable. Based on the instruction in the AFS initialization file that begins with the string #chkconfig, the command automatically creates the symbolic links that incorporate the script into the Linux startup and shutdown sequence.
       
       # /sbin/chkconfig  --add afs
       
    

  2. (Optional) There are now copies of the AFS initialization file in both the /usr/vice/etc and /etc/rc.d/init.d directories, and copies of the afsd options file in both the /usr/vice/etc and /etc/sysconfig directories. If you want to avoid potential confusion by guaranteeing that the two copies of each file are always the same, create a link between them. You can always retrieve the original script or options file from the AFS CD-ROM if necessary.
       
       # cd /usr/vice/etc
       
       # rm afs.rc afs.conf
        
       # ln -s  /etc/rc.d/init.d/afs  afs.rc
       
       # ln -s  /etc/sysconfig/afs  afs.conf
       
    

On Solaris systems:

  1. Change to the /etc/init.d directory and issue the ln -s command to create symbolic links that incorporate the AFS initialization script into the Solaris startup and shutdown sequence.
       
       # cd /etc/init.d
      
       # ln -s ../init.d/afs /etc/rc3.d/S99afs
      
       # ln -s ../init.d/afs /etc/rc0.d/K66afs
       
    

  2. (Optional) There are now copies of the AFS initialization file in both the /usr/vice/etc and /etc/init.d directories. If you want to avoid potential confusion by guaranteeing that they are always the same, create a link between them. You can always retrieve the original script from the AFS CD-ROM if necessary.
       
       # cd /usr/vice/etc
       
       # rm afs.rc
      
       # ln -s  /etc/init.d/afs  afs.rc
       
    

Configuring the Top Levels of the AFS Filespace

If you have not previously run AFS in your cell, configure the top levels of your cell's AFS filespace. If you have run a previous version of AFS, the filespace is already configured. Proceed to Storing AFS Binaries in AFS.

You created the root.afs volume in Starting the File Server, Volume Server, and Salvager, and the Cache Manager mounted it automatically on the local /afs directory when you ran the AFS initialization script in Verifying the AFS Initialization Script. You now set the access control list (ACL) on the /afs directory: creating, mounting, and setting the ACL are the three steps required when creating any volume. The default ACL on a new volume grants all seven access rights to the system:administrators group. Add an entry that grants the l (lookup) and r (read) permissions to the system:anyuser group, to enable all users to traverse through the /afs directory. If you want to restrict access to your cell's filespace to locally authenticated users only, substitute the system:authuser group for the system:anyuser group.

After setting the ACL on the root.afs volume, you create your cell's root.cell volume, mount it as a subdirectory of the /afs directory, and set the ACL. Create both a ReadWrite and a regular mount point for the root.cell volume. The ReadWrite mount point enables you to access the ReadWrite version of replicated volumes when necessary. Creating both mount points essentially creates separate read-only and read-write copies of your filespace, and enables the Cache Manager to traverse the filespace on a ReadOnly path or ReadWrite path as appropriate. For further discussion of these concepts, see the chapter in the AFS System Administrator's Guide about administering volumes.

Then replicate both the root.afs and root.cell volumes. This is required if you want to replicate any other volumes in your cell, because all volumes mounted above a replicated volume must themselves be replicated in order for the Cache Manager to access the replica.

When the root.afs volume is replicated, the Cache Manager is programmed to access its ReadOnly version (root.afs.readonly) whenever possible. To make changes to the contents of the root.afs volume (when, for example, you mount another cell's root.cell volume at the second level in your file tree), you must mount the root.afs volume temporarily, make the changes, release the volume and remove the temporary mount point. For instructions, see Enabling Access to Foreign Cells.

  1. Issue the fs setacl command to add an entry to the ACL for the /afs directory that grants the l and r permissions to the system:anyuser group.
       
       # /usr/afs/bin/fs setacl /afs system:anyuser rl
       
    

  2. Issue the vos create command to create the root.cell volume and the fs mkmount command to mount it as a subdirectory of the /afs directory, where it serves as the root of your cell's AFS filespace. Issue the fs setacl command to grant the l and r permissions to the system:anyuser group on its ACL.

    For the partition name argument, substitute the name of one of the machine's AFS server partitions (such as /vicepa). For the cellname argument, substitute your cell's fully-qualified Internet domain name (such as abc.com).

       
       # /usr/afs/bin/vos create  <machine name> <partition name> root.cell 
       
       # /usr/afs/bin/fs mkmount /afs/cellname  root.cell
       
       # /usr/afs/bin/fs setacl /afs/cellname  system:anyuser rl
       
    

  3. (Optional) Create a link to a shortened cell name, to reduce the length of pathnames for users in the local cell. For example, in the abc.com cell the directory /afs/abc is a link to /afs/abc.com.
         
       # cd /afs
       
       # ln -s full_cellname    short_cellname
       
    

  4. Issue the fs mkmount command to create a ReadWrite mount point for the root.cell volume (you created a regular mount point in Step 2).

    By convention, the name of a ReadWrite mount point begins with a period, both to distinguish it from the regular mount point and to make it visible only when the -a flag is used on the ls command.

    Change directory to /usr/afs/bin to make it easier to access the command binaries.

       
       # cd /usr/afs/bin
       
       # ./fs mkmount   /afs/.cellname   root.cell -rw
       
    

  5. Issue the vos addsite command to define a replication site for both the root.afs and root.cell volumes. In each case, substitute for the partition name argument the partition where the volume's ReadWrite version resides. When you install additional file server machines, it is a good idea to create replication sites on them as well.
       
       # ./vos addsite <machine name> <partition name> root.afs
       
       # ./vos addsite <machine name> <partition name> root.cell
       
    

  6. Issue the fs examine command to verify that the Cache Manager can access both the root.afs and root.cell volumes, before you attempt to replicate them. The output lists each volume's name, volume ID number, quota, size, and the size of the partition that houses it. If you get an error message instead, do not continue before taking corrective action.
     
       # ./fs examine /afs
       
       # ./fs examine /afs/cellname
       
    

  7. Issue the vos release command to release a replica of the root.afs and root.cell volumes to the sites you defined in Step 5.
       
       # ./vos release root.afs
       
       # ./vos release root.cell
       
    

  8. Issue the fs checkvolumes command to force the Cache Manager to notice that you have released ReadOnly versions of the volumes, then issue the fs examine command again. This time its output mentions the ReadOnly version of the volumes (root.afs.readonly and root.cell.readonly) instead of the ReadWrite versions, because of the Cache Manager's bias to access the ReadOnly version of the root.afs volume if it exists.
       
       # ./fs checkvolumes
       
       # ./fs examine /afs
       
       # ./fs examine /afs/cellname
       
    

Storing AFS Binaries in AFS

In the conventional configuration, you make AFS client binaries and configuration files available in the subdirectories of the /usr/afsws directory on client machines (afsws is an acronym for AFS workstation). You can conserve local disk space by creating /usr/afsws as a link to an AFS volume that houses the AFS client binaries and configuration files for this system type.

In this section you create the necessary volumes. The conventional location to which to link /usr/afsws is /afs/cellname/sysname/usr/afsws, where sysname is the appropriate system type name as specified in the AFS Release Notes. The instructions in Installing Additional Client Machines assume that you have followed the instructions in this section.

As you install client machines of different system types, it is appropriate to create new volumes and directories for each type. Instructions for creating volumes as you install a client machine of a new system type appear in Installing Additional Client Machines.

If you have previously run AFS in the cell, the volumes may already exist. If so, you need to perform only Step 8.

The current working directory is still /usr/afs/bin, which houses the fs and vos command suite binaries. Depending on how your PATH environment variable is set, you may still need to specify the complete pathname to each command in the following instructions.

  1. Issue the vos create command to create volumes for storing the AFS client binaries for this system type. The following example commands create volumes called sysname, sysname.usr, and sysname.usr.afsws. Refer to the AFS Release Notes to learn the proper value of sysname for this system type.
        
       # vos create <machine name> <partition name> sysname
         
       # vos create <machine name> <partition name> sysname.usr
         
       # vos create <machine name> <partition name> sysname.usr.afsws
       
       
    

  2. Issue the fs mkmount command to mount the newly created volumes. Because the root.cell volume is replicated, you must precede the cellname part of the pathname with a period to specify the ReadWrite mount point, as shown. Then issue the vos release command to release a new replica of the root.cell volume, and the fs checkvolumes command to force the local Cache Manager to access them.
       
       # fs mkmount -dir /afs/.cellname/sysname -vol sysname
       
       # fs mkmount -dir /afs/.cellname/sysname/usr  -vol sysname.usr
       
       # fs mkmount -dir /afs/.cellname/sysname/usr/afsws -vol sysname.usr.afsws
       
       # vos release root.cell
       
       # fs checkvolumes
       
    

  3. Issue the fs setacl command to grant the l (lookup) and r (read) permissions to the system:anyuser group on each new directory's ACL.
       
       # cd /afs/.cellname/sysname
       
       # fs setacl  -dir  .  usr  usr/afsws  -acl  system:anyuser rl 
       
    

  4. Issue the fs setquota command to set an unlimited quota on the volume mounted at the /afs/cellname/sysname/usr/afsws directory. This enables you to copy all of the appropriate files from the CD-ROM into the volume without exceeding the volume's quota.

    If you wish, you can set the volume's quota to a finite value after you complete the copying operation. At that point, use the vos examine command to determine how much space the volume is occupying. Then issue the fs setquota command to set a quota value that is slightly larger.

       
       # fs setquota /afs/.cellname/sysname/usr/afsws  0
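
    If you later decide to replace the unlimited quota with a finite value as described above, a minimal sketch of the sequence follows; the 60000 kilobyte-block quota is purely illustrative:

       # vos examine sysname.usr.afsws

       # fs setquota /afs/.cellname/sysname/usr/afsws 60000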
       
    

  5. Mount the AFS CD-ROM for this machine's system type that is labeled International Edition on the local /cdrom directory, if it is not already mounted. For instructions on mounting CD-ROMs (either locally or remotely via NFS), consult the operating system documentation.

  6. Copy the contents of the indicated directories from the CD-ROM into the /afs/cellname/sysname/usr/afsws directory.
       
       # cd /afs/.cellname/sysname/usr/afsws
       
       # cp -rp /cdrom/sysname/bin  .
       
       # cp -rp /cdrom/sysname/etc  .
       
       # cp -rp /cdrom/sysname/include  .
       
       # cp -rp /cdrom/sysname/lib  .
       
    

  7. Issue the fs setacl command to set the ACL on each directory appropriately.

    To comply with the terms of your AFS License agreement, you must prevent unauthorized users from accessing AFS software. To enable access for locally authenticated users only, set the ACL on the etc, include, and lib subdirectories to grant the l and r permissions to the system:authuser group rather than the system:anyuser group. The system:anyuser group must retain the l and r permissions on the bin subdirectory to enable unauthenticated users to access the klog binary.

    To ensure that unauthorized users are not accessing AFS software, check periodically that the ACLs on these directories are set properly. (The fs listacl command, shown after the following example, displays a directory's current ACL.)

    The command appears on multiple lines only for legibility.

         
       # cd /afs/.cellname/sysname/usr/afsws
       
       # fs setacl   -dir  etc  include  lib  -acl  system:authuser rl  \
                  system:anyuser none
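
    To display a directory's current ACL when you check it later, use the fs listacl command; for example:

       # fs listacl /afs/cellname/sysname/usr/afsws/etc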
       
    

  8. Create /usr/afsws on the local disk as a symbolic link to the directory /afs/cellname/@sys/usr/afsws. You can specify the actual system name instead of @sys if you wish, but the advantage of using @sys is that it remains valid if you upgrade this machine to a different system type.
       
       # ln -s /afs/cellname/@sys/usr/afsws  /usr/afsws
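
    If you want to confirm the value the Cache Manager on this machine substitutes for @sys, and your AFS version includes the fs sysname command, issue it with no arguments (the client must be running):

       # fs sysname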
       
    

    To enable users to issue commands from the AFS suites (such as fs) without having to specify a pathname to their binaries, include the /usr/afsws/bin and /usr/afsws/etc directories in the PATH environment variable you define in each user's shell initialization file (such as .cshrc).
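
    For example, for users whose login shell is the C shell, a line like the following in the .cshrc file adds the directories; adapt the syntax for other shells:

       set path = ($path /usr/afsws/bin /usr/afsws/etc)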


Storing AFS Documents in AFS

The AFS distribution includes documentation in two formats: HTML, for online viewing, and PostScript, for printing.

The AFS CD-ROM for each system type that is labeled International Edition includes the following AFS documentation in the top-level Documentation directory:

This section explains how to create and mount a volume to house the HTML version of the documents, making them available for online viewing by your users. The recommended mount point for the volume is /afs/cellname/afsdoc. If you wish, you can create a link to the mount point on each client machine's local disk, called /usr/afsdoc. Alternatively, you can create a link to the mount point in each user's home directory. You can also choose to permit users to access only certain documents (most probably, the AFS User's Guide) by creating different mount points or setting different ACLs on different document directories.

This section also includes optional instructions for storing the PostScript version of the documents in AFS.

The current working directory is still /usr/afs/bin, which houses the fs and vos command suite binaries you use to create and mount volumes. Depending on how your PATH environment variable is set, you may still need to specify the complete pathname to each command in the following instructions.

  1. Issue the vos create command to create a volume for storing the AFS documentation.
       
       # vos create <machine name> <partition name>  afsdoc 
         
    

  2. Issue the fs mkmount command to mount the new volume. Because the root.cell volume is replicated, you must precede the cellname with a period to specify the ReadWrite mount point, as shown. Then issue the vos release command to release a new replica of the root.cell volume, and the fs checkvolumes command to force the local Cache Manager to access them.
         
       # fs mkmount -dir /afs/.cellname/afsdoc -vol afsdoc
       
       # vos release root.cell
       
       # fs checkvolumes
        
    

  3. Issue the fs setacl command to grant the rl permissions to the system:anyuser group on the new directory's ACL.
           
       # cd /afs/.cellname/afsdoc 
        
       # fs setacl  .  system:anyuser rl 
       
    

  4. Issue the fs setquota command to set an unlimited quota on the volume. This enables you to copy all of the appropriate files from the CD-ROM into the volume without exceeding the volume's quota.

    If you wish, you can set the volume's quota to a finite value after you complete the copying operations. At that point, use the vos examine command to determine how much space the volume is occupying. Then issue the fs setquota command to set a quota value that is slightly larger.

       
       # fs setquota /afs/.cellname/afsdoc 0
       
    

  5. Mount an AFS CD-ROM labeled International Edition on the local /cdrom directory, if one is not already mounted. For instructions on mounting CD-ROMs (either locally or remotely via NFS), consult the operating system documentation.

  6. Copy the HTML version of the AFS documents from the CD-ROM into the /afs/cellname/afsdoc directory.

    In addition to a subdirectory for each document, there are several files in the afsdoc directory with a .gif extension, which enable readers to move easily between sections of a document. The file called index.htm is an introductory HTML page that contains a hyperlink to each of the documents. For online viewing to work properly, these files must remain in the /afs/cellname/afsdoc directory.

        
       # cp -rp /cdrom/afsdoc/Html  .
          
    

  7. (Optional) Copy the PostScript versions of the AFS documents to the PostScript subdirectory.
             
       # cp -rp /cdrom/afsdoc/PostScript  .
          
    

  8. (Optional) If you believe it is helpful to your users to access the HTML version of the AFS documents via a local disk directory, create /usr/afsdoc on the local disk as a symbolic link to the /afs/cellname/afsdoc directory you created earlier in this section.
       
       # ln -s /afs/cellname/afsdoc  /usr/afsdoc
    

    An alternative is to create a link in each user's home directory to the /afs/cellname/afsdoc mount point.
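
    For example, for a user whose home directory is represented here by the placeholder homedir, the link can be created as follows:

       # ln -s /afs/cellname/afsdoc homedir/afsdoc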


Storing System Binaries in AFS

You can also choose to store other system binaries in AFS volumes, such as the standard UNIX programs conventionally located in local disk directories such as /etc, /bin, and /lib. Storing such binaries in an AFS volume not only frees local disk space, but makes it easier to update binaries on all client machines.

The following is a suggested scheme for storing system binaries in AFS. It does not include instructions, but you can use the instructions in Storing AFS Binaries in AFS (which are for AFS-specific binaries) as a template.

Some files must remain on the local disk for use when AFS is inaccessible (during bootup and file server or network outages). The required binaries include the following:

In most cases, it is more secure to enable only locally authenticated users to access system binaries, by granting the l (lookup) and r (read) permissions to the system:authuser group on the ACLs of directories that contain the binaries. If users need to access a binary while unauthenticated, however, the ACL on its directory must grant those permissions to the system:anyuser group.

The following chart summarizes the suggested volume and mount point names for storing system binaries. It uses a separate volume for each directory. You already created a volume called sysname for this machine's system type when you followed the instructions in Storing AFS Binaries in AFS.

You can name the volumes anything you wish and mount them at locations other than those suggested here. However, the suggested scheme has several advantages:


Volume Name          Mount Point
sysname              /afs/cellname/sysname
sysname.bin          /afs/cellname/sysname/bin
sysname.etc          /afs/cellname/sysname/etc
sysname.usr          /afs/cellname/sysname/usr
sysname.usr.afsws    /afs/cellname/sysname/usr/afsws
sysname.usr.bin      /afs/cellname/sysname/usr/bin
sysname.usr.etc      /afs/cellname/sysname/usr/etc
sysname.usr.inc      /afs/cellname/sysname/usr/include
sysname.usr.lib      /afs/cellname/sysname/usr/lib
sysname.usr.loc      /afs/cellname/sysname/usr/local
sysname.usr.man      /afs/cellname/sysname/usr/man
sysname.usr.sys      /afs/cellname/sysname/usr/sys
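
As a rough sketch, creating and mounting one of the volumes in the chart (sysname.bin) with the pattern described in Storing AFS Binaries in AFS might look like the following; the machine and partition names are placeholders, and the ACL follows the system:authuser recommendation above:

       # vos create <machine name> <partition name> sysname.bin

       # fs mkmount -dir /afs/.cellname/sysname/bin -vol sysname.bin

       # fs setacl -dir /afs/.cellname/sysname/bin -acl system:authuser rl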

Enabling Access to Foreign Cells

In this section you create a mount point in your AFS filespace for the root.cell volume of each foreign cell that you want to enable your users to access. For users working on a client machine to access the cell, there must in addition be an entry for it in the client machine's local /usr/vice/etc/CellServDB file. The file you created in Creating the Client CellServDB File lists all of the cells that had agreed to participate in the AFS global namespace at the time your AFS distribution CD-ROM was created. As mentioned in that section, the AFS Product Support group also maintains a copy of the file, updating it as necessary.

The chapter in the AFS System Administrator's Guide about cell administration and configuration issues discusses the implications of participating in the global AFS namespace. The chapter about administering client machines explains how to maintain knowledge of foreign cells on client machines, and includes suggestions for maintaining a central version of the file in AFS.

  1. Issue the fs mkmount command to mount each foreign cell's root.cell volume on a directory called /afs/foreign_cell. Because the root.afs volume is replicated, you must create a temporary mount point for its ReadWrite version in a directory to which you have write access (such as your cell's /afs/.cellname directory). Create the mount points, issue the vos release command to release new replicas to the ReadOnly sites for the root.afs volume, and issue the fs checkvolumes command to force the local Cache Manager to access the new replica.
    Note: You need to issue the fs mkmount command only once for each foreign cell's root.cell volume. You do not need to repeat the command on each client machine.

    Substitute your cell's name for cellname.

       
       # cd /afs/.cellname
       
       # /usr/afs/bin/fs  mkmount  temp  root.afs
       
    

    Repeat the fs mkmount command for each foreign cell you wish to mount at this time.

       
       # /usr/afs/bin/fs mkmount temp/foreign_cell root.cell -c foreign_cell
       
    

    Issue the following commands only once.

         
       # /usr/afs/bin/fs rmmount temp
       
       # /usr/afs/bin/vos release root.afs
       
       # /usr/afs/bin/fs checkvolumes
       
    

  2. If this machine is going to remain an AFS client after you complete the installation, verify that the local /usr/vice/etc/CellServDB file includes an entry for each foreign cell.

    For each cell that does not already have an entry, complete the following instructions:

    1. Create an entry in the CellServDB file. Be sure to comply with the formatting instructions in Creating the Client CellServDB File.
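
       For reference, a client CellServDB entry has the general form shown below; the cell name, IP addresses, and hostnames are purely illustrative, and Creating the Client CellServDB File gives the authoritative formatting rules:

          >foreign.example.com        #Example Organization
          192.0.2.10                  #db1.foreign.example.com
          192.0.2.11                  #db2.foreign.example.com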

    2. Issue the fs newcell command to add an entry for the cell directly to the list that the Cache Manager maintains in kernel memory. Provide each database server machine's fully qualified hostname.
         
         # /usr/afs/bin/fs newcell <foreign_cell> <dbserver1> [<dbserver2>]   \
                  [<dbserver3>]
         
      

    3. If you plan to maintain a central version of the CellServDB file (the conventional location is /afs/cellname/common/etc/CellServDB), create it now as a copy of the local /usr/vice/etc/CellServDB file. Verify that it includes an entry for each foreign cell you want your users to be able to access.
         
         # mkdir common
         
         # mkdir common/etc
         
         # cp  /usr/vice/etc/CellServDB  common/etc
         
      

  3. Issue the ls command to verify that the new cell's mount point is visible in your filespace. The output lists the directories at the top level of the new cell's AFS filespace.
       
       # ls /afs/foreign_cell
       
    

  4. Please register your cell with the AFS Product Support group at this time. If you do not want to participate in the global AFS namespace, they list your cell in a private CellServDB file that is not available to other AFS cells.

Improving Cell Security

This section discusses ways to improve the security of AFS data in your cell. Also see the chapter in the AFS System Administrator's Guide about configuration and administration issues.

Controlling root Access

As on any machine, it is important to prevent unauthorized users from logging onto an AFS server or client machine as the local superuser root. Take care to keep the root password secret.

The local root superuser does not have special access to AFS data (as members of the system:administrators group do), but it does have the following privileges:

Controlling System Administrator Access

Following are suggestions for managing AFS administrative privilege:

Protecting Sensitive AFS Directories

Some subdirectories of the /usr/afs directory contain files crucial to cell security. Unauthorized users must not be able to read or write these files, because of the potential for misuse of the information they contain.

As the BOS Server initializes for the first time on a server machine, it creates several files and directories (see Starting the BOS Server). It sets their owner to the local superuser root and sets their mode bits to enable writing by the owner only; in some cases, it also restricts reading.

At each subsequent restart, the BOS Server checks that the owner and mode bits on these files and directories are still set appropriately. If they are not, it writes the following message to the /usr/afs/logs/BosLog file:

Bosserver reports inappropriate access on server directories
   

The BOS Server does not reset the mode bits, which enables you to set alternate values if you wish.

The following chart lists the expected mode bit settings for each directory and file. A question mark indicates that the BOS Server does not check that mode bit.
Directory or File          Expected Mode Bits
/usr/afs                   drwxr?xr-x
/usr/afs/backup            drwx???---
/usr/afs/bin               drwxr?xr-x
/usr/afs/db                drwx???---
/usr/afs/etc               drwxr?xr-x
/usr/afs/etc/KeyFile       -rw????---
/usr/afs/etc/UserList      -rw?????--
/usr/afs/local             drwx???---
/usr/afs/logs              drwxr?xr-x
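
If the BosLog message appears, inspect the protections with the ls -ld command and reset them with the chmod command. For example, the following commands check the KeyFile and apply mode 600, one setting that satisfies the expected -rw????--- pattern:

       # ls -ld /usr/afs/etc/KeyFile

       # chmod 600 /usr/afs/etc/KeyFile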


Removing Client Functionality

Follow the instructions in this section only if you do not wish this machine to remain an AFS client. Removing client functionality will make the machine unable to access files in AFS.

  1. Remove the files from the /usr/vice/etc directory. The rm command does not remove the subdirectory that holds files used by the dynamic kernel loader program, if one exists on this system type; those files are still needed on a server-only machine.
        
       # cd /usr/vice/etc
       
       # rm  * 
       
    

  2. In the /usr/vice/etc directory, create symbolic links to the ThisCell and CellServDB files in the /usr/afs/etc directory. This makes it possible to issue commands from the AFS command suites (such as bos and fs) on this machine.
         
       # ln -s /usr/afs/etc/ThisCell ThisCell
       
       # ln -s /usr/afs/etc/CellServDB CellServDB
       
    

  3. On IRIX systems, issue the chkconfig command to deactivate the afsclient configuration variable.
       
       # /etc/chkconfig -f afsclient off
       
    

  4. Reboot the machine. Most system types use the shutdown command, but the appropriate options vary.
       
       # shutdown appropriate_options
       
    

[Return to Library] [Contents] [Previous Topic] [Top of Topic] [Next Topic] [Index]



© IBM Corporation 1999. All Rights Reserved