Installation Guide




Installing Additional Server Machines

Instructions for the following procedures appear in the indicated section of this chapter.

The instructions make the following assumptions.


Installing an Additional File Server Machine

The procedure for installing a new file server machine is similar to installing the first file server machine in your cell. There are a few parts of the installation that differ depending on whether the machine is the same AFS system type as an existing file server machine, or is the first file server machine of its system type in your cell. The differences mostly concern the source for the needed binaries and files, and what portions of the Update Server you install:

These instructions are brief; for more detailed information, refer to the corresponding steps in Installing the First AFS Machine.

To install a new file server machine, perform the following procedures:

  1. Copy needed binaries and files onto this machine's local disk

  2. Incorporate AFS modifications into the kernel

  3. Configure partitions for storing volumes

  4. Replace the standard fsck utility with the AFS-modified version on some system types

  5. Start the Basic OverSeer (BOS) Server

  6. Start the appropriate portion of the Update Server

  7. Start the fs process, which incorporates three component processes: the File Server, Volume Server, and Salvager

  8. Start the controller process (called runntp) for the Network Time Protocol Daemon, which synchronizes clocks

After completing the instructions in this section, you can install database server functionality on the machine according to the instructions in Installing Database Server Functionality.

Creating AFS Directories and Beginning with Platform-Specific Tasks

Create the /usr/afs and /usr/vice/etc directories on the local disk. Subsequent instructions copy files from the AFS distribution CD-ROM into them, at the appropriate point for each system type.

      
   # mkdir /usr/afs
      
   # mkdir /usr/afs/bin
      
   # mkdir /usr/vice
      
   # mkdir /usr/vice/etc
   
   # mkdir /cdrom
     

As on the first file server machine, three of the initial procedures in installing an additional file server machine vary a good deal from platform to platform. For convenience, the following sections group together all three of the procedures for a system type. Most of the remaining procedures are the same on every system type, but differences are noted as appropriate. The three initial procedures are the following.

To continue, proceed to the section for this system type:

Getting Started on AIX Systems

Begin by running the AFS initialization script to call the AIX kernel extension facility, which dynamically loads AFS modifications into the kernel. Then configure partitions and replace the AIX fsck program with a version that correctly handles AFS volumes.

  1. Mount the AFS CD-ROM labeled AFS for AIX, International Edition on the local /cdrom directory. For instructions on mounting CD-ROMs (either locally or remotely via NFS), see your AIX documentation.
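
     If the CD-ROM drive is the machine's first one (by convention /dev/cd0), a command like the following is probably appropriate; the device name is an assumption, so consult the AIX documentation for your configuration.

        # mount -v cdrfs -o ro /dev/cd0 /cdrom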

  2. Copy the AFS kernel library files from the CD-ROM to the local /usr/vice/etc/dkload directory, and the AFS initialization script to the /etc directory.
       # cd  /cdrom/rs_aix42/root.client/usr/vice/etc
       
       # cp -rp  dkload  /usr/vice/etc
       
       # cp -p  rc.afs  /etc/rc.afs
        
    

  3. Edit the /etc/rc.afs script, setting the NFS variable as indicated.
    Note:For the machine to function as an NFS/AFS translator, NFS must already be loaded into the kernel. It is loaded automatically on systems running AIX 4.1.1 and later, as long as the file /etc/exports exists.

  4. Invoke the /etc/rc.afs script to load AFS modifications into the kernel. You can ignore any error messages about the inability to start the BOS Server or the AFS client.
       #    /etc/rc.afs   
    

  5. Create a directory called /vicepxx for each AFS server partition you are configuring (there must be at least one). Repeat the command for each partition.
       
       # mkdir /vicepxx
       
    

  6. Use the SMIT program to create a journaling file system on each partition to be configured as an AFS server partition.
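
     If you prefer the command line to SMIT, a crfs command along the following lines probably creates a suitable journaling file system and records it in /etc/filesystems; the volume group name (rootvg) and the size (in 512-byte blocks, about 1 GB here) are placeholders only.

        # crfs -v jfs -g rootvg -a size=2097152 -m /vicepxx -A yes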

  7. Mount each partition at one of the /vicepxx directories. Choose one of the following three methods:

    Also configure the partitions so that they are mounted automatically at each reboot. For more information, refer to the AIX documentation.
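
     For example, if SMIT (or a crfs command with the -A yes flag) created a stanza for the file system in /etc/filesystems, one method is simply to name the mount point; the stanza's mount attribute then also controls whether the partition is mounted automatically at reboot.

        # mount /vicepxx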

  8. Add the following line to the /etc/vfs file. It enables the Cache Manager to unmount AFS correctly during shutdown.
       
         afs     4     none     none
       
    

  9. Move the AIX fsck program helper to a safe location and install the version from the AFS distribution in its place. The AFS CD-ROM must still be mounted at the /cdrom directory.
       
       # cd /sbin/helpers
       
       # mv v3fshelper v3fshelper.noafs
       
       # cp -p /cdrom/rs_aix42/root.server/etc/v3fshelper v3fshelper
       
     
    

  10. Proceed to Starting Server Programs.

Getting Started on Digital UNIX Systems

Begin by building AFS modifications into the kernel, then configure server partitions and replace the Digital UNIX fsck program with a version that correctly handles AFS volumes.

If the machine's hardware and software configuration exactly matches another Digital UNIX machine on which AFS is already built into the kernel, you can copy the kernel from that machine to this one. In general, however, it is better to build AFS modifications into the kernel on each machine according to the following instructions.

  1. Create a copy called AFS of the basic kernel configuration file included in the Digital UNIX distribution as /usr/sys/conf/machine_name, where machine_name is the machine's hostname in all uppercase letters.
       # cd /usr/sys/conf
       
       # cp machine_name AFS
       
    

  2. Add AFS to the list of options in the configuration file you created in the previous step, so that the result looks like the following:
              .                   .
              .                   .
           options               UFS
           options               NFS
           options               AFS
              .                   .
              .                   .
       
    

  3. Add an entry for AFS to two places in the /usr/sys/conf/files file.

  4. Add an entry for AFS to two places in the /usr/sys/vfs/vfs_conf.c file.

  5. Mount the AFS CD-ROM labeled AFS for Digital UNIX, International Edition on the local /cdrom directory. For instructions on mounting CD-ROMs (either locally or remotely via NFS), see your Digital UNIX documentation.

  6. Copy the AFS initialization file from the distribution directory to the local directory for initialization files on Digital UNIX machines, /sbin/init.d by convention. Note the removal of the .rc extension as you copy the file.
       
       # cd /cdrom/alpha_dux40/root.client
       
       # cp usr/vice/etc/afs.rc  /sbin/init.d/afs
       
    

  7. Copy the AFS kernel module from the distribution directory to the local /usr/sys/BINARY directory.

    If the machine's kernel supports NFS server functionality:

      
       # cp bin/libafs.o /usr/sys/BINARY/afs.mod
       
    

    If the machine's kernel does not support NFS server functionality:

      
       # cp bin/libafs.nonfs.o /usr/sys/BINARY/afs.mod
       
    

  8. Configure and build the kernel. Respond to any prompts by pressing <Return>. The resulting kernel resides in the file /sys/AFS/vmunix.
       
       # doconfig -c AFS
       
    

  9. Rename the existing kernel file and copy the new, AFS-modified file to the standard location.
       
       # mv /vmunix /vmunix_save
       
       # cp /sys/AFS/vmunix /vmunix
       
    

  10. Reboot the machine to start using the new kernel.
       
       # shutdown -r now
       
    

  11. Create a directory called /vicepxx for each AFS server partition you are configuring (there must be at least one). Repeat the command for each partition.
       
       # mkdir /vicepxx
       
    

  12. Add a line with the following format to the file systems registry file, /etc/fstab, for each directory just created. The entry maps the directory name to the disk partition to be mounted on it.
       
        /dev/disk /vicepxx ufs rw 0 2
    

    For example,

       
       /dev/rz3a /vicepa ufs rw 0 2
       
    

  13. Create a file system on each partition that is to be mounted at a /vicep directory. The following command is probably appropriate, but consult the Digital UNIX documentation for more information.
       
       # newfs -v /dev/disk
       
    

  14. Mount each partition by issuing either the mount -a command to mount all partitions at once or the mount command to mount each partition in turn.

  15. Move the Digital UNIX fsck binaries to a safe location, install the version from the AFS distribution (the vfsck binary), and link the Digital UNIX program names to it. The AFS CD-ROM must still be mounted at the /cdrom directory.
       
       # mv /sbin/ufs_fsck /sbin/ufs_fsck.noafs
       
       # mv /usr/sbin/ufs_fsck /usr/sbin/ufs_fsck.noafs
       
       # cd /cdrom/alpha_dux40/root.server/etc
       
       # cp vfsck /sbin/vfsck
       
       # cp vfsck /usr/sbin/vfsck
       
       # ln -s /sbin/vfsck /sbin/ufs_fsck
       
       # ln -s /usr/sbin/vfsck /usr/sbin/ufs_fsck
       
    

  16. Proceed to Starting Server Programs.

Getting Started on HP-UX Systems

Begin by building AFS modifications into the kernel, then configure server partitions and replace the HP-UX fsck program with a version that correctly handles AFS volumes.

If the machine's hardware and software configuration exactly matches another HP-UX machine on which AFS is already built into the kernel, you can copy the kernel from that machine to this one. In general, however, it is better to build AFS modifications into the kernel on each machine according to the following instructions.

  1. Move the existing kernel-related files to a safe location.
       
       # cp /stand/vmunix /stand/vmunix.noafs
       
       # cp /stand/system /stand/system.noafs
       
    

  2. Mount the AFS CD-ROM labeled AFS for HP-UX, International Edition on the local /cdrom directory. For instructions on mounting CD-ROMs (either locally or remotely via NFS), see your HP-UX documentation.

  3. Copy the AFS initialization file from the AFS CD-ROM to the local directory for initialization files on HP-UX machines, /sbin/init.d by convention. Note the removal of the .rc extension as you copy the file.
       
       # cd /cdrom/hp_ux110/root.client
       
       # cp usr/vice/etc/afs.rc  /sbin/init.d/afs
       
    

  4. Copy the file afs.driver from the AFS CD-ROM to the local /usr/conf/master.d directory, changing its name to afs as you do so.
         
       # cp  usr/vice/etc/afs.driver  /usr/conf/master.d/afs
       
    

  5. Copy the AFS kernel module from the AFS CD-ROM to the local /usr/conf/lib directory.

    If the machine's kernel supports NFS server functionality:

       
       # cp bin/libafs.a /usr/conf/lib
       
    

    If the machine's kernel does not support NFS server functionality:

       
       # cp bin/libafs.nonfs.a /usr/conf/lib
       
    

  6. Incorporate the AFS driver into the kernel, either using the SAM program or a series of individual commands.

  7. Create a directory called /vicepxx for each AFS server partition you are configuring (there must be at least one). Repeat the command for each partition.
       
       # mkdir /vicepxx
       
    

  8. Use the SAM program to create a file system on each partition. For instructions, consult the HP-UX documentation.

  9. On some HP-UX systems that use logical volumes, the SAM program automatically mounts the partitions. If it has not, mount each partition by issuing either the mount -a command to mount all partitions at once or the mount command to mount each partition in turn.

  10. Create the command configuration file /sbin/lib/mfsconfig.d/afs. Use a text editor to place the indicated two lines in it:
       
       format_revision 1
       fsck            0        m,P,p,d,f,b:c:y,n,Y,N,q,
       
    

  11. Create an AFS-specific command directory called /sbin/fs/afs.
       
       # mkdir /sbin/fs/afs
       
    

  12. Copy the AFS-modified version of the fsck program (the vfsck binary) and related files from the distribution directory to the new AFS-specific command directory. Change the vfsck binary's name to fsck.
       
       # cd  /cdrom/hp_ux110/root.server/etc
       
       # cp -p  *  /sbin/fs/afs
       
       # mv  vfsck  fsck
       
    

  13. Set the mode bits appropriately on all of the files in the /sbin/fs/afs directory.
       
       # cd  /sbin/fs/afs
       
       # chmod  755  *
       
    

  14. Edit the /etc/fstab file, changing the file system type for each AFS server (/vicep) partition from hfs to afs. This ensures that the AFS-modified fsck program runs on the appropriate partitions.

    The sixth line in the following example of an edited file shows an AFS server partition, /vicepa.

       
       /dev/vg00/lvol1 / hfs defaults 0 1
       /dev/vg00/lvol4 /opt hfs defaults 0 2
       /dev/vg00/lvol5 /tmp hfs defaults 0 2
       /dev/vg00/lvol6 /usr hfs defaults 0 2
       /dev/vg00/lvol8 /var hfs defaults 0 2
       /dev/vg00/lvol9 /vicepa afs defaults 0 2
       /dev/vg00/lvol7 /usr/vice/cache hfs defaults 0 2
       
    

  15. Proceed to Starting Server Programs.

Getting Started on IRIX Systems

Begin by incorporating AFS modifications into the kernel. Either use the ml dynamic loader program, or build a static kernel. Then configure partitions to house AFS volumes. AFS supports use of both EFS and XFS partitions for housing AFS volumes. SGI encourages use of XFS partitions.

You do not need to replace the IRIX fsck program, because the version that SGI distributes handles AFS volumes properly.

  1. Incorporate AFS into the kernel, either using the ml program or by building AFS modifications into a static kernel.

  2. Create a directory called /vicepxx for each AFS server partition you are configuring (there must be at least one). Repeat the command for each partition.
       
       # mkdir /vicepxx
       
    

  3. Add a line with the following format to the file systems registry file, /etc/fstab, for each partition (or logical volume created with the XLV volume manager) to be mounted on one of the directories created in the previous step.

    For an XFS partition or logical volume:

       
       /dev/dsk/disk /vicepxx xfs rw,raw=/dev/rdsk/disk 0 0
       
    

    For an EFS partition:

       
       /dev/dsk/disk /vicepxx efs rw,raw=/dev/rdsk/disk 0 0
       
    

    The following are examples of an entry for each file system type:

       
       /dev/dsk/dks0d2s6 /vicepa  xfs rw,raw=/dev/rdsk/dks0d2s6  0 0
       /dev/dsk/dks0d3s1 /vicepa  efs rw,raw=/dev/rdsk/dks0d3s1  0 0
       
    

  4. Create a file system on each partition that is to be mounted on a /vicep directory. The following commands are probably appropriate, but consult the IRIX documentation for more information.

    For XFS file systems, include the indicated options to configure the partition or logical volume with inodes large enough to accommodate special AFS-specific information:

       
       # mkfs -t xfs -i size=512 -l size=4000b device
       
    

    For EFS file systems:

       
       # mkfs -t efs device
       
    

    In both cases, device is a raw device name like /dev/rdsk/dks0d0s0 for a single disk partition or /dev/rxlv/xlv0 for a logical volume.

  5. Mount each partition by issuing either the mount -a command to mount all partitions at once or the mount command to mount each partition in turn.

  6. Proceed to Starting Server Programs.

Getting Started on Linux Systems

Begin by running the AFS initialization script to call the insmod program, which dynamically loads AFS modifications into the kernel. Then create partitions for storing AFS volumes. You do not need to replace the Linux fsck program.

  1. Mount the AFS CD-ROM labeled AFS for Linux, International Edition on the local /cdrom directory. For instructions on mounting CD-ROMs (either locally or remotely via NFS), see your Linux documentation.

  2. Copy the AFS kernel library files from the CD-ROM to the local /usr/vice/etc/modload directory. The filenames for the libraries have the format libafs-version.o, where version indicates the kernel build level. The string .mp in the version indicates that the file is appropriate for machines running a multiprocessor kernel.
       
       # cd  /cdrom/i386_linux22/root.client/usr/vice/etc
      
       # cp -rp  modload  /usr/vice/etc
       
    

  3. Copy the AFS initialization file from the CD-ROM to the local directory for initialization files on Linux machines, /etc/rc.d/init.d by convention. Note the removal of the .rc extension as you copy the file.
       
       # cp -p   afs.rc  /etc/rc.d/init.d/afs 
        
    

  4. Run the AFS initialization script to load AFS extensions into the kernel. The script invokes the insmod command, automatically determining which kernel library file to use based on the Linux kernel version installed on this machine.

    You can ignore any error messages about the inability to start the BOS Server or Cache Manager.

       
       # /etc/rc.d/init.d/afs  start
       
    

  5. Create a directory called /vicepxx for each AFS server partition you are configuring (there must be at least one). Repeat the command for each partition.
       
       # mkdir /vicepxx
       
    

  6. Add a line with the following format to the file systems registry file, /etc/fstab, for each directory just created. The entry maps the directory name to the disk partition to be mounted on it.
       
        /dev/disk /vicepxx ext2 defaults 0 2
       
    

    For example,

       
       /dev/sda8 /vicepa ext2 defaults 0 2
       
    

  7. Create a file system on each partition that is to be mounted at a /vicep directory. The following command is probably appropriate, but consult the Linux documentation for more information.
       
       # mkfs -v /dev/disk
       
    

  8. Mount each partition by issuing either the mount -a command to mount all partitions at once or the mount command to mount each partition in turn.

  9. Proceed to Starting Server Programs.

Getting Started on Solaris Systems

Begin by running the AFS initialization script to call the modload program, which dynamically loads AFS modifications into the kernel. Then configure partitions and replace the Solaris fsck program with a version that correctly handles AFS volumes.

  1. Mount the AFS CD-ROM labeled AFS for Solaris, International Edition on the /cdrom directory. For instructions on mounting CD-ROMs (either locally or remotely via NFS), see your Solaris documentation.

  2. Copy the AFS initialization file from the CD-ROM to the local directory for initialization files on Solaris machines, /etc/init.d by convention. Note the removal of the .rc extension as you copy the file.
       
       # cd  /cdrom/sun4x_56/root.client/usr/vice/etc
       
       # cp -p  afs.rc  /etc/init.d/afs
       
    

  3. Copy the appropriate AFS kernel library file from the CD-ROM to the local file /kernel/fs/afs.

    If the machine's kernel supports NFS server functionality and the nfsd process is running:

       
       # cp -p modload/libafs.o /kernel/fs/afs
       
    

    If the machine's kernel does not support NFS server functionality or if the nfsd process is not running:

       
       # cp -p modload/libafs.nonfs.o /kernel/fs/afs
       
    

  4. Invoke the AFS initialization script to load AFS modifications into the kernel. It automatically creates an entry for AFS in slot 105 of the local /etc/name_to_sysnum file if necessary, reboots the machine to start using the new version of the file, and runs the modload command. You can ignore any error messages about the inability to start the BOS Server or the AFS client.
          
       # /etc/init.d/afs start
       
    

  5. Create a directory called /vicepxx for each AFS server partition you are configuring (there must be at least one). Repeat the command for each partition.
       
       # mkdir /vicepxx
       
    

  6. Add a line with the following format to the file systems registry file, /etc/vfstab, for each partition to be mounted on a directory created in the previous step.
       
       /dev/dsk/disk   /dev/rdsk/disk   /vicepxx   ufs   boot_order  yes
      
    

    The following is an example.

      
       /dev/dsk/c0t6d0s1 /dev/rdsk/c0t6d0s1 /vicepa ufs 3 yes
      
    

  7. Create a file system on each partition that is to be mounted at a /vicep directory. The following command is probably appropriate, but consult the Solaris documentation for more information.
      
       # newfs -v /dev/rdsk/disk
      
    

  8. Issue the mountall command to mount all partitions at once.

  9. Create the /usr/lib/fs/afs directory to house AFS library files.
      
       # mkdir /usr/lib/fs/afs
      
    

  10. Copy the AFS-modified fsck program (vfsck) from the CD-ROM distribution directory to the newly created directory.
      
       # cd /cdrom/sun4x_56/root.server/etc
      
       # cp vfsck /usr/lib/fs/afs/fsck
      
    

  11. Working in the /usr/lib/fs/afs directory, create the following links to Solaris libraries:
      
       # cd /usr/lib/fs/afs	
       # ln -s /usr/lib/fs/ufs/clri	
       # ln -s /usr/lib/fs/ufs/df
       # ln -s /usr/lib/fs/ufs/edquota
       # ln -s /usr/lib/fs/ufs/ff
       # ln -s /usr/lib/fs/ufs/fsdb	
       # ln -s /usr/lib/fs/ufs/fsirand
       # ln -s /usr/lib/fs/ufs/fstyp
       # ln -s /usr/lib/fs/ufs/labelit
       # ln -s /usr/lib/fs/ufs/lockfs
       # ln -s /usr/lib/fs/ufs/mkfs	
       # ln -s /usr/lib/fs/ufs/mount
       # ln -s /usr/lib/fs/ufs/ncheck
       # ln -s /usr/lib/fs/ufs/newfs
       # ln -s /usr/lib/fs/ufs/quot
       # ln -s /usr/lib/fs/ufs/quota
       # ln -s /usr/lib/fs/ufs/quotaoff
       # ln -s /usr/lib/fs/ufs/quotaon
       # ln -s /usr/lib/fs/ufs/repquota
       # ln -s /usr/lib/fs/ufs/tunefs
       # ln -s /usr/lib/fs/ufs/ufsdump
       # ln -s /usr/lib/fs/ufs/ufsrestore
       # ln -s /usr/lib/fs/ufs/volcopy
       
    

  12. Append the following line to the end of the file /etc/dfs/fstypes.
      
       afs AFS Utilities
      
    

  13. Edit the /sbin/mountall file, making two changes.

  14. Proceed to Starting Server Programs.

Starting Server Programs

In this section you initialize the BOS Server, the Update Server, the controller process for NTPD, and the File Server. You begin by copying the necessary server files to the local disk.

  1. Copy file server binaries to the local /usr/afs/bin directory.
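
     If this is the first file server machine of its AFS system type, the binaries normally come from the AFS distribution CD-ROM; the following sketch uses the AIX path as an illustration only (substitute the sysname directory for this machine's system type). If a file server machine of this system type already exists, you can instead copy the contents of its /usr/afs/bin directory across the network.

        # cd /cdrom/rs_aix42/root.server/usr/afs/bin

        # cp -rp  *  /usr/afs/bin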

  2. Copy the contents of the /usr/afs/etc directory from an existing file server machine, using a remote file transfer protocol such as ftp or NFS. If you run the United States Edition of AFS and run a system control machine, it is best to copy the contents of its /usr/afs/etc directory. If you run the international edition of AFS (or do not use a system control machine), copy the directory's contents from any existing file server machine.
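
     As one illustration only, if the trusted remote shell service is enabled between the machines (many sites prefer ftp or an NFS mount instead), a command pair like the following copies the directory's contents; existing_server is a placeholder for an existing file server machine's hostname.

        # cd /usr/afs

        # rsh existing_server "cd /usr/afs && tar cf - etc" | tar xf -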

  3. Change to the /usr/afs/bin directory and start the BOS Server (bosserver process). Include the -noauth flag to prevent the AFS processes from performing authorization checking. This is a grave compromise of security; finish the remaining instructions in this section in an uninterrupted pass.
        
       # cd /usr/afs/bin
        
       # ./bosserver -noauth &
        
    

  4. If using the United States edition of AFS, create the upclientetc process as an instance of the client portion of the Update Server. It accepts updates of the common configuration files stored in the system control machine's /usr/afs/etc directory from the upserver process (server portion of the Update Server) running on that machine. The cell's first file server machine was installed as the system control machine in Starting the Server Portion of the Update Server.

    Do not issue this command if using the international edition of AFS. The contents of the /usr/afs/etc directory are too sensitive to cross the network unencrypted, but the necessary encryption routines are not included in the international edition of AFS. You must update the contents of the /usr/afs/etc directory on each file server machine, using the appropriate bos commands. See the AFS System Administrator's Guide for instructions.

    By default, the Update Server performs updates every 300 seconds (five minutes). Use the -t argument to specify a different number of seconds. For the machine name argument, substitute the name of the machine you are installing. The command appears on multiple lines here only for legibility reasons.

          
       # ./bos create  <machine name> upclientetc simple  \ 
             "/usr/afs/bin/upclient  <system control machine>  \  
             [-t  <time>]  /usr/afs/etc" -cell  <cellname>  -noauth
       
    

  5. Create an instance of the Update Server to handle distribution of the file server binaries stored in the /usr/afs/bin directory.
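
     If this machine is the same AFS system type as an existing file server machine, the instance is typically the client portion, upclientbin, pointed at the binary distribution machine for this system type; if it is the first machine of its type, you instead create the server portion (upserver), as described in Installing the First AFS Machine. The following upclientbin sketch is modeled on the upclientetc command above and is an illustration only; as before, the -t argument is optional and the command appears on multiple lines only for legibility.

        # ./bos create  <machine name> upclientbin simple  \
              "/usr/afs/bin/upclient  <binary distribution machine>  \
              [-t  <time>]  -clear  /usr/afs/bin" -cell  <cellname>  -noauth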

  6. Start the runntp process, which configures the Network Time Protocol Daemon (NTPD) to refer to a database server machine chosen randomly from the local /usr/afs/etc/CellServDB file as its time source. In the standard configuration, the first database server machine installed in your cell refers to a time source outside the cell, and serves as the basis for clock synchronization on all server machines.
    Note:Do not run the runntp process if NTPD or another time synchronization protocol is already running on the machine. Attempting to run multiple instances of the NTPD causes an error. Running NTPD together with another time synchronization protocol is unnecessary and can cause instability in the clock setting.

    Some versions of some operating systems run a time synchronization program by default. For correct NTPD functioning, it is best to disable the default program. See the AFS Release Notes for details.

       
       # ./bos create  <machine name> runntp simple  \ 
             /usr/afs/bin/runntp -cell <cell name>  -noauth
       
    

  7. Start the fs process, which binds together the File Server, Volume Server, and Salvager. The command appears on multiple lines here only for legibility reasons.
       
       # ./bos create  <machine name> fs fs   \ 
             /usr/afs/bin/fileserver /usr/afs/bin/volserver  \ 
             /usr/afs/bin/salvager -cell <cellname>  -noauth
       
    

Installing Client Functionality

If you want this machine to be a client as well as a server, follow the instructions in this section. Otherwise, skip to Completing the Installation.

Begin by loading the necessary client files to the local disk. Then create the necessary configuration files and start the Cache Manager. For more detailed explanation of the procedures involved, see the corresponding instructions in Installing the First AFS Machine (in the sections following Overview: Installing Client Functionality).

If another AFS machine of this machine's system type exists, the AFS binaries are probably already accessible in your AFS filespace (the conventional location is /afs/cellname/sysname/usr/afsws). If not, or if this is the first AFS machine of its type, copy the AFS binaries for this system type into an AFS volume by following the instructions in Storing AFS Binaries in AFS. Because this machine is not yet an AFS client, you must perform the procedure on an existing AFS machine. However, remember to perform the final step--linking the local directory /usr/afsws to the appropriate location in the AFS file tree--on this machine (the new file server machine). If you also want to create AFS volumes to house UNIX system binaries for the new system type, see Storing System Binaries in AFS.

  1. Copy client binaries and files to the local disk.
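
     If the binaries come from the AFS distribution CD-ROM, commands like the following are probably appropriate; the Linux path is shown only as an example (substitute the sysname directory for this machine's system type).

        # cd /cdrom/i386_linux22/root.client/usr/vice/etc

        # cp -rp  *  /usr/vice/etc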

  2. Change to the /usr/vice/etc directory and create the ThisCell file as a copy of the /usr/afs/etc/ThisCell file. You must first remove the symbolic link to the /usr/afs/etc/ThisCell file that the BOS Server created automatically in Starting Server Programs.
       
       # cd  /usr/vice/etc
       
       # rm ThisCell
     
       # cp  /usr/afs/etc/ThisCell  ThisCell
          
    

  3. Remove the symbolic link to the /usr/afs/etc/CellServDB file.
       
       # rm   CellServDB
       
    

  4. Create the /usr/vice/etc/CellServDB file. Use a network file transfer program such as ftp or NFS to copy it from one of the following sources, which are listed in decreasing order of preference:

  5. Create the cacheinfo file for either a disk cache or a memory cache. For a discussion of the appropriate values to record in the file, see Configuring the Cache.

    To configure a disk cache:

       
       # mkdir /usr/vice/cache
       
       # echo "/afs:/usr/vice/cache:#blocks" > cacheinfo
       
    

    To configure a memory cache:

        
       # echo "/afs:/usr/vice/cache:#blocks" > cacheinfo
       
    

  6. Create the local directory on which to mount the AFS filespace, by convention /afs. If the directory already exists, verify that it is empty.
       
       # mkdir /afs
       
    

  7. On Linux systems, copy the afsd options file from the /usr/vice/etc directory to the /etc/sysconfig directory. Note the removal of the .conf extension as you copy the file.
       # cp /usr/vice/etc/afs.conf /etc/sysconfig/afs
       
    

  8. Edit the machine's AFS initialization script or afsd options file to set appropriate values for afsd command parameters. The script resides in the indicated location on each system type:

    Use one of the methods described in Configuring the Cache Manager to add the following flags to the afsd command line. If you intend for the machine to remain an AFS client, also set any performance-related arguments you wish.

  9. Incorporate AFS into the machine's authentication system, following the instructions in Enabling AFS Login. On Solaris systems, the instructions also explain how to alter the file systems clean-up script.

  10. If appropriate, follow the instructions in Storing AFS Binaries in AFS to copy the AFS binaries for this system type into an AFS volume. See the introduction to this section for further discussion.

Completing the Installation

At this point you run the machine's AFS initialization script to verify that it correctly loads AFS modifications into the kernel and starts the BOS Server, which starts the other server processes. If you have installed client files, the script also starts the Cache Manager. If the script works correctly, perform the steps that incorporate it into the machine's startup and shutdown sequence. If there are problems during the initialization, attempt to resolve them. The AFS Product Support group can provide assistance if necessary.

If the machine is configured as a client using a disk cache, it can take a while for the afsd program to create all of the Vn files in the cache directory. Messages on the console trace the initialization process.

  1. Issue the bos shutdown command to shut down the AFS server processes other than the BOS Server. Include the -wait flag to delay return of the command shell prompt until all processes shut down completely.
          
       # /usr/afs/bin/bos shutdown <machine name> -wait
       
    

  2. Issue the ps command to learn the BOS Server's process ID number (PID), and then the kill command to stop the bosserver process.
       
       # ps appropriate_ps_options | grep bosserver
       
       # kill bosserver_PID
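
     For example, on a system whose ps command accepts System V style options, the sequence might look like the following; the process ID (601) is purely illustrative.

        # ps -ef | grep bosserver

        # kill 601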
       
    

  3. Run the AFS initialization script by issuing the appropriate commands for this system type.

    On AIX systems:

    1. Reboot the machine and log in again as the local superuser root.
         
         # shutdown -r now
         login: root
         Password: root_password
         
      

    2. Run the AFS initialization script.
         
         # /etc/rc.afs
         
      

    3. Edit the AIX initialization file, /etc/inittab, adding the following line to invoke the AFS initialization script. Place it just after the line that starts NFS daemons.
         
         rcafs:2:wait:/etc/rc.afs > /dev/console 2>&1 # Start AFS services
         
      

    4. (Optional) There are now copies of the AFS initialization file in both the /usr/vice/etc and /etc directories. If you want to avoid potential confusion by guaranteeing that they are always the same, create a link between them. You can always retrieve the original script from the AFS CD-ROM if necessary.
         
         # cd  /usr/vice/etc
         
         # rm  rc.afs
        
         # ln -s  /etc/rc.afs
         
      

    5. Proceed to Step 4.

    On Digital UNIX systems:

    1. Run the AFS initialization script.
         
         # /sbin/init.d/afs  start
         
      

    2. Change to the /sbin/init.d directory and issue the ln -s command to create symbolic links that incorporate the AFS initialization script into the Digital UNIX startup and shutdown sequence.
         
         # cd  /sbin/init.d
         
         # ln -s  ../init.d/afs  /sbin/rc3.d/S67afs
         
         # ln -s  ../init.d/afs  /sbin/rc0.d/K66afs
         
      

    3. (Optional) There are now copies of the AFS initialization file in both the /usr/vice/etc and /sbin/init.d directories. If you want to avoid potential confusion by guaranteeing that they are always the same, create a link between them. You can always retrieve the original script from the AFS CD-ROM if necessary.
         
         # cd /usr/vice/etc
         
         # rm afs.rc
        
         # ln -s  /sbin/init.d/afs  afs.rc
         
      

    4. Proceed to Step 4.

    On HP-UX systems:

    1. Run the AFS initialization script.
         
         # /sbin/init.d/afs  start
         
      

    2. Change to the /sbin/init.d directory and issue the ln -s command to create symbolic links that incorporate the AFS initialization script into the HP-UX startup and shutdown sequence.
         
         # cd /sbin/init.d
         
         # ln -s ../init.d/afs /sbin/rc2.d/S460afs
        
         # ln -s ../init.d/afs /sbin/rc2.d/K800afs
         
      

    3. (Optional) There are now copies of the AFS initialization file in both the /usr/vice/etc and /sbin/init.d directories. If you want to avoid potential confusion by guaranteeing that they are always the same, create a link between them. You can always retrieve the original script from the AFS CD-ROM if necessary.
         
         # cd /usr/vice/etc
         
         # rm afs.rc
        
         # ln -s  /sbin/init.d/afs  afs.rc
         
      

    4. Proceed to Step 4.

    On IRIX systems:

    1. If you have configured the machine to use the ml dynamic loader program, reboot the machine and log in again as the local superuser root.
         # shutdown -i6 -g0 -y
         login: root
         Password: root_password
         
      

    2. Issue the chkconfig command to activate the afsserver configuration variable.
         # /etc/chkconfig -f afsserver on
         
      

      If you have configured this machine as an AFS client and want it to remain one, also issue the chkconfig command to activate the afsclient configuration variable.

         # /etc/chkconfig -f afsclient on 
         
      

    3. Run the AFS initialization script.
         
         # /etc/init.d/afs  start
         
      

    4. Change to the /etc/init.d directory and issue the ln -s command to create symbolic links that incorporate the AFS initialization script into the IRIX startup and shutdown sequence.
         
         # cd /etc/init.d
         
         # ln -s ../init.d/afs /etc/rc2.d/S35afs
        
         # ln -s ../init.d/afs /etc/rc0.d/K35afs
         
      

    5. (Optional) There are now copies of the AFS initialization file in both the /usr/vice/etc and /etc/init.d directories. If you want to avoid potential confusion by guaranteeing that they are always the same, create a link between them. You can always retrieve the original script from the AFS CD-ROM if necessary.
         
         # cd /usr/vice/etc
         
         # rm afs.rc
        
         # ln -s  /etc/init.d/afs  afs.rc
         
      

    6. Proceed to Step 4.

    On Linux systems:

    1. Reboot the machine and log in again as the local superuser root.
        
         # shutdown -r now
         login: root
         Password: root_password
         
      

    2. Run the AFS initialization script.
         
         # /etc/rc.d/init.d/afs  start
         
      

    3. Issue the chkconfig command to activate the afs configuration variable. Based on the instruction in the AFS initialization file that begins with the string #chkconfig, the command automatically creates the symbolic links that incorporate the script into the Linux startup and shutdown sequence.
         
         # /sbin/chkconfig  --add afs
         
      

    4. (Optional) There are now copies of the AFS initialization file in both the /usr/vice/etc and /etc/init.d directories, and copies of the afsd options file in both the /usr/vice/etc and /etc/sysconfig directories. If you want to avoid potential confusion by guaranteeing that the two copies of each file are always the same, create a link between them. You can always retrieve the original script or options file from the AFS CD-ROM if necessary.
         
         # cd /usr/vice/etc
         
         # rm afs.rc afs.conf
          
         # ln -s  /etc/rc.d/init.d/afs  afs.rc
         
         # ln -s  /etc/sysconfig/afs  afs.conf
         
      

    5. Proceed to Step 4.

    On Solaris systems:

    1. Reboot the machine and log in again as the local superuser root.
         # shutdown -i6 -g0 -y
         login: root
         Password: root_password
         
      

    2. Run the AFS initialization script.
         
         # /etc/init.d/afs  start
         
      

    3. Change to the /etc/init.d directory and issue the ln -s command to create symbolic links that incorporate the AFS initialization script into the Solaris startup and shutdown sequence.
         
         # cd /etc/init.d
        
         # ln -s ../init.d/afs /etc/rc3.d/S99afs
        
         # ln -s ../init.d/afs /etc/rc0.d/K66afs
         
      

    4. (Optional) There are now copies of the AFS initialization file in both the /usr/vice/etc and /etc/init.d directories. If you want to avoid potential confusion by guaranteeing that they are always the same, create a link between them. You can always retrieve the original script from the AFS CD-ROM if necessary.
         
         # cd /usr/vice/etc
         
         # rm afs.rc
        
         # ln -s  /etc/init.d/afs  afs.rc
         
      

  4. Verify that /usr/afs and its subdirectories on the new file server machine meet the ownership and mode bit requirements outlined in Protecting Sensitive AFS Directories. If necessary, use the chmod command to correct the mode bits.
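
     A quick way to review the current settings is shown below; the chmod command is only an example, so take the correct owners and mode bits from Protecting Sensitive AFS Directories rather than from this sketch.

        # ls -ld /usr/afs /usr/afs/*

        # chmod 700 /usr/afs/local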

  5. To configure this machine as a database server machine, proceed to Installing Database Server Functionality.

Installing Database Server Functionality

This section explains how to install database server functionality. Note the following requirements.

Summary of Procedures

To install a database server machine, perform the following procedures.

  1. Install the bos suite of commands locally, as a precaution

  2. Add the new machine to the /usr/afs/etc/CellServDB file on existing file server machines

  3. Update your cell's central CellServDB source file and the file you make available to foreign cells

  4. Update every client machine's /usr/vice/etc/CellServDB file and kernel memory list of database server machines

  5. Start the database server processes (Authentication Server, Backup Server, Protection Server, and Volume Location Server)

  6. Restart the database server processes on every database server machine

  7. Notify the AFS Product Support group that you have installed a new database server machine

Instructions

  1. You can perform the following instructions on either a server or client machine. Log in as an AFS administrator listed in the /usr/afs/etc/UserList file on all server machines.
    Note:The following instructions assume that your PATH environment variable includes the directory that houses the AFS command binaries. If it does not, you may need to precede the command names with the appropriate pathname.
       
       % klog admin_user
       Password: admin_password
       
    

  2. If you are working on a client machine configured in the conventional manner, the bos command suite resides in the /usr/afsws/bin directory, a symbolic link to an AFS directory. An error during installation can potentially block access to AFS, in which case it is helpful to have a copy of the bos binary on the local disk.
       
    % cp  /usr/afsws/bin/bos   /tmp
       
    

  3. Issue the bos addhost command to add the new database server machine to the /usr/afs/etc/CellServDB file on existing server machines (as well as the new database server machine itself).

    Substitute the new database server machine's fully-qualified hostname for the host name argument.

    If you use the United States edition of AFS and a system control machine, substitute its fully-qualified hostname for the machine name argument. If you use the international edition of AFS, repeat the bos addhost command once for each server machine in your cell (including the new database server machine itself), by substituting each one's fully-qualified hostname for the machine name argument in turn.

       
       % bos addhost <machine name>  <host name>
       
    

    If using the United States edition of AFS, wait for the Update Server to distribute the new CellServDB file, which takes up to five minutes by default. If using the international edition, attempt to issue all of the bos addhost commands within five minutes.

  4. Issue the bos listhosts command on each server machine to verify that the new database server machine appears in its CellServDB file.
       
       % bos listhosts <machine name>
       
    

  5. Add the new database server machine to your cell's central CellServDB source file, if you use one. The standard location is /afs/cellname/common/etc/CellServDB.

    If you are willing to make your cell accessible by users in foreign cells, add the new database server machine to the file that lists your cell's database server machines. The conventional location is /afs/cellname/service/etc/CellServDB.local.

  6. If this machine's IP address is lower than any existing database server machine's, update every client machine's /usr/vice/etc/CellServDB file and kernel memory list to include this machine. (If this machine's IP address is not the lowest, it is acceptable to wait until Step 12.)

    There are several ways to update the CellServDB file on client machines, as detailed in the chapter of the AFS System Administrator's Guide about administering client machines. One option is to copy over the central update source (which you updated in Step 5), with or without using the package program. To update the machine's kernel memory list, you can either reboot after changing the CellServDB file or issue the fs newcell command.
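
     For example, the fs newcell command, issued as the local superuser root on a client machine, replaces the kernel memory list of database server machines for the cell; list every database server machine, including the new one.

        # fs newcell <cell name> <database server machine 1> <database server machine 2> ...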

  7. Start the Authentication Server (the kaserver process).
       
       % bos create <machine name> kaserver simple /usr/afs/bin/kaserver
       
    

  8. Start the Backup Server (the buserver process). You must perform other configuration procedures before actually using the AFS Backup System, as detailed in the AFS System Administrator's Guide.
       
       % bos create <machine name> buserver simple /usr/afs/bin/buserver
       
    

  9. Start the Protection Server (the ptserver process).
       
       % bos create <machine name> ptserver simple /usr/afs/bin/ptserver
        
    

  10. Start the Volume Location (VL) Server (the vlserver process).
          
       % bos create <machine name> vlserver simple /usr/afs/bin/vlserver
       
    

  11. Issue the bos restart command on every database server machine in the cell, including the new server, to restart the Authentication, Backup, Protection, and VL Servers. This forces an election of a new Ubik coordinator for each process; the new machine votes in the election and is considered as a potential new coordinator.

    A cell-wide service outage is possible during the election of a new coordinator for the VL Server, but it normally lasts less than five minutes. Such an outage is particularly likely if you are installing your cell's second database server machine. Messages tracing the progress of the election appear on the console.

    Repeat this command on each of your cell's database server machines in quick succession. Begin with the machine with the lowest IP address.

       
       %  bos restart <machine name> kaserver buserver ptserver vlserver 
       
    

    If an error occurs, restart all server processes on the database server machines again by using one of the following methods:
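
     One such method is to issue the bos restart command with the -all flag on each database server machine, which restarts all of the server processes that its BOS Server currently manages.

        %  bos restart <machine name> -all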

  12. If you did not update the CellServDB file on client machines in Step 6, do so now.

  13. Send the new database server machine's name and IP address to the AFS Product Support group.

    If you wish to participate in the AFS global name space, your cell's entry appears in a CellServDB file that the AFS Product Support group makes available to all AFS sites. Otherwise, they list your cell in a private file that they do not share with other AFS sites.


Removing Database Server Functionality

Removing database server machine functionality is nearly the reverse of installing it.

Summary of Procedures

To decommission a database server machine, perform the following procedures.

  1. Install the bos suite of commands locally, as a precaution

  2. Notify the AFS Product Support group that you are decommissioning a database server machine

  3. Update your cell's central CellServDB source file and the file you make available to foreign cells

  4. Update every client machine's /usr/vice/etc/CellServDB file and kernel memory list of database server machines

  5. Remove the machine from the /usr/afs/etc/CellServDB file on file server machines

  6. Stop the database server processes and remove them from the /usr/afs/local/BosConfig file if desired

  7. Restart the database server processes on the remaining database server machines

Instructions

  1. You can perform the following instructions on either a server or client machine. Log in as an AFS administrator listed in the /usr/afs/etc/UserList file on all server machines.
    Note:The following instructions assume that your PATH environment variable includes the directory that houses the AFS command binaries. If it does not, you may need to precede the command names with the appropriate pathname.
       
       % klog admin_user
       Password: admin_password
       
    

  2. If you are working on a client machine configured in the conventional manner, the bos command suite resides in the /usr/afsws/bin directory, a symbolic link to an AFS directory. An error during installation can potentially block access to AFS, in which case it is helpful to have a copy of the bos binary on the local disk.
          
       % cp  /usr/afsws/bin/bos   /tmp
       
    

  3. Send the revised list of your cell's database server machines to the AFS Product Support group.

    This step is particularly important if your cell is included in the global CellServDB file. If the administrators in foreign cells do not learn about the change in your cell, they cannot update the CellServDB file on their client machines. Users in foreign cells continue to send database requests to the decommissioned machine, which creates needless network traffic and activity on the machine. Also, the users experience time-out delays while their requests are forwarded to a valid database server machine.

  4. Remove the decommissioned machine from your cell's central CellServDB source file, if you use one. The conventional location is /afs/cellname/common/etc/CellServDB.

    If you maintain a file that users in foreign cells can access to learn about your cell's database server machines, update it also. The conventional location is /afs/cellname/service/etc/CellServDB.local.

  5. Update every client machine's /usr/vice/etc/CellServDB file and kernel memory list to exclude this machine. Altering the CellServDB file and kernel memory list before stopping the actual database server processes avoids possible time-out delays that result when users send requests to a decommissioned database server machine that is still listed in the file.

    There are several ways to update the CellServDB file on client machines, as detailed in the chapter of the AFS System Administrator's Guide about administering client machines. One option is to copy over the central update source (which you updated in Step 4), with or without using the package program. To update the machine's kernel memory list, you can either reboot after changing the CellServDB file or issue the fs newcell command.

  6. Issue the bos removehost command to remove the decommissioned database server machine from the /usr/afs/etc/CellServDB file on server machines.

    Substitute the decommissioned database server machine's fully-qualified hostname for the host name argument.

    If you use the United States edition of AFS and a system control machine, substitute its fully-qualified hostname for the machine name argument. If you use the international edition of AFS, repeat the bos removehost command once for each server machine in your cell (including the decommissioned database server machine itself), by substituting each one's fully-qualified hostname for the machine name argument in turn.

       
       % bos removehost <machine name>  <host name>
       
    

    If using the United States edition of AFS, wait for the Update Server to distribute the new CellServDB file, which takes up to five minutes by default. If using the international edition, attempt to issue all of the bos removehost commands within five minutes.

  7. Issue the bos listhosts command on each server machine to verify that the decommissioned database server machine no longer appears in its CellServDB file.
       
       % bos listhosts <machine name>
       
    

  8. Issue the bos stop command to stop the database server processes on the machine, by substituting its fully-qualified hostname for the machine name argument. The command changes each process' status in the /usr/afs/local/BosConfig file to NotRun, but does not remove its entry from the file.
       
       % bos stop <machine name> kaserver buserver ptserver vlserver
        
    

  9. (Optional) Issue the bos delete command to remove the entries for database server processes from the BosConfig file. Do not perform this step if you plan to reinstall the database server functionality on this machine soon.
       
       % bos delete <machine name> kaserver buserver ptserver vlserver
        
    

  10. Issue the bos restart command on every database server machine in the cell, to restart the Authentication, Backup, Protection, and VL Servers. This forces the election of a Ubik coordinator for each process, ensuring that the remaining database server processes recognize that the machine is no longer a database server.

    A cell-wide service outage is possible during the election of a new coordinator for the VL Server, but it normally lasts less than five minutes. Messages tracing the progress of the election appear on the console.

    Repeat this command on each of your cell's database server machines in quick succession. Begin with the machine with the lowest IP address.

       
       %  bos restart <machine name> kaserver buserver ptserver vlserver 
       
    

    If an error occurs, restart all server processes on the database server machines again by using one of the following methods:





© IBM Corporation 1999. All Rights Reserved