
AFS(R) 3.5 Release Notes

This file documents new features, upgrade procedures, and remaining limitations associated with the general availability (GA) release of AFS(R) 3.5.


Summary of New Features

AFS 3.5 includes the following new features, many of which improve system performance.

Integrated Support for NT and Windows Systems

The AFS 3.5 offering for Windows and NT systems includes the following components:

Enhancements to the Client component on NT include support for both File Server and Volume Location (VL) Server preference ranks, and support for whole file locking. For further details, see the AFS(R) Suite for Windows(R) Release Notes.

The File Server Uses POSIX Threads

The File Server process now uses the POSIX-compliant threading package provided by the operating system, rather than the proprietary threading package used in previous AFS versions. The change makes the File Server truly multithreaded and increases throughput.

The one exception is the File Server for HP-UX 11.0, which still uses the proprietary threading package as in AFS 3.4a.

The Backup System is More Efficient

There are numerous performance improvements to the Backup System, many of which reduce the load that the Backup System places on other AFS servers and the network. For example, the procedure for compiling the list of volumes to be included in a dump is more efficient.

Ubik is More Efficient

The Ubik Coordinator on the synchronization site for a given database now distributes database changes to the secondary sites in a more efficient manner. A change to Ubik's database locking method also prevents write starvation, a problem in which a secondary site is so busy answering read requests that it cannot accept changes from the synchronization site.

Support for Multihomed Database Server Machines

This feature is available on UNIX platforms only.

The AFS 3.5 version of the Ubik library properly handles communication between database server machines with multiple interface addresses, which enables you to run multihomed database server machines. However, the non-database server processes (such as the File Server) and the Cache Manager still use only one address per database server machine (the one listed in the server or client CellServDB file, respectively). They do not switch to alternate interfaces if that address becomes inaccessible. To preserve the level of database access you currently enjoy, you must continue to replicate the databases.
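For illustration, a client-side CellServDB entry for a cell with two database server machines might look like the following (the cell name, hostnames, and addresses are examples only); each machine appears with the single address that the Cache Manager uses:

       >example.com            #Example Corporation cell
       192.12.105.3            #db1.example.com
       192.12.107.3            #db2.example.com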

Support for Multihomed Client Machines

This feature is available on UNIX platforms only.

AFS 3.5 includes support for multihomed client machines. When the Cache Manager first contacts a given File Server, it registers the addresses of its client machine. Thereafter, when the File Server initiates communication with the client machine, it can choose the address to which to send its message. If that address is inaccessible, it automatically switches to an alternate address.

Note that the File Server does not use the registered list of addresses when it responds to requests that the Cache Manager initiates--it still responds to the interface from which the request originated. Similarly, the Cache Manager does not use the list when choosing the interface to use for sending a request to a File Server.

You can control which addresses the Cache Manager registers with File Servers by creating one or both of the following files in the client machine's local /usr/vice/etc directory: NetInfo and NetRestrict. If the NetInfo file exists when the Cache Manager initializes, the Cache Manager uses its contents as the basis for a list of the machine's interfaces. If the file does not exist, the Cache Manager instead uses the network interfaces configured with the operating system. If the NetRestrict file exists, the Cache Manager removes any addresses included in it from the list it is compiling. It records the completed list in kernel memory.
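As a sketch, each file simply lists one IP address per line (the addresses here are illustrative). With the following files in place, the Cache Manager registers only the first address:

       # cat /usr/vice/etc/NetInfo
       192.12.105.100
       192.12.107.100

       # cat /usr/vice/etc/NetRestrict
       192.12.107.100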

To display the interface addresses listed in kernel memory, use the new fs getclientaddrs command. To change the list without rebooting the client machine, use the new fs setclientaddrs command.
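For example, you might display the registered list and then revise it as follows (the addresses and output format are illustrative):

       % fs getclientaddrs
       192.12.105.100

       % fs setclientaddrs -address 192.12.105.100 192.12.107.100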

Improved Rx and Jumbogram Implementation

AFS 3.5 improves the performance of AFS's RPC facility, Rx, by implementing the algorithms for slow start, congestion avoidance, fast retransmit, and fast recovery that are described in Internet RFC (Request for Comments) number 2001. You can access the RFC via http://info.internet.isi.edu:80/in-notes.

Also, the AFS 3.5 implementation of jumbograms is improved. Rx packets are now fixed length, and Rx begins transmissions by sending one packet per datagram. It gradually increases the number of packets per datagram as long as the recipient does not return any errors. In case of error, Rx reverts to sending only one packet per datagram. When retransmitting data, Rx always sends only one packet per datagram.

Improved Software Engineering and Overall Quality

The AFS Development team has made several changes designed to improve AFS's overall quality and stability. These include a thorough reorganization of the source code, use of a more extensive suite of system tests during a longer testing period before release, and a larger staff.

New and Modified Commands and Options

There are several new commands and new options to existing commands in AFS 3.5. See Changes to AFS Commands and Files.


Supported System Types

AFS 3.5 supports the following system types.
alpha_dux40 DEC AXP system with one or more processors running Digital UNIX 4.0d
hp_ux110 Hewlett-Packard 9000 Series and PA8000 Series 700 and 800 systems with one or more processors running the 32-bit version of HP-UX 11.0
i386_linux22 IBM-compatible PC with one or more processors running Linux kernel version 2.2.2 or 2.2.3
rs_aix42 IBM RS/6000 with one or more processors running the 32-bit version of AIX 4.2, 4.2.1, 4.3, 4.3.1, or 4.3.2
sgi_65 Silicon Graphics system with one or more processors running IRIX 6.5. The following processor types are supported: IP19, IP20, IP21, IP22, IP25, IP26, IP27, IP28, IP30, IP32
sun4x_56 Sun SPARCstation with one or more processors of kernel architecture sun4c, sun4d, sun4m, or sun4u running Solaris 2.6


Accessing the AFS Documentation

The AFS 3.5 documentation set includes several books, and the AFS distribution includes a copy of each in two formats: HTML and PostScript.

There are three sources for the documents: the AFS CD-ROM, the AFS product tree in the /afs/transarc.com cell, and the tar file available via the Web.

This section explains how to create and mount a volume to house the HTML version of the documents, making them available for online viewing by your users. The recommended mount point for the volume is /afs/cellname/afsdoc. If you wish, you can create a link to the mount point on each client machine's local disk, called /usr/afsdoc. Alternatively, you can create a link to the mount point in each user's home directory. You can also choose to permit users to access only certain documents (most probably, the AFS User's Guide) by creating different mount points or setting different ACLs on different document directories.

This section also includes optional instructions for storing the PostScript version of the documents in AFS.

  1. Issue the vos create command to create a volume for storing the AFS documentation. Include the -maxquota argument to set an unlimited quota on the volume.

    If you wish, you can set the volume's quota to a finite value after you complete the copying operations. At that point, use the vos examine command to determine how much space the volume is occupying. Then issue the fs setquota command to set a quota value that is slightly larger.

       
       % vos create <machine name> <partition name>  afsdoc  -maxquota 0 
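
    If you later set a finite quota as just described, the sequence might look like the following (the quota value, in kilobyte blocks, is illustrative):

       % vos examine afsdoc

       % fs setquota /afs/cellname/afsdoc -max 12000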
         
    

  2. Issue the fs mkmount command to mount the new volume. If your root.cell volume is replicated, you must precede the cellname with a period to specify the ReadWrite mount point, as shown. Then issue the vos release command to release a new replica of the root.cell volume, and the fs checkvolumes command to force the local Cache Manager to recognize the new replica.
         
       % fs mkmount -dir /afs/.cellname/afsdoc -vol afsdoc
       
       % vos release root.cell
       
       % fs checkvolumes
        
    

  3. Issue the fs setacl command to grant the rl permissions to the system:anyuser group on the new directory's ACL.
           
       % cd /afs/.cellname/afsdoc 
        
       % fs setacl  .  system:anyuser rl 
       
    

  4. Access the documents via one of the three sources listed in the introduction to this section. Copy the HTML version of the AFS documents from the doc_source directory you select into the /afs/cellname/afsdoc directory.

    In addition to a subdirectory for each document, several files with a .gif extension are copied to the afsdoc directory. They enable readers to move easily between sections of a document. The file called index.htm is an introductory HTML page that contains a hyperlink to each of the documents. For online viewing to work properly, these files must remain in the /afs/cellname/afsdoc directory.

        
       # cp -rp  doc_source/Html/*  .
          
    

  5. (Optional) Copy the PostScript versions of the AFS documents to the PostScript subdirectory.
             
       # cp -rp  doc_source/PostScript  .
          
    

  6. (Optional) If you believe it is helpful to your users to access the HTML version of AFS documents via a local disk directory, create /usr/afsdoc on the local disk as a symbolic link to the directory /afs/cellname/afsdoc.
       
       # ln -s /afs/cellname/afsdoc  /usr/afsdoc
    

    An alternative is to create a link in each user's home directory to the /afs/cellname/afsdoc mount point.


Upgrading Server and Client Machines to AFS 3.5

This section explains how to upgrade server and client machines from AFS 3.4a to AFS 3.5. Before performing an upgrade, please read all of the introductory material in this section.

If you are installing AFS for the first time, skip this chapter and refer to the AFS Installation Guide.

AFS provides backward compatibility to the previous release only: AFS 3.5 is certified to be compatible with AFS 3.4a but not necessarily with earlier versions.
Note: Upgrading from AFS 3.3 or earlier directly to AFS 3.5 is not supported, because a VLDB conversion is required between AFS 3.3 and AFS 3.4a, and file system conversions are required on some system types. Contact the AFS Product Support group for assistance in upgrading to AFS 3.4a.

Prerequisites for Upgrading

You must meet the following requirements to upgrade successfully to AFS 3.5:

Obtaining the Binary Distribution

Use one of the following methods to obtain the AFS distribution of each system type for which you are licensed. To access the distribution by network, you must have an authentication account in the Transarc cell; contact AFS Product Support for assistance.

Storing Binaries in AFS

It is conventional to store many of the programs and files in the AFS binary distribution in a separate volume for each system type mounted in your AFS filespace at /afs/cellname/sysname/usr/afsws. These instructions rename the volume currently mounted at this location and create a new volume for AFS 3.5 binaries.

Repeat the instructions for each system type.

  1. Authenticate as an administrator listed in the /usr/afs/etc/UserList file.

  2. Issue the vos create command to create a new volume for AFS 3.5 binaries called sysname.3.5. Set an unlimited quota on the volume to avoid running out of space as you copy files from the distribution.
       
       % vos create <machine name> <partition name> sysname.3.5  -maxquota  0   
        
    

  3. Issue the fs mkmount command to mount the volume at a temporary location.
       
       % fs mkmount  /afs/.cellname/temp  sysname.3.5 
        
    

  4. Prepare to access the files using the method you have selected:

  5. Copy files from the distribution into the sysname.3.5 volume.
       
       % cp -rp  bin  /afs/.cellname/temp  
       
       % cp -rp  etc  /afs/.cellname/temp  
          
       % cp -rp  include  /afs/.cellname/temp  
       
       % cp -rp  lib  /afs/.cellname/temp
    

  6. (Optional) By convention, the contents of the distribution's root.client directory are not stored in AFS. However, if you are upgrading client functionality on many machines, it can be simpler to copy the client files from your local AFS space than from the CD-ROM, the /afs/transarc.com cell, or the unpacked tar file. If you wish to store the contents of the root.client directory in AFS temporarily, copy them now.
       
       % cp -rp  root.client  /afs/.cellname/temp  
      
    

  7. Issue the vos rename command to change the name of the volume currently mounted at the /afs/cellname/sysname/usr/afsws directory. A possible value for the extension reflects the AFS version and build level (for example: 3.4-bld5.67).

    If you do not plan to retain the old volume, you can substitute the vos remove command in this step.

       
       %  vos rename sysname.usr.afsws  sysname.usr.afsws.extension   
        
    

  8. Issue the vos rename command to change the name of the sysname.3.5 volume to sysname.usr.afsws.
       
       %  vos rename sysname.3.5  sysname.usr.afsws   
        
    

  9. Issue the fs rmmount command to remove the temporary mount point.
        
       % fs rmmount  /afs/.cellname/temp    
    

Upgrading the Operating System

AFS 3.5 supports a single revision level of some operating systems (for example, Digital UNIX 4.0d only), so in some cases you must upgrade the operating system before installing AFS 3.5. When performing any operating system upgrade, you must take several actions to preserve AFS functionality, including the following:

In addition, you must perform a file system conversion on AFS server partitions when upgrading to the following operating systems: Digital UNIX 4.0d, HP-UX 11.0 (from version 10.10 or earlier), and Solaris 2.6.

Instructions for each operating system follow. Before performing the conversion, move all AFS volumes to other file server machines or back them up. If creating backups, either use the AFS Backup System or another AFS-aware backup utility to create full dumps on tape, or use the vos dump command to create dump files on partitions that you are not converting (non-/vicep partitions).

For extra protection, create a tape copy of the complete contents of the /usr/afs directory on a database server machine. In the unlikely event that the contents of the /usr/afs directory are damaged, you can use the tape backup to restore it. This is particularly important for the VLDB and other administrative databases in the /usr/afs/db directory.
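As a sketch, a command like the following writes the directory to tape (the tape device name varies by system and appears here only as an example):

       # tar cvf /dev/rmt0 /usr/afs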

Upgrading File Server Machines to Digital UNIX 4.0d

  1. Use the vos move command to move all AFS volumes to other file server machines, or back them up using the AFS Backup System, another AFS-aware backup utility, or the vos dump command.

  2. Upgrade the operating system to Digital UNIX 4.0d.

  3. Use the Digital UNIX 4.0d newfs utility to reformat all AFS server partitions.

  4. Move or restore volumes to the AFS server partitions as desired.

Upgrading File Server Machines to HP-UX 11.0

If you are upgrading to HP-UX 11.0 from version 10.10 or earlier, use the following instructions. If you already upgraded to HP-UX 10.20 while running AFS 3.4a, no action is necessary.

  1. Use the vos move command to move all AFS volumes to other file server machines, or back them up using the AFS Backup System, another AFS-aware backup utility, or the vos dump command.

  2. Upgrade the operating system to HP-UX 10.20.

  3. Use the HP-UX 10.20 newfs utility to reformat all AFS server partitions.

  4. Upgrade the operating system to HP-UX 11.0.

  5. Move or restore volumes to the AFS server partitions as desired.

Upgrading File Server Machines to Solaris 2.6

Before upgrading an AFS file server machine to Solaris 2.6, you must run the fs_conv_sol26 utility on all AFS server partitions. The utility works on machines running Solaris 2.4, 2.5, or 2.6; if the machine is running an earlier version of Solaris or SunOS, upgrade it to Solaris 2.4 or 2.5.

The Solaris 2.6 version of the fs process group must not run if there are unconverted partitions. The following instructions therefore run the utility before upgrading the operating system or AFS. This way you do not need to comment the AFS initialization script out of the machine's startup sequence (which you must otherwise do because it is likely to run automatically during the operating system upgrade and start the fileserver process).

The fs_conv_sol26 binary is in the root.server/usr/afs/bin directory of the AFS 3.5 distribution. Since the utility must run before you actually upgrade AFS or the operating system, the suggested method is to copy only the fs_conv_sol26 binary into the machine's /usr/afs/bin directory at first.

To ensure that there is no other activity on the AFS server partitions as the fs_conv_sol26 utility runs, the instructions unmount them. An additional reason to unmount partitions is that running the utility on a mounted partition can corrupt data on it. The instructions also comment out all server partition entries in the /etc/vfstab file to prevent the vendor version of the fsck program from running on the partitions in case an error during the operating system upgrade results in a reboot.

  1. Log in to the machine as the local superuser root and authenticate with AFS as a fully-privileged administrator.

  2. Use the vos move command to move all AFS volumes to other file server machines, or back them up using the AFS Backup System, another AFS-aware backup utility, or the vos dump command.

  3. Copy the fs_conv_sol26 binary from the distribution to the /usr/afs/bin directory.

  4. Shut down all AFS server processes on the machine.
           
           # bos shutdown <machine name> -cell <cell name>
       
    

  5. Unmount each AFS server partition.
    Note: Running the fs_conv_sol26 utility on a mounted partition can cause data corruption.
       
       # umount /vicepxx
       
    

  6. Comment out the entry for all server partitions in the /etc/vfstab file. This ensures that the standard operating system version of the fsck program does not access these partitions if the machine reboots unexpectedly.
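    For example, a commented-out entry for a server partition might look like the following (the device names are illustrative):

       #/dev/dsk/c0t6d0s1  /dev/rdsk/c0t6d0s1  /vicepa  ufs  3  yes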

  7. Run the fs_conv_sol26 utility on each raw device that corresponds to an AFS server partition.
          
       # fs_conv_sol26  convert  -device <raw device> -force  [-verbose] > logfile
    

    where

    -device
    Names the raw device pathname (in the /dev/rdsk directory rather than the /dev/dsk directory) of an AFS server partition.

    -force
    Enables the command actually to run. If you omit this flag, the command instead writes to the standard output a trace of the changes to be made during the actual conversion.

    -verbose
    Is useful mainly if a previous attempt to run the command has failed for a partition, and debugging information is needed. It can make the log file very long (tens to hundreds of megabytes, depending on the number of inodes on the partition).

    logfile
    Specifies the full pathname of the local disk file to which to write a trace of the conversion.

    The following is an example of correct command format.

       
       # fs_conv_sol26 convert -device /dev/rdsk/c0t1d0s3  -force > /tmp/s3log
       
    

    The following type of message in the log confirms that the conversion of a partition was successful:

       
       /vicepa: 477 AFS inodes were converted to a SunOS 5.6    \
           format; 0 already converted.
       
    

  8. Upgrade the operating system to Solaris 2.6, following the instructions from the operating system vendor.

  9. Uncomment the server partition entries in the /etc/vfstab file. Edit them to conform to the following format. The main change is in the fourth field: changing ufs in the existing entry to afs.
       
       /dev/dsk/disk  /dev/rdsk/disk  /vicepxx  afs  boot order  yes
       
    
    
    

    For example:

       
       /dev/dsk/c0t6d0s1 /dev/rdsk/c0t6d0s1 /vicepa afs 3 yes
       
    

  10. Issue the mountall command to mount all partitions.

For reference, the complete syntax of the fs_conv_sol26 command is as follows:

  fs_conv_sol26  {convert | unconvert | help} [-verbose] [-force]
                 {-device <raw device name>+ | -part </vicepx>+}
   

where

convert
Converts each AFS server partition to a format compatible with Solaris 2.6.

unconvert
Converts each AFS server partition back to a format compatible with versions of Solaris prior to 2.6.

-verbose
Lists the inodes on the partition and reports the changes that the conversion process makes to them. If you provide this flag without the -force option, the output lists the changes to be made during the conversion, without actually performing them; it is not necessary to unmount the partitions in this case.

-force
Performs the actual conversion. If you provide this flag without either of the -part or -device arguments, the command converts all AFS server partitions listed in the /etc/vfstab file. This is not recommended, because data corruption can result if the command runs on mounted partitions.

-device
Specifies the raw device name of each AFS server partition to convert (for example, /dev/rdsk/c0t1d0s3). Provide this argument or the -part argument.

-part
Specifies the directory name of each AFS server partition to convert (for example, /vicepa). The utility verifies the partition's entry in the /etc/vfstab file before performing the conversion, so the entry cannot be commented if you provide this argument. Provide this argument or the -device argument.

The following command consults the /etc/vfstab file and converts all AFS server partitions listed in it. Allowing automatic conversion in this way is admittedly easier than the partition-by-partition method outlined in the preceding instructions, but it is not recommended. It requires that you leave all AFS server partition entries uncommented in the /etc/vfstab file, introducing the possibility that the Solaris version of the fsck program can access them if the machine reboots spontaneously during the upgrade process.

   fs_conv_sol26 convert -force    /* not recommended */   
    

Distributing Binaries to Server Machines

The instructions in this section explain how to use the Update Server to distribute server binaries from a binary distribution machine of each system type. Repeat the steps for each binary distribution machine in your cell. If you do not use the Update Server, repeat the steps on every server machine in your cell.

If you are copying files from the AFS product tree or via the Web, the server machine must also be configured as an AFS client machine.

  1. If you are upgrading the operating system on any server machine to Digital UNIX 4.0d, to HP-UX 11.0 from version 10.10 or earlier, or to Solaris 2.6, you must convert the format of server partitions before the AFS 3.5 File Server runs. Perform the instructions in Upgrading the Operating System before continuing.

  2. Become the local superuser root, if you are not already.

  3. Create a temporary subdirectory of the /usr/afs/bin directory to store the AFS 3.5 server binaries.
       
       # mkdir /usr/afs/bin.35
        
    

  4. Prepare to access server files using the method you have selected from those listed in Obtaining the Binary Distribution:

  5. Copy the server binaries from the distribution into the /usr/afs/bin.35 directory.
       
       # cp -p  *  /usr/afs/bin.35 
       
    

  6. If you use the United States edition of AFS and a system control machine, copy the encryption-enabled version of the upclient binary.

  7. Rename the current /usr/afs/bin directory to /usr/afs/bin.old and the /usr/afs/bin.35 directory to the standard location.
       
       # cd /usr/afs
       
       # mv  bin  bin.old
          
       # mv  bin.35  bin 
       
    

Upgrading Server Machines

Repeat the following instructions on each server machine. Perform them first on the database server machine with the lowest IP address, next on the other database server machines, and finally on other server machines.

The AFS data stored on a server machine is inaccessible to client machines during the upgrade process, so it is best to perform the upgrade at a time and in a manner that disturbs your users least.

  1. If you have just followed the steps in Distributing Binaries to Server Machines to install the server binaries on binary distribution machines, wait the required interval (by default, five minutes) for the upclientbin process running on this machine to retrieve the binaries.

    If you do not use binary distribution machines, perform the instructions in Distributing Binaries to Server Machines on this machine.
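    As a quick check (a suggestion, not part of the formal procedure), you can list the /usr/afs/bin directory by modification time to confirm that the new binaries have arrived:

       # ls -lt /usr/afs/bin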

  2. If you are upgrading a server machine to Digital UNIX 4.0d, to HP-UX 11.0 from version 10.10 or earlier, or to Solaris 2.6, you must already have performed a file system conversion. For instructions, see Upgrading the Operating System.

  3. Become the local superuser root, if you are not already, by issuing the su command.
       
       % su root
       Password: root_password
       
    

  4. If the machine also functions as a client machine, prepare to access client files using the method you have selected from those listed in Obtaining the Binary Distribution:

  5. If the machine also functions as a client machine, copy the AFS 3.5 version of the afsd binary and other files to the /usr/vice/etc directory.
    Note: Some files in the /usr/vice/etc directory, such as the AFS initialization file (called afs.rc on many system types), do not necessarily need to change for a new release. It is a good policy to compare the contents of the distribution directory and the /usr/vice/etc directory before performing the copying operation. If there are files in the /usr/vice/etc directory that you created for AFS 3.4a and that you want to retain, either move them to a safe location before performing the following instructions, or alter the following instructions to copy over only the appropriate files.
         
       # cp  -p  usr/vice/etc/*   /usr/vice/etc   
       
       # cp  -rp  usr/vice/etc/C  /usr/vice/etc
       
    

    If you have not yet incorporated AFS into the machine's authentication system, perform the instructions in the section titled Enabling AFS Login in the AFS Installation Guide chapter about configuring client machines. If this machine was running the same operating system revision with AFS 3.4a, you presumably already incorporated AFS into its authentication system. You can consult that section to verify that the configuration is correct.

  6. AFS performance is most dependable when the kernel extensions and the server processes come from the same AFS release. Therefore, it is best to incorporate the AFS 3.5 kernel libraries into the kernel at this point.
    Note: If the machine also serves as a client and you upgraded the client files in the previous step, you must upgrade the kernel extensions now and reboot the machine to use them and the new Cache Manager.

    Begin by shutting down the server processes. This prevents them from restarting accidentally before you have a chance to incorporate the AFS 3.5 extensions into the kernel.

       
       # bos shutdown <machine name> -localauth -wait
       
    

    Now perform the instructions in Incorporating AFS into the Kernel, which have you reboot the machine. Assuming that the machine's AFS initialization file is configured to invoke the bosserver command as specified in the AFS Installation Guide, the BOS Server starts and then starts the other AFS server processes listed in the local /usr/afs/local/BosConfig file.

    If you choose to upgrade the kernel extensions later, you can restart all server processes at this point by issuing the bos restart command with the -bosserver flag. Alternatively, you can wait for the processes to restart automatically at the time specified in the /usr/afs/local/BosConfig file.
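
    For example, the restart command might look like the following (a sketch; substitute your server machine's name):

       # bos restart <machine name> -bosserver -localauth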

  7. Once you are satisfied that the machine is functioning correctly at AFS 3.5, there is no need to retain AFS 3.4a versions of the server binaries in the /usr/afs/bin directory. (You can always use the bos install command to reinstall them if it becomes necessary to downgrade). If you use the Update Server, the upclientbin process renamed them with a .old extension in Step 1. To reclaim the disk space occupied in the /usr/afs/bin directory by .bak and .old files, you can use the following command:
       
       # bos prune <machine name> -bak -old -localauth
    

    Step 7 of Distributing Binaries to Server Machines had you move the AFS 3.4a version of the binaries to the /usr/afs/bin.old directory. You can also remove that directory on any machine where you created it.

       
       # rm -rf  /usr/afs/bin.old
    

Upgrading Client Machines

  1. Become the local superuser root, if you are not already, by issuing the su command.
       
       % su root
       Password: root_password
       
    

  2. Prepare to access client files using the method you have selected from those listed in Obtaining the Binary Distribution:

  3. Copy the AFS 3.5 version of the afsd binary and other files to the /usr/vice/etc directory.
    Note: Some files in the /usr/vice/etc directory, such as the AFS initialization file (called afs.rc on many system types), do not necessarily need to change for a new release. It is a good policy to compare the contents of the distribution directory and the /usr/vice/etc directory before performing the copying operation. If there are files in the /usr/vice/etc directory that you created for AFS 3.4a and that you want to retain, either move them to a safe location before performing the following instructions, or alter the following instructions to copy over only the appropriate files.
         
       # cp  -p  usr/vice/etc/*   /usr/vice/etc   
       
       # cp  -rp  usr/vice/etc/C  /usr/vice/etc
       
    

    If you have not yet incorporated AFS into the machine's authentication system, perform the instructions in the section titled Enabling AFS Login in the AFS Installation Guide chapter about configuring client machines. If this machine was running the same operating system revision with AFS 3.4a, you presumably already incorporated AFS into its authentication system. You can consult that section to verify that the configuration is correct.

  4. Perform the instructions in Incorporating AFS into the Kernel to incorporate AFS extensions into the kernel. The instructions conclude with a reboot of the machine, which starts the new Cache Manager.

Incorporating AFS into the Kernel

As part of the upgrade process, you must incorporate AFS 3.5 extensions into the kernel on every AFS server and client machine. The following sections provide instructions for using a kernel dynamic loader or building a static kernel as appropriate.

Loading AFS into the Kernel on AIX Systems

The AIX kernel extension facility is the dynamic kernel loader provided by IBM Corporation for AIX. AIX does not support building AFS modifications into a static kernel.

For AFS to function correctly, the kernel extension facility must run each time the machine reboots. The simplest way to guarantee this is to invoke the facility in the machine's AFS initialization file. In the following instructions you edit the rc.afs initialization script provided in the AFS distribution, selecting the appropriate options depending on whether NFS is also to run.

After editing the script, you verify that there is an entry in the AIX inittab file that invokes it, then reboot the machine to incorporate the new AFS extensions into the kernel and restart the Cache Manager.

  1. Access the AFS distribution by changing directory as indicated. Substitute rs_aix42 for the sysname variable.

  2. Copy the AFS kernel library files to the local /usr/vice/etc/dkload directory, and the AFS initialization script to the /etc directory.
         
       # cd  usr/vice/etc
       
       # cp -rp  dkload  /usr/vice/etc
       
       # cp -p  rc.afs  /etc/rc.afs
        
    

  3. Edit the /etc/rc.afs script, setting the NFS variable as indicated.
    Note: For the machine to function as an NFS/AFS translator, NFS must already be loaded into the kernel. It is loaded automatically on systems running AIX 4.1.1 and later, as long as the file /etc/exports exists.

  4. Place the following line in the AIX initialization file, /etc/inittab, to invoke the AFS initialization script. It belongs just after the line that starts the NFS daemons.
       
       rcafs:2:wait:/etc/rc.afs > /dev/console 2>&1 # Start AFS services
       
    

  5. (Optional) There are now copies of the AFS initialization file in both the /usr/vice/etc and /etc directories. If you want to avoid potential confusion by guaranteeing that they are always the same, create a link between them. You can always retrieve the original script from the AFS distribution if necessary.
       
       # cd  /usr/vice/etc
       
       # rm  rc.afs
      
       # ln -s  /etc/rc.afs
       
    

  6. Reboot the machine.
       
          # shutdown -r now
       
    

  7. If you are upgrading a server machine, login again as the local superuser root, then return to Step 7 in Upgrading Server Machines.
      
       login: root
       Password: root_password     
    

Building AFS into the Kernel on Digital UNIX Systems

On Digital UNIX systems, you must build AFS modifications into a new static kernel; Digital UNIX does not support dynamic loading. If the machine's hardware and software configuration exactly matches another Digital UNIX machine on which AFS 3.5 is already built into the kernel, you can choose to copy the kernel from that machine to this one. In general, however, it is better to build AFS modifications into the kernel on each machine according to the following instructions.

If the machine was running a revision of Digital UNIX 4.0 and AFS 3.4a, the configuration changes specified in Step 1 through Step 4 are presumably already in place.

  1. Create a copy called AFS of the basic kernel configuration file included in the Digital UNIX distribution as /usr/sys/conf/machine_name, where machine_name is the machine's hostname in all uppercase letters.
       # cd /usr/sys/conf
       
       # cp machine_name AFS
       
    

  2. Add AFS to the list of options in the configuration file you created in the previous step, so that the result looks like the following:
              .                   .
              .                   .
           options               UFS
           options               NFS
           options               AFS
              .                   .
              .                   .
       
    

  3. Add an entry for AFS to two places in the /usr/sys/conf/files file.

  4. Add an entry for AFS to two places in the /usr/sys/vfs/vfs_conf.c file.

  5. Access the AFS distribution by changing directory as indicated. Substitute alpha_dux40 for the sysname variable.

  6. If you ran a revision of Digital UNIX 4.0 on this machine with AFS 3.4a, the appropriate AFS initialization file may already exist as /sbin/init.d/afs. Compare it to the version in the root.client/usr/vice/etc directory of the AFS 3.5 distribution to see if any changes are needed.

    If the initialization file is not already in place, copy it now. Note the removal of the .rc extension as you copy.

       
       # cp  usr/vice/etc/afs.rc  /sbin/init.d/afs
       
    

  7. Copy the AFS kernel module to the local /usr/sys/BINARY directory.

    The initial GA distribution of AFS 3.5 includes only the libafs.nonfs.o version of the library, because Digital UNIX machines are not supported as NFS/AFS Translator machines.

    If later AFS 3.5 distributions support NFS/AFS Translator functionality on Digital UNIX, on translator machines you can instead copy the libafs.o version of the library (in this case, the machine's kernel must also support NFS server functionality).

      
       # cp  bin/libafs.nonfs.o  /usr/sys/BINARY/afs.mod
       
    

  8. Configure and build the kernel. Respond to any prompts by pressing <Return>. The resulting kernel resides in the file /sys/AFS/vmunix.
       
       # doconfig -c AFS
       
    

  9. Rename the existing kernel file and copy the new, AFS-modified file to the standard location.
       
       # mv  /vmunix  /vmunix_save
       
       # cp  /sys/AFS/vmunix  /vmunix
       
    

  10. Verify the existence of the symbolic links specified in the following commands, which incorporate the AFS initialization script into the Digital UNIX startup and shutdown sequence. If necessary, issue the commands to create the links.
       
       # cd  /sbin/init.d
       
       # ln -s  ../init.d/afs  /sbin/rc3.d/S67afs
       
       # ln -s  ../init.d/afs  /sbin/rc0.d/K66afs
       
    

  11. (Optional) If the machine is configured as a client, there are now copies of the AFS initialization file in both the /usr/vice/etc and /sbin/init.d directories. If you want to avoid potential confusion by guaranteeing that they are always the same, create a link between them. You can always retrieve the original script from the AFS distribution if necessary.
       
       # cd  /usr/vice/etc
       
       # rm  afs.rc
      
       # ln -s  /sbin/init.d/afs  afs.rc
       
    

  12. Reboot the machine.
       
       # shutdown -r now
       
    

  13. If you are upgrading a server machine, login again as the local superuser root, then return to Step 7 in Upgrading Server Machines.
      
       login: root
       Password: root_password     
    

Building AFS into the Kernel on HP-UX Systems

On HP-UX systems, you must build AFS modifications into a new kernel; HP-UX does not support dynamic loading. If the machine's hardware and software configuration exactly matches another HP-UX machine on which AFS 3.5 is already built into the kernel, you can choose to copy the kernel from that machine to this one. In general, however, it is better to build AFS modifications into the kernel on each machine according to the following instructions.

  1. Move the existing kernel-related files to a safe location.
       
       # cp /stand/vmunix /stand/vmunix.noafs
       
       # cp /stand/system /stand/system.noafs
       
    

  2. Access the AFS distribution by changing directory as indicated. Substitute hp_ux110 for the sysname variable.

  3. If you ran HP-UX 11.0 on this machine with AFS 3.4a, the appropriate AFS initialization file may already exist as /sbin/init.d/afs. Compare it to the version in the root.client/usr/vice/etc directory of the AFS 3.5 distribution to see if any changes are needed.

    If the initialization file is not already in place, copy it now. Note the removal of the .rc extension as you copy.

       
       # cp  usr/vice/etc/afs.rc  /sbin/init.d/afs
       
    

  4. Copy the file afs.driver to the local /usr/conf/master.d directory, changing its name to afs as you do so.
         
       # cp  usr/vice/etc/afs.driver  /usr/conf/master.d/afs
       
    

  5. Copy the AFS kernel module to the local /usr/conf/lib directory.

    The initial GA distribution of AFS 3.5 includes only the libafs.nonfs.a version of the library, because HP-UX machines are not supported as NFS/AFS Translator machines. Change the library's name to libafs.a as you copy it.

    If later AFS 3.5 distributions support NFS/AFS Translator functionality on HP-UX, on translator machines instead copy the libafs.a version of the library (in this case, the machine's kernel must also support NFS server functionality).

       
       # cp  bin/libafs.nonfs.a  /usr/conf/lib/libafs.a
       
    

  6. Verify the existence of the symbolic links specified in the following commands, which incorporate the AFS initialization script into the HP-UX startup and shutdown sequence. If necessary, issue the commands to create the links.
          # cd /sbin/init.d
       
       # ln -s ../init.d/afs /sbin/rc2.d/S460afs
      
       # ln -s ../init.d/afs /sbin/rc2.d/K800afs
       
    

  7. (Optional) If the machine is configured as a client, there are now copies of the AFS initialization file in both the /usr/vice/etc and /sbin/init.d directories. If you want to avoid potential confusion by guaranteeing that they are always the same, create a link between them. You can always retrieve the original script from the AFS distribution if necessary.
       
       # cd /usr/vice/etc
       
       # rm afs.rc
      
       # ln -s  /sbin/init.d/afs  afs.rc
       
    

  8. Incorporate the AFS driver into the kernel, either using the SAM program or a series of individual commands. Both methods reboot the machine, which loads the new kernel and starts the Cache Manager.

  9. If you are upgrading a server machine, login again as the local superuser root, then return to Step 7 in Upgrading Server Machines.
      
       login: root
       Password: root_password     
    

Incorporating AFS into the Kernel on IRIX Systems

To incorporate AFS into the kernel on IRIX systems, choose one of two methods: use the ml dynamic kernel loader, or build AFS modifications into a static kernel. Instructions for both methods follow.

Using the ml Program on IRIX Systems

The ml program is the dynamic kernel loader provided by SGI for IRIX systems.

If you choose to use the ml program rather than to build AFS modifications into a static kernel, then for AFS to function correctly the ml program must run each time the machine reboots. The simplest way to guarantee this is to invoke the program in the machine's AFS initialization script, which is included in the AFS distribution. In this section you activate the configuration variables that trigger the appropriate commands in the script.

  1. Issue the uname -m command to determine the machine's CPU type. The IPxx value in the output must match one of the supported CPU types listed in the AFS Release Notes for the current version of AFS.
       
       # uname -m
       
    

  2. Access the AFS distribution by changing directory as indicated. Substitute sgi_65 for the sysname variable.

  3. Copy the appropriate AFS kernel library file to the local /usr/vice/etc/sgiload directory; the IPxx portion of the library file name must match the value returned by the uname -m command. Also choose the file appropriate to whether the machine's kernel supports NFS server functionality (NFS must be supported for the machine to act as an NFS/AFS Translator). Single- and multiprocessor machines use the same library file.

    You can choose to copy all of the kernel library files into the /usr/vice/etc/sgiload directory, but they require a significant amount of space.

       
       # cd  usr/vice/etc/sgiload   
       
    

    If the machine's kernel supports NFS server functionality:

       
       # cp -p   libafs.IPxx.o   /usr/vice/etc/sgiload   
       
    

    If the machine's kernel does not support NFS server functionality:

       
       # cp -p  libafs.nonfs.IPxx.o  /usr/vice/etc/sgiload
       
    

  4. Proceed to Enabling the AFS Initialization Script.

Building AFS into the Kernel on IRIX Systems

If you prefer to build a kernel, and the machine's hardware and software configuration exactly matches another IRIX machine on which AFS 3.5 is already built into the kernel, you can choose to copy the kernel from that machine to this one. In general, however, it is better to build AFS modifications into the kernel on each machine according to the following instructions.

  1. Access the AFS distribution by changing directory as indicated. Substitute sgi_65 for the sysname variable.

  2. Issue the uname -m command to determine the machine's CPU type. The IPxx value in the output must match one of the supported CPU types listed in the AFS Release Notes for the current version of AFS.
       
       # uname -m
        
    

  3. Copy the appropriate AFS kernel library file to the local file /var/sysgen/boot/afs.a; the IPxx portion of the library file name must match the value returned by the uname -m command. Also choose the file appropriate to whether the machine's kernel supports NFS server functionality (NFS must be supported for the machine to act as an NFS/AFS Translator). Single- and multiprocessor machines use the same library file.
       
       # cd  bin   
       
    

    If the machine's kernel supports NFS server functionality:

       
       # cp -p   libafs.IPxx.a   /var/sysgen/boot/afs.a   
       
    

    If the machine's kernel does not support NFS server functionality:

       
       # cp -p  libafs.nonfs.IPxx.a   /var/sysgen/boot/afs.a
       
    

  4. Copy the kernel initialization file afs.sm to the local /var/sysgen/system directory, and the kernel master file afs to the local /var/sysgen/master.d directory.
        
       # cp -p  afs.sm  /var/sysgen/system
       
       # cp -p  afs  /var/sysgen/master.d
       
    

  5. Copy the existing kernel file, /unix, to a safe location and compile the new kernel. It is created as /unix.install, and overwrites the existing /unix file when the machine reboots.
       
       # cp /unix /unix_orig
       
       # autoconfig
       
    

  6. Proceed to Enabling the AFS Initialization Script.

Enabling the AFS Initialization Script

  1. If you ran IRIX 6.5 on this machine with AFS 3.4a, the appropriate AFS initialization file may already exist as /etc/init.d/afs. Compare it to the version in the root.client/usr/vice/etc directory of the AFS 3.5 distribution to see if any changes are needed.

    If the initialization file is not already in place, copy it now. If the machine is configured as a client machine, you already copied the script to the local /usr/vice/etc directory. Otherwise, change directory as indicated, substituting sgi_65 for the sysname variable.

    Now copy the script. Note the removal of the .rc extension as you copy.

       
       # cp   script_location/afs.rc  /etc/init.d/afs
       
    

  2. If the afsml configuration variable is not already set appropriately, issue the chkconfig command.

    If you are using the ml program:

       
       # /etc/chkconfig -f afsml on
       
    

    If you built AFS into a static kernel:

       
       # /etc/chkconfig -f afsml off
       
    

    If the machine is to function as an NFS/AFS Translator, the kernel supports NFS server functionality, and the afsxnfs variable is not already set appropriately, set it now.

       
       # /etc/chkconfig -f afsxnfs on
       
    

  3. Verify the existence of the symbolic links specified in the following commands, which incorporate the AFS initialization script into the IRIX startup and shutdown sequence. If necessary, issue the commands to create the links.
       
       # cd /etc/init.d
       
       # ln -s ../init.d/afs /etc/rc2.d/S35afs
      
       # ln -s ../init.d/afs /etc/rc0.d/K35afs
       
    

  4. (Optional) If the machine is configured as a client, there are now copies of the AFS initialization file in both the /usr/vice/etc and /etc/init.d directories. If you want to avoid potential confusion by guaranteeing that they are always the same, create a link between them. You can always retrieve the original script from the AFS distribution if necessary.
       
       # cd /usr/vice/etc
       
       # rm afs.rc
      
       # ln -s  /etc/init.d/afs  afs.rc
       
    

  5. Reboot the machine.

       
       # shutdown -i6 -g0 -y
       
    

  6. If you are upgrading a server machine, login again as the local superuser root, then return to Step 7 in Upgrading Server Machines.
      
       login: root
       Password: root_password     
    

Loading AFS into the Kernel on Linux Systems

The insmod program is the dynamic kernel loader for Linux. Linux does not support building AFS modifications into a static kernel.

For AFS to function correctly, the insmod program must run each time the machine reboots. The simplest way to guarantee this is to invoke the program in the machine's AFS initialization file. As distributed, the initialization file includes commands that select the appropriate AFS library file and run the insmod program automatically. In this section you run the script to load AFS modifications into the kernel.

  1. Access the AFS distribution by changing directory as indicated. Substitute i386_linux22 for the sysname variable.

  2. Copy the AFS kernel library files to the local /usr/vice/etc/modload directory. The filenames for the libraries have the format libafs-version.o, where version indicates the kernel build level. The string .mp in the version indicates that the file is appropriate for use with symmetric multiprocessor (SMP) kernels.
       
       # cd  usr/vice/etc
       
       # cp -rp  modload  /usr/vice/etc
       
    

  3. If you ran Linux on this machine with AFS 3.4a, the appropriate AFS initialization file may already exist as /etc/rc.d/init.d/afs. Compare it to the version in the root.client/usr/vice/etc directory of the AFS 3.5 distribution to see if any changes are needed.

    If the initialization file is not already in place, copy it now. Note the removal of the .rc extension as you copy.

       
       # cp -p   afs.rc  /etc/rc.d/init.d/afs 
        
    

    Similarly, the afsd options file may already exist as /etc/sysconfig/afs from running AFS 3.4a on this machine. Compare it to the version in the root.client/usr/vice/etc directory of the AFS 3.5 distribution to see if any changes are needed.

    If the options file is not already in place, copy it now. Note the removal of the .conf extension as you copy.

        
       # cp  afs.conf  /etc/sysconfig/afs
        
    

    If necessary, edit the options file to invoke the desired arguments on the afsd command in the initialization script. For further information, see the section titled Configuring the Cache Manager in the AFS Installation Guide chapter about configuring client machines.
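
    As a hypothetical excerpt (the variable name and values are illustrative only; consult the distributed copy of afs.conf for the actual format), the options file might set afsd arguments like this:

       OPTIONS="-stat 2800 -dcache 2400 -daemons 5 -volumes 128"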

  4. Issue the chkconfig command to activate the afs configuration variable, if it is not already. Based on the instruction in the AFS initialization file that begins with the string #chkconfig, the command automatically creates the symbolic links that incorporate the script into the Linux startup and shutdown sequence.
       
       # /sbin/chkconfig  --add afs
       
    

  5. (Optional) If the machine is configured as a client, there are now copies of the AFS initialization file in both the /usr/vice/etc and /etc/rc.d/init.d directories, and copies of the afsd options file in both the /usr/vice/etc and /etc/sysconfig directories. If you want to avoid potential confusion by guaranteeing that the two copies of each file are always the same, create a link between them. You can always retrieve the original script or options file from the AFS distribution if necessary.
       
       # cd /usr/vice/etc
       
       # rm afs.rc afs.conf
        
       # ln -s  /etc/rc.d/init.d/afs  afs.rc
       
       # ln -s  /etc/sysconfig/afs  afs.conf
       
    

  6. Reboot the machine.
      
       # shutdown -r now
         
    

  7. If you are upgrading a server machine, login again as the local superuser root, then return to Step 7 in Upgrading Server Machines.
      
       login: root
       Password: root_password     
    

Loading AFS into the Kernel on Solaris Systems

The modload program is the dynamic kernel loader provided by Sun Microsystems for Solaris systems. Solaris does not support building AFS modifications into a static kernel.

For AFS to function correctly, the modload program must run each time the machine reboots. The simplest way to guarantee this is to invoke the program in the machine's AFS initialization file. In this section you copy an AFS library file to the location where the modload program can access it, /kernel/fs/afs. Select the appropriate library file based on whether NFS is also running.

  1. Access the AFS distribution by changing directory as indicated. Substitute sun4x_56 for the sysname variable.

  2. If you ran Solaris on this machine with AFS 3.4a, the appropriate AFS initialization file may already exist as /etc/init.d/afs. Compare it to the version in the root.client/usr/vice/etc directory of the AFS 3.5 distribution to see if any changes are needed.

    If the initialization file is not already in place, copy it now. Note the removal of the .rc extension as you copy.

       
       # cd  usr/vice/etc
     
       # cp  afs.rc  /etc/init.d/afs
       
    

  3. Copy the appropriate AFS kernel library file to the local file /kernel/fs/afs.

    If the machine's kernel supports NFS server functionality and the nfsd process is running:

       
       # cp -p  modload/libafs.o  /kernel/fs/afs
       
    

    If the machine's kernel does not support NFS server functionality or if the nfsd process is not running:

       
       # cp -p  modload/libafs.nonfs.o  /kernel/fs/afs
       
    

  4. Verify the existence of the symbolic links specified in the following commands, which incorporate the AFS initialization script into the Solaris startup and shutdown sequence. If necessary, issue the commands to create the links.
       
       # cd /etc/init.d
      
       # ln -s ../init.d/afs /etc/rc3.d/S99afs
      
       # ln -s ../init.d/afs /etc/rc0.d/K66afs
       
    

  5. (Optional) If the machine is configured as a client, there are now copies of the AFS initialization file in both the /usr/vice/etc and /etc/init.d directories. If you want to avoid potential confusion by guaranteeing that they are always the same, create a link between them. You can always retrieve the original script from the AFS distribution if necessary.
       
       # cd /usr/vice/etc
       
       # rm afs.rc
      
       # ln -s  /etc/init.d/afs  afs.rc
       
    

  6. Reboot the machine.
       # shutdown -i6 -g0 -y
          
    

  7. If you are upgrading a server machine, login again as the local superuser root, then return to Step 7 in Upgrading Server Machines.
      
       login: root
       Password: root_password     
    

Requirements and Limitations

This section summarizes limitations and requirements for AFS 3.5, grouping them by system type where appropriate.

Requirements and Limitations for All System Types

Limitations for AIX Systems

Requirements and Limitations for Digital UNIX Systems

Requirements and Limitations for HP-UX Systems

Requirements and Limitations for IRIX Systems

Requirements and Limitations for Linux Systems

Requirements and Limitations for Solaris Systems


Changes to AFS Commands and Files

This section briefly describes commands, command options, and configuration files that are new in AFS 3.5. The items appear in alphabetical order in each section. It also lists obsolete commands removed from the AFS distribution.

New Commands and Files

AFS 3.5 includes the following new commands and files. All are documented completely in the AFS Command Reference Manual, and many are also discussed in the AFS System Administrator's Guide.

New Command Options and Functionality

AFS 3.5 adds the following new options and functionality to existing commands. All are documented completely in the AFS Command Reference Manual, and many are also discussed in the AFS System Administrator's Guide.

Deleted Commands

The following commands and command options have been removed from the AFS distribution, because the functionality they provide is no longer supported. As indicated, you can still use some of them if you type the command name in full; this level of support is provided for existing cells that are possibly using the commands in scripts.





© IBM Corporation 1999. All Rights Reserved