Installation Guide
This chapter explains how to install an additional file server machine in
your cell and how to add database server functionality to a server machine.
The instructions make the following assumptions.
- You have already installed your cell's first file server machine by
following the instructions in Installing the First AFS Machine
- You are logged in as the local superuser root
- You are working at the console
- A standard version of one of the operating systems supported by the
current version of AFS is running on the machine
- You can access the data on the AFS CD-ROMs, either through a local CD-ROM
drive or via an NFS mount of a CD-ROM drive attached to a machine that is
accessible by network
The procedure for installing a new file server machine is
similar to installing the first file server machine in your cell. There
are a few parts of the installation that differ depending on whether the
machine is the same AFS system type as an existing file server machine, or is
the first file server machine of its system type in your cell. The
differences mostly concern the source for the needed binaries and files, and
what portions of the Update Server you install:
- On a new system type, you must load files and binaries from the AFS
CD-ROM. You install the server portion of the Update Server to make
this machine the binary distribution machine for its system type.
- On an existing system type, you can copy files and binaries from a
previously installed file server machine, rather than from the CD-ROM.
You install the client portion of the Update Server to accept updates of
binaries, because a previously installed machine of this type was installed as
the binary distribution machine.
These instructions are brief; for more detailed information, refer to the
corresponding steps in Installing the First AFS Machine.
To install a new file server machine, perform the following
procedures:
- Copy needed binaries and files onto this machine's local disk
- Incorporate AFS modifications into the kernel
- Configure partitions for storing volumes
- Replace the standard fsck utility with the AFS-modified version
on some system types
- Start the Basic OverSeer (BOS) Server
- Start the appropriate portion of the Update Server
- Start the fs process, which incorporates three component
processes: the File Server, Volume Server, and Salvager
- Start the controller process (called runntp) for the Network
Time Protocol Daemon, which synchronizes clocks
After completing the instructions in this section, you can install database
server functionality on the machine according to the instructions in Installing Database Server Functionality.
Create the /usr/afs and /usr/vice/etc directories
on the local disk, along with the /cdrom directory that serves as the mount
point for the AFS distribution CD-ROM. Subsequent instructions copy files
from the CD-ROM into them, at the appropriate point for each system
type.
# mkdir /usr/afs
# mkdir /usr/afs/bin
# mkdir /usr/vice
# mkdir /usr/vice/etc
# mkdir /cdrom
As on the first file server machine, three of the initial procedures in
installing an additional file server machine vary a good deal from platform to
platform. For convenience, the following sections group together all
three of the procedures for a system type. Most of the remaining
procedures are the same on every system type, but differences are noted as
appropriate. The three initial procedures are the following.
- Incorporate AFS modifications into the kernel, either by using a dynamic
kernel loader program or by building a new static kernel
- Configure server partitions to house AFS volumes
- Replace the operating system vendor's fsck program with a
version that recognizes AFS data
To continue, proceed to the section below for your system type.
Begin by running the AFS initialization script to call the
AIX kernel extension facility, which dynamically loads AFS modifications into
the kernel. Then configure partitions and replace the AIX
fsck program with a version that correctly handles AFS
volumes.
- Mount the AFS CD-ROM labeled AFS for AIX, International Edition
on the local /cdrom directory. For instructions on mounting
CD-ROMs (either locally or remotely via NFS), see your AIX
documentation.
- Copy the AFS kernel library files from the CD-ROM to the local
/usr/vice/etc/dkload directory, and the AFS initialization script
to the /etc directory.
# cd /cdrom/rs_aix42/root.client/usr/vice/etc
# cp -rp dkload /usr/vice/etc
# cp -p rc.afs /etc/rc.afs
- Edit the /etc/rc.afs script, setting the NFS
variable as indicated.
Note: For the machine to function as an NFS/AFS translator, NFS must already be
loaded into the kernel. It is loaded automatically on systems running
AIX 4.1.1 and later, as long as the file /etc/exports
exists.
- If the machine is not to function as an NFS/AFS Translator, set the NFS
variable as follows:
NFS=$NFS_NONE
- If the machine is to function as an NFS/AFS Translator and is running AIX
4.2 (base level), set the NFS variable as follows. Only sites
that have a license for the NFS/AFS Translator are allowed to run translator
machines.
NFS=$NFS_NFS
- If the machine is to function as an NFS/AFS Translator and is running AIX
4.2.1 or higher, issue the following commands. Only sites
that have a license for the NFS/AFS Translator are allowed to run translator
machines.
NFS=$NFS_IAUTH
- Invoke the /etc/rc.afs script to load AFS modifications
into the kernel. You can ignore any error messages about the inability
to start the BOS Server or the AFS client.
# /etc/rc.afs
- Create a directory called /vicepxx for each AFS server
partition you are configuring (there must be at least one). Repeat the
command for each partition.
# mkdir /vicepxx
- Use the SMIT program to create a journaling file system on each
partition to be configured as an AFS server partition.
- Mount each partition at one of the /vicepxx
directories. Choose one of the following three methods:
- Use the SMIT program
- Use the mount -a command to mount all partitions at once
- Use the mount command on each partition in turn
Also configure the partitions so that they are mounted automatically at
each reboot. For more information, refer to the AIX
documentation.
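For example, if SMIT created a journaled file system whose stanza in the /etc/filesystems file names /vicepa as the mount point, a command like the following mounts it (the partition name is illustrative):
# mount /vicepa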
- Add the following line to the /etc/vfs file. It enables
the Cache Manager to unmount AFS correctly during shutdown.
afs 4 none none
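One way to append the line without opening a text editor is a command like the following (verify the result afterward):
# echo "afs 4 none none" >> /etc/vfs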
- Move the AIX fsck program helper to a safe location and install
the version from the AFS distribution in its place. The AFS CD-ROM must
still be mounted at the /cdrom directory.
# cd /sbin/helpers
# mv v3fshelper v3fshelper.noafs
# cp -p /cdrom/rs_aix42/root.server/etc/v3fshelper v3fshelper
- Proceed to Starting Server Programs.
Begin by building AFS modifications into the kernel, then
configure server partitions and replace the Digital UNIX fsck
program with a version that correctly handles AFS volumes.
If the machine's hardware and software configuration exactly matches
another Digital UNIX machine on which AFS is already built into the kernel,
you can copy the kernel from that machine to this one. In general,
however, it is better to build AFS modifications into the kernel on each
machine according to the following instructions.
- Create a copy called AFS of the basic kernel configuration file
included in the Digital UNIX distribution as
/usr/sys/conf/machine_name, where machine_name is
the machine's hostname in all uppercase letters.
# cd /usr/sys/conf
# cp machine_name AFS
- Add AFS to the list of options in the configuration file you created in
the previous step, so that the result looks like the following:
. .
. .
options UFS
options NFS
options AFS
. .
. .
- Add an entry for AFS to two places in the /usr/sys/conf/files
file.
- Add a line for AFS to the list of OPTIONS, so that the result
looks like the following:
. . .
. . .
OPTIONS/nfs optional nfs define_dynamic
OPTIONS/afs optional afs define_dynamic
OPTIONS/cdfs optional cdfs define_dynamic
. . .
. . .
- Add an entry for AFS to the list of MODULES, so that the result
looks like the following:
. . . .
. . . .
#
MODULE/nfs_server optional nfs_server Binary
nfs/nfs_server.c module nfs_server optimize -g3
nfs/nfs3_server.c module nfs_server optimize -g3
#
MODULE/afs optional afs Binary
afs/libafs.c module afs
#
- Add an entry for AFS to two places in the
/usr/sys/vfs/vfs_conf.c file.
- Add AFS to the list of defined file systems, so that the result looks like
the following:
. .
. .
#include <afs.h>
#if defined(AFS) && AFS
extern struct vfsops afs_vfsops;
#endif
. .
. .
- Put a declaration for AFS in the vfssw[] table's
MOUNT_ADDON slot, so that the result looks like the following:
. . .
. . .
&fdfs_vfsops,           "fdfs",         /* 12 = MOUNT_FDFS */
#if defined(AFS)
&afs_vfsops,            "afs",
#else
(struct vfsops *)0,     "",             /* 13 = MOUNT_ADDON */
#endif
#if NFS && INFS_DYNAMIC
&nfs3_vfsops,           "nfsv3",        /* 14 = MOUNT_NFS3 */
- Mount the AFS CD-ROM labeled AFS for Digital UNIX, International
Edition on the local /cdrom directory. For
instructions on mounting CD-ROMs (either locally or remotely via NFS), see
your Digital UNIX documentation.
- Copy the AFS initialization file from the distribution directory to the
local directory for initialization files on Digital UNIX machines,
/sbin/init.d by convention. Note the removal of the
.rc extension as you copy the file.
# cd /cdrom/alpha_dux40/root.client
# cp usr/vice/etc/afs.rc /sbin/init.d/afs
- Copy the AFS kernel module from the distribution directory to the local
/usr/sys/BINARY directory.
If the machine's kernel supports NFS server functionality:
# cp bin/libafs.o /usr/sys/BINARY/afs.mod
If the machine's kernel does not support NFS server
functionality:
# cp bin/libafs.nonfs.o /usr/sys/BINARY/afs.mod
- Configure and build the kernel. Respond to any prompts by pressing
<Return>. The resulting kernel resides in the file
/sys/AFS/vmunix.
# doconfig -c AFS
- Rename the existing kernel file and copy the new, AFS-modified file to the
standard location.
# mv /vmunix /vmunix_save
# cp /sys/AFS/vmunix /vmunix
- Reboot the machine to start using the new kernel.
# shutdown -r now
- Create a directory called /vicepxx for each AFS server
partition you are configuring (there must be at least one). Repeat the
command for each partition.
# mkdir /vicepxx
- Add a line with the following format to the file systems registry file,
/etc/fstab, for each directory just created. The entry maps
the directory name to the disk partition to be mounted on it.
/dev/disk /vicepxx ufs rw 0 2
For example,
/dev/rz3a /vicepa ufs rw 0 2
- Create a file system on each partition that is to be mounted at a
/vicep directory. The following command is probably
appropriate, but consult the Digital UNIX documentation for more
information.
# newfs -v /dev/disk
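For example, using the partition from the /etc/fstab example above (the device name is illustrative; some configurations expect the raw device name instead):
# newfs -v /dev/rz3a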
- Mount each partition by issuing either the mount -a command to
mount all partitions at once or the mount command to mount each
partition in turn.
- Move the Digital UNIX fsck binaries to a safe location, install
the version from the AFS distribution (the vfsck binary), and link
the Digital UNIX program names to it. The AFS CD-ROM must still be
mounted at the /cdrom directory.
# mv /sbin/ufs_fsck /sbin/ufs_fsck.noafs
# mv /usr/sbin/ufs_fsck /usr/sbin/ufs_fsck.noafs
# cd /cdrom/alpha_dux40/root.server/etc
# cp vfsck /sbin/vfsck
# cp vfsck /usr/sbin/vfsck
# ln -s /sbin/vfsck /sbin/ufs_fsck
# ln -s /usr/sbin/vfsck /usr/sbin/ufs_fsck
- Proceed to Starting Server Programs.
Begin by building AFS modifications into the kernel, then
configure server partitions and replace the HP-UX fsck program with
a version that correctly handles AFS volumes.
If the machine's hardware and software configuration exactly matches
another HP-UX machine on which AFS is already built into the kernel, you can
copy the kernel from that machine to this one. In general, however, it
is better to build AFS modifications into the kernel on each machine according
to the following instructions.
- Move the existing kernel-related files to a safe location.
# cp /stand/vmunix /stand/vmunix.noafs
# cp /stand/system /stand/system.noafs
- Mount the AFS CD-ROM labeled AFS for HP-UX, International
Edition on the local /cdrom directory. For
instructions on mounting CD-ROMs (either locally or remotely via NFS), see
your HP-UX documentation.
- Copy the AFS initialization file from the AFS CD-ROM to the local
directory for initialization files on HP-UX machines,
/sbin/init.d by convention. Note the removal of the
.rc extension as you copy the file.
# cd /cdrom/hp_ux110/root.client
# cp usr/vice/etc/afs.rc /sbin/init.d/afs
- Copy the file afs.driver from the AFS CD-ROM to the
local /usr/conf/master.d directory, changing its name to
afs as you do so.
# cp usr/vice/etc/afs.driver /usr/conf/master.d/afs
- Copy the AFS kernel module from the AFS CD-ROM to the local
/usr/conf/lib directory.
If the machine's kernel supports NFS server functionality:
# cp bin/libafs.a /usr/conf/lib
If the machine's kernel does not support NFS server
functionality:
# cp bin/libafs.nonfs.a /usr/conf/lib
- Incorporate the AFS driver into the kernel, either using the
SAM program or a series of individual commands.
- To use the SAM program:
- Invoke the SAM program, specifying the hostname of the local
machine as local_hostname. The SAM graphical user
interface pops up.
# sam -display local_hostname:0
- Choose the Kernel Configuration icon, then the
Drivers icon. From the list of drivers, select
afs.
- Open the pull-down Actions menu and choose the Add Driver
to Kernel option.
- Open the Actions menu again and choose the Create a New
Kernel option.
- Confirm your choices by choosing Yes and OK when
prompted by subsequent pop-up windows. The SAM program
builds the kernel and reboots the system.
- To use individual commands:
- Edit the file /stand/system, adding an entry for afs
to the Subsystems section.
- Change to the /stand/build directory and issue the
mk_kernel command to build the kernel.
# cd /stand/build
# mk_kernel
- Move the new kernel to the standard location (/stand/vmunix)
and reboot the machine to start using it.
# mv /stand/build/vmunix_test /stand/vmunix
# shutdown -r now
- Create a directory called /vicepxx for each AFS server
partition you are configuring (there must be at least one). Repeat the
command for each partition.
# mkdir /vicepxx
- Use the SAM program to create a file system on each
partition. For instructions, consult the HP-UX documentation.
- On some HP-UX systems that use logical volumes, the SAM program
automatically mounts the partitions. If it has not, mount each
partition by issuing either the mount -a command to mount all
partitions at once or the mount command to mount each partition in
turn.
- Create the command configuration file
/sbin/lib/mfsconfig.d/afs. Use a text editor to place
the indicated two lines in it:
format_revision 1
fsck 0 m,P,p,d,f,b:c:y,n,Y,N,q,
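As an alternative to a text editor, a shell here-document such as the following creates the file with the required contents:
# cat > /sbin/lib/mfsconfig.d/afs << "EOF"
format_revision 1
fsck 0 m,P,p,d,f,b:c:y,n,Y,N,q,
EOF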
- Create an AFS-specific command directory called
/sbin/fs/afs.
# mkdir /sbin/fs/afs
- Copy the AFS-modified version of the fsck program (the
vfsck binary) and related files from the distribution directory to
the new AFS-specific command directory. Change the vfsck
binary's name to fsck.
# cd /cdrom/hp_ux110/root.server/etc
# cp -p * /sbin/fs/afs
# mv vfsck fsck
- Set the mode bits appropriately on all of the files in the
/sbin/fs/afs directory.
# cd /sbin/fs/afs
# chmod 755 *
- Edit the /etc/fstab file, changing the file system type for
each AFS server (/vicep) partition from hfs to
afs. This ensures that the AFS-modified fsck
program runs on the appropriate partitions.
The sixth line in the following example of an edited file shows an AFS
server partition, /vicepa.
/dev/vg00/lvol1 / hfs defaults 0 1
/dev/vg00/lvol4 /opt hfs defaults 0 2
/dev/vg00/lvol5 /tmp hfs defaults 0 2
/dev/vg00/lvol6 /usr hfs defaults 0 2
/dev/vg00/lvol8 /var hfs defaults 0 2
/dev/vg00/lvol9 /vicepa afs defaults 0 2
/dev/vg00/lvol7 /usr/vice/cache hfs defaults 0 2
- Proceed to Starting Server Programs.
Begin by incorporating AFS modifications into the
kernel. Either use the ml dynamic loader program, or build a
static kernel. Then configure partitions to house AFS volumes.
AFS supports use of both EFS and XFS partitions for housing AFS
volumes. SGI encourages use of XFS partitions.
You do not need to replace the IRIX fsck program, because the
version that SGI distributes handles AFS volumes properly.
- Incorporate AFS into the kernel, either using the ml program or
by building AFS modifications into a static kernel.
- To use the ml program:
- Mount the AFS CD-ROM labeled AFS for IRIX, International
Edition on the local /cdrom directory. For
instructions on mounting CD-ROMs (either locally or remotely via NFS), see
your IRIX documentation.
- Issue the uname -m command to determine the machine's CPU
type. The IPxx value in the output must match one
of the supported CPU types listed in the AFS Release Notes for the
current version of AFS.
# uname -m
- Copy the appropriate AFS kernel library file from the CD-ROM to the local
/usr/vice/etc/sgiload directory; the IPxx
portion of the library file name must match the value returned by the
uname -m command. Also choose the file appropriate to
whether the machine's kernel supports NFS server functionality (NFS must
be supported for the machine to act as an NFS/AFS Translator). Single-
and multiprocessor machines use the same library file.
You can choose to copy all of the kernel library files into the
/usr/vice/etc/sgiload directory, but they require a significant
amount of space.
# mkdir /usr/vice/etc/sgiload
# cd /cdrom/sgi_65/root.client/usr/vice/etc
If the machine's kernel supports NFS server functionality:
# cp -p sgiload/libafs.IPxx.o /usr/vice/etc/sgiload
If the machine's kernel does not support NFS server
functionality:
# cp -p sgiload/libafs.nonfs.IPxx.o /usr/vice/etc/sgiload
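For example, on a machine whose uname -m output is IP22 and whose kernel supports NFS server functionality (the CPU type shown is illustrative only):
# cp -p sgiload/libafs.IP22.o /usr/vice/etc/sgiload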
- Copy the AFS initialization file from the CD-ROM to the local directory
for initialization files on IRIX machines, /etc/init.d by
convention. Note the removal of the .rc extension as
you copy the file.
# cp -p afs.rc /etc/init.d/afs
- Issue the chkconfig command to activate the afsml
configuration variable.
# /etc/chkconfig -f afsml on
If the machine is to function as an NFS/AFS Translator and the kernel
supports NFS server functionality, activate the afsxnfs
variable.
# /etc/chkconfig -f afsxnfs on
- Invoke the /etc/init.d/afs script to load AFS extensions
into the kernel. The script invokes the ml command,
automatically determining which kernel library file to use based on this
machine's CPU type and the activation state of the afsxnfs
variable.
You can ignore any error messages about the inability to start the BOS
Server or Cache Manager.
# /etc/init.d/afs start
- If you prefer to build a kernel, and the machine's hardware and
software configuration exactly matches another IRIX machine on which AFS is
already built into the kernel, you can copy the kernel from that machine to
this one. In general, however, it is better to build AFS modifications
into the kernel on each machine according to the following
instructions.
- Mount the AFS CD-ROM labeled AFS for IRIX, International
Edition on the /cdrom directory. For instructions on
mounting CD-ROMs (either locally or remotely via NFS), see your IRIX
documentation.
- Copy the AFS initialization file from the CD-ROM to the local directory
for initialization files on IRIX machines, /etc/init.d by
convention. Note the removal of the .rc extension as
you copy the file.
# cd /cdrom/sgi_65/root.client
# cp -p usr/vice/etc/afs.rc /etc/init.d/afs
- Copy the kernel initialization file afs.sm to the local
/var/sysgen/system directory, and the kernel master file
afs to the local /var/sysgen/master.d
directory.
# cp -p bin/afs.sm /var/sysgen/system
# cp -p bin/afs /var/sysgen/master.d
- Issue the uname -m command to determine the machine's CPU
type. The IPxx value in the output must match one
of the supported CPU types listed in the AFS Release Notes for the
current version of AFS.
# uname -m
- Copy the appropriate AFS kernel library file from the CD-ROM to the local
file /var/sysgen/boot/afs.a; the IPxx
portion of the library file name must match the value returned by the
uname -m command. Also choose the file appropriate to
whether the machine's kernel supports NFS server functionality (NFS must
be supported for the machine to act as an NFS/AFS Translator). Single-
and multiprocessor machines use the same library file.
If the machine's kernel supports NFS server functionality:
# cp -p bin/libafs.IPxx.a /var/sysgen/boot/afs.a
If the machine's kernel does not support NFS server
functionality:
# cp -p bin/libafs.nonfs.IPxx.a /var/sysgen/boot/afs.a
- Issue the chkconfig command to deactivate the afsml
configuration variable.
# /etc/chkconfig -f afsml off
If the machine is to function as an NFS/AFS Translator and the kernel
supports NFS server functionality, activate the afsxnfs
variable.
# /etc/chkconfig -f afsxnfs on
- Copy the existing kernel file, /unix, to a safe location and
compile the new kernel. It is created as
/unix.install, and overwrites the existing /unix
file when the machine reboots in the next step.
# cp /unix /unix_orig
# autoconfig
- Reboot the machine to start using the new kernel.
# shutdown -i6 -g0 -y
- Create a directory called /vicepxx for each AFS server
partition you are configuring (there must be at least one). Repeat the
command for each partition.
# mkdir /vicepxx
- Add a line with the following format to the file systems registry file,
/etc/fstab, for each partition (or logical volume created with the
XLV volume manager) to be mounted on one of the directories created in the
previous step.
For an XFS partition or logical volume:
/dev/dsk/disk /vicepxx xfs rw,raw=/dev/rdsk/disk 0 0
For an EFS partition:
/dev/dsk/disk /vicepxx efs rw,raw=/dev/rdsk/disk 0 0
The following are examples of an entry for each file system type:
/dev/dsk/dks0d2s6 /vicepa xfs rw,raw=/dev/rdsk/dks0d2s6 0 0
/dev/dsk/dks0d3s1 /vicepa efs rw,raw=/dev/rdsk/dks0d3s1 0 0
- Create a file system on each partition that is to be mounted on a
/vicep directory. The following commands are probably
appropriate, but consult the IRIX documentation for more information.
For XFS file systems, include the indicated options to configure the
partition or logical volume with inodes large enough to accommodate special
AFS-specific information:
# mkfs -t xfs -i size=512 -l size=4000b device
For EFS file systems:
# mkfs -t efs device
In both cases, device is a raw device name like
/dev/rdsk/dks0d0s0 for a single disk partition or
/dev/rxlv/xlv0 for a logical volume.
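For example, to create an XFS file system on the partition shown in the /etc/fstab example above (the device name is illustrative):
# mkfs -t xfs -i size=512 -l size=4000b /dev/rdsk/dks0d2s6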
- Mount each partition by issuing either the mount -a command to
mount all partitions at once or the mount command to mount each
partition in turn.
- Proceed to Starting Server Programs.
Begin by running the AFS initialization script to call the
insmod program, which dynamically loads AFS modifications into the
kernel. Then create partitions for storing AFS volumes. You do
not need to replace the Linux fsck program.
- Mount the AFS CD-ROM labeled AFS for Linux, International
Edition on the local /cdrom directory. For
instructions on mounting CD-ROMs (either locally or remotely via NFS), see
your Linux documentation.
- Copy the AFS kernel library files from the CD-ROM to the local
/usr/vice/etc/modload directory. The filenames for the
libraries have the format
libafs-version.o, where version
indicates the kernel build level. The string .mp in
the version indicates that the file is appropriate for machines
running a multiprocessor kernel.
# cd /cdrom/i386_linux22/root.client/usr/vice/etc
# cp -rp modload /usr/vice/etc
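To confirm which library file matches this machine, you can compare the output of the uname -r command with the file names now in the local directory (a quick check; no particular file names are assumed here):
# uname -r
# ls /usr/vice/etc/modload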
- Copy the AFS initialization file from the CD-ROM to the local directory
for initialization files on Linux machines,
/etc/rc.d/init.d by convention. Note the
removal of the .rc extension as you copy the file.
# cp -p afs.rc /etc/rc.d/init.d/afs
- Run the AFS initialization script to load AFS extensions into the
kernel. The script invokes the insmod command, automatically
determining which kernel library file to use based on the Linux kernel version
installed on this machine.
You can ignore any error messages about the inability to start the BOS
Server or Cache Manager.
# /etc/rc.d/init.d/afs start
- Create a directory called /vicepxx for each AFS server
partition you are configuring (there must be at least one). Repeat the
command for each partition.
# mkdir /vicepxx
- Add a line with the following format to the file systems registry file,
/etc/fstab, for each directory just created. The entry maps
the directory name to the disk partition to be mounted on it.
/dev/disk /vicepxx ext2 defaults 0 2
For example,
/dev/sda8 /vicepa ext2 defaults 0 2
- Create a file system on each partition that is to be mounted at a
/vicep directory. The following command is probably
appropriate, but consult the Linux documentation for more information.
# mkfs -v /dev/disk
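For example, using the partition from the /etc/fstab example above (the device name is illustrative):
# mkfs -v /dev/sda8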
- Mount each partition by issuing either the mount -a command to
mount all partitions at once or the mount command to mount each
partition in turn.
- Proceed to Starting Server Programs.
Begin by running the AFS initialization script to call the
modload program, which dynamically loads AFS modifications into the
kernel. Then configure partitions and replace the Solaris
fsck program with a version that correctly handles AFS
volumes.
- Mount the AFS CD-ROM labeled AFS for Solaris, International
Edition on the /cdrom directory. For instructions on
mounting CD-ROMs (either locally or remotely via NFS), see your Solaris
documentation.
- Copy the AFS initialization file from the CD-ROM to the local directory
for initialization files on Solaris machines, /etc/init.d by
convention. Note the removal of the .rc extension as
you copy the file.
# cd /cdrom/sun4x_56/root.client/usr/vice/etc
# cp -p afs.rc /etc/init.d/afs
- Copy the appropriate AFS kernel library file from the CD-ROM to the local
file /kernel/fs/afs.
If the machine's kernel supports NFS server functionality and the
nfsd process is running:
# cp -p modload/libafs.o /kernel/fs/afs
If the machine's kernel does not support NFS server functionality or
if the nfsd process is not running:
# cp -p modload/libafs.nonfs.o /kernel/fs/afs
- Invoke the AFS initialization script to load AFS modifications into the
kernel. It automatically creates an entry for AFS in slot 105 of the
local /etc/name_to_sysnum file if necessary, reboots the machine to
start using the new version of the file, and runs the modload
command. You can ignore any error messages about the inability to start
the BOS Server or the AFS client.
# /etc/init.d/afs start
- Create a directory called /vicepxx for each AFS server
partition you are configuring (there must be at least one). Repeat the
command for each partition.
# mkdir /vicepxx
- Add a line with the following format to the file systems registry file,
/etc/vfstab, for each partition to be mounted on a directory
created in the previous step.
/dev/dsk/disk /dev/rdsk/disk /vicepxx ufs boot_order yes
The following is an example.
/dev/dsk/c0t6d0s1 /dev/rdsk/c0t6d0s1 /vicepa ufs 3 yes
- Create a file system on each partition that is to be mounted at a
/vicep directory. The following command is probably
appropriate, but consult the Solaris documentation for more
information.
# newfs -v /dev/rdsk/disk
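For example, using the partition from the /etc/vfstab example above (the device name is illustrative):
# newfs -v /dev/rdsk/c0t6d0s1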
- Issue the mountall command to mount all partitions at
once.
- Create the /usr/lib/fs/afs directory to house AFS library
files.
# mkdir /usr/lib/fs/afs
- Copy the AFS-modified fsck program (vfsck) from the
CD-ROM distribution directory to the newly created directory.
# cd /cdrom/sun4x_56/root.server/etc
# cp vfsck /usr/lib/fs/afs/fsck
- Working in the /usr/lib/fs/afs directory, create the following
links to Solaris libraries:
# cd /usr/lib/fs/afs
# ln -s /usr/lib/fs/ufs/clri
# ln -s /usr/lib/fs/ufs/df
# ln -s /usr/lib/fs/ufs/edquota
# ln -s /usr/lib/fs/ufs/ff
# ln -s /usr/lib/fs/ufs/fsdb
# ln -s /usr/lib/fs/ufs/fsirand
# ln -s /usr/lib/fs/ufs/fstyp
# ln -s /usr/lib/fs/ufs/labelit
# ln -s /usr/lib/fs/ufs/lockfs
# ln -s /usr/lib/fs/ufs/mkfs
# ln -s /usr/lib/fs/ufs/mount
# ln -s /usr/lib/fs/ufs/ncheck
# ln -s /usr/lib/fs/ufs/newfs
# ln -s /usr/lib/fs/ufs/quot
# ln -s /usr/lib/fs/ufs/quota
# ln -s /usr/lib/fs/ufs/quotaoff
# ln -s /usr/lib/fs/ufs/quotaon
# ln -s /usr/lib/fs/ufs/repquota
# ln -s /usr/lib/fs/ufs/tunefs
# ln -s /usr/lib/fs/ufs/ufsdump
# ln -s /usr/lib/fs/ufs/ufsrestore
# ln -s /usr/lib/fs/ufs/volcopy
- Append the following line to the end of the file
/etc/dfs/fstypes.
afs AFS Utilities
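A command like the following appends the line:
# echo "afs AFS Utilities" >> /etc/dfs/fstypes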
- Edit the /sbin/mountall file, making two changes.
- Add an entry for AFS to the case statement for option 2, so
that it reads as follows:
case "$2" in
ufs) foptions="-o p"
;;
afs) foptions="-o p"
;;
s5) foptions="-y -t /var/tmp/tmp$$ -D"
;;
*) foptions="-y"
;;
- Edit the file so that all AFS and UFS partitions are checked in
parallel. Replace the following section of code:
# For fsck purposes, we make a distinction between ufs and
# other file systems
#
if [ "$fstype" = "ufs" ]; then
ufs_fscklist="$ufs_fscklist $fsckdev"
saveentry $fstype "$OPTIONS" $special $mountp
continue
fi
with the following section of code:
# For fsck purposes, we make a distinction between ufs/afs
# and other file systems.
#
if [ "$fstype" = "ufs" -o "$fstype" = "afs" ]; then
ufs_fscklist="$ufs_fscklist $fsckdev"
saveentry $fstype "$OPTIONS" $special $mountp
continue
fi
- Proceed to Starting Server Programs.
In this section you initialize the BOS Server, the Update
Server, the controller process for NTPD, and the File Server. You begin
by copying the necessary server files to the local disk.
- Copy file server binaries to the local /usr/afs/bin
directory.
- On a machine of an existing system type, you can either load files from
the AFS CD-ROM or use a remote file transfer protocol to copy files from an
existing server machine of the same system type. To load from the
CD-ROM, see the instructions just following for a machine of a new system
type. If using a remote file transfer protocol, copy the complete
contents of the /usr/afs/bin directory.
- On a machine of a new system type, you must use the following instructions
to copy files from the AFS CD-ROM.
- On the local /cdrom directory, mount the AFS CD-ROM for this
machine's system type that is labeled International Edition,
if it is not already. For instructions on mounting CD-ROMs (either
locally or remotely via NFS), consult the operating system
documentation.
- Copy files from the CD-ROM to the local /usr/afs
directory.
# cd /cdrom/sysname/root.server/usr/afs
# cp -rp * /usr/afs
- If you use the United States edition of AFS, mount the AFS CD-ROM labeled
Encryption Files, Domestic Edition at the /cdrom
directory.
- Copy files from the CD-ROM to the local /usr/afs/bin
directory.
# cd /cdrom/sysname/root.server/usr/afs/bin
# cp -p * /usr/afs/bin
- Copy the contents of the /usr/afs/etc directory from an
existing file server machine, using a remote file transfer protocol such as
ftp or NFS. If you run the United States Edition of AFS and
run a system control machine, it is best to copy the contents of its
/usr/afs/etc directory. If you run the international edition
of AFS (or do not use a system control machine), copy the directory's
contents from any existing file server machine.
- Change to the /usr/afs/bin directory and start the BOS Server
(bosserver process). Include the -noauth flag to
prevent the AFS processes from performing authorization checking. This
is a grave compromise of security; finish the remaining instructions in this
section in an uninterrupted pass.
# cd /usr/afs/bin
# ./bosserver -noauth &
- If using the United States edition of AFS, create the
upclientetc process as an instance of the client portion of the
Update Server. It accepts updates of the common configuration files
stored in the system control machine's /usr/afs/etc directory
from the upserver process (server portion of the Update Server)
running on that machine. The cell's first file server machine was
installed as the system control machine in Starting the Server Portion of the Update Server.
Do not issue this command if using the international edition of AFS.
The contents of the /usr/afs/etc directory are too sensitive to
cross the network unencrypted, but the necessary encryption routines are not
included in the international edition of AFS. You must update the
contents of the /usr/afs/etc directory on each file server machine,
using the appropriate bos commands. See the AFS System
Administrator's Guide for instructions.
By default, the Update Server performs updates every 300 seconds (five
minutes). Use the -t argument to specify a different number
of seconds. For the machine name argument, substitute the name
of the machine you are installing. The command appears on multiple
lines here only for legibility reasons.
# ./bos create <machine name> upclientetc simple \
"/usr/afs/bin/upclient <system control machine> \
[-t <time>] /usr/afs/etc" -cell <cellname> -noauth
- Create an instance of the Update Server to handle distribution
of the file server binaries stored in the /usr/afs/bin
directory.
- If this is the first file server machine of its AFS system type, create
the upserver process as an instance of the server portion of the
Update Server. It distributes its copy of the file server process
binaries to the other file server machines of this system type that you
install in future. Creating this process makes this machine the binary
distribution machine for its type.
# ./bos create <machine name> upserver simple \
"/usr/afs/bin/upserver -clear /usr/afs/bin" \
-cell <cellname> -noauth
- If this machine is an existing system type, create the
upclientbin process as an instance of the client portion of the
Update Server. It accepts updates of the AFS binaries from the
upserver process running on the binary distribution machine for its
system type. For distribution to work properly, the upserver
process must already be running on that machine.
Use the -clear argument to specify that the
upclientbin process request unencrypted transfer of the binaries in
the /usr/afs/bin directory. Binaries are not sensitive and
encrypting them is time-consuming.
By default, the Update Server performs updates every 300 seconds (five
minutes). Use the -t argument to specify a different number
of seconds.
# ./bos create <machine name> upclientbin simple \
"/usr/afs/bin/upclient <binary distribution machine>
[-t <time>] -clear /usr/afs/bin" -cell <cellname> -noauth
- Start the runntp process, which configures the Network Time
Protocol Daemon (NTPD) to refer to a database server machine chosen randomly
from the local /usr/afs/etc/CellServDB file as its time
source. In the standard configuration, the first database server
machine installed in your cell refers to a time source outside the cell, and
serves as the basis for clock synchronization on all server machines.
Note: Do not run the runntp process if NTPD or another time
synchronization protocol is already running on the machine. Attempting
to run multiple instances of NTPD causes an error. Running NTPD
together with another time synchronization protocol is unnecessary and can
cause instability in the clock setting.
Some versions of some operating systems run a time synchronization program
by default. For correct NTPD functioning, it is best to disable the
default program. See the AFS Release Notes for
details.
# ./bos create <machine name> runntp simple \
/usr/afs/bin/runntp -cell <cell name> -noauth
- Start the fs process, which binds together the File Server,
Volume Server, and Salvager. The command appears on multiple lines here
only for legibility reasons.
# ./bos create <machine name> fs fs \
/usr/afs/bin/fileserver /usr/afs/bin/volserver \
/usr/afs/bin/salvager -cell <cellname> -noauth
If you want this machine to be a client as well as a server,
follow the instructions in this section. Otherwise, skip to Completing the Installation.
Begin by loading the necessary client files to the local disk. Then
create the necessary configuration files and start the Cache Manager.
For more detailed explanation of the procedures involved, see the
corresponding instructions in Installing the First AFS Machine (in the sections following Overview: Installing Client Functionality).
If another AFS machine of this machine's system type exists, the AFS
binaries are probably already accessible in your AFS filespace (the
conventional location is
/afs/cellname/sysname/usr/afsws).
If not, or if this is the first AFS machine of its type, copy the AFS binaries
for this system type into an AFS volume by following the instructions in Storing AFS Binaries in AFS. Because this machine is not yet an AFS client, you
must perform the procedure on an existing AFS machine. However,
remember to perform the final step--linking the local directory
/usr/afsws to the appropriate location in the AFS file
tree--on this machine (the new file server machine). If you also
want to create AFS volumes to house UNIX system binaries for the new system
type, see Storing System Binaries in AFS.
- Copy client binaries and files to the local disk.
- On a machine of an existing system type, you can either load files from
the AFS CD-ROM or use a remote file transfer protocol to copy files from an
existing server machine of the same system type. To load from the
CD-ROM, see the instructions just following for a machine of a new system
type. If using a remote file transfer protocol, copy the complete
contents of the /usr/vice/etc directory.
- On a machine of a new system type, you must use the following instructions
to copy files from the AFS CD-ROM.
- On the local /cdrom directory, mount the AFS CD-ROM for this
machine's system type that is labeled International Edition,
if it is not already. For instructions on mounting CD-ROMs (either
locally or remotely via NFS), consult the operating system
documentation.
- Copy files from the CD-ROM to the local /usr/vice/etc
directory.
Note: This step places a copy of the AFS initialization script (and related files,
if applicable) into the /usr/vice/etc directory. In the
preceding instructions for incorporating AFS into the kernel, you copied the
script directly to the operating system's conventional location for
initialization files. Later, you link the two files to avoid the
potential confusion of having the two files differ; instructions appear in Activating the AFS Initialization Script.
On some system types that use a kernel dynamic loader program, you
previously copied AFS library files into a subdirectory of the
/usr/vice/etc directory. On other system types, you copied
the appropriate AFS library file directly to the directory where the operating
system accesses it. The following instruction does not copy (or recopy)
the AFS library files into the dynamic-loader subdirectory, because on some
system types the library files consume a large amount of space. If you
want to copy the library files as well, add the -r flag to the
first cp command and skip the second cp command.
# cd /cdrom/sysname/root.client/usr/vice/etc
# cp -p * /usr/vice/etc
# cp -rp C /usr/vice/etc
- Change to the /usr/vice/etc directory and create the
ThisCell file as a copy of the /usr/afs/etc/ThisCell
file. You must first remove the symbolic link to the
/usr/afs/etc/ThisCell file that the BOS Server created
automatically in Starting Server Programs.
# cd /usr/vice/etc
# rm ThisCell
# cp /usr/afs/etc/ThisCell ThisCell
- Remove the symbolic link to the /usr/afs/etc/CellServDB
file.
# rm CellServDB
- Create the /usr/vice/etc/CellServDB file. Use a network
file transfer program such as ftp or NFS to copy it from one of the
following sources, which are listed in decreasing order of preference:
- Your cell's central CellServDB source file (the
conventional location is
/afs/cellname/common/etc/CellServDB)
- The global CellServDB file maintained by the AFS Product
Support group
- An existing client machine in your cell
- The CellServDB.sample file included in the
sysname/root.client/usr/vice/etc directory of each
AFS CD-ROM labeled International Edition; add an entry for the
local cell by following the instructions in Creating the Client CellServDB File
- Create the cacheinfo file for either a disk cache or a memory
cache. For a discussion of the appropriate values to record in the
file, see Configuring the Cache.
To configure a disk cache:
# mkdir /usr/vice/cache
# echo "/afs:/usr/vice/cache:#blocks" > cacheinfo
To configure a memory cache:
# echo "/afs:/usr/vice/cache:#blocks" > cacheinfo
- Create the local directory on which to mount the AFS filespace, by
convention /afs. If the directory already exists, verify
that it is empty.
# mkdir /afs
- On Linux systems, copy the afsd options file from the
/usr/vice/etc directory to the /etc/sysconfig
directory. Note the removal of the .conf extension as
you copy the file.
# cp /usr/vice/etc/afs.conf /etc/sysconfig/afs
- Edit the machine's AFS initialization script or afsd
options file to set appropriate values for afsd command
parameters. The script resides in the indicated location on each system
type:
- On AIX systems, /etc/rc.afs
- On Digital UNIX systems, /sbin/init.d/afs
- On HP-UX systems, /sbin/init.d/afs
- On IRIX systems, /etc/init.d/afs
- On Linux systems, /etc/sysconfig/afs (the afsd
options file)
- On Solaris systems, /etc/init.d/afs
Use one of the methods described in Configuring the Cache Manager to add the following flags to the afsd command
line; an illustrative command line appears after this list. If you intend for the machine to remain an AFS client, also set
any performance-related arguments you wish.
- Add the -nosettime flag, because this is a file server machine
that is also a client.
- Add the -memcache flag if the machine is to use a memory
cache.
- Add the -verbose flag to display a trace of the Cache
Manager's initialization on the standard output stream.
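As an illustration only, the afsd command line for a file server machine that uses a disk cache and on which you want verbose output might read as follows (the exact placement of the flags depends on the method and system type, and no performance-related arguments are shown):
/usr/vice/etc/afsd -nosettime -verbose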
- Incorporate AFS into the machine's authentication system, following
the instructions in Enabling AFS Login. On Solaris systems, the instructions also explain
how to alter the file systems clean-up script.
- If appropriate, follow the instructions in Storing AFS Binaries in AFS to copy the AFS binaries for this system type into an AFS
volume. See the introduction to this section for further
discussion.
At this point you run the machine's AFS initialization
script to verify that it correctly loads AFS modifications into the kernel and
starts the BOS Server, which starts the other server processes. If you
have installed client files, the script also starts the Cache Manager.
If the script works correctly, perform the steps that incorporate it into the
machine's startup and shutdown sequence. If there are problems
during the initialization, attempt to resolve them. The AFS Product
Support group can provide assistance if necessary.
If the machine is configured as a client using a disk cache, it can take a
while for the afsd program to create all of the
Vn files in the cache directory. Messages on the
console trace the initialization process.
- Issue the bos shutdown command to shut down the AFS server
processes other than the BOS Server. Include the -wait flag
to delay return of the command shell prompt until all processes shut down
completely.
# /usr/afs/bin/bos shutdown <machine name> -wait
- Issue the ps command to learn the BOS Server's process ID
number (PID), and then the kill command to stop the
bosserver process.
# ps appropriate_ps_options | grep bosserver
# kill bosserver_PID
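For example, on system types that accept System V-style ps options (the options vary by system type, so adjust as necessary):
# ps -ef | grep bosserver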
- Run the AFS initialization script by issuing the appropriate commands for
this system type.
On AIX systems:
- Reboot the machine and log in again as the local superuser
root.
# shutdown -r now
login: root
Password: root_password
- Run the AFS initialization script.
# /etc/rc.afs
- Edit the AIX initialization file, /etc/inittab, adding the
following line to invoke the AFS initialization script. Place it just
after the line that starts NFS daemons.
rcafs:2:wait:/etc/rc.afs > /dev/console 2>&1 # Start AFS services
- (Optional) There are now copies of the AFS initialization file
in both the /usr/vice/etc and /etc directories.
If you want to avoid potential confusion by guaranteeing that they are always
the same, create a link between them. You can always retrieve the
original script from the AFS CD-ROM if necessary.
# cd /usr/vice/etc
# rm rc.afs
# ln -s /etc/rc.afs
- Proceed to Step 4.
On Digital UNIX systems:
- Run the AFS initialization script.
# /sbin/init.d/afs start
- Change to the /sbin/init.d directory and issue the
ln -s command to create symbolic links that incorporate the AFS
initialization script into the Digital UNIX startup and shutdown
sequence.
# cd /sbin/init.d
# ln -s ../init.d/afs /sbin/rc3.d/S67afs
# ln -s ../init.d/afs /sbin/rc0.d/K66afs
- (Optional) There are now copies of the AFS initialization file
in both the /usr/vice/etc and /sbin/init.d
directories. If you want to avoid potential confusion by guaranteeing
that they are always the same, create a link between them. You can
always retrieve the original script from the AFS CD-ROM if necessary.
# cd /usr/vice/etc
# rm afs.rc
# ln -s /sbin/init.d/afs afs.rc
- Proceed to Step 4.
On HP-UX systems:
- Run the AFS initialization script.
# /sbin/init.d/afs start
- Change to the /sbin/init.d directory and issue the
ln -s command to create symbolic links that incorporate the AFS
initialization script into the HP-UX startup and shutdown sequence.
# cd /sbin/init.d
# ln -s ../init.d/afs /sbin/rc2.d/S460afs
# ln -s ../init.d/afs /sbin/rc2.d/K800afs
- (Optional) There are now copies of the AFS initialization file
in both the /usr/vice/etc and /sbin/init.d
directories. If you want to avoid potential confusion by guaranteeing
that they are always the same, create a link between them. You can
always retrieve the original script from the AFS CD-ROM if necessary.
# cd /usr/vice/etc
# rm afs.rc
# ln -s /sbin/init.d/afs afs.rc
- Proceed to Step 4.
On IRIX systems:
- If you have configured the machine to use the ml dynamic loader
program, reboot the machine and log in again as the local superuser
root.
# shutdown -i6 -g0 -y
login: root
Password: root_password
- Issue the chkconfig command to activate the
afsserver configuration variable.
# /etc/chkconfig -f afsserver on
If you have configured this machine as an AFS client and want it to remain
one, also issue the chkconfig command to activate the
afsclient configuration variable.
# /etc/chkconfig -f afsclient on
- Run the AFS initialization script.
# /etc/init.d/afs start
- Change to the /etc/init.d directory and issue the
ln -s command to create symbolic links that incorporate the AFS
initialization script into the IRIX startup and shutdown sequence.
# cd /etc/init.d
# ln -s ../init.d/afs /etc/rc2.d/S35afs
# ln -s ../init.d/afs /etc/rc0.d/K35afs
- (Optional) There are now copies of the AFS initialization file
in both the /usr/vice/etc and /etc/init.d
directories. If you want to avoid potential confusion by guaranteeing
that they are always the same, create a link between them. You can
always retrieve the original script from the AFS CD-ROM if necessary.
# cd /usr/vice/etc
# rm afs.rc
# ln -s /etc/init.d/afs afs.rc
- Proceed to Step 4.
On Linux systems:
- Reboot the machine and log in again as the local superuser
root.
# shutdown -r now
login: root
Password: root_password
- Run the AFS initialization script.
# /etc/rc.d/init.d/afs start
- Issue the chkconfig command to activate the afs
configuration variable. Based on the instruction in the AFS
initialization file that begins with the string #chkconfig, the
command automatically creates the symbolic links that incorporate the script
into the Linux startup and shutdown sequence.
# /sbin/chkconfig --add afs
- (Optional) There are now copies of the AFS initialization file
in both the /usr/vice/etc and /etc/init.d
directories, and copies of the afsd options file in both the
/usr/vice/etc and /etc/sysconfig directories. If
you want to avoid potential confusion by guaranteeing that the two copies of
each file are always the same, create a link between them. You can
always retrieve the original script or options file from the AFS CD-ROM if
necessary.
# cd /usr/vice/etc
# rm afs.rc afs.conf
# ln -s /etc/rc.d/init.d/afs afs.rc
# ln -s /etc/sysconfig/afs afs.conf
- Proceed to Step 4.
On Solaris systems:
- Reboot the machine and log in again as the local superuser
root.
# shutdown -i6 -g0 -y
login: root
Password: root_password
- Run the AFS initialization script.
# /etc/init.d/afs start
- Change to the /etc/init.d directory and issue the
ln -s command to create symbolic links that incorporate the AFS
initialization script into the Solaris startup and shutdown sequence.
# cd /etc/init.d
# ln -s ../init.d/afs /etc/rc3.d/S99afs
# ln -s ../init.d/afs /etc/rc0.d/K66afs
- (Optional) There are now copies of the AFS initialization file
in both the /usr/vice/etc and /etc/init.d
directories. If you want to avoid potential confusion by guaranteeing
that they are always the same, create a link between them. You can
always retrieve the original script from the AFS CD-ROM if necessary.
# cd /usr/vice/etc
# rm afs.rc
# ln -s /etc/init.d/afs afs.rc
- Verify that /usr/afs and its subdirectories on the
new file server machine meet the ownership and mode bit requirements outlined
in Protecting Sensitive AFS Directories. If necessary, use the chmod command to
correct the mode bits.
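One way to review the current ownership and mode bits before making corrections (the required values appear in Protecting Sensitive AFS Directories):
# ls -ld /usr/afs /usr/afs/*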
- To configure this machine as a database server machine, proceed to Installing Database Server Functionality.
This section explains how to install database server
functionality. Note the following requirements.
- Database server machines have two defining characteristics:
- They run the Authentication Server, Protection Server, and Volume Location
(VL) Server processes. They also run the Backup Server if the cell uses
the AFS Backup System, as is assumed in these instructions.
- They appear in the CellServDB file of every machine in the cell
(and of client machines in foreign cells, if they are to access files in this
cell)
- In the conventional configuration, database server machines also serve as
file server machines (run the File Server, Volume Server and Salvager
processes). If you choose not to run file server functionality on a
database server machine, then the kernel does not have to incorporate AFS
modifications, but the local /usr/afs directory must house most of
the standard files and subdirectories. In particular, the
/usr/afs/etc/KeyFile file must contain the same keys as all other
server machines in the cell. If you use the United States edition of
AFS, run the upclientetc process on every database server machine
(except the system control machine); if you use the international edition of
AFS, use the bos addkey command as instructed in the chapter in the
AFS System Administrator's Guide about maintaining server
encryption keys.
The instructions in this section assume that the machine on which you are
installing database server functionality is already a file server
machine. Contact the AFS Product Support group to learn how to install
database server functionality on a non-file server machine.
- During the installation of database server functionality, you must restart
the database server processes on all of the database server machines to force
the election of a new Ubik coordinator (synchronization site) for each database server process.
This can cause a system outage, which usually lasts less than 5
minutes.
- Updating the kernel memory list of database server machines on each client
machine is generally the most time-consuming part of installing a new database
server machine. It is, however, crucial for correct functioning in your
cell. Incorrect knowledge of your cell's database server machines
can prevent your users from authenticating, accessing files, and issuing AFS
commands.
You update a client's kernel memory list by changing the
/usr/vice/etc/CellServDB file and then either rebooting or issuing
the fs newcell command. For instructions, see the chapter in
the AFS System Administrator's Guide about administering client
machines.
The point at which you update your clients' knowledge of database
server machines depends on which of the database server machines has the
lowest IP address:
- If the new database server machine has a lower IP address than any
existing database server machine, update the CellServDB file on
every client machine before restarting the database server processes.
If you do not, users can become unable to update (write to) any of the AFS
databases. This is because the machine with the lowest IP address is
usually elected as the Ubik coordinator, and only the Coordinator accepts
database writes. On client machines that do not have the new list of
database server machines, the Cache Manager cannot locate the new
coordinator. (Be aware that if clients contact the new coordinator
before it is actually in service, they experience a timeout before contacting
another database server machine. This is a minor, and temporary,
problem compared to being unable to write to the database.)
- If the new database server machine does not have the lowest IP address of
any database server machine, then it is better to update clients after
restarting the database server processes. Client machines do not start
using the new database server machine until you update their kernel memory
list, but that does not usually cause timeouts or update problems (because the
new machine is not likely to become the coordinator).
The following instructions indicate the appropriate place to update your
clients in either case.
To install a database server machine, perform the following
procedures.
- Install the bos suite of commands locally, as a precaution
- Add the new machine to the /usr/afs/etc/CellServDB file on
existing file server machines
- Update your cell's central CellServDB source file and the
file you make available to foreign cells
- Update every client machine's /usr/vice/etc/CellServDB
file and kernel memory list of database server machines
- Start the database server processes (Authentication Server, Backup Server,
Protection Server, and Volume Location Server)
- Restart the database server processes on every database server machine
- Notify the AFS Product Support group that you have installed a new
database server machine
- You can perform the following instructions on either a server or client
machine. Log in as an AFS administrator listed in the
/usr/afs/etc/UserList file on all server machines.
Note: The following instructions assume that your PATH environment variable
includes the directory that houses the AFS command binaries. If not,
you possibly need to precede the command names with the appropriate
pathname.
% klog admin_user
Password: admin_password
- If you are working on a client machine configured in the conventional
manner, the bos command suite resides in the
/usr/afsws/bin directory, a symbolic link to an AFS
directory. An error during installation can potentially block access to
AFS, in which case it is helpful to have a copy of the bos binary
on the local disk.
% cp /usr/afsws/bin/bos /tmp
- Issue the bos addhost command to add the new
database server machine to the /usr/afs/etc/CellServDB file on
existing server machines (as well as the new database server machine
itself).
Substitute the new database server machine's fully-qualified hostname
for the host name argument.
If you use the United States edition of AFS and a system control machine,
substitute its fully-qualified hostname for the machine name
argument. If you use the international edition of AFS, repeat the
bos addhost command once for each server machine in your cell
(including the new database server machine itself), by substituting each
one's fully-qualified hostname for the machine name argument in
turn.
% bos addhost <machine name> <host name>
If using the United States edition of AFS, wait for the Update Server to
distribute the new CellServDB file, which takes up to five minutes
by default. If using the international edition, attempt to issue all of
the bos addhost commands within five minutes.
- Issue the bos listhosts command on each server machine to
verify that the new database server machine appears in its
CellServDB file.
% bos listhosts <machine name>
- Add the new database server machine to your cell's central
CellServDB source file, if you use one. The standard
location is
/afs/cellname/common/etc/CellServDB.
If you are willing to make your cell accessible by users in foreign cells,
add the new database server machine to the file that lists your cell's
database server machines. The conventional location is
/afs/cellname/service/etc/CellServDB.local.
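For reference, a cell's entry in a CellServDB file consists of a line that begins with a greater-than sign and the cell name, followed by one line per database server machine; the values shown here are placeholders only:
>cellname               #Organization
IP_address              #machine_name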
- If this machine's IP address is lower than any existing
database server machine's, update every client machine's
/usr/vice/etc/CellServDB file and kernel memory list to include
this machine. (If this machine's IP address is not the lowest, it
is acceptable to wait until Step 12.)
There are several ways to update the CellServDB file on client
machines, as detailed in the chapter of the AFS System
Administrator's Guide about administering client machines.
One option is to copy over the central update source (which you updated in
Step 5), with or without using the package
program. To update the machine's kernel memory list, you can
either reboot after changing the CellServDB file or issue the
fs newcell command.
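As an illustration of the first option (the cell name and host names are
placeholders, and your cell's actual set of database server machines differs),
updating a single client machine and refreshing its kernel memory list can
resemble the following commands, issued as the local superuser root on the
client:
# cp /afs/example.com/common/etc/CellServDB /usr/vice/etc/CellServDB
# fs newcell -name example.com -servers db1.example.com db2.example.com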
- Start the Authentication Server (the kaserver
process).
% bos create <machine name> kaserver simple /usr/afs/bin/kaserver
- Start the Backup Server (the buserver
process). You must perform other configuration procedures before
actually using the AFS Backup System, as detailed in the AFS System
Administrator's Guide.
% bos create <machine name> buserver simple /usr/afs/bin/buserver
- Start the Protection Server (the ptserver
process).
% bos create <machine name> ptserver simple /usr/afs/bin/ptserver
- Start the Volume Location (VL) Server (the vlserver
process).
% bos create <machine name> vlserver simple /usr/afs/bin/vlserver
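As an optional check that is not part of the original procedure, you can
verify that all four database server processes are now running on the new
machine before proceeding:
% bos status <machine name> kaserver buserver ptserver vlserver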
- Issue the bos restart command on every database
server machine in the cell, including the new server, to restart the
Authentication, Backup, Protection, and VL Servers. This forces an
election of a new Ubik coordinator for each process; the new machine votes in
the election and is considered a potential new coordinator.
A cell-wide service outage is possible during the election of a new
coordinator for the VL Server, but it normally lasts less than five
minutes. Such an outage is particularly likely if you are installing
your cell's second database server machine. Messages tracing the
progress of the election appear on the console.
Repeat this command on each of your cell's database server machines in
quick succession. Begin with the machine with the lowest IP
address.
% bos restart <machine name> kaserver buserver ptserver vlserver
If an error occurs, restart all server processes on the database server
machines again by using one of the following methods:
- Issue the bos restart command with the -bosserver
flag for each database server machine
- Reboot each database server machine, either using the bos exec
command or at its console
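For reference, the two recovery methods correspond to commands of the
following form; the command passed to bos exec depends on the operating system
and is shown only as a placeholder:
% bos restart <machine name> -bosserver
% bos exec <machine name> <reboot command>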
- If you did not update the CellServDB file on client
machines in Step 6, do so now.
- Send the new database server machine's name and IP address
to the AFS Product Support group.
If you wish to participate in the AFS global name space, your cell's
entry must appear in the CellServDB file that the AFS Product Support
group makes available to all AFS sites. Otherwise, they list your cell
in a private file that they do not share with other AFS sites.
Removing database server machine functionality is nearly the
reverse of installing it.
To decommission a database server machine, perform the following
procedures.
- Install the bos suite of commands locally, as a precaution
- Notify the AFS Product Support group that you are decommissioning a
database server machine
- Update your cell's central CellServDB source file and the
file you make available to foreign cells
- Update every client machine's /usr/vice/etc/CellServDB
file and kernel memory list of database server machines
- Remove the machine from the /usr/afs/etc/CellServDB file on
file server machines
- Stop the database server processes and remove them from the
/usr/afs/local/BosConfig file if desired
- Restart the database server processes on the remaining database server
machines
- You can perform the following instructions on either a server or a client
machine. Log in as an AFS administrator listed in the
/usr/afs/etc/UserList file on all server machines.
Note: The following instructions assume that your PATH environment variable
includes the directory that houses the AFS command binaries. If not,
you might need to precede the command names with the appropriate
pathname.
% klog admin_user
Password: admin_password
- If you are working on a client machine configured in the conventional
manner, the bos command suite resides in the
/usr/afsws/bin directory, a symbolic link to an AFS
directory. An error during the procedure can potentially block access to
AFS, in which case it is helpful to have a copy of the bos binary
on the local disk.
% cp /usr/afsws/bin/bos /tmp
- Send the revised list of your cell's database server
machines to the AFS Product Support group.
This step is particularly important if your cell is included in the global
CellServDB file. If the administrators in foreign cells do
not learn about the change in your cell, they cannot update the
CellServDB file on their client machines. Users in foreign
cells continue to send database requests to the decommissioned machine, which
creates needless network traffic and activity on the machine. Also, the
users experience time-out delays while their requests are forwarded to a
valid database server machine.
- Remove the decommissioned machine from your cell's central
CellServDB source file, if you use one. The conventional
location is
/afs/cellname/common/etc/CellServDB.
If you maintain a file that users in foreign cells can access to learn
about your cell's database server machines, update it also. The
conventional location is
/afs/cellname/service/etc/CellServDB.local.
- Update every client machine's
/usr/vice/etc/CellServDB file and kernel memory list to exclude
this machine. Altering the CellServDB file and kernel memory
list before stopping the actual database server processes avoids possible
time-out delays that result when users send requests to a decommissioned
database server machine that is still listed in the file.
There are several ways to update the CellServDB file on client
machines, as detailed in the chapter of the AFS System
Administrator's Guide about administering client machines.
One option is to copy over the central update source (which you updated in
Step 4), with or without using the package
program. To update the machine's kernel memory list, you can
either reboot after changing the CellServDB file or issue the
fs newcell command.
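As an optional check that is not part of the original procedure, you can issue
the fs listcells command on a client machine to display the database server
machines its Cache Manager currently stores in kernel memory, and so confirm
that the decommissioned machine no longer appears for your cell:
% fs listcells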
- Issue the bos removehost command to remove the
decommissioned database server machine from the
/usr/afs/etc/CellServDB file on server machines.
Substitute the decommissioned database server machine's
fully-qualified hostname for the host name argument.
If you use the United States edition of AFS and a system control machine,
substitute its fully-qualified hostname for the machine name
argument. If you use the international edition of AFS, repeat the
bos removehost command once for each server machine in your cell
(including the decommissioned database server machine itself), by substituting
each one's fully-qualified hostname for the machine name
argument in turn.
% bos removehost <machine name> <host name>
If using the United States edition of AFS, wait for the Update Server to
distribute the new CellServDB file, which takes up to five minutes
by default. If using the international edition, attempt to issue all of
the bos removehost commands within five minutes.
- Issue the bos listhosts command on each server machine to
verify that the decommissioned database server machine no longer appears in
its CellServDB file.
% bos listhosts <machine name>
- Issue the bos stop command to stop the database
server processes on the machine, by substituting its fully-qualified hostname
for the machine name argument. The command changes each
process' status in the /usr/afs/local/BosConfig file to
NotRun, but does not remove its entry from the file.
% bos stop <machine name> kaserver buserver ptserver vlserver
- (Optional) Issue the bos delete command
to remove the entries for database server processes from the
BosConfig file. Do not perform this step if you plan to
reinstall the database server functionality on this machine soon.
% bos delete <machine name> kaserver buserver ptserver vlserver
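To confirm the result, you can optionally issue the bos status command, which
lists the server processes defined on the machine; after the bos stop command
the four database server processes are reported as shut down, and after the
bos delete command they no longer appear in the listing:
% bos status <machine name>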
- Issue the bos restart command on every database
server machine in the cell, to restart the Authentication, Backup, Protection,
and VL Servers. This forces the election of a Ubik coordinator for each
process, ensuring that the remaining database server processes recognize that
the machine is no longer a database server.
A cell-wide service outage is possible during the election of a new
coordinator for the VL Server, but it normally lasts less than five
minutes. Messages tracing the progress of the election appear on the
console.
Repeat this command on each of your cell's database server machines in
quick succession. Begin with the machine with the lowest IP
address.
% bos restart <machine name> kaserver buserver ptserver vlserver
If an error occurs, restart all server processes on the database server
machines again by using one of the following methods:
- Issue the bos restart command with the -bosserver
flag for each database server machine
- Reboot each database server machine, either using the bos exec
command or at its console