As an administrator for the SAS/C CSL NFS client, you must be concerned with installing the software, establishing access controls for remote file security, (optionally) developing file-system mount configurations, and diagnosing problems. This support is provided in a distributed file systems environment that uses Sun NFS protocol for network communication between computer systems.
Although numerous designs for distributed file systems have been implemented experimentally, only a few have achieved commercial success. Of these, the Sun Microsystems Network File System (NFS) protocol is by far the most widely used. Although not as full-featured as some other file systems (most notably the Andrew File System) in areas such as file caching and integrated security administration, its simple and modest design has made it easy to implement on a wide variety of systems. NFS software is currently available for almost every computer and operating system on the market today.
The NFS protocol views all file systems as conforming to the hierarchical directory organization that has been popularized by the UNIX operating system and that was subsequently codified by the IEEE POSIX standard. The NFS protocol not only allows reading and writing of files, it also supports manipulation of directories.
Each NFS client system builds and maintains its own file system view. This view results from a hierarchical combination of its own file systems and the file systems of servers to which it wants access. At any given directory of this view, the client system may attach a new sub-tree of directories from an NFS server. This process of attaching a new sub-tree of directories is called mounting a remote file system. The directory to which the remote file system is mounted is called the mount point.
An important effect of the mount operation is that the files in the mount-point directory are no longer visible to the client. The newly mounted files in the remote file system are visible instead.
Another important principle is that NFS mounts that are made by a server, when it acts as a client to another system, are not visible to its clients. The clients see only the files that are physically located on the server.
For users of MVS and CMS, perhaps the most important aspect of the NFS design is its orientation toward being a network service instead of being the file system component of a distributed operating system. This orientation is critical in enabling the use of NFS on operating systems that are dissimilar to the UNIX environments in which NFS was originally implemented. The primary requirements for an operating system to participate in NFS are the ability to interpret a hierarchical file system structure and to share UNIX format user identification numbers. Other similarities to UNIX are not required. The SAS/C CSL NFS implementation provides support for directories and UNIX user identification on MVS and CMS.
Traditionally, programs running on MVS and CMS have had little or no access to files located on PCs, workstations, and other non-mainframe computers. The SAS/C CSL NFS client support changes that. For example, in the cross-development environment you can run the SAS/C Debugger on the mainframe while your source and debugger files reside on a workstation.
The SAS/C CSL NFS client support introduces a new filename prefix, path:, in SAS/C filenames. In the same way that an MVS program can use the dsn: prefix to open a file by data set name, the program can now open an NFS file with the path: prefix. Thus, files that are accessed using NFS are placed in a separate name space from traditional MVS or CMS files. This separation is due to differences in file system organizations, such as directories versus partitioned data sets, rather than the fact that one group is local and the other is remote.
For example, a configuration file with the following line can be used if the user wants to access a UNIX root directory / on a machine named acct.langdev.abc.com:

acct.langdev.abc.com:/ / nfs

This indicates that the root of the acct.langdev.abc.com machine should be mounted as the root directory on the mainframe, thus enabling a debugger user to specify set search commands relative to the mount point. (See Chapter 9, "Cross-Debugging" on page 71 for information about the SAS/C Debugger's set search command.)
To continue our example, suppose the user now invokes the debugger on the mainframe and enters the following set search command:

set search userinclude = "path:/usr/name/project/headers/%leafname"

The debugger will now look for user include files in the /usr/name/project/headers directory on the remote workstation named acct.langdev.abc.com.
In a more complicated setup, many different UNIX workstation file systems can be mounted together. The overall organization is the responsibility of the mainframe user, and the pathname for a particular file will often differ from what would be used on any of the systems individually.
In either case, the NFS client software maintains the standard UNIX, or POSIX, User Identification (UID) and Group Identification (GID) numbers for the duration of the user's session. The NFS client software controls access to remote files based on the user identification and the file's permissions.
The NFS client commands must be accessible to users. On CMS, this involves accessing the disk on which the commands reside. On MVS, the commands can be found if they are placed in the linklist or LPALIB, or if they are in a data set allocated to the DDname CPLIB (provided that the optional SAS/C TSO command support is installed). Alternatively, MVS sites with REXX support can use REXX EXECs that invoke the commands; this avoids any need to install the SAS/C TSO command support.
In addition to mainframe installation considerations, you must coordinate NFS usage with the administrators of the NFS servers. They must grant the mainframe access in their configuration files. Additionally, they must install a login server for mainframe users to contact.
SAS/C CSL comes with distribution kits (in UNIX tar format) for two login servers. The first is the standard PCNFSD version 2 server from Sun Microsystems. The second is the CSL's sascuidd server, which is used for login without a password. If the NFS network is already running a PCNFSD version 1 server, it can be used instead of the PCNFSD version 2 server. The distribution kits come with "README" and "Makefile" files to guide the process of building the programs on your login server operating system.
PCNFSD may be hard to port to some systems, particularly systems that are not UNIX systems. There are a number of alternative approaches to this problem. If a secure UNIX system that is already running PCNFSD is available in the network, that system can be used. If no such system is available, sites with mainframe security systems can rely exclusively on sascuidd (which is much easier to port). sascuidd will run on any POSIX system that also supports RPC. It is also possible to use a stripped-down version of PCNFSD: only the authorization and null procedures are needed for CSL NFS; the others (mostly related to printing) are not.
Whatever server is installed and used, it must be up and running whenever mainframe users might need access to NFS files.
All security on NFS servers is via UNIX (or POSIX) UIDs and GIDs. The UID is a number that represents a user. The GID represents a group of users. The NFS design assumes that all participating machines share the same UID and GID assignments. MVS users are identified by a security system such as RACF or ACF2. CMS users are identified by entries in a CP directory. UNIX UIDs and GIDs are not normally associated with mainframe users.
Administrators of NFS servers can usually control, on a file system basis, which client machines can access files via NFS. When security is a concern, the ability of the client machine to allow only authorized UID and GID associations is the most important factor. The CSL NFS client software derives its UID and GID associations from a combination of the mainframe security system and UNIX login servers. The exact source of authorization depends on the site configuration.
Because of differences between UNIX and mainframe operating systems, and because of a lack of reserved port controls in current mainframe TCP/IP implementations, the CSL NFS client software is generally less secure (in authorizing UID and GID associations) than most UNIX NFS client implementations. Methods of attack are briefly described in the SAS/C CSL installation instructions. Note that it is server file security that is of concern. An NFS client implementation can pose no additional security threat to files on the client (in this case the mainframe) unless it gives unauthorized access to files containing passwords. Note also that most UNIX NFS servers allow controls for which file systems can be accessed, thus limiting exposure to unauthorized UID associations.
Because it can use authorizations that are provided by a mainframe security system, CSL NFS client software is generally more secure than NFS client implementations on PC operating systems.
One of two methods is used to associate a UNIX username with a mainframe user. If a (RACF-compatible) security system is installed, profiles can be established to authorize mainframe users to UNIX usernames. Users whose mainframe login ID is the same as their UNIX username are a special case that can be authorized using a single profile. There is considerable flexibility in this assignment; for example, the association between mainframe userids and UNIX usernames need not be one-to-one. When this method is used, the sascuidd login server provides UID information for that username. No UNIX password is required.
A second method is for the mainframe user to supply the desired username and password to the UNIX login server PCNFSD (version 2 if available, otherwise version 1), which authorizes the username (based on the password) and supplies UID information in one step.
The first method will generally be preferred when available because it makes login easier and removes the requirement that UNIX passwords be present on the mainframe.
For TSO or CMS users, the UID information is stored in environment variables. The UID is stored in NFS_UID. The primary GID is stored in NFS_GID. The list of supplementary GIDs (sascuidd and PCNFSD V2 only) is stored in NFS_GIDLIST. NFS_LOGINDATE is set to the date of the login.
The environment variable NFS_LOGINKEY receives an encrypted value which is used by subsequent NFS calls to determine that these environment variables have not been tampered with.
NFS logins must be reissued each time the user logs in to TSO or CMS. Authorization is also lost after about 48 hours, even if the user does not log off. This prevents users from retaining their authorization indefinitely, even after they have had their UNIX authorizations removed.
At most 16 additional GIDs are allowed; this is the maximum supported by the PCNFSD protocol. A user who can log in as UID 0 (root) will probably not be given full authority by the NFS server system. Most NFS servers remap UID 0 to a UID value of (unsigned short) -2.
SAS/C NFS client capabilities cannot be used until a successful login has occurred. Successive calls to NFSLOGIN can be made in order to access a different server or to use a different login ID. If a security system is present to allow login without a password, the actual login may be performed automatically when an NFS operation is requested. No corresponding logout is required.
When a mainframe security system is present, it can also control which login server a user is allowed to access. This prevents users from rerouting their login authorization request to a less trusted machine. It also reduces the risk of a user sending a UNIX password to a "Trojan horse" program that is running on an unauthorized system.
Use of a mainframe security system requires the definition of a generalized resource named LSNUID. The mainframe security administrator can then enter profiles that give mainframe users access to particular login servers and equate mainframe userids with UNIX usernames. The next section describes this in detail.
Until this resource is defined and activated, the NFS client code behaves as it does when no security system is installed.
Here is a description of the macro parameters needed to define LSNUID (in a RACF environment).

ICHERCDE CLASS=LSNUID,
         ID=nn,
         MAXLNTH=39,
         FIRST=ANY,
         OTHER=ANY,
         POSIT=nn    (choose a POSIT value that gives:
                       access prevented when RACF is unavailable,
                       auditing if you want it,
                       statistics if you want them,
                       generic profile checking on,
                       generic command processing on,
                       global access checking off)

ICHRFRTB CLASS=LSNUID,
         ACTION=RACF
In RACF, once this resource is defined, it must also be activated via the command:
SETROPTS CLASSACT(LSNUID)
The NFS client libraries make authorization inquiries about the following profile names (all requests are for "read" permission):

LOCAL_USERID
Users who are permitted to this profile are authorized to use their mainframe userid (lowercased) as the UNIX username without specifying a UNIX password.

USER_name
Users who are permitted to this profile are authorized to use the string name (lowercased) as their UNIX username without specifying a UNIX password. For example, if a mainframe user is permitted to the profile USER_BILL, then he is allowed to assume the UNIX username of bill.
Pddd.ddd.ddd.ddd
This specifies the network address (dotted decimal) of a PCNFSD server which the user can access to obtain a UID and GID. Permissions against servers prevent users from setting up unauthorized versions of PCNFSD on a less trusted machine and then directing their login queries to it. For example, if mainframe user BILL is permitted to P149.133.175.68, he can use the server at that IP address when logging in. Leading zeros are not allowed in these names; that is, the previous profile could not have been for P149.133.175.068.
Sddd.ddd.ddd.ddd
This is similar to the above, but permits access to a sascuidd server.
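As an illustration, the RACF commands below sketch how profiles of these kinds might be defined and granted. The userid BILL and the server address are examples only; adapt the names to your own site (and consult your RACF documentation for any additional SETROPTS options your installation requires).

```
RDEFINE  LSNUID LOCAL_USERID UACC(NONE)
PERMIT   LOCAL_USERID CLASS(LSNUID) ID(BILL) ACCESS(READ)

RDEFINE  LSNUID S149.133.175.68 UACC(NONE)
PERMIT   S149.133.175.68 CLASS(LSNUID) ID(BILL) ACCESS(READ)
```

With these profiles in place, user BILL can log in under his own (lowercased) userid as the UNIX username, using the sascuidd server at 149.133.175.68, without supplying a UNIX password.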
You can control the login server in three ways. One way is to set the NFSLOGIN_SERVER environment variable in the user's PROFILE EXEC or TSO startup CLIST. Another way is to apply the default login server configuration zap that is supplied in the installation instructions. The best method is to accept the default name nfsloginhost and to configure your nameserver or /etc/hosts format file accordingly.
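With the default-name method, an /etc/hosts format entry might look like the following. The address is hypothetical; the host name is the one used in the mount example earlier in this chapter.

```
149.133.175.68   acct.langdev.abc.com   nfsloginhost
```

The alias nfsloginhost directs NFS login requests to that machine without any per-user configuration.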
Users need an fstab file to perform their mounts. The search rules for the fstab file include a provision for a system-wide name. Users who do not set up fstab files of their own will get the system-wide file. If you want users to save file system context between programs, you can define the ETC_MNTTAB environment variable in the PROFILE EXEC or TSO startup CLIST.
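A system-wide fstab file might contain entries such as the following, in the server:path mount-point format shown earlier in this chapter. The second server name and mount point are hypothetical examples.

```
acct.langdev.abc.com:/              /        nfs
devsrv.langdev.abc.com:/usr/proj    /proj    nfs
```

Each line attaches one remote file system to a directory of the mainframe user's file system view.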
Many user problems are caused by incorrect installation of system software. These problems can often be diagnosed by understanding what is missing. Sometimes a configuration file is missing. Other times an environment-variable definition is needed, or a REXX EXEC is not placed where it will be accessed.
In other cases, problems are caused by network and server failures. For server problems and failures on remote systems, the RPCINFO and SHOWMNT commands are useful. SHOWMNT is described in "SHOWMNT" on page 90, and RPCINFO is described in SAS Technical Report C-113, SAS/C Connectivity Support Library, Release 1.00. Both SHOWMNT and RPCINFO are compatible with the equivalent commands on UNIX.
For true network problems, SNMP or other network diagnostic facilities are most useful.
You can examine a server's exported and mounted file systems with the SHOWMNT command, which is described in "SHOWMNT" on page 90.
SHOWMNT [-e] [-d] [-a] [host]

The SHOWMNT command queries an NFS server for information on file systems that may be mounted by NFS.
host is the hostname of the NFS server. If you omit this parameter, SHOWMNT returns information about the NFS server on the local machine (if one is installed).
SHOWMNT handles two basic types of lists. The first is an exports list, which tells which file systems can be mounted. The second is a list describing which mounts have actually taken place. The form of the second list depends on the -d and -a flags.
The -e flag requests the exports list. This includes information about which hosts are authorized to mount the listed file systems. This information may be either everyone or a list of group names that represent a set of hosts. If it is authorized, a host may mount any of the listed file systems.
You can use the following command when you are trying to determine the name of a file system to mount:
SHOWMNT -e
Note that you can often mount subdirectories of the listed file systems. Whether or not you can do this depends on whether the subdirectory is in the same physical file system on the server. Contact the server administrator or examine server configuration files to determine this.
If -e is used in conjunction with other flags, the exports list is printed first, followed by the list describing actual mounts.
If you don't specify any flags, SHOWMNT prints the list of actual mounts, showing only the names of the hosts that have a mount. The list is sorted by host name.
If you specify -d, SHOWMNT prints the list of actual mounts, showing only the names of directories that have been mounted. The list is sorted by directory name.
The -a flag gives the most verbose format for the list of actual mounts. It indicates that the list should be printed as "host:directory" pairs. If you do not use -d, SHOWMNT sorts the list by host. If you do use -d, SHOWMNT sorts the list by directory.
showmnt -e byrd.unx
Show mountable file systems on the byrd.unx NFS server.
showmnt byrd.unx
Show the list of other hosts that have mounted the NFS file system from byrd.unx.