                          Software           Product            Description   M           ___________________________________________________________________   M           PRODUCT NAME:  OpenVMS Cluster Software, Version 6.2   SPD 29.78.09   M           This Software Product Description describes the following products:   2           o  VMScluster Software for OpenVMS Alpha  0           o  VAXcluster Software for OpenVMS VAX  6           o  OpenVMS Cluster Client Software for Alpha  4           o  OpenVMS Cluster Client Software for VAX  M           Except where noted the features described in this SPD apply equally N           to Alpha and VAX systems. OpenVMS Cluster Software licenses and partJ           numbers are architecture specific; please refer to the Ordering >           Information section of this SPD for further details.             DESCRIPTION   K           OpenVMS Cluster Software is an OpenVMS System Integrated Product  N           (SIP). It provides a highly integrated OpenVMS computing environmentJ           distributed over multiple Alpha and VAX CPUs. In this SPD, this 5           environment is referred to as a VMScluster.   I           CPUs in a VMScluster system can share processing, mass storage  O           (including system disks), and other resources under a single OpenVMS  P           security and management domain. Within this highly integrated environ-N           ment, CPUs retain their independence because they use local, memory-P           resident copies of the OpenVMS operating system. Thus, VMScluster CPUsL           can boot and shut down independently while benefiting from common            resources.        M                                         DIGITAL                      May 1995            L           OpenVMS Cluster Software, Version 6.2                 SPD 29.78.09    J           Applications running on one or more CPUs in a VMScluster system O           access shared resources in a coordinated manner. VMScluster software  O           components synchronize access to shared resources, allowing multiple  I           processes on any CPU in the VMScluster to perform coordinated,             shared data updates.  N           Because resources are shared, VMScluster systems offer higher avail-N           ability than standalone CPUs. Properly configured VMScluster systemsK           can withstand the shutdown or failure of various components. For  O           example, if one CPU in a VMScluster is shut down, users can log in to P           another CPU to create a new process and continue working. Because massN           storage can be shared clusterwide, the new process is able to accessK           the original data. Applications can be designed to survive these             events automatically.   H           All VMScluster systems have the following software features in           common:   P           o  The OpenVMS operating system and VMScluster software allow all CPUsP              to share read and write access to disk files in a fully coordinatedM              environment. Application programs can specify the level of clus- N              terwide file sharing that is required; access is then coordinatedN              by the OpenVMS Extended QIO Processor (XQP) and Record ManagementF              Services (RMS). 
Coherency of multi-CPU configurations is F              implemented by VMScluster software, using a flexible and 4              sophisticated per-CPU voting mechanism.  M           o  Shared batch and print queues are accessible from any CPU in the N              VMScluster system. The OpenVMS queue manager controls clusterwideM              batch and print queues, which can be accessed by any CPU. Batch  N              jobs submitted to clusterwide queues are routed to any available -              CPU so the batch load is shared.   N           o  The OpenVMS Lock Manager System Services operate in a clusterwideM              manner. These services allow reliable coordinated access to any  P              resource and provide signaling mechanisms at the system and process6              level across the whole VMScluster system.  M           o  All physical disks and tapes in a VMScluster system can be made  $              accessible to all CPUs.  ,                                            2           L           OpenVMS Cluster Software, Version 6.2                 SPD 29.78.09    O           o  Process information and control services are available clusterwide :              to application programs and system utilities.  P           o  Configuration command procedures assist in adding and removing CPUsB              and in modifying their configuration characteristics.  M           o  The dynamic Show Cluster utility displays the status of VMSclus- =              ter hardware components and communication links.   O           o  A fully automated clusterwide data and application caching feature B              enhances system performance and reduces I/O activity.  M           o  Standard OpenVMS system management and security features work in O              a clusterwide manner so that the entire VMScluster system operates 8              as a single security and management domain.  N           o  The VMScluster software dynamically balances the interconnect I/ON              load in VMScluster configurations that include multiple intercon-              nects.   I           o  Multiple VMScluster systems can be configured on a single or O              extended local area network (LAN). LANs and the LAN adapters used  L              for VMScluster communications can be used concurrently by other              network protocols.   P           o  The optionally installable DECamds availability management tool al-M              lows system managers to monitor and manage resource availability =              in real time on all the members of a VMScluster.   K           o  Cross-architecture satellite booting permits VAX boot nodes to N              provide boot service to Alpha satellites and Alpha boot nodes to 4              provide boot service to VAX satellites.  N           o  System services are provided that enable applications to automat-<              ically detect changes in VMScluster membership.             Definitions   F           The following terms are used frequently throughout this SPD:  M           o  Boot node - A CPU that is both a MOP server and a disk server. A A              boot node can fully service satellite boot requests.   ,                                            3           L           OpenVMS Cluster Software, Version 6.2                 SPD 29.78.09    O           o  CPU (central processing unit) - An Alpha family or VAX family com- O              puter running the OpenVMS operating system. 
A CPU comprises one or M              more processors and operates as a VMScluster node. A VMScluster  :              node can be referred to as VMScluster member.  P           o  Disk server - A CPU that uses the OpenVMS MSCP server to make disksJ              to which it has direct access available to other CPUs in the               VMScluster system.   M           o  HSC, HSJ - An intelligent mass storage controller subsystem that                connects to the CI.  I           o  HSD - An intelligent mass storage controller subsystem that  "              connects to the DSSI.  I           o  HSZ - An intelligent mass storage controller subsystem that  "              connects to the SCSI.  O           o  Maintenance Operations Protocol (MOP) server - A CPU that services M              satellite boot requests to provide the initial LAN downline load N              sequence of the OpenVMS operating system and VMScluster software.M              At the end of the initial downline load sequence, the satellite  H              uses a disk server to perform the remainder of the OpenVMS               booting process.   O           o  Mixed-architecture VMScluster system - A VMScluster system that is 5              configured with both VAX and Alpha CPUs.   P           o  MSCP (Mass Storage Control Protocol) - A message-based protocol forI              controlling Digital Storage Architecture (DSA) disk storage  M              subsystems. The protocol is implemented by the OpenVMS DUDRIVER                device driver.   O           o  Satellite - A CPU that is booted over a LAN using a MOP server and               disk server.   O           o  Star coupler - A common connection point for all CI connected CPUs )              and HSC and HSJ controllers.   L           o  Tape server - A CPU that uses the OpenVMS TMSCP Server to make L              tapes to which it has direct access available to other CPUs in #              the VMScluster system.   ,                                            4           L           OpenVMS Cluster Software, Version 6.2                 SPD 29.78.09    N           o  TMSCP (Tape Mass Storage Control Protocol) - A message-based pro-O              tocol for controlling DSA tape-storage subsystems. The protocol is ?              implemented by the OpenVMS TUDRIVER device driver.   M           o  Vote - CPUs in a VMScluster system may be configured to provide  O              votes that are accumulated across the multi-CPU environment. Each  O              CPU is provided with knowledge of how many Votes are necessary to  L              meet a quorum before distributed shared access to resources is K              enabled. A VMScluster system must be configured with at least                one voting CPU.  H           o  Multi-host - A configuration in which more than one CPU is 4              connected to a single DSSI or SCSI bus.  M           o  Single-host - A configuration in which a single CPU is connected #              to a DSSI or SCSI bus.              VMScluster Client   M           VMScluster configurations may be configured with CPUs that operate, L           and are licensed, explicitly as client systems. VMScluster Client P           licensing is separately orderable, and is also provided as part of theJ           Digital NAS 150 layered product package. 
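           The voting mechanism summarized above can be illustrated with a
           short sketch. This SPD does not specify the exact quorum
           arithmetic, so the sketch simply assumes a majority rule (strictly
           more than half of the expected votes must be present); the node
           names, vote counts, and Python code are purely illustrative and
           are not part of any OpenVMS product.

              # Illustrative sketch of a majority-style quorum check over the
              # voting CPUs of a cluster. This is not the OpenVMS algorithm,
              # only the general idea: shared resources stay enabled only
              # while the votes of the CPUs currently present meet the quorum
              # derived from the expected total.

              # Hypothetical configuration: each voting CPU and its votes.
              EXPECTED_VOTES = {"NODEA": 1, "NODEB": 1, "NODEC": 1}

              def quorum(expected_total):
                  """Assumed majority rule: more than half of expected votes."""
                  return expected_total // 2 + 1

              def cluster_has_quorum(present_nodes):
                  """True if the CPUs currently present hold enough votes."""
                  expected_total = sum(EXPECTED_VOTES.values())
                  present_votes = sum(votes for node, votes
                                      in EXPECTED_VOTES.items()
                                      if node in present_nodes)
                  return present_votes >= quorum(expected_total)

              if __name__ == "__main__":
                  print(cluster_has_quorum({"NODEA", "NODEB"}))  # True:  2 of 3
                  print(cluster_has_quorum({"NODEC"}))           # False: 1 of 3

           As noted above, a VMScluster system must be configured with at
           least one voting CPU; the client CPUs described next contribute no
           votes.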
VMScluster Client CPUs N           contain full VMScluster functionality as described in this SPD, with#           the following exceptions:   O           o  VMScluster Client CPUs may not provide Votes towards the operation &              of the VMScluster system.  M           o  VMScluster Client CPUs may not MSCP serve disks, nor TMSCP serve               tapes.              Interconnects   N           VMScluster systems are configured by connecting multiple CPUs with aL           communications medium, referred to as an interconnect. VMScluster G           nodes communicate with each other using the most appropriate  H           interconnect available. In the event of interconnect failure, K           VMScluster software automatically uses an alternate interconnect  M           whenever possible. VMScluster software supports any combination of  &           the following interconnects:  '           o  Computer Interconnect (CI)   ,                                            5           L           OpenVMS Cluster Software, Version 6.2                 SPD 29.78.09    8           o  Digital Storage Systems Interconnect (DSSI)  7           o  Small Computer Storage Interconnect (SCSI)   4           o  Fiber Distributed Data Interface (FDDI)             o  Ethernet   M           CI and DSSI are highly optimized, special-purpose interconnects for C           CPUs and storage subsystems in VMScluster configurations.   N           SCSI is an industry standard storage interconnect. Multiple CPUs mayK           be configured on a single SCSI bus, thereby providing multi-host  P           access to SCSI storage devices. Note that the SCSI bus is not used forO           CPU to CPU communication. Consequently CPUs connected to a multi-host M           SCSI bus must also be configured with another of the interconnects  D           listed above in order to provide CPU to CPU communication.  M           Ethernet and FDDI are industry-standard, general-purpose communica- O           tions interconnects that can be used to implement a LAN. Except where M           noted, VMScluster support for both of these LAN types is identical.   N           VMScluster configurations may be configured using Wide Area Network-N           ing (WAN) infrastructures such as DS3 and ATM. Connectivity to these.           media is achieved with FDDI bridges.             Configuration Rules   O           o  The maximum number of CPUs supported in a VMScluster system is 96.   N           o  Every CPU in a VMScluster system must be connected to every otherC              CPU via any of the supported VMScluster interconnects                (see Table 1).   N           o  VAX-11/7xx, VAX 6000, VAX 7000, VAX 8xxx, VAX 9000, and VAX 10000L              series CPUs require a system disk that is accessed via a local L              controller or through a local CI or DSSI connection. These CPUs=              cannot be configured to boot as satellite nodes.   N           o  All CPUs connected to a CI or DSSI must be configured as VMSclus-P              ter members. VMScluster members configured on a CI or DSSI will be-O              come members of the same VMScluster (this is imposed automatically M              by the VMScluster software). 
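           Several of the rules above are simple numeric or connectivity
           constraints that can be checked mechanically. The following sketch
           applies two of them to a hypothetical configuration description;
           the data layout, node names, and code are invented for
           illustration and do not correspond to any OpenVMS utility.

              # Illustrative check of two Configuration Rules stated above,
              # applied to a hypothetical cluster description.
              from itertools import combinations

              MAX_CPUS = 96   # limit stated under Configuration Rules above

              # Each CPU is mapped to the set of interconnects it attaches to.
              CLUSTER = {
                  "ALPHA1": {"CI_1", "FDDI_RING"},
                  "ALPHA2": {"CI_1", "FDDI_RING"},
                  "VAX1":   {"CI_1", "FDDI_RING"},
                  "SAT1":   {"FDDI_RING"},   # a satellite booted over the LAN
              }

              def rule_violations(cpus):
                  problems = []
                  if len(cpus) > MAX_CPUS:
                      problems.append("cluster has %d CPUs; the limit is %d"
                                      % (len(cpus), MAX_CPUS))
                  # "Every CPU must be connected to every other CPU" is read
                  # here as: each pair of CPUs shares at least one interconnect.
                  for a, b in combinations(sorted(cpus), 2):
                      if not cpus[a] & cpus[b]:
                          problems.append("%s and %s share no interconnect"
                                          % (a, b))
                  return problems

              if __name__ == "__main__":
                  print(rule_violations(CLUSTER)
                        or "configuration satisfies the checked rules")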
All CPUs connected to a multi-host  K              SCSI bus must be configured as members of the same VMScluster.   ,                                            6           L           OpenVMS Cluster Software, Version 6.2                 SPD 29.78.09    M           o  A VMScluster system can include any number of star couplers. The N              number of CI adapters supported by different CPUs can be found inO              Table 2 in this SPD. The number of star couplers that a CPU can be O              connected to is limited by the number of adapters it is configured               with.  O           o  The maximum number of CPUs that can be connected to a star coupler 4              is 16, regardless of Star Coupler size.  M           o  The KFQSA Q-bus-to-DSSI adapter does not support VMScluster com- O              munication to other CPUs on the DSSI; CPUs using this adapter must G              include another interconnect for VMScluster communication.   M           o  The maximum number of CPUs that can be connected to a DSSI is 4. M              Depending on CPU model it may not be possible to configure four  E              CPUs on a common DSSI bus, due to DSSI bus cable length                restrictions.O              Refer to the specific CPU system configuration manuals for further               information.   N           o  The maximum number of CPUs that can be connected to a SCSI bus is              2.   J           o  The maximum number of multi-host SCSI buses that a CPU may be              connected to is 2.   M           o  VMScluster CPUs that are configured using WAN interconnects must M              adhere to the detailed line specifications described in the Open O              VMS Version 6.2 New Features Manual. The maximum CPU separation is               150 miles.   N           o  A single time-zone setting must be used by all CPUs in a VMSclus-              ter system.  M           o  A VMScluster system can be configured with a maximum of one quo- P              rum disk. A quorum disk cannot be a member of an OpenVMS volume setL              or of a shadow set created by the Volume Shadowing for OpenVMS               product.   O           o  A system disk can contain only a single version of the OpenVMS op- N              erating system and is architecture specific. For example, OpenVMSO              Alpha Version 6.2 cannot coexist on a system disk with OpenVMS VAX               Version 6.2.   ,                                            7           L           OpenVMS Cluster Software, Version 6.2                 SPD 29.78.09    O           o  HSJ and HSC series disks and tapes can be dual pathed between con- N              trollers on the same or different star couplers. The HSD30 seriesO              disks and tapes can be dual pathed between controllers on the same I              or different DSSI interconnects. Such dual pathing provides  L              enhanced data availability using an OpenVMS automatic recovery K              capability called failover. Failover is the ability to use an _K              alternate hardware path from a CPU to a storage device when a .H              failure occurs onthe current path. The failover process is M              transparent to applications. Dual pathing between an HSJ or HSC  E              and a local controller is not permitted. 
              When two local controllers are used for dual pathing, each
              controller must be located on a separate CPU of the same
              architecture.

           o  Disks and tapes can be dual pathed between pairs of HSZ40
              controllers that are connected to the same SCSI bus. Failover
              is accomplished using the HSZ40's transparent failover
              capability.

           o  OpenVMS operating system and layered-product installations and
              upgrades cannot be performed across architectures. OpenVMS Alpha
              software installations and upgrades must be performed using an
              Alpha system with direct access to its system disk. OpenVMS VAX
              software installations and upgrades must be performed using a
              VAX system with direct access to its system disk.

           o  Ethernet LANs and the protocols that use them must conform to
              the IEEE[R] 802.2 and IEEE[R] 802.3 standards. Ethernet LANs
              must also support Ethernet Version 2.0 packet formats.

           o  FDDI LANs and the protocols that use them must conform to the
              IEEE[R] 802.2, ANSI X3.139-1987, ANSI X3.148-1988, and
              ANSI X3.166-1990 standards.

           o  VMScluster systems support up to 4 LAN adapters per CPU for
              VMScluster communications.

           o  LAN segments can be bridged to form an extended LAN (ELAN). The
              ELAN must conform to IEEE[R] 802.1D, with the following
              restrictions:

              -  All LAN paths used for VMScluster communication must operate
                 with a nominal bandwidth of at least 10 megabits per second.

              -  The ELAN must be capable of delivering packets that use the
                 padded Ethernet Version 2.0 packet format and the FDDI
                 SNAP/SAP packet format.

              -  The ELAN must be able to deliver packets with a maximum data
                 field length of at least 1080 bytes.[1]

              -  The maximum number of bridges between any two end nodes is 7.

              -  The maximum transit delay through any bridge must not exceed
                 2 seconds.

              -  The ELAN must provide error-detection capability between end
                 nodes that is equivalent to that provided by the Ethernet and
                 FDDI data link frame-check sequences.

           o  The packet-retransmit timeout ratio for VMScluster traffic on
              the LAN from any CPU to another must be less than 1 timeout in
              1000 transmissions.

           Recommendations

           The optimal VMScluster system configuration for any computing
           environment is based on requirements of cost, functionality,
           performance, capacity, and availability.
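           The ELAN restrictions listed under Configuration Rules above lend
           themselves to a simple automated check. The following minimal
           sketch applies them to a hypothetical description of one path
           between two nodes; the field names, example values, and code are
           invented for illustration.

              # Illustrative checks of the extended-LAN (ELAN) limits listed
              # above, applied to a hypothetical path between two nodes.
              from dataclasses import dataclass

              @dataclass
              class ElanPath:
                  bandwidth_mbps: float        # slowest segment, megabits/s
                  max_data_field_bytes: int    # largest deliverable data field
                  bridge_count: int            # bridges between the end nodes
                  worst_bridge_delay_s: float  # worst bridge transit delay
                  timeouts: int                # retransmit timeouts observed
                  transmissions: int           # packets sent in same interval

              def elan_problems(p):
                  problems = []
                  if p.bandwidth_mbps < 10:
                      problems.append("bandwidth below 10 megabits per second")
                  if p.max_data_field_bytes < 1080:
                      problems.append("cannot deliver a 1080-byte data field")
                  if p.bridge_count > 7:
                      problems.append("more than 7 bridges between end nodes")
                  if p.worst_bridge_delay_s > 2:
                      problems.append("bridge transit delay exceeds 2 seconds")
                  if p.transmissions and p.timeouts / p.transmissions >= 0.001:
                      problems.append("timeout ratio not below 1 in 1000")
                  return problems

              if __name__ == "__main__":
                  path = ElanPath(bandwidth_mbps=100.0,
                                  max_data_field_bytes=4470,
                                  bridge_count=3, worst_bridge_delay_s=0.1,
                                  timeouts=2, transmissions=50000)
                  print(elan_problems(path)
                        or "path meets the checked ELAN limits")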
Factors that impact these requirements e           include:              o  Applications in use             o  Number of users  &           o  Number and models of CPUs  L           o  Interconnect and adapter throughput and latency characteristics  7           o  Disk and tape I/O capacity and access timel  3           o  Number of disks and tapes being served              ____________________N         [1] In the padded Ethernet format, the data field follows the two-byteP             length field.  These two fields together comprise the LLC data field              in the 802.3 format.  ,                                            9 u  r      L           OpenVMS Cluster Software, Version 6.2                 SPD 29.78.09    %           o  Interconnect utilization.  K           Digital recommends VMScluster system configurations based on its MO           experience with the VMScluster software product. The customer should oE           evaluate specific application dependencies and performance  I           requirements to determine an appropriate configuration for the u(           desired computing environment.  M           When planning a VMScluster system, consider the following recommen-t           dations:  O           o  VMScluster CPUs should be configured using interconnects that pro-rP              vide appropriate performance for the required system usage. In gen-O              eral, use the highest performance interconnect possible. CI, DSSI,rL              and FDDI are the preferred interconnects between powerful CPUs.  P           o  Although VMScluster systems can include any number of system disks,M              consider system performance and management overhead in determin-aM              ing their number and location. While the performance of configu- N              rations with multiple system disks may be higher than with a sin-N              gle system disk, system management efforts increase in proportion+              to the number of system disks.   M           o  Data availability and I/O performance are enhanced when multiple M              VMScluster nodes have direct access to shared storage; whenever  J              possible, configure systems to allow direct access to shared M              storage in favor of OpenVMS MSCP served access. Multiaccess CI, dK              DSSI, and SCSI storage provides higher data availability than oL              singly accessed, local controller-based storage. Additionally, H              dual pathing of disks between local or HSC/HSJ/HSD storage D              controllers enhances data availability in the event of                controller failure.  O           o  VMScluster systems can enhance availability by utilizing redundanttM              components, such as additional CPUs, storage controllers, disks,oM              and tapes. Extra peripheral options, such as printers and termi-eM              nals, can also be included. Multiple instances of all VMSclustereN              interconnects (CI, DSSI, SCSI, Ethernet, and FDDI) are supported.        -                                            10o a         L           OpenVMS Cluster Software, Version 6.2                 SPD 29.78.09    P           o  To enhance resource availability, VMSclusters that implement satel-O              lite booting should use multiple boot servers. When a server fails N              in configurations that include multiple servers, satellite accessM              to multipath disks will fail over to another path. 
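           Failover here has the meaning given earlier in this SPD: when a
           failure occurs on the current path to a device, an alternate
           hardware path is used and the process is transparent to
           applications. The following sketch shows only the general retry
           pattern; the PathError type and path callables are hypothetical
           and do not represent OpenVMS internals.

              # Illustrative sketch of the failover idea described above: try
              # the current path to a device and, on failure, use an alternate
              # path without the caller being aware of the switch.

              class PathError(Exception):
                  """Raised when an I/O attempt on a particular path fails."""

              def read_block(paths, block_number):
                  """Try each known path in turn; callers never see which one worked."""
                  last_error = None
                  for path in paths:
                      try:
                          return path(block_number)   # a path is modelled as a callable
                      except PathError as exc:
                          last_error = exc            # remember failure, try next path
                  raise last_error or PathError("no path available")

              if __name__ == "__main__":
                  def failing_path(block):
                      raise PathError("primary path failed")

                  def working_path(block):
                      return "data for block %d via the alternate path" % block

                  print(read_block([failing_path, working_path], 42))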
              Disk servers should be the most powerful CPUs in the VMScluster
              and should use the highest bandwidth LAN adapters available.

           o  The performance of an FDDI LAN varies with each configuration.
              When an FDDI is used for VMScluster communications, the ring
              latency when the FDDI ring is idle should not exceed 400
              microseconds. This ring latency translates to a cable distance
              between end nodes of approximately 40 kilometers.

           o  The ELAN must provide adequate bandwidth, reliability, and low
              delay in order to optimize the operation of the VMScluster. The
              average LAN segment utilization should not exceed 60% for any
              10-second interval. If ELAN performance degrades to the point
              where nodes cannot communicate every 3 seconds, then nodes may
              leave the VMScluster. The effective performance of the ELAN can
              be increased by following these guidelines:

              -  Configure high-performance nodes with multiple LAN adapters
                 connected to different LAN segments.

              -  Minimize the number of bridges on the path between nodes that
                 communicate frequently, such as satellites and their boot
                 servers.

              -  Use bridges to isolate and localize the traffic between nodes
                 that communicate with each other frequently. For example, use
                 bridges to separate the VMScluster from the rest of the ELAN
                 and to separate nodes within a cluster that communicate
                 frequently from the rest of the VMScluster.

              -  Use FDDI on the communication paths that have the highest
                 performance requirements. The NISCS_MAX_PKTSZ system
                 parameter can be adjusted to use the full FDDI packet size.
                 Ensure that the ELAN path supports a data field of at least
                 4470 bytes end to end, or that the ELAN path sets the
                 priority field to zero in the FDDI frame-control byte on the
                 destination FDDI link.

              -  Minimize the packet delay between end nodes.

           o  The RAID level 1 storage functionality of Volume Shadowing for
              OpenVMS provides the following advantages:

              -  Enhanced data availability in the event of disk failure

              -  Enhanced read performance with multiple shadow-set members

              For more information, refer to the Volume Shadowing for OpenVMS
              Software Product Description.

           o  The DECram for OpenVMS software product can be used to create
              very high-performance, memory-resident RAM disks. Refer to the
              DECram for OpenVMS Software Product Description for additional
              information.

           DECamds Features

           VMScluster Software incorporates the features of a real-time
           monitoring, investigation, diagnostic, and system management tool
           that can be used to improve system availability.

           The DECamds availability management tool contains a console and an
           OpenVMS device driver.
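           The driver component acts as a data collector on each monitored
           member, as described below. Purely as an illustration, and not as
           a representation of DECamds itself, a toy collector of that
           general kind might look like the following; every metric, name,
           and interval shown is invented.

              # Purely illustrative toy data collector: samples a few locally
              # observable resource figures at an interval, the way an
              # availability monitor's collector feeds its console.
              import os
              import shutil
              import time

              def sample_node(node_name):
                  """Collect a few local figures for one cluster member."""
                  disk = shutil.disk_usage("/")
                  load = (os.getloadavg()[0]
                          if hasattr(os, "getloadavg") else None)
                  return {
                      "node": node_name,
                      "time": time.strftime("%H:%M:%S"),
                      "disk_free_mb": disk.free // (1024 * 1024),
                      "load_avg_1min": load,
                  }

              if __name__ == "__main__":
                  # A console-side loop would gather samples like these from
                  # every member and present them in per-resource windows.
                  for _ in range(3):
                      print(sample_node("NODEA"))
                      time.sleep(1)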
The console is a DECwindows Motif[R] based M           application that allows system managers to display windows showing eK           processes, quotas, disks, locks, memory, and I/O activity in the nN           VMScluster. The Motif[R] display may be directed to any X-compatibleM           display. The driver is a data collector that runs on the monitored nI           VMScluster members. Console application and driver software is t-           provided for Alpha and VAX systems.V             HARDWARE SUPPORT             CPU support   M           Any Alpha or VAX CPU, as documented in the OpenVMS Operating System N           Version 6.2 Software Product Description (SPD 25.01.xx), can be used           in a VMScluster.      -                                            12S a  S      L           OpenVMS Cluster Software, Version 6.2                 SPD 29.78.09             Interconnect support  M           Table 1 shows which processors are supported on which interconnectsiO           and whether the processor can be booted as a satellite node over thatbO           interconnect. All CPUs can service satellite boot requests over a LANs*           interconnect (FDDI or Ethernet).  N           Note: Note that levels of interconnect support and LAN booting capa-N           bilities are continuously being increased. In many cases these addi-L           tional capabilities result from hardware option and system consoleL           microcode enhancements, and are not dependent on OpenVMS software.O           Refer to the appropriate hardware option and system documentation for *           the most up-to-date information.                                                            -                                            13            L           OpenVMS Cluster Software, Version 6.2                 SPD 29.78.09    M           ___________________________________________________________________              Table 1:  E           CPU             CI      DSSI    SCSI[8] FDDI       Etherneti  C           AlphaServer     Yes[1]  Yes     -       Yes+Sat[2] Yes[3]m           8400  @           AlphaServer     -       Yes     -       Yes        Yes           8200  @           DEC 7000,       Yes     Yes     -       Yes+Sat[3] Yes           10000e  D           DEC 4000        -       Yes     -       Yes        Yes+Sat  D           DEC 3000        -       -       -       Yes+Sat[4] Yes+Sat  D           AlphaServer     -       Yes     Yes[7]  Yes+Sat[5] Yes+Sat           2100  D           AlphaServer     -       Yes     Yes[7]  Yes        Yes+Sat           1000, 2000  D           AlphaServer     -       -       Yes[7]  Yes        Yes+Sat
           400

           AlphaStation    -       -       Yes[7]  Yes        Yes+Sat
           200, 250,
           400-  D           DEC 2000        -       -       -       Yes        Yes+Sat  @           VAX 6000,       Yes     Yes     -       Yes        Yes           7000, 10000   @           VAX 8xxx,       Yes     -       -       -          Yes           9xxx, 11/xxx  D           VAX 4xxx[6]     -       Yes     -       Yes        Yes+Sat  D           VAX 2xxx,       -       -       -       -          Yes+Sat           3xxx[6]cM           ___________________________________________________________________sM           [1]Each "Yes" means that this CPU is supported on this interconect- D           but cannot be booted as a satellite over this interconnectK           [2]Each "Yes+Sat" means that this CPU is supported on this inter-.N           connect and can be booted as a satellite node over this interconnect           [3]Using DEMFA onlyd           [4]Using DEFTA only            [5]Using DEFEA onlySM           [6]Some models may provide slightly different interconnect support,CK           refer to the system specific hardware manual for complete detailsiI           [8]This column refers to multi-host SCSI connectivity. Refer toeH           the appropriate system documentation for information regarding0           single-host connectivity to SCSI buses           L           OpenVMS Cluster Software, Version 6.2                 SPD 29.78.09                 CI Adapter support  G           VMScluster nodes can be configured with multiple CI adapters.fH           Table 2 shows the types of adapters that are supported by eachI           CPU. There can only be one type of adapter configured on a CPU;mI           the maximum quantity of each type is noted in the table. The CI G           adapters in a CPU can connect to the same, or different, starm           couplers.h  J           Note: The CIBCA-A adapter cannot coexist with a KFMSA adapter on           the same CPU.e  6           Note: The CIBCA-A and CIBCA-B are different.                                                    -                                            15     r      L           OpenVMS Cluster Software, Version 6.2                 SPD 29.78.09    M           ___________________________________________________________________e             Table 2:  <                                             CIBCA-    CIBCA-E           CPU Type        CI750 CI780 CIBCI A         B         CIXCDi  B           AlphaServer     -     -     -     -         -         10           8400  B           DEC 7000,       -     -     -     -         -         10           10000i  A           VAX 11/750      1     -     -     -         -         -   A           VAX 11/780,     -     1     -     -         -         -o           11785.  A           VAX 6000        -     -     -     1         4         4i  A           VAX 82xx,       -     -     1     1         1         -            83xx  A           VAX 86xx        -     2     -     -         -         -s  A           VAX 85xx,       -     -     1     1         2         -            8700, 88xx  A           VAX 9000        -     -     -     -         -         6l  B           VAX 7000,       -     -     -     -         -         10           10000F             LAN Adapter support_  G           VMScluster systems can use all Ethernet and FDDI LAN adaptersaJ           supported by OpenVMS Version 6.2 for access to Ethernet and FDDIJ           interconnects. 
Refer to the OpenVMS Operating System for VAX and*           Alpha, SPD for more information.  B           The DEFZA FDDI adapter is supported on VAX systems only.    -                                            16  l  t      L           OpenVMS Cluster Software, Version 6.2                 SPD 29.78.09             DSSI support  G           Any mix of Alpha and VAX DSSI adapters may be configured on abH           common DSSI bus. Refer to the appropriate hardware manuals forG           specific adapter and configuration information. The followings?           points provide general guidelines for configurations:   G           o  Configure VAX 6000, VAX 7000, VAX 10000 systems with KFMSAw              adapters.  H           o  Configure DEC 7000, DEC 10000, AlphaServer 8400 XMI systems!              with KFMSB adapters.X  G           o  Up to 6 KFMSA/Bs may be configured on an XMI bus. Up to 12g4              KFMSA/Bs may be configured in a system.  J           o  Configure the AlphaServer systems shown in Table 1 with KFESBG              adapters. The AlphaServer 2100 may also be configured withsL              KFESA adapters. AlphaStation systems may not be configured with              KFESA/B.   G           o  Up to three CPUs may be configured on a DSSI when a KFMSB, 2              KFESA or KFESB is present on the bus.  I           o  Up to 4 KFESBs may be configured on a system. Up to 2 KFESAsyL              may be configured on a system. A mix of 1 KFESB and 1 KFESA may'              be configured on a system.   L           o  Because the DEC 4000 DSSI adapter terminates the DSSI bus, only7              two DEC 4000s may be configured on a DSSI.   :           Peripheral Option and Storage Controller support  G           VMScluster systems can use all peripheral options and storagehK           subsystems supported by OpenVMS Version 6.2. Refer to the OpenVMS F           Operating System for VAX and Alpha SPD for more information.  *           Multi-Host SCSI Hardware Support  L           OpenVMS Cluster Software, Version 6.2, provides support for multi-L           host SCSI configurations using a restricted range of Alpha systemsG           and SCSI adapters, devices, and controllers. Single-host SCSIpH           support is provided for an extensive range of systems and SCSIG           adapters, devices and controllers. For further information on   -                                            17u r  i      L           OpenVMS Cluster Software, Version 6.2                 SPD 29.78.09    H           the complete range of SCSI support please refer to the OpenVMS1           Operating System for VAX and Alpha SPD.y  L           Table 1 shows which systems may be configured on a multi-host SCSIL           bus. These systems must use their embedded system SCSI adapters orK           optional KZPAA adapters to connect to a multi-host SCSI bus. (AnypH           supported SCSI adapter may be used to connect to a single-host           SCSI bus.)  J           Note that optional KZPAA adapters are recommended for connectionG           to multi-host buses. Usage of KZPAA adapters simplifies SCSI nM           cabling, and also leaves the embedded system SCSI bus available for -           tape drives, floppies, and CD-ROMs.   G           Multi-host SCSI configurations may include DWZZA single-endedIG           SCSI to fast wide differential SCSI converters. 
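           Taken together, the multi-host SCSI rules in this SPD (at most two
           CPUs per multi-host bus, connection through the embedded system
           adapter or an optional KZPAA adapter, and a separate interconnect
           for CPU-to-CPU communication) can be expressed as a small check.
           The following sketch applies them to a hypothetical bus
           description; the data layout and code are invented for
           illustration.

              # Illustrative check of the multi-host SCSI rules stated in this
              # SPD, applied to a hypothetical bus description.

              ALLOWED_ADAPTERS = {"embedded", "KZPAA"}   # per this SPD

              def scsi_bus_problems(bus_members):
                  """bus_members: CPU name -> {"adapter", "other_interconnects"}."""
                  problems = []
                  if len(bus_members) > 2:
                      problems.append("more than 2 CPUs on one multi-host bus")
                  for cpu, info in bus_members.items():
                      if info["adapter"] not in ALLOWED_ADAPTERS:
                          problems.append("%s: adapter %r not permitted on a"
                                          " multi-host bus"
                                          % (cpu, info["adapter"]))
                      if not info["other_interconnects"]:
                          # The SCSI bus carries no CPU-to-CPU traffic.
                          problems.append("%s: needs another interconnect for"
                                          " CPU-to-CPU communication" % cpu)
                  return problems

              if __name__ == "__main__":
                  BUS = {
                      "ALPHA1": {"adapter": "KZPAA",
                                 "other_interconnects": {"FDDI"}},
                      "ALPHA2": {"adapter": "embedded",
                                 "other_interconnects": {"FDDI"}},
                  }
                  print(scsi_bus_problems(BUS)
                        or "bus satisfies the checked rules")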
           These DWZZA converters provide additional SCSI cable length and
           access to HSZ40 controllers.

           The following storage devices may be configured on multi-host SCSI
           buses:

           o  RZ28

           o  RZ28B

           o  RZ26

           o  RZ26L

           o  RZ29B

           Tape drives, floppies, and CD-ROMs may not be configured on
           multi-host SCSI buses. Configure these devices on single-host SCSI
           buses.

           Multi-host SCSI buses must adhere to all SCSI-II specifications.
           Rules regarding cable length and termination must be carefully
           complied with. Refer to the SCSI-II specification, or the OpenVMS
           V6.2 Release Notes, for further information.

           Multi-host SCSI buses may be configured with any appropriately
           compliant SCSI-II disk. SCSI disk requirements are fully
           documented in the OpenVMS V6.2 Release Notes.

           Star Coupler Expander

           A CI star coupler expander (CISCE) can be added to any star
           coupler to increase its connection capacity to 32 ports. The
           maximum number of CPUs that can be connected to a star coupler is
           16, regardless of the number of ports.

           DECamds Console

           Digital recommends that the availability management console run
           on a standalone workstation with a color monitor. However, it
           can also run on a workstation that is configured as a VMScluster
           member, or on a nonworkstation system using DECwindows to direct
           the display to an X-based display.

           SOFTWARE REQUIREMENTS

           o  OpenVMS Operating System Version 6.2

              Refer to the OpenVMS Operating System for VAX and Alpha,
              Version 6.2 Software Product Description (SPD 25.01.xx) for
              more information.

           The ability to have more than one version of OpenVMS in a
           VMScluster allows upgrades to be performed in a staged fashion so
           that continuous VMScluster system operation is maintained during
           the upgrade process. Only one version of OpenVMS can exist on any
           system disk; multiple versions of OpenVMS in a VMScluster require
           multiple system disks. Also note that system disks are
           architecture specific - OpenVMS Alpha and OpenVMS VAX cannot
           coexist on the same system disk.
The coexistence of multiple versions of            -                                            19            L           OpenVMS Cluster Software, Version 6.2                 SPD 29.78.09    I           OpenVMS in a VMScluster configuration is supported according tos#           the following conditions:   D              -  Warranted support is provided for mixed-architectureJ                 VMSclusters in which all Alpha and VAX systems are running#                 OpenVMS Version 6.2   H                 Warranted support means that Digital has fully qualifiedI                 the two architectures coexisting in a VMScluster and willrG                 answer any problems identified by customers using thesed                 configurations.g  E              -  Migration support is provided for VMSclusters running K                 OpenVMS Version 6.2 and OpenVMS Versions 1.5 (Alpha), 5.5-2d4                 & 6.0 (VAX), and V6.1 (Alpha & VAX).  J                 Migration support means that Digital has qualified the twoI                 architectures/versions for use together in configurations J                 that are migrating in a staged fashion to a higher versionK                 of OpenVMS or to Alpha systems. Digital will answer problemrI                 reports submitted about these configurations. However, inLK                 exceptional cases, Digital may recommend that you move yourdL                 system to a warranted configuration as part of the solution.  F           Note that Digital does not support the use of more than two A           versions of OpenVMS software in a VMScluster at a time.   K           Digital recommends that all Alpha and VAX systems in a VMScluster ,           run the latest version of OpenVMS.             o  DECnet software  K              DECnet software is not required in a VMScluster configuration. L              However, DECnet software is necessary if the following features              are required:  H              o  Inter-node process to process communication using DECnet                 mailboxese      -                                            20f u  l      L           OpenVMS Cluster Software, Version 6.2                 SPD 29.78.09    J           o  The Monitor utility with the CLUSTER class or /NODE qualifier  I              Refer to the appropriate DECnet Software Product Description %              for further information. 
  )           o  DECamds Availability Manager   J              The DECamds Availability Manager requires DECwindows Motif[R]6              Version 1.2-3 for OpenVMS (SPD 42.19.xx).             OPTIONAL SOFTWARE   H           For information about VMScluster support for optional softwareK           products, refer to the VMScluster Support section of the Software 2           Product Descriptions for those products.  E           Optional products that may be useful in VMScluster systems             include:  8           o  Volume Shadowing for OpenVMS (SPD 27.29.xx)  B           o  StorageWorks RAID Software for OpenVMS (SPD 46.49.xx)  .           o  DECram for OpenVMS (SPD 34.26.xx)  M           o  POLYCENTER Performance Data Collector for OpenVMS (SPD 36.02.xx)   F           o  POLYCENTER Performance Advisor for OpenVMS (SPD 36.03.xx)  5           o  VAXcluster Console System (SPD 27.46.xx)l  4           o  Business Recovery Server (SPD 35.05.xx)             GROWTH CONSIDERATIONSr  G           The minimum hardware and software requirements for any futureDL           version of this product may be different than the requirements for           the current version.  -                                            21r e  S      L           OpenVMS Cluster Software, Version 6.2                 SPD 29.78.09               DISTRIBUTION MEDIA  I           OpenVMS Cluster Software Version 6.2 is distributed on the same I           distribution media as the OpenVMS Operating System Version 6.2. I           Refer to the OpenVMS Operating System for VAX and Alpha SPD forv           more information.n             ORDERING INFORMATION  ;           OpenVMS Cluster Software is orderable as follows:a  N           Every server (non-client) Alpha system in a VMScluster configuration           requires:e  ?           o  VMScluster Software for OpenVMS Alpha, Version 6.2   .              o  Software Licenses: QL-MUZA*-AA  6              o  Software Product Services: QT-MUZA*-**  L           Every server (non-client) VAX system in a VMScluster configuration           requires:v  =           o  VAXcluster Software for OpenVMS VAX, Version 6.2   .              o  Software Licenses: QL-VBRA*-AA  6              o  Software Product Services: QT-VBRA*-**  K           Every Alpha client system in a VMScluster configuration requires:n  C           o  OpenVMS Cluster Client Software for Alpha, Version 6.2   .              o  Software Licenses: QL-3MRA*-AA  6              o  Software Product Services: QT-3MRA*-**  I           Every VAX client system in a VMScluster configuration requires:   A           o  OpenVMS Cluster Client Software for VAX, Version 6.2   .              o  Software Licenses: QL-3MSA*-AA  -                                            22l P  c      L           OpenVMS Cluster Software, Version 6.2                 SPD 29.78.09    6              o  Software Product Services: QT-3MSA*-**  L           *  Denotes variant fields. 
For additional information on availableJ              licenses, services, and media, refer to the appropriate price              book.  L           The right to the functionality of the OpenVMS Cluster AvailabilityB           Manager (DECamds) is included in all the above licenses.             DOCUMENTATION   H           The VMScluster Systems for OpenVMS manual, the Guidelines for M           VMScluster Configurations and the DECamds User's Guide are included J           in the OpenVMS Version 6.2 hardcopy documentation as part of the!           Full Documentation Set._  I           Refer to the OpenVMS Operating System for VAX and Alpha Version K           6.2 Software Product Description for additional information about 9           OpenVMS documentation and ordering information.Y  H           Specific terms and conditions regarding documentation on mediaK           apply to this product. Please refer to Digital's terms and condi- $           tions of sale, as follows:  K           "A software license provides the right to read and print software I           documentation files provided with the software distribution kit L           for use by the licensee as reasonably required for licensed use ofK           the software. Any hard copies or copies of files generated by the I           licensee must include Digital's copyright notice. Customization L           or modifications, of any kind, to the software documentation files           are not permitted.  H           Copies of the software documentation files, either hardcopy orG           machine readable, may only be transferred to another party in K           conjunction with an approved relicense by Digital of the software             to which they relate."          -                                            23  e         L           OpenVMS Cluster Software, Version 6.2                 SPD 29.78.09               SOFTWARE LICENSING  G           This software is furnished under the licensing provisions of _L           Digital Equipment Corporation's Standard Terms and Conditions. ForH           more information about Digital's licensing terms and policies,,           contact your local Digital office.  -           License Management Facility Supporto  K           The OpenVMS Cluster Software product supports the OpenVMS License3$           Management Facility (LMF).  G           License units for this product are allocated on an Unlimited p           System Use basis.e  K           For more information about the License Management Facility, refercL           to the OpenVMS Operating System for VAX and Alpha Software Product:           Description (SPD 25.01.xx) or documentation set.  #           SOFTWARE PRODUCT SERVICES   K           A variety of service options are available from Digital. For more 9           information, contact your local Digital office.              SOFTWARE WARRANTY   L           Warranty for this software product is provided by Digital with theJ           purchase of a license for the product as defined in the Software(           Warranty Addendum of this SPD.  K           The above information is valid at time of release. Please contact H           your local Digital office for the most up-to-date information.                  -                                            24I -  d      L           OpenVMS Cluster Software, Version 6.2                 SPD 29.78.09  .           1995 Digital Equipment Corporation. 
           All rights reserved.

           [TM] AlphaServer, AlphaStation, BI, Business Recovery Server, CI,
                DECamds, DECchip, DECnet, DECram, DECwindows, DELUA, DEUNA,
                Digital, DSSI, HSC, HSC40, HSC50, HSC60, HSC70, HSC90, HSJ,
                HSZ, MicroVAX, MicroVAX II, MSCP, OpenVMS, POLYCENTER, Q-bus,
                RA, RZ, StorageWorks, TA, TMSCP, TURBOchannel, UNIBUS, VAX,
                VAX 6000, VAX 9000, VAX-11/750, VAX-11/780, VAXstation,
                VAXcluster, VMScluster, XMI, and the DIGITAL logo are
                trademarks of Digital Equipment Corporation.

           IEEE is a registered trademark of the Institute of Electrical and
           Electronics Engineers, Inc.

           Motif is a registered trademark of the Open Software Foundation,
           Inc.