      Software Product  Description   C ___________________________________________________________________   C PRODUCT NAME:  Compaq OpenVMS Cluster Software         SPD 29.78.19   F This Software Product Description describes Versions 6.2-1H3, 7.1-1H1,E 7.1-1H2, 7.1-2, 7.2, 7.2-1, V7.2-1H1, and V7.3 of the following prod-  ucts:   / o  Compaq VMScluster Software for OpenVMS Alpha   - o  Compaq VAXcluster Software for OpenVMS VAX   D o  Compaq OpenVMS Cluster Client Software for Alpha (part of NAS150)  B o  Compaq OpenVMS Cluster Client Software for VAX (part of NAS150)  D Except where noted, the features described in this SPD apply equallyF to Alpha and VAX systems. Compaq OpenVMS Cluster Software licenses andD part numbers are architecture specific; refer to the Ordering Infor-/ mation section of this SPD for further details.    DESCRIPTION   E Compaq OpenVMS Cluster Software is an OpenVMS System Integrated Prod- E uct (SIP). It provides a highly integrated OpenVMS computing environ- G ment distributed over multiple Alpha and VAX systems. In this SPD, this 1 environment is referred to as an OpenVMS Cluster.   E Systems in an OpenVMS Cluster system can share processing, mass stor- H age (including system disks), and other resources under a single OpenVMSC security and management domain. Within this highly integrated envi- J ronment, systems retain their independence because they use local, memory-D resident copies of the OpenVMS operating system. Thus, OpenVMS Clus-F ter systems can boot and shut down independently while benefiting from common resources.   C                                                          April 2001        F Applications running on one or more systems in an OpenVMS Cluster sys-F tem can access shared resources in a coordinated manner. 
OpenVMS Cluster software components synchronize access to shared
resources, allowing multiple processes on any system in the OpenVMS
Cluster to perform coordinated, shared data updates.

Because resources are shared, OpenVMS Cluster systems offer higher
availability than standalone systems. Properly configured OpenVMS
Cluster systems can withstand the shutdown or failure of various
components. For example, if one system in an OpenVMS Cluster is shut
down, users can log in to another system to create a new process and
continue working. Because mass storage can be shared clusterwide, the
new process is able to access the original data. Applications can be
designed to survive these events automatically.

All OpenVMS Cluster systems have the following software features in
common:

o  The OpenVMS operating system and OpenVMS Cluster software allow
   all systems to share read and write access to disk files in a
   fully coordinated environment. Application programs can specify
   the level of clusterwide file sharing that is required; access is
   then coordinated by the OpenVMS extended QIO processor (XQP) and
   Record Management Services (RMS). Coherency of multiple-system
   configurations is implemented by OpenVMS Cluster software using a
   flexible and sophisticated per-system voting mechanism.

o  Shared batch and print queues are accessible from any system in
   the OpenVMS Cluster. The OpenVMS queue manager controls
   clusterwide batch and print queues, which can be accessed by any
   system. Batch jobs submitted to clusterwide queues are routed to
   any available system so the batch load is shared.

o  The OpenVMS Lock Manager System Services operate in a clusterwide
   manner.
   These services allow reliable, coordinated access to any
   resource, and provide signaling mechanisms at the system and
   process level across the whole OpenVMS Cluster system.

o  All disks and tapes in an OpenVMS Cluster system can be made
   accessible to all systems.

o  Process information and control services, including the ability
   to create and delete processes, are available on a clusterwide
   basis to application programs and system utilities. (Clusterwide
   process creation is available with Version 7.1 and higher.)

o  Configuration command procedures assist in adding and removing
   systems and in modifying their configuration characteristics.

o  The dynamic Show Cluster utility displays the status of OpenVMS
   Cluster hardware components and communication links.

o  A fully automated clusterwide data and application caching feature
   enhances system performance and reduces I/O activity.

o  The ability to define logical names that are visible across
   multiple nodes in an OpenVMS Cluster (Version 7.2 and higher).

o  An application programming interface (API) allows applications
   within multiple OpenVMS Cluster nodes to communicate with each
   other (Version 7.2 and higher).

o  Standard OpenVMS system management and security features work in
   a clusterwide manner so that the entire OpenVMS Cluster system
   operates as a single security and management domain.

o  The OpenVMS Cluster software dynamically balances the interconnect
   I/O load in OpenVMS Cluster configurations that include multiple
   interconnects.

o  Multiple OpenVMS Cluster systems can be configured on a single or
   extended local area network (LAN).
   LANs and the LAN adapters used
   for OpenVMS Cluster communications can be used concurrently by
   other network protocols.

o  The optionally installable DECamds availability management tool
   (as well as Availability Manager) allows system managers to
   monitor and manage resource availability in real time on all the
   members of an OpenVMS Cluster.

o  Cross-architecture satellite booting permits VAX boot nodes to
   provide boot service to Alpha satellites and allows Alpha boot
   nodes to provide boot service to VAX satellites.

o  System services enable applications to automatically detect
   changes in OpenVMS Cluster membership.

Definitions

The following terms are used frequently throughout this SPD:

o  Boot node - A system that is both a MOP server and a disk server.
   A boot node can fully service satellite boot requests.

o  System - An Alpha family or VAX family computer running the
   OpenVMS operating system. A system comprises one or more
   processors and operates as an OpenVMS Cluster node.
   An OpenVMS Cluster node can
   be referred to as an OpenVMS Cluster member.

o  Disk server - A system that uses the OpenVMS MSCP server to make
   disks to which it has direct access available to other systems in
   the OpenVMS Cluster system.

o  HSC, HSJ - An intelligent mass storage controller subsystem that
   connects to the CI bus.

o  HSD - An intelligent mass storage controller subsystem that
   connects to the DSSI bus.

o  HSG - An intelligent mass storage controller subsystem that
   connects to the Fibre Channel bus.

o  HSZ - An intelligent mass storage controller subsystem that
   connects to the SCSI bus.

o  MDR (Compaq Modular Data Router) - A Fibre Channel to SCSI bridge
   allowing SCSI tape devices to be used behind a Fibre Channel
   switch.

o  Maintenance Operations Protocol (MOP) server - A system that
   services satellite boot requests to provide the initial LAN
   downline load sequence of the OpenVMS operating system and OpenVMS
   Cluster software. At the end of the initial downline load
   sequence, the satellite uses a disk server to perform the
   remainder of the OpenVMS booting process.

o  Mixed-architecture OpenVMS Cluster system - An OpenVMS Cluster
   system that is configured with both VAX and Alpha systems.

o  MSCP (mass storage control protocol) - A message-based protocol
   for controlling Digital Storage Architecture (DSA) disk storage
   subsystems. The protocol is implemented by the OpenVMS DUDRIVER
   device driver.

o  Multihost configuration - A configuration in which more than one
   system is connected to a single CI, DSSI, SCSI, or Fibre Channel
   interconnect.

o  Satellite - A system that is booted over a LAN using a MOP server
   and disk server.

o  Single-host configuration - A configuration in which a single
   system is connected to a CI, DSSI, SCSI, or Fibre Channel
   interconnect.

o  Star coupler - A common connection point for all CI-connected
   systems and HSC and HSJ controllers.

o  Tape server - A system that uses the OpenVMS TMSCP server to make
   tapes to which it has direct access available to other systems in
   the OpenVMS Cluster system.

o  TMSCP (tape mass storage control protocol) - A message-based
   protocol for controlling DSA tape-storage subsystems. The protocol
   is implemented by the OpenVMS TUDRIVER device driver.

o  Vote - Systems in an OpenVMS Cluster system can be configured to
   provide votes that are accumulated across the multi-system
   environment. Each system is provided with knowledge of how many
   votes are necessary to meet a quorum before distributed shared
   access to resources is enabled. An OpenVMS Cluster system must be
   configured with at least one voting system.

Compaq OpenVMS Cluster Client Software

OpenVMS Cluster configurations can include systems that operate and
are licensed explicitly as client systems. Compaq OpenVMS Cluster
Client licensing is provided as part of the Compaq NAS150 layered
product. An individually available license for DS-series AlphaServers
is also provided. Compaq OpenVMS Cluster Client systems contain full
OpenVMS Cluster functionality as described in this SPD, with the
following exceptions:

o  Client systems cannot provide votes toward the operation of the
   OpenVMS Cluster system.

o  Client systems cannot MSCP serve disks or TMSCP serve tapes.
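The vote and quorum scheme described above can be sketched briefly.
Per the OpenVMS Cluster documentation, quorum is derived from the
expected votes as floor((EXPECTED_VOTES + 2) / 2), and shared access
to resources continues only while the accumulated votes of reachable
members meet that value. The Python below is an illustrative sketch
only, not OpenVMS code; the function names are invented for this
example, and client systems are modeled simply as members with zero
votes.

```python
def quorum(expected_votes: int) -> int:
    """Votes required for cluster operation: floor((E + 2) / 2)."""
    return (expected_votes + 2) // 2

def has_quorum(member_votes, expected_votes) -> bool:
    """True if the reachable members' votes meet or exceed quorum."""
    return sum(member_votes) >= quorum(expected_votes)

# Three voting systems with one vote each; client systems add 0 votes.
assert quorum(3) == 2
assert has_quorum([1, 1, 0], expected_votes=3)   # one voter unreachable
assert not has_quorum([1, 0], expected_votes=3)  # two voters unreachable
```

When quorum is lost, the surviving members suspend activity rather
than risk uncoordinated access to shared storage from a partitioned
cluster.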
Interconnects

OpenVMS Cluster systems are configured by connecting multiple systems
with a communications medium, referred to as an interconnect. OpenVMS
Cluster systems communicate with each other using the most
appropriate interconnect available. In the event of interconnect
failure, OpenVMS Cluster software automatically uses an alternate
interconnect whenever possible. OpenVMS Cluster software supports any
combination of the following interconnects:

o  CI (computer interconnect)

o  DSSI (Digital Storage Systems Interconnect)

o  SCSI (Small Computer System Interface)

o  FDDI (Fiber Distributed Data Interface)

o  Ethernet (10/100, Gigabit)

o  Asynchronous transfer mode (ATM) (emulated LAN configurations
   only)

o  Memory Channel (Version 7.1 and higher only)

o  Fibre Channel (storage only, Alpha only, Version 7.2-1 and higher
   only)

CI and DSSI are highly optimized, special-purpose interconnects for
systems and storage subsystems in OpenVMS Cluster configurations. CI
and DSSI provide both system-to-storage communication and
system-to-system communication.

SCSI is an industry-standard storage interconnect. Multiple systems
can be configured on a single SCSI bus, thereby providing multihost
access to SCSI storage devices. Note that the SCSI bus is not used
for system-to-system communication. Consequently, systems connected
to a multihost SCSI bus must also be configured with another
interconnect to provide system-to-system communication.

Fibre Channel is an evolving industry-standard interconnect for
storage and communications. Support in OpenVMS Version 7.2-1 (and
higher) allows for a storage-only interconnect in a multihost
environment utilizing Fibre Channel switched topologies. With Version
7.3, SCSI tapes attached through the Modular Data Router bridge are
also supported. As is true with SCSI, systems connected to a
multihost Fibre Channel bus must also be configured with another
interconnect to provide system-to-system communication.

Ethernet, ATM, and FDDI are industry-standard, general-purpose
communications interconnects that can be used to implement a local
area network (LAN). Except where noted, OpenVMS Cluster support for
these LAN types is identical. The ATM device must be used as an
emulated LAN configured device. Ethernet and FDDI provide
system-to-system communication. Storage can be configured in FDDI
environments that support FDDI-based storage servers.

OpenVMS Cluster systems can be configured using wide area network
(WAN) infrastructures, such as DS3, E3, and ATM. Connection to these
media is achieved by the use of WAN interswitch links (ISLs).

Memory Channel is a high-performance interconnect that provides
system-to-system communication. Memory Channel does not provide
direct access to storage, so a separate storage interconnect is
required in Memory Channel configurations.

Configuration Rules

o  The maximum number of systems supported in an OpenVMS Cluster
   system is 96.

o  Every system in an OpenVMS Cluster system must be connected to
   every other system via any supported OpenVMS Cluster interconnect
   (see Table 1).

o  VAX-11/7xx, VAX 6000, VAX 7000, VAX 8xxx, VAX 9000, and VAX 10000
   series systems require a system disk that is accessed via a local
   adapter or through a local CI or DSSI connection. These systems
   cannot be configured to boot as satellite nodes.

o  All systems connected to a common CI, DSSI, or Memory Channel
   interconnect must be configured as OpenVMS Cluster members.
   OpenVMS Cluster members configured on a CI, DSSI, or Memory
   Channel will become members of the same OpenVMS Cluster (this is
   imposed automatically by the OpenVMS Cluster software). All
   systems connected to a multihost SCSI bus must be configured as
   members of the same OpenVMS Cluster.

o  An OpenVMS Cluster system can include any number of star couplers.
   Table 2 shows the number of CI adapters supported by different
   systems. The number of star couplers that a system can be
   connected to is limited by the number of adapters with which it is
   configured.

o  The maximum number of systems that can be connected to a star
   coupler is 16, regardless of star coupler size.

o  The KFQSA Q-bus to DSSI adapter does not support system-to-system
   communication across the DSSI; systems using this adapter must
   include another interconnect for system-to-system communication.

o  The maximum number of systems that can be connected to a DSSI is
   four, regardless of system or adapter type. Any mix of systems and
   adapters is permitted, except where noted in the Hardware Support
   section of this SPD. Depending on the system model, it may not be
   possible to configure four systems on a common DSSI bus because of
   DSSI bus cable-length restrictions. Refer to the specific system
   configuration manuals for further information.

o  The maximum number of systems that can be connected to a SCSI bus
   is three. If the SCSI bus includes a five-port or greater Fair
   Arbitration SCSI Hub (DWZZH-05), the maximum number of systems is
   increased to four.

o  The maximum number of multihost SCSI buses that a system can be
   connected to is 26.

o  The configuration size for Fibre Channel storage increases on a
   regular basis with new updates to OpenVMS. Refer to the Guidelines
   for OpenVMS Cluster Configurations manual for the most up-to-date
   configuration capabilities.

o  Beginning with OpenVMS Version 7.2-1, Multipath Failover for both
   parallel SCSI and Fibre Channel storage environments is supported.
   This feature allows for the failover of cluster storage
   communications from one path to another when multiple storage
   buses have been connected to the same data source. For detailed
   information, refer to the Guidelines for OpenVMS Cluster
   Configurations manual.

o  OpenVMS Cluster systems that are configured using WAN
   interconnects must adhere to the detailed line specifications
   described in the Guidelines for OpenVMS Cluster Configurations
   manual. The maximum system separation is 150 miles.

o  A single time-zone setting must be used by all systems in an
   OpenVMS Cluster system.

o  An OpenVMS Cluster system can be configured with a maximum of one
   quorum disk. A quorum disk cannot be a member of an OpenVMS volume
   set or of a shadow set created by the Volume Shadowing for OpenVMS
   product.

o  A system disk can contain only a single version of the OpenVMS
   operating system and is architecture specific.
   For example,
   OpenVMS Alpha Version 7.1 cannot coexist on a system disk with
   OpenVMS VAX Version 7.1.

o  HSJ and HSC series disks and tapes can be dual pathed between
   controllers on the same or different star couplers. The HSD30
   series disks and tapes can be dual pathed between controllers on
   the same or different DSSI interconnects. Such dual pathing
   provides enhanced data availability using an OpenVMS automatic
   recovery capability called failover. Failover is the ability to
   use an alternate hardware path from a system to a storage device
   when a failure occurs on the current path. The failover process is
   transparent to applications. Dual pathing between an HSJ or HSC
   and a local adapter is not permitted. When two local adapters are
   used for dual pathing, each adapter must be located on a separate
   system of the same architecture. (Note: When disks and tapes are
   dual pathed between controllers that are connected to different
   star couplers or DSSI buses, any system connected to one of the
   star couplers or buses must also be connected to the other.)

o  Disks can be dual pathed between pairs of HSZ controllers that are
   arranged in a dual-redundant configuration. The controllers must
   be connected to the same host SCSI bus. Failover is accomplished
   using the HSZ transparent failover capability.

o  OpenVMS operating system and layered-product installations and
   upgrades cannot be performed across architectures. OpenVMS Alpha
   software installations and upgrades must be performed using an
   Alpha system with direct access to its system disk. OpenVMS VAX
   software installations and upgrades must be performed using a VAX
   system with direct access to its system disk.

o  Ethernet LANs and the protocols that use them must conform to the
   IEEE 802.2 and IEEE 802.3 standards.
   Ethernet LANs must also
   support Ethernet Version 2.0 packet formats.

o  FDDI LANs and the protocols that use them must conform to the IEEE
   802.2, ANSI X3.139-1987, ANSI X3.148-1988, and ANSI X3.166-1990
   standards.

o  LAN segments can be bridged to form an extended LAN (ELAN). The
   ELAN must conform to IEEE 802.1D, with the following restrictions:

   -  All LAN paths used for OpenVMS Cluster communication must
      operate with a nominal bandwidth of at least 10 megabits per
      second.

   -  The ELAN must be capable of delivering packets that use the
      padded Ethernet Version 2.0 packet format and the FDDI SNAP/SAP
      packet format.

   -  The ELAN must be able to deliver packets with a maximum data
      field length of at least 1080 bytes.[1]

   -  The maximum number of bridges between any two end nodes is
      seven.

   -  The maximum transit delay through any bridge must not exceed
      two seconds.

   -  The ELAN must provide error-detection capability between end
      nodes that is equivalent to that provided by the Ethernet and
      FDDI data link frame-check sequences.

o  The average packet-retransmit timeout ratio for OpenVMS Cluster
   traffic on the LAN from any system to another must be less than 1
   timeout in 1000 transmissions.

Recommendations

The optimal OpenVMS Cluster system configuration for any computing
environment is based on requirements of cost, functionality,
performance, capacity, and availability. Factors that impact these
requirements include:

o  Applications in use

o  Number of users

o  Number and models of systems

o  Interconnect and adapter throughput and latency characteristics

o  Disk and tape I/O capacity and access time

o  Number of disks and tapes being served

o  Interconnect utilization

____________________
[1] In the padded Ethernet format, the data field follows the 2-byte
    length field. These two fields together comprise the LLC data
    field in the 802.3 format.

Compaq recommends OpenVMS Cluster system configurations based on its
experience with the OpenVMS Cluster Software product. The customer
should evaluate specific application dependencies and performance
requirements to determine an appropriate configuration for the
desired computing environment.

When planning an OpenVMS Cluster system, consider the following
recommendations:

o  OpenVMS Cluster systems should be configured using interconnects
   that provide appropriate performance for the required system
   usage. In general, use the highest-performance interconnect
   possible. CI and Memory Channel are the preferred interconnects
   between powerful systems.

o  Although OpenVMS Cluster systems can include any number of system
   disks, consider system performance and management overhead in
   determining their number and location. While the performance of
   configurations with multiple system disks may be higher than with
   a single system disk, system management efforts increase in
   proportion to the number of system disks.

o  Data availability and I/O performance are enhanced when multiple
   OpenVMS Cluster systems have direct access to shared storage;
   whenever possible, configure systems to allow direct access to
   shared storage in favor of OpenVMS MSCP served access. Multiaccess
   CI, DSSI, SCSI, and Fibre Channel storage provides higher data
   availability than singly accessed, local adapter-based storage.
   Additionally, dual pathing of disks between local or
   HSC/HSJ/HSD/HSZ/HSG storage controllers enhances data
   availability in the event of controller failure.

o  OpenVMS Cluster systems can enhance availability by utilizing
   redundant components, such as additional systems, storage
   controllers, disks, and tapes. Extra peripheral options, such as
   printers and terminals, can also be included. Multiple instances
   of all OpenVMS Cluster interconnects (CI, Memory Channel, DSSI,
   SCSI, Ethernet, ATM, Gigabit Ethernet, Fibre Channel, and FDDI)
   are supported.

o  To enhance resource availability, OpenVMS Clusters that implement
   satellite booting should use multiple boot servers.
   When a server
   fails in configurations that include multiple servers, satellite
   access to multipath disks will fail over to another path. Disk
   servers should be the most powerful systems in the OpenVMS Cluster
   and should use the highest bandwidth LAN adapters available.

o  The performance of an FDDI LAN varies with each configuration.
   When an FDDI is used for OpenVMS Cluster communications, the ring
   latency when the FDDI ring is idle should not exceed 400
   microseconds. This ring latency translates to a cable distance
   between end nodes of approximately 40 kilometers.

o  The ELAN must provide adequate bandwidth, reliability, and low
   delay to optimize the operation of the OpenVMS Cluster. In-depth
   configuration guidelines for these ELAN environments are provided
   in the OpenVMS documentation set and are frequently updated as the
   technology evolves. For specific configuration information, refer
   to the following manuals:

   -  OpenVMS Cluster Systems

   -  Guidelines for OpenVMS Cluster Configurations

o  The RAID level 1 storage functionality of Compaq Volume Shadowing
   for OpenVMS provides the following advantages:

   -  Enhanced data availability in the event of disk failure

   -  Enhanced read performance with multiple shadow-set members

   For more information, refer to the Compaq Volume Shadowing for
   OpenVMS Software Product Description (SPD 27.29.xx).

o  The Compaq DECram for OpenVMS software product can be used to
   create high-performance, memory-resident RAM disks. Refer to the
   Compaq DECram for OpenVMS Software Product Description (SPD
   34.26.xx) for additional information.

DECamds and Availability Manager Features

OpenVMS software incorporates the features of a real-time monitoring,
investigation, diagnostic, and system management tool that can be
used to improve overall cluster system availability. DECamds can be
used in both clustered and nonclustered LAN environments.

The DECamds availability management tool contains a console and an
OpenVMS device driver. The console is a DECwindows Motif based
application that allows system managers to display windows showing
processes, quotas, disks, locks, memory, SCS data structures, and I/O
activity in the OpenVMS Cluster. The Motif display can be directed to
any X-compatible display. The driver is a data collector that runs on
the monitored OpenVMS systems. Console application and driver
software is provided for Alpha and VAX systems.

Availability Manager is functionally similar to DECamds, but it runs
on Windows-based systems and on OpenVMS Alpha.

HARDWARE SUPPORT

System Support

Any Alpha or VAX system, as documented in the Compaq OpenVMS
Operating System for VAX and Alpha Software Product Description (SPD
25.01.xx), can be used in an OpenVMS Cluster.

Peripheral Option and Storage Controller Support

OpenVMS Cluster systems can use all peripheral options and storage
subsystems supported by OpenVMS. Refer to the Compaq OpenVMS
Operating System for VAX and Alpha SPD for more information.

Interconnect Support

Table 1 shows which systems are supported on which interconnects and
whether the system can be booted as a satellite node over that
interconnect. All systems can service satellite boot requests over a
LAN interconnect (FDDI or Ethernet).

Note: Levels of interconnect support and LAN booting capabilities are
continually being increased.
In many cases, these additional
capabilities result from hardware option and system console microcode
enhancements and are not dependent on OpenVMS software. Refer to the
appropriate hardware option and system documentation for the most
up-to-date information.

LAN Support

OpenVMS Cluster systems can use all Ethernet (10 Mb/sec and 100
Mb/sec) and FDDI LAN adapters supported by OpenVMS for access to
Ethernet and FDDI interconnects. Any number of LAN adapters can be
configured in any combination (with the exception that a Q-bus can be
configured with only one FDDI adapter). Refer to the Compaq OpenVMS
Operating System for VAX and Alpha Software Product Description for
more information.

Gigabit Ethernet LAN adapters can be used for limited OpenVMS Cluster
interconnect capability for Version 7.1-2 through Version 7.2-xx.
OpenVMS Version 7.3 clusters provide more robust support for Gigabit
Ethernet and ATM emulated LAN Ethernet connections. Additionally,
OpenVMS Version 7.3 also allows for load distribution of SCS cluster
communications traffic across multiple, parallel LAN connections
between cluster nodes.
Refer to the release notes for your OpenVMS operating? system version for specific limitations on these interconnects.S  8 The DEFZA FDDI adapter is supported on VAX systems only.  0 Note: VAX systems cannot be booted over an FDDI.              "                                 15    ,  C ___________________________________________________________________e  C Table_1:___________________________________________________________c  6                                                ATM,[3]3                  Memory                        Eth-mB                  Chan-                         er-     Fibre Chan-C System_____CI____nel[1]__DSSI___SCSI[2]FDDI____net_____nel_________s  : AlphaServerYes[4]Yes     Yes[5] Yes    Yes+Sat[Yes     Yes GS 80/160/320,c	 GS60/140,g 8200,i 8400  : AlphaServerYes   Yes     Yes    Yes    Yes+Sat Yes     Yes ES40,o 4000,n 4100  C ___________________________________________________________________  [1]Version 7.1 and higher only.n  > [2]This column refers to multihost SCSI connectivity. Refer to> the appropriate system documentation for information regarding' single-host connectivity to SCSI buses.e  ; [3]ATM using an emulated LAN configuration can be used as a ? cluster interconnect on all AlphaServer systems, except for Al-t? phaServer 300 and 400 systems. 
ATM is not supported on the DEC series systems listed nor on VAX
systems.

 ___________________________________________________________________

 System           CI    Memory      DSSI   SCSI[2]      FDDI         ATM,[3]   Fibre
                        Channel[1]                                   Ethernet  Channel
 ____________________________________________________________________________________

 AlphaServer      Yes   Yes         Yes    Yes          Yes+Sat      Yes+Sat   Yes[7]
 1200, 2000,
 2100, 2100A

 AlphaServer      -     Yes         Yes    Yes          Yes+Sat      Yes+Sat   Yes[8]
 DS10/10L/20,
 1000, 1000A

 AlphaServer      -     -           Yes    Yes          Yes+Sat[10]  Yes+Sat   Yes[9]
 400, 800

 AlphaServer 300  -     -           -      Yes          Yes          Yes+Sat   -

 AlphaStation     -     -           -      Yes          Yes+Sat[10]  Yes+Sat   -

 DEC 7000, 10000  Yes   -           Yes    -            Yes+Sat      Yes       -

 DEC 4000         -     -           Yes    -            Yes          Yes+Sat   -

 DEC 3000         -     -           -      Yes          Yes+Sat[11]  Yes+Sat   -

 DEC 2000         -     -           -      -            Yes          Yes+Sat   -

 VAX 6000, 7000,  Yes   -           Yes    -            Yes          Yes       -
 10000

 VAX 8xxx, 9xxx,  Yes   -           -      -            -            Yes       -
 11/xxx

 VAX 4xxx[12]     -     -           Yes    -            Yes          Yes+Sat   -

 VAX 2xxx,        -     -           -      -            -            Yes+Sat   -
 3xxx[12]
 ____________________________________________________________________________________

 [1]Version 7.1 and higher only.

 [2]This column refers to multihost SCSI connectivity. Refer to
 the appropriate system documentation for information regarding
 single-host connectivity to SCSI buses.

 [3]ATM using an emulated LAN configuration can be used as a
 cluster interconnect on all AlphaServer systems, except for Al-
 phaServer 300 and 400 systems. ATM is not supported on the DEC
 series systems listed nor on VAX systems.

 [4]Each "Yes" means that this system is supported on this inter-
 connect but cannot be booted as a satellite over this intercon-
 nect.

 [5]DSSI is not supported on GS-Series AlphaServers.

 [6]Each "Yes+Sat" means that this system is supported on this
 interconnect and can be booted as a satellite node over this in-
 terconnect.

 [7]AlphaServer 1200 only.

 [8]Excludes AlphaServer 1000.

 [9]AlphaServer 800 only.

 [10]Version 7.1 and higher only. Most models provide FDDI booting
 capability. Refer to system-specific documentation for details.

 [11]Using DEFTA only.

 [12]Some models may provide slightly different interconnect sup-
 port. Refer to system-specific documentation for details.
 ___________________________________________________________________

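For illustration only, the "Yes" versus "Yes+Sat" semantics defined in
footnotes [4] and [6] of the table above can be sketched as a small
lookup helper. The `TABLE_1` dictionary, the function names, and the
handful of sample entries are assumptions introduced for this sketch;
they are not part of the product or of this SPD:

```python
# Hypothetical encoding of a few Table 1 cells. Only supported
# (non "-") combinations are listed; absent entries read as
# unsupported. "Yes"    = supported, no satellite boot (footnote [4]);
#              "Yes+Sat" = supported and satellite-bootable ([6]).
TABLE_1 = {
    ("AlphaServer 1200", "CI"): "Yes",
    ("AlphaServer 1200", "FDDI"): "Yes+Sat",
    ("AlphaServer 1200", "Ethernet"): "Yes+Sat",
    ("DEC 3000", "SCSI"): "Yes",
    ("DEC 3000", "FDDI"): "Yes+Sat",
    ("VAX 4xxx", "Ethernet"): "Yes+Sat",
}

def is_supported(system: str, interconnect: str) -> bool:
    """True if the system can join a cluster over this interconnect."""
    return (system, interconnect) in TABLE_1

def can_boot_as_satellite(system: str, interconnect: str) -> bool:
    """True only for "Yes+Sat" entries (footnote [6])."""
    return TABLE_1.get((system, interconnect)) == "Yes+Sat"
```

Encoding only the populated cells keeps the sparse table compact; a
plain "Yes" entry is supported but, per footnote [4], not
satellite-bootable.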
 CI Support

OpenVMS Cluster systems can be configured with multiple CI adapters.
Table 2 shows the types of adapters that are supported by each
system. There can be only one type of adapter configured in a system
(with the exception that, with OpenVMS Version 7.1, CIXCD and CIPCA
adapters can be configured together in the same system). The maximum
number of each type is noted in the table. The CI adapters in a
system can connect to the same or different star couplers.

Note: The CIBCA-A adapter cannot coexist with a KFMSA adapter on the
same system.

Note: The CIBCA-A and CIBCA-B are different.

 ___________________________________________________________________

 Table 2:

 System       CI750  CI780  CIBCI  CIBCA-A  CIBCA-B  CIXCD  CIPCA
 ___________________________________________________________________

 AlphaServer  -      -      -      -        -        10     10,26[1]
 GS, 8400

 AlphaServer  -      -      -      -        -        -      10,26[1]
 8200

 AlphaServer  -      -      -      -        -        -      3[2]
 ES, 4000,
 4100

 AlphaServer  -      -      -      -        -        -      6[3]
 4000 + I/O
 expansion

 AlphaServer  -      -      -      -        -        -      3
 DS, 2100A,
 1200

 AlphaServer  -      -      -      -        -        -      2[4]
 2000, 2100

 DEC 7000,    -      -      -      -        -        10     -
 10000

 VAX 11/750   1      -      -      -        -        -      -

 VAX 11/780,  -      1      -      -        -        -      -
 11/785

 VAX 6000     -      -      -      1        4        4      -

 VAX 82xx,    -      -      1      1        1        -      -
 83xx

 VAX 86xx     -      2      -      -        -        -      -

 VAX 85xx,    -      -      1      1        2        -      -
 8700, 88xx

 VAX 9000     -      -      -      -        -        6      -

 VAX 7000,    -      -      -      -        -        10     -
 10000
 ___________________________________________________________________

 [1]The two numbers represent the support limits for Version 6.2-
 1H3 and Version 7.1 and higher, respectively.

 [2]For three CIPCAs, one must be CIPCA-AA and two must be CIPCA-
 BA.

 [3]Only three can be CIPCA-AA.

 [4]Only one can be a CIPCA-BA.
 ___________________________________________________________________

Observe the following guidelines when configuring CIPCA adapters:

o  The CIPCA adapter can coexist on a CI bus with CIXCD and CIBCA-B
   CI adapters and all variants of the HSC/HSJ controller except the
   HSC50. Other CI adapters cannot be configured on the same CI bus
   as a CIPCA. HSC40/70 controllers must be configured with a Revi-
   sion F (or higher) L109 module.

o  The CIPCA-AA adapter occupies a single PCI backplane slot and a
   single EISA backplane slot.

o  The CIPCA-BA adapter occupies two PCI backplane slots.

 Star Coupler Expander

A CI star coupler expander (CISCE) can be added to any star coupler
to increase its connection capacity to 32 ports. The maximum number
of systems that can be connected to a star coupler is 16, regardless
of the number of ports.

 Memory Channel Support (Version 7.1 and higher only)

Memory Channel is supported on all AlphaServer systems starting with
the AlphaServer 1000.
Observe the following rules when configuring Memory Channel:

o  A maximum of eight systems can be connected to a single Memory
   Channel interconnect.

o  Systems configured with Memory Channel adapters require a minimum
   of 128 megabytes of memory.

o  A maximum of two Memory Channel adapters can be configured in a
   system. Configuring two Memory Channel interconnects can improve
   the availability and performance of the cluster configuration.
   Only one Memory Channel adapter may be configured in an
   AlphaServer 8xxx DWLPA I/O channel configured with any other
   adapter or bus option. This restriction does not apply to the
   DWLPB I/O channel, or to DWLPA I/O channels that have no other
   adapters or bus options.

o  Multiple adapters in a system cannot be connected to the same
   Memory Channel hub.

o  Memory Channel adapters must all be the same version.
   Specifically, a Memory Channel V1.5 adapter cannot be mixed with
   a Memory Channel V2.0 adapter within the same connection.

 DSSI Support

Any mix of Alpha and VAX DSSI adapters can be configured on a common
DSSI bus (except where noted in the following list). Refer to the
appropriate hardware manuals for specific adapter and configuration
information. The following points provide general guidelines for
configurations:

o  Configure the AlphaServer systems shown in Table 1 with KFPSA
   (PCI to DSSI) adapters.
   The KFPSA is the highest-performance DSSI adapter and is
   recommended wherever possible.

o  Other supported adapters include:

   -  KFESB (EISA to DSSI) for all AlphaServer systems except 4xxx
      and 8xxx models

   -  KFESA (EISA to DSSI) for AlphaServer 2100 systems

   -  KFMSB for Alpha XMI systems

   -  KFMSA for VAX XMI systems

   -  KFQSA for VAX Q-bus systems

o  KFMSB adapters and KFPSA adapters cannot be configured on the
   same DSSI bus.

o  Up to 24 KFPSAs can be configured on a system.

o  Up to 6 KFMSA/Bs can be configured on an XMI bus.

o  Up to 12 KFMSA/Bs can be configured in a system.

o  Up to four KFESBs can be configured on a system.

o  Up to two KFESAs can be configured on a system.

o  A mix of one KFESB and one KFESA can be configured on a system.

o  Because the DEC 4000 DSSI adapter terminates the DSSI bus, only
   two DEC 4000s can be configured on a DSSI bus.

o  Some of the new generation AlphaServer processors will support
   DSSI. The GS series and the DS20 series will have support. Other
   DS series and the ES series will not.

 Multihost SCSI Support

Compaq OpenVMS Cluster Software provides support for multihost SCSI
configurations using Alpha systems and SCSI adapters, devices, and
controllers. Table 1 shows which systems can be configured on a
multihost SCSI bus.

Any AlphaStation or AlphaServer system that supports optional KZPSA
(fast-wide differential) or KZPBA-CB (ultrawide differential; Version
7.1-1H1 and higher only) adapters can use them to connect to a
multihost SCSI bus. Refer to the appropriate system documentation for
system-specific KZPSA and KZPBA support information.
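Stepping back to the DSSI Support list above, its per-system adapter
limits can be expressed as a small validation sketch. The helper name,
the dictionary encoding, and the reading of the mix rule as "at most
one KFESB and one KFESA when both are present" are assumptions made
for illustration; they are not part of this SPD:

```python
# Per-system DSSI adapter limits from the DSSI Support list
# (KFMSA and KFMSB are pooled under one "KFMSA/B" key, matching
# the SPD's combined 12-per-system limit).
PER_SYSTEM_LIMITS = {"KFPSA": 24, "KFMSA/B": 12, "KFESB": 4, "KFESA": 2}

def dssi_config_ok(adapter_counts: dict) -> bool:
    """Check a proposed per-system DSSI adapter mix against the limits."""
    for adapter, count in adapter_counts.items():
        # Unknown adapter types are rejected (limit defaults to 0).
        if count > PER_SYSTEM_LIMITS.get(adapter, 0):
            return False
    # Assumed reading of the mix rule: when KFESB and KFESA are
    # combined on one system, only one of each is allowed.
    if adapter_counts.get("KFESB", 0) and adapter_counts.get("KFESA", 0):
        return adapter_counts["KFESB"] == 1 and adapter_counts["KFESA"] == 1
    return True
```

Under this encoding, 24 KFPSAs on one system passes, while two KFESBs
combined with a KFESA fails the mix rule.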
Single-host UltraSCSI connections with either the KZPBA-CA (ultrawide
single-channel adapter) or the KZPBA-CB (ultrawide differential
adapter) are supported in Version 6.2-1H3 and higher.

Also, any AlphaStation or AlphaServer system except the AlphaServer
4000, 4100, 8200, and 8400 can use embedded NCR-810-based SCSI
adapters or optional KZPAA adapters to connect to a multihost SCSI
bus.

Additionally, DEC 3000 systems can use optional KZTSA (fast-wide
differential) adapters to connect to a multihost SCSI bus.

Note: A wide range of SCSI adapters can be used to connect to a
single-host SCSI bus. For further information about the complete
range of SCSI support, refer to the Compaq OpenVMS Operating System
for VAX and Alpha Software Product Description.

Compaq recommends optional adapters for connection to multihost
buses. Use of optional adapters simplifies SCSI cabling and also
leaves the embedded system adapter available for tape drives,
floppies, and CD-ROMs.

Multihost SCSI configurations can include DWZZA/DWZZB single-ended
SCSI to differential SCSI converters.

Multihost SCSI buses can be configured with any appropriately
compliant SCSI-2 or SCSI-3 disk. Disks must support the following
three features:

o  Multihost support

o  Tagged command queueing

o  Automatic bad block revectoring

These SCSI disk requirements are fully documented in the Guidelines
for OpenVMS Cluster Configurations manual. In general, nearly all
disk drives available today, from Compaq or third-party suppliers,
support these features. Known exceptions to the range of Compaq
drives are the RZ25 and RZ26F, which do not support tagged command
queueing.

Tape drives, floppy disks, and CD-ROMs cannot be configured on
multihost SCSI buses. Configure these devices on single-host SCSI
buses.

HSZ series storage controllers can be configured on a multihost SCSI
bus.
Refer to the appropriate HSZ storage controller documentation for
configuration information. Note that it is not possible to configure
tape drives, floppy disks, or CD-ROMs on HSZ controller storage buses
when the HSZ is connected to a multihost SCSI bus.

Multihost SCSI buses must adhere to all SCSI-2 or SCSI-3
specifications. Rules regarding cable length and termination must be
adhered to carefully. Refer to the SCSI-2 or SCSI-3 specification or
the Guidelines for OpenVMS Cluster Configurations manual for further
information.

 Fibre Channel Storage Support

Beginning with Version 7.2-1, Compaq OpenVMS Cluster Software
provides support for multihost Fibre Channel storage configurations
using Alpha systems and Fibre Channel adapters, switches, and
controllers. Direct-attached Fibre Channel storage and Arbitrated
Loop Fibre Channel configurations are not supported. For the current
configuration guidelines and limitations, refer to the Guidelines for
OpenVMS Cluster Configurations manual. This manual outlines the
specific requirements for the controller (HSG80 and HSG60), switch,
and adapter (KGPSA-**), and for the disks that can be attached to
this configuration. The number of hosts, adapters, and switches, and
the distance between these items, is constantly being increased, so
refer to the manual for up-to-date information on this evolving area.

Starting with OpenVMS Version 7.3, SCSI tape devices can be connected
to a Fibre Channel storage environment with the use of a Compaq
Modular Data Router (MDR) bridge product.
This bridge allows these tape devices to be placed behind the Fibre
Channel switch environment, and to be shared via the same
methodologies as the Fibre Channel disks in the same fabric.

Because the support for Fibre Channel is currently limited to storage
only, a second interconnect for node-to-node communications must be
present for the full clustered capability to be utilized.

 DECamds Console

Compaq recommends that the DECamds console run on a standalone
workstation with a color monitor. However, it can also run on a
workstation that is configured as an OpenVMS Cluster member, or on a
nonworkstation system using DECwindows to direct the display to an
X-based display.

 SOFTWARE REQUIREMENTS

 Compaq OpenVMS Operating System

Refer to the Compaq OpenVMS Operating System for VAX and Alpha
Software Product Description (SPD 25.01.xx) for more information.

The ability to have more than one version of OpenVMS in an OpenVMS
Cluster allows upgrades to be performed in a staged fashion so that
continuous OpenVMS Cluster system operation is maintained during the
upgrade process. Only one version of OpenVMS can exist on any system
disk; multiple versions of OpenVMS in an OpenVMS Cluster require
multiple system disks. Also, system disks are architecture specific:
OpenVMS Alpha and OpenVMS VAX cannot coexist on the same system disk.
The coexistence of multiple versions of OpenVMS in an OpenVMS Cluster
configuration is supported according to the following conditions:

o  Warranted support is provided for mixed-architecture OpenVMS
   Cluster systems in which all Alpha and VAX systems are running
   the same version of OpenVMS: Version 6.2-xxx, Version 7.0,
   Version 7.1-xxx, Version 7.2-xxx, or Version 7.3.
   Warranted support means that Compaq has fully qualified the two
   architectures coexisting in an OpenVMS Cluster and will answer
   any problems identified by customers using these configurations.

o  Migration support is provided for OpenVMS Cluster systems running
   two versions of the OpenVMS operating system. These versions can
   be:

   -  Any mix of Version 7.3, Version 7.2-1xx, Version 7.2, Version
      7.1-2, Version 7.1-1Hx, and Version 7.1.

   -  Any mix of Version 7.2, Version 7.1-xxx, and Version 6.2-xxx.

   -  Any mix of Version 7.1, Version 7.0, and Version 6.2-xxx.

   -  Any mix of Version 6.2-xxx with OpenVMS VAX Version 5.5-2,
      Version 6.0, Version 6.1 and OpenVMS Alpha Version 1.5,
      Version 6.0, Version 6.1.

   Migration support means that Compaq has qualified the two
   architectures and versions for use together in configurations
   that are migrating in a staged fashion to a higher version of
   OpenVMS or to Alpha systems. Compaq will answer problem reports
   submitted about these configurations. However, in exceptional
   cases, Compaq may recommend that you move your system to a
   warranted configuration as part of the solution.

Note: Compaq does not support the use of more than two versions of
OpenVMS software in the same OpenVMS Cluster at the same time.
However, in many cases, running more than two versions or mixing
versions not described above will operate satisfactorily.

Compaq recommends that all Alpha and VAX systems in an OpenVMS
Cluster run the latest version of OpenVMS.

 DECnet Software

DECnet software is not required in an OpenVMS Cluster configuration.
However, DECnet software is necessary for internode
process-to-process communication that uses DECnet mailboxes.

The OpenVMS Version 6.2-1H3 Monitor utility uses DECnet for
intracluster communication.

The OpenVMS Version 7.1 (and higher) Monitor utility uses TCP/IP or
DECnet based transports, as appropriate, for intracluster
communication.

Refer to the appropriate DECnet Software Product Description for
further information.

 DECamds

DECamds requires Compaq DECwindows Motif for OpenVMS. Refer to the
Compaq DECwindows Motif for OpenVMS Software Product Description
(SPD 42.19.xx) for details.

 OPTIONAL SOFTWARE

For information about OpenVMS Cluster support for optional software
products, refer to the OpenVMS Cluster Support section of the
Software Product Descriptions for those products.

Optional products that may be useful in OpenVMS Cluster systems
include:

o  Compaq Volume Shadowing for OpenVMS (SPD 27.29.xx)

o  Compaq RAID Software for OpenVMS (SPD 46.49.xx)

o  Compaq DECram for OpenVMS (SPD 34.26.xx)

o  VAXcluster Console System (SPD 27.46.xx)

 GROWTH CONSIDERATIONS

The minimum hardware and software requirements for any future
version of this product may be different from the requirements for
the current version.

 DISTRIBUTION MEDIA

OpenVMS Cluster Software is distributed on the same distribution
media as the OpenVMS Operating System. Refer to the OpenVMS Operating
System for VAX and Alpha SPD for more information.

 ORDERING INFORMATION

OpenVMS Cluster Software is orderable as follows:

Every server (nonclient) Alpha system in an OpenVMS Cluster
configuration requires:

o  VMScluster Software for OpenVMS Alpha

   -  Software Licenses: QL-MUZA*-AA

   -  Software Product Services: QT-MUZA*-**

   -  LMF PAK Name: VMSCLUSTER

Note: Compaq VMScluster Software for OpenVMS Alpha provides a unique
ordering and pricing model for single-CPU and dual-CPU capable
systems. Specifically, all AlphaServer DS-series systems, along with
AlphaServer 800 and 1200 systems, should use the QL-MUZAC-AA license
order number; for service, use the corresponding QT-MUZAC-** order
number. For all remaining AlphaServer systems in the Workgroup system
class (such as the ES40), use the standard QL-MUZAE-AA license order
number; for service, use the corresponding QT-MUZAE-** order number.
VMScluster pricing and ordering for the remaining system classes of
AlphaServers are unchanged.

Every server (nonclient) VAX system in an OpenVMS Cluster
configuration requires:

o  VAXcluster Software for OpenVMS VAX

   -  Software Licenses: QL-VBRA*-AA

   -  Software Product Services: QT-VBRA*-**

   -  LMF PAK Name: VAXCLUSTER

OpenVMS Cluster Client Software is available as part of the NAS150
product. It is also separately orderable for DS-Series AlphaServers.

o  VMScluster Client Software for OpenVMS Alpha

   -  Software Licenses: QL-3MRA*-AA

   -  Software Migration Licenses: QL-6J7A*-AA

   -  Software Product Services: QT-3MRA*-**

   -  LMF PAK Name: VMSCLUSTER-CLIENT

*  Denotes variant fields. For additional information on available
   licenses, services, and media, refer to the appropriate price
   book.

The right to the functionality of the DECamds and Availability
Manager availability management software is included in all the
licenses in the preceding list.
 DOCUMENTATION

The following manuals are included in the OpenVMS hardcopy
documentation as part of the full documentation set:

o  OpenVMS Cluster Systems

o  Guidelines for OpenVMS Cluster Configurations

o  DECamds User's Guide

o  Availability Manager User's Guide

Refer to the Compaq OpenVMS Operating System for VAX and Alpha
Software Product Description for additional information about OpenVMS
documentation and how to order it.

Specific terms and conditions regarding documentation on media apply
to this product. Refer to Compaq's terms and conditions of sale, as
follows:

"A software license provides the right to read and print software
documentation files provided with the software distribution kit for
use by the licensee as reasonably required for licensed use of the
software. Any hard copies or copies of files generated by the
licensee must include Compaq's copyright notice. Customization or
modifications, of any kind, to the software documentation files are
not permitted.

Copies of the software documentation files, either hardcopy or
machine readable, may only be transferred to another party in
conjunction with an approved relicense by Compaq of the software to
which they relate."

 SOFTWARE LICENSING

This software is furnished under the licensing provisions of Compaq
Computer Corporation's Standard Terms and Conditions. For more
information about Compaq's licensing terms and policies, contact
your local Compaq office.

 License Management Facility Support

The OpenVMS Cluster Software product supports the OpenVMS License
Management Facility (LMF).

License units for this product are allocated on an Unlimited System
Use basis.

For more information about the License Management Facility, refer to
the OpenVMS Operating System for VAX and Alpha Software Product
Description (SPD 25.01.xx) or documentation set.

 SOFTWARE PRODUCT SERVICES

A variety of service options are available from Compaq. For more
information, contact your local Compaq office.

 SOFTWARE WARRANTY

This software is provided by Compaq with a 90-day conformance
warranty in accordance with the Compaq warranty terms applicable to
the license purchase.

The above information is valid at time of release. Contact your
local Compaq office for the most up-to-date information.

© 2001 Compaq Computer Corporation

AlphaServer, AlphaStation, Compaq, Digital, HSC, HSJ, HSZ, MicroVAX,
StorageWorks, VAX, VMS, and the Compaq logo Registered in U.S.
Patent and Trademark Office.

DECnet, OpenVMS, and UNIBUS are trademarks of Compaq Information
Technologies Group, L.P. in the United States and other countries.

Motif is a trademark of The Open Group in the United States and
other countries.

Confidential computer software. Valid license from Compaq required
for possession, use, or copying. Consistent with FAR 12.211 and
12.212, Commercial Computer Software, Computer Software
Documentation, and Technical Data for Commercial Items are licensed
to the U.S. Government under vendor's standard commercial license.

Compaq shall not be liable for technical or editorial errors or
omissions contained herein. The information in this document is
provided "as is" without warranty of any kind and is subject to
change without notice. The warranties for Compaq products are set
forth in the express limited warranty statements accompanying such
products. Nothing herein should be construed as constituting an
additional warranty.