VSI Job Management for OpenVMS Installation Guide
- Software Version: VSI Job Management for OpenVMS V3.1
- Operating System and Version: VSI OpenVMS x86-64 Version 9.2-3
Preface
1. About VSI
VMS Software, Inc. (VSI) is an independent software company licensed by Hewlett Packard Enterprise to develop and support the OpenVMS operating system.
2. Intended Audience
This guide describes how to install VSI Job Management for OpenVMS on x86-64 servers. VSI Job Management for OpenVMS lets you automate and manage repetitive computer processing jobs, such as fiscal reporting procedures, system maintenance tasks, and user applications.
VSI Job Management for OpenVMS includes two components: the Job Management Manager and the Job Management Agent. These component names are used throughout this guide.
3. VSI Encourages Your Comments
You may send comments or suggestions regarding this manual or any VSI document to <docinfo@vmssoftware.com>. Users who have VSI OpenVMS support contracts through VSI can contact <support@vmssoftware.com> for help with this product.
4. OpenVMS Documentation
The full VSI OpenVMS documentation set can be found on the VMS Software Documentation webpage at https://docs.vmssoftware.com.
Chapter 1. Preparing to Install Job Management Manager
1.1. Hardware and Software Requirements
VSI Job Management for OpenVMS V3.1 has the following requirements:
VSI OpenVMS x86-64 V9.2-3
VSI TCP/IP Services for OpenVMS x86-64 V6.0
If you intend to use the DECwindows interface, you must install VSI DECwindows Motif for x86-64 V1.8.
If you intend to use the remote functions to access other Job Management Managers running on standalone nodes or different OpenVMS Clusters, you must install DECnet Phase IV V9.2-2 or DECnet Plus V9.2-G.
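Before proceeding, you may want to confirm that the prerequisites are present. A minimal check, assuming TCP/IP Services is the installed stack (PRODUCT SHOW PRODUCT lists all PCSI-installed products, including DECwindows Motif):
$ TCPIP SHOW VERSION
$ PRODUCT SHOW PRODUCT *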
1.2. Wide Area Network Support
To support a wide area network (WAN), you need a TCP/IP Stack for OpenVMS as described in Section 1.1, “Hardware and Software Requirements”. The sections below describe installation procedures for TCP/IP Services.
1.2.1. Default Job Management Manager Node Must Have a Supported TCP/IP Stack
If you are installing remote Agent software, TCP/IP must be installed and running on the manager's default node before you start the manager. Otherwise, remote operations will not work, because the default node handles all remote node operations.
To check which node is the default node, execute this command on a node that is a member of the OpenVMS Cluster:
$ SCHEDULE SHOW STATUS
The default node name will be indicated by "<---- DEFAULT" in the output. To force the default manager node to a particular node, execute the command:
$ SCHEDULE SET DEFAULT node_name
where node_name is the new default node.
1.2.2. TCP/IP Services Installation and Configuration
The manager uses TCP/IP Services for OpenVMS for communication between the manager and the remote agents.
1.2.3. Configuration Items Required Regardless of IP Stack Installed
Perform the following procedures to ensure proper communication between the manager and the remote agent:
When specifying a job on a remote node, use the syntax user@node (rather than node::user).
Make sure TCP/IP is running on the Job Management Manager node. Use the command SCHEDULE SHOW STATUS to see which node is the default.
Make sure SCHED$LISTENER is running on the default manager node.
Make sure you can ping each node from the other, in both directions (see the example after this list).
If the manager is running in a cluster, make sure all nodes in the cluster have proxy access to the account on the remote Agent. In cases where Job Management Manager failover occurs, another manager node in the cluster may take over submission of jobs to the remote agent. Defining the proxies ensures that the other manager nodes will be able to run jobs on the remote agent.
You must define those proxies in AUTHORIZE (not TCP/IP). To add proxies for manager nodes, use the following commands on the remote agent node:
$ RUN AUTHORIZE
UAF> ADD/PROXY servernode::serveruser /DEFAULT
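For example, using TCP/IP Services, the bidirectional ping check mentioned above might look like this (the node names agentnode and managernode are placeholders):
$ ! On the manager node:
$ TCPIP PING agentnode
$ ! On the remote agent node:
$ TCPIP PING managernode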
1.2.4. Enabling Wide Area Network Support
To enable WAN support, you should perform these three procedures:
Create a network object
Add an account for the network object
Add a proxy for each cluster account or local account that will be using the manager
The Job Management Manager startup procedure creates the network object for you. For the manager to function correctly with WAN support, both the network object and the corresponding account must be added, as described below. If they are not, the environment is at risk and may not function correctly.
Note
DECnet is required only for proxy authentication between manager and agent pairs.
The passwords for the SCHED_DECNET object and the SCHED$DECNET UAF account must match to properly facilitate the communication between manager instances across the WAN. To properly synchronize the passwords between the object and the account, run these commands:
$ MCR NCP
NCP> DEFINE OBJECT SCHED_DECNET PASSWORD yourpassword
NCP> SET OBJECT SCHED_DECNET PASSWORD yourpassword
NCP> EXIT
$ SET DEF SYS$SYSTEM
$ RUN AUTHORIZE
UAF> MOD SCHED$DECNET/PASSWORD=yourpassword
UAF> EXIT
To add the account for the network object, first determine a UIC that is in the same group as the default DECnet account, or is in a group by itself. Then, run AUTHORIZE as follows:
$ RUN AUTHORIZE
UAF> ADD SCHED$DECNET/FLAGS=DISUSER/UIC=uic-spec
UAF> EXIT
Add a proxy for each account that will be using the manager:
$ RUN AUTHORIZE
UAF> ADD/PROXY local_node::local_user local_user/DEFAULT
UAF> EXIT
Note that on cluster member nodes, this proxy needs to be added for each cluster account only, not for each cluster node account. This is needed to allow DECnet to operate in cases such as use of the following command:
$ SCHEDULE MODIFY remote-node::job1/sync=(local-node::job2)
Refer to https://docs.vmssoftware.com/vsi-console-management-administration-guide/ for standard proxy needs and setup. The proxies specified here are in addition to those specified in the Administrator Guide; both sets of instructions must be followed. Also, for information on how to set up proxies to allow job synchronization on remote nodes, see Section 3.11, “Setting Up a Proxy in AUTHORIZE database”.
1.3. Installing in an OpenVMS Cluster
1.3.1. Installing in a New Cluster
If you are installing VSI Job Management for OpenVMS V3.1 on a server in a cluster, do the following for the product to work correctly:
Install this release on all nodes in the cluster.
After installing on the last node, start Job Management on all nodes.
1.3.2. Considerations That are Unique to OpenVMS x86-64
When installing Job Management Manager in any cluster, you must use the same job database on all nodes, regardless of system architecture. The job database and other common files are typically located on a disk that one or more nodes can access via a direct hardware path. Often, node-specific and architecture-specific files are also located on the same disk as the common files. Furthermore, in some environments, these files are located on an architecture-specific common system disk.
At the time of publication, OpenVMS x86-64 has the following unique restrictions that impact access to common, node-specific, and architecture-specific files:
Cluster-common system disks are not supported.
There is no support for pass-through fiber channel on any of the supported hypervisors.
Given these restrictions, when installing in a mixed-architecture cluster, the disk hosting the job database must be MSCP-served by one or more Alpha or IA-64 architecture nodes. Additionally, at least one of the serving nodes must be running when Job Management starts on any OpenVMS x86-64 node. This also implies that, when rebooting the cluster, at least one of the serving nodes must be booted before any OpenVMS x86-64 nodes.
Moreover, when installing in a cluster consisting of only OpenVMS x86-64 nodes, one node must be selected to host the common files on a virtual disk and MSCP-serve that virtual disk to the other cluster members. This implies that this node must be booted before any other nodes in the cluster.
All of these restrictions and implications must be considered when responding to the installation prompts for both common and node-specific files, as described in Section 1.3.3, “Choosing Installation Directories”.
1.3.2.1. Installing in an Existing Mixed-Architecture Cluster
When installing in a mixed-architecture cluster, perform the following steps to gather the necessary information to respond to the installation prompts described in Section 1.3.3, “Choosing Installation Directories”:
The common database location is referenced by the logical name NSCHED$DATA. Execute the following command on one of the existing cluster nodes where Job Management Manager is running, and make a note of the equivalence name:
$ SHOW LOGICAL NSCHED$DATA
In the example below, NSCHED$DATA equates to $1$DGA0:[NSCHED.DATA]:
$ SHOW LOGICAL NSCHED$DATA
   "NSCHED$DATA" = "$1$DGA0:[NSCHED.DATA]" (LNM$SYSTEM_TABLE)
Continuing with the example, when the installation prompts for the location of common files as shown below:
* Enter the full pathname for VSI Job Management - (cluster) common files [SYS$COMMON:[NSCHED.]]:
The correct response is:
$1$DGA0:[NSCHED]
The installation will prompt for the location of node-specific files, as shown below:
* Enter the full pathname for VSI Job Management - node specific files [SYS$COMMON:[NSCHED]]:
VSI recommends placing these files on the same disk as the common database. Additionally, it is a best practice to include the architecture name in the directory path. Continuing with the previous example, one possible answer to the prompt is:
$1$DGA0:[NSCHED.X86_64]
The directory path does not need to exist; the installation procedure will create it on your behalf.
1.3.2.2. Installing in an x86-64 Cluster
When installing in a cluster consisting of only OpenVMS x86-64 nodes, follow these steps to gather the necessary information to respond to the installation prompts, as described in Section 1.3.3, “Choosing Installation Directories”:
Determine which node will host the common files.
Determine the name of the disk that will host the common files. The disk name will start with either the node name of the OpenVMS x86-64 host, or an allocation class if you have set a value for the ALLOCLASS SYSGEN parameter.
In the following example, there is an OpenVMS x86-64 node named VMS100 that has the ALLOCLASS SYSGEN parameter set to a value of 10. There are two virtual disks configured: $10$DKA0: and $10$DKA100:. The disk named $10$DKA100: will host all Job Management Manager files, including the node-specific files.
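One way to list the candidate disks in this example is with the SHOW DEVICE command (output abbreviated; the volume labels shown are placeholders):
$ SHOW DEVICE $10$DKA
Device                  Device           Error    Volume
 Name                   Status           Count     Label
$10$DKA0:               Mounted              0   X86SYS
$10$DKA100:             Mounted              0   JMDATA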
When the installation prompts for the location of common files, as shown below:
* Enter the full pathname for VSI Job Management - (cluster) common files [SYS$COMMON:[NSCHED.]]:
The correct response is:
$10$DKA100:[NSCHED]
When the installation prompts for the location of node-specific files, as shown below:
* Enter the full pathname for VSI Job Management - node specific files [SYS$COMMON:[NSCHED]]:
One possible response is:
$10$DKA100:[NSCHED]
You may specify additional subdirectories in the path, such as including the architecture name X86_64, if desired.
1.3.3. Choosing Installation Directories
VSI supports Job Management Manager in a homogeneous OpenVMS cluster or a mixed cluster. A mixed OpenVMS cluster is one that has mixed architectures, mixed versions, or both. A mixed-architecture cluster has a mixture of x86-64, Integrity, and Alpha systems. A mixed-version cluster has different nodes at different operating system versions. A cluster may also be mixed with Alpha, Integrity, and x86-64 nodes that are each running different versions of OpenVMS. For the manager to function correctly, each machine must have its own license. Nodes with the same architecture and the same system version should share executables. All other files must be shared on a cluster common disk. You can accomplish this setup as follows:
Select a common directory on a disk that:
Is served cluster-wide
Has at least 40,000 free blocks
It is preferable to use a cluster member fault-tolerant disk – that is, a disk hosted on a multi-path storage controller and physically connected to more than one cluster member.
Note
OpenVMS x86-64 systems currently cannot access fiber-channel disks directly. Instead, these disks must be MSCP-served by an Integrity or Alpha node. You must ensure that the disk is served to and mounted on the x86-64 node(s) where you are installing Job Management Manager before starting or continuing with the installation.
When the installation procedure prompts for a pathname to the cluster common files area, provide the pathname to the selected storage area. For example, if the logical name of the designated shared disk is SHARED$DISK, the proposed pathname could be SHARED$DISK:[NSCHED].
Note
The default answer to the prompt for the common disk, SYS$COMMON:[NSCHED], is only valid on homogeneous clusters and on standalone nodes.
Only these clusters may have a system disk that is shared among all nodes. In a mixed-architecture cluster, a minimum of one system disk per architecture is required. For x86-64 nodes, shared system disks are not currently supported, therefore, each will have its own system disk. Do not choose the default answer to the prompt on these clusters.
The directory will be created by the installation procedure, if it does not yet exist. You do not need to create it manually prior to installation.
Select a platform-specific directory that:
Is common to all nodes with the same architecture (x86_64) and system version
Has at least 40,000 free blocks.
When the installation procedure prompts for a pathname to the node-specific files area, specify the selected directory. As long as there is sufficient space available on the SYS$COMMON device, selecting the default answer to the prompt, SYS$COMMON:[NSCHED], is a safe choice.
In a completely homogeneous cluster, where all machines are of the same architecture and run an identical version of OpenVMS and patches, the node-specific area may be shared in order to save space. If the nodes share a system disk, this sharing happens automatically. In heterogeneous clusters, nodes of different types do not necessarily share system disks, so SYS$COMMON resolves to a different location on each node and remains safe to select. If you relocate files, you must select distinct directories for each node type.
Note
The installation procedure stops any running Job Management processes it encounters. When installing on a cluster, you save time if you do not automatically start the software after installation on each node. After you finish installing on the last node, run the startup procedure and the IVP on each node.
Install on all cluster members.
Repeat the installation on each of the cluster nodes where the manager will be needed. Provide a directory name for the system-specific directory as described in step 2 above. The default value for the system-specific directory works in all types of clusters.
Note
Installing the manager on one node in a cluster does not allow you to run the manager on other nodes by simply running the startup procedure. You must run the kit installation procedure on every member, even in a homogeneous cluster environment.
Start the manager on all cluster members on which it was installed. By default, the load-balancing mechanism will be engaged in a cluster environment. Verify the result of the load balancing startup by using one of the following commands:
$ SCHEDULE SHOW STATUS
or
$ SCHEDULE SHOW LOAD_BALANCE
Note
Basic load balancing may not provide desired results in a mixed-architecture cluster. VSI recommends observing the results on a few batches of jobs, then either using specific node restrictions for some or all jobs, or designing load-balance groups that restrain the execution of given jobs to selected cluster member subsets.
Important
Be careful when making assumptions about which node is the default node, or which architecture the default node uses. Because batch queues may not be distributed across architectures or nodes, care should also be taken when using batch mode jobs.
1.4. Installing on a Standalone System
You can install the manager on a standalone node. The installation procedure prompts you for two separate directories. These prompts are intended for OpenVMS cluster installations; however, for a standalone node, you can simply select the default answer to both prompts. VSI recommends that you use the default answers to place the files on the SYS$COMMON device. This location lets you easily transition this node into a cluster in the future. You can accomplish this setup as follows:
Select a common directory on a disk that has at least 5,000 free blocks.
The default answer to the prompt for the common disk, SYS$COMMON:[NSCHED], is valid on standalone nodes.
If the directory does not exist, the installation procedure creates it. You do not need to create it prior to installation.
Select a node-specific directory that has at least 40,000 free blocks.
When the installation procedure prompts you for a pathname to the node-specific files area, specify the selected directory. If there is sufficient space available on the device SYS$COMMON, selecting the default answer to the prompt, SYS$COMMON:[NSCHED] is a safe choice.
1.5. Considerations for Systems Running DECnet
Before installing or starting the manager on systems running DECnet, grant the NET$MANAGE rights identifier to the user account. To do so, run the AUTHORIZE utility as follows:
$ RUN AUTHORIZE
UAF> GRANT /IDENTIFIER NET$MANAGE user_account
For example, if the manager is being installed or started by user SMITH, type:
$ RUN AUTHORIZE
UAF> GRANT /IDENTIFIER NET$MANAGE SMITH
1.6. Installation Procedure Requirements
The following sections list Job Management Manager installation procedure requirements.
1.6.1. Installation Time
The installation should take from 10 to 20 minutes, depending on the type of media you use and your system configuration.
1.6.2. Privilege
To install the manager, you must be logged in to an account that has either the SETPRV privilege or at least the following default privileges:
CMKRNL
WORLD
SYSPRV
SYSNAM
CMEXEC
SYSLCK
DETACH
TMPMBX
NETMBX
Note
The installation procedure turns off the BYPASS privilege when the installation starts.
1.6.3. Disk Space
Installing the manager requires at least 58,000 blocks of free storage space. This includes storage for the zip file package, storage for the kit save-sets contained in the zip file package, and temporary storage needed during the installation. After the manager is installed, about 40,000 blocks are used. For more information, see the "Installing in an OpenVMS Cluster" section in this guide.
To determine the number of free disk blocks on the current system disk, enter the following command at the DCL prompt:
$ SHOW DEVICE SYS$SYSDEVICE
1.6.4. System Quotas
The system you install the manager on must have sufficient quotas to enable you to perform the installation and start the software.
The recommended minimum values for the relevant system parameters are listed below:

| SYSGEN parameter | Minimum Value |
|---|---|
| FREE_GBLSECTS | 20 |
| GBLPAGES | 900 |
| CHANNELCNT | 256 |
| RESHASHTBL | 4096 |
| LOCKIDTBL | 3840 |
If you are using extended job field lengths, the DCL "pipe" command may need larger quotas. The recommended minimum values for the relevant system parameters are listed below:

| SYSGEN parameter | Minimum Value |
|---|---|
| DEFMBXBUFQUO | 10240 |
| DEFMBXMXMSG | 2048 |
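You can inspect the current values of most of these parameters with SYSGEN before installing. A read-only check (persistent changes are normally made through MODPARAMS.DAT and AUTOGEN, as shown in Section 3.6):
$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE ACTIVE
SYSGEN> SHOW GBLPAGES
SYSGEN> SHOW CHANNELCNT
SYSGEN> SHOW LOCKIDTBL
SYSGEN> EXIT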
1.6.5. VMSINSTAL Requirements
The Job Management installation procedure uses VMSINSTAL to complete the installation. When invoked, VMSINSTAL checks whether:
You are logged into a privileged account
You have adequate quotas for installation
Any users are logged into the system
If it detects any problems during the installation, the installation procedure notifies you of the problem and asks if you want to continue the installation. In some instances, you can enter YES to continue. To stop the installation process and correct the problem, type NO or press Enter. After the problem is corrected, restart the installation.
Chapter 2. Installing Job Management Manager
This section provides step-by-step instructions for installing Job Management Manager (the manager).
Important
If you are installing or upgrading the Job Management Manager in a cluster, review Section 1.3, “Installing in an OpenVMS Cluster” before proceeding with the installation or upgrade.
2.1. Running the Installation
A successful installation requires you to be logged into the SYSTEM account for security access to the Job Management Manager configuration database.
This section describes the following topics:
Installing Job Management Manager
Running the Installation Verification Procedure (IVP)
Error recovery
2.1.1. Installing Job Management Manager
Before installing the manager on your system, you should stop any previous version currently running on that system. If there are active processes, the installation procedure displays them and prints the following message:
%VMSINSTAL-W-ACTIVE, The following processes are still active:
Then you will be asked the following question:
* Do you want to continue anyway [NO]?
A NO answer halts the installation procedure and exits. A YES answer allows you to continue the installation procedure.
2.2. How to Install the Job Management Manager
The installation consists of a number of steps designed to check your system, install the Job Management Manager, and then initialize it.
For a sample installation, see Chapter 7, Examples of New Installations in this guide.
Decompress the zip file
Extract the installation kit from the installation package zip file. You must have OpenVMS UNZIP present on the system and you must have the UNZIP foreign command defined:
$ UNZIP X86VMS-JOBMGT-V0301
Make note of the directory path where the installation kit was extracted to; you will need it for the next step.
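If the UNZIP foreign command is not yet defined on your system, you can define it before running the command above. A minimal sketch, assuming UNZIP.EXE resides in SYS$SYSTEM (adjust the path to wherever your copy is installed):
$ ! Define UNZIP as a foreign command; the image location is an assumption
$ UNZIP :== $SYS$SYSTEM:UNZIP.EXE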
Run the Installation Procedure
$ @SYS$UPDATE:VMSINSTAL VJM031 <directory-path-for-the-installation-kit>
Check Your System Backup
You should always back up your system disk before installing any new software. If you need to restore your former system settings, you want the most current information saved. To ensure you have a good backup, the installation asks if you are satisfied with your backup. Select one of the following responses:
- YES
If you are satisfied with the backup of your system disk, press Return to accept the default YES.
- NO
Type NO and press Return to stop the installation. Back up your system disk, and then restart the installation.
Check for a Product Authorization Key
The installation procedure prompts for the presence of a valid Product Authorization Key (PAK). Select one of the following responses:
- YES
The installation verifies if a PAK has been registered and loaded. If a PAK is found the installation will continue. If a PAK is not found the installation will continue, however, the Installation Verification Procedure will not be run.
- NO
The installation continues; however, the Installation Verification procedure will not be run.
Run the Installation Verification Procedure (optional)
After the installation, the Installation Verification Procedure (IVP) checks to ensure that the installation was successful. It starts the application and performs function tests. VSI recommends that you run the IVP, so the installation gives you the opportunity by asking if you want to run it after completing the installation. Select one of the following responses:
- YES
The IVP runs after the installation completes and automatically starts the product.
- NO
The IVP does not run after the installation completes; the installation instead proceeds to the startup prompt.
Note
If you choose not to run the IVP during the installation, you can run it at any time after the installation completes by entering the following command:
$ @SYS$TEST:JOB$MANAGER$IVP.COM
Start the Application After Installation
During the installation process you can choose to start or not start Job Manager after the installation completes.
If you already chose to run the Installation Verification Procedure (IVP), no prompt occurs for the startup option, because the IVP requires the Job Management Manager to be running.
If this is the first time you are installing the application on your system, you are asked if you want the software to start right after the installation. Select one of the following responses:
- YES
The application is started after the installation.
- NO
The application is not started after the installation.
Purge Previous Version Files
You can purge files from previous versions of the product that are superseded by this installation. VSI recommends that you purge your old files; however, if you need to keep files from a previous version you can choose to not purge your files. The installation asks if you want to purge files replaced by this installation. Select one of the following responses:
- YES
The files related to earlier versions of the product are purged after the installation completes.
- NO
The files related to earlier versions of the product are left on the system after the installation completes.
Choose Whether to Install the DECwindows Interface (Optional)
If you have Motif installed, the installation asks if you want to install Job Management Manager DECwindows interface.
* Do you want the DECwindows/MOTIF components [YES]?
The DECwindows interface provides a graphical, user-friendly interface to the manager. To run the DECwindows interface, you must have a workstation or an X-based display station. The DECwindows interface requires approximately 8,000 blocks of disk space.
Choose Support for Remote Command Execution
The manager can be installed with support for executing jobs on a remote OpenVMS node. If you want to use the manager to control jobs on one or more of these remote nodes, answer YES to this question. This feature of the manager requires that a supported TCP/IP stack be installed on your system. If it is, the installation asks the following question:
* Do you want support for remote Agent command execution [YES]?
If you answer YES to this question, and a supported TCP/IP stack is not installed on ALL the nodes of your cluster, be sure that the Job Management Manager default node is one on which a supported TCP/IP stack is running. If this is not the case, remote operations will not work, because the default node handles all remote node operations.
To check which node is the default node, type the command:
$ SCHEDULE SHOW STATUS
The default node name will be indicated by "<-- DEFAULT" in the output. To force the default Job Management Manager node to a particular node, type the command:
$ SCHEDULE SET DEFAULT node_name
where node_name is the new default node.
Choose a Product Common Directory Location
Job Management Manager common files are installed to two directories on a cluster common disk: NSCHED$COM and NSCHED$DATA. The installation procedure asks for the device where the product common directory will be located. The default device is SYS$COMMON. However, if you are installing the manager in a mixed architecture OpenVMS cluster, specify a disk that is shared among all nodes in the cluster (for more details, see Section 1.3, “Installing in an OpenVMS Cluster” in this guide).
Note
On a standalone system, the prompt still refers to a cluster common directory. You may choose to install the common and specific files to the same directory. However, VSI recommends choosing a directory that is part of SYS$COMMON, which makes it easier to add this system to a cluster in the future.
Choose a Node-Specific Directory Location
The manager’s system specific files are installed to NSCHED$EXE on a system specific disk. The installation procedure asks for the device where the node-specific directory will be located. The default answer is SYS$COMMON. The directory must not be in use by a manager instance on another node of the cluster. In most cases, this is a good choice (for more details, see Section 1.3, “Installing in an OpenVMS Cluster” in this guide).
* Enter the full pathname for VSI Job Management - node specific files [SYS$COMMON:[NSCHED]]:
Selected pathname: SYS$COMMON:[NSCHED]
* Is that correct [Y]?
Notes
On a standalone system, the prompt still refers to a cluster common directory. You may choose to install the common and specific files to the same directory. However, VSI recommends choosing a directory that is part of SYS$COMMON, which makes it easier to add this system to a cluster in the future.
Directories [DATA] and [COM] will be created under the path you specify. All data and common files are moved to these directories. Logical names NSCHED$DATA and NSCHED$COM will be associated accordingly.
Directory [EXE] will be created under the path you specify. All the executable files will be moved to this [EXE] directory. Logical name NSCHED$EXE will be associated with this directory.
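After the installation completes, a quick way to confirm that these logical names point where you expect is:
$ SHOW LOGICAL NSCHED$*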
2.3. Backup of Files from a Previous Installation
If the installation is not a first-time installation on the node, the installation determines whether it should back up the existing files. There are two types of files that are backed up:
Job Management database and executable files
Startup files which define customized configuration
2.3.1. Backup of Job Management database and executable files
The installation might create two savesets:
NSCHED$:OLD_CAJM_DATABASE.BCK (with NSCHED$:*.DAT;* files)
NSCHED$:OLD_CAJM_EXECUTABLES.BCK (with NSCHED$:*.EXE;*, *.OLB;*, *.UID;*, *.COM;*, *.HLB;*, *.CLD;* files)
The installation also deletes all files from their original location.
2.3.2. Preserving customized startup files
The installation procedure moves the definitions of all customizable logical names from the main manager startup procedure to new user-customizable files in the directory NSCHED$COM. The user-customizable files include all customizable logical names, even those which were not previously included in the startup file.
System specific logical names are placed in the file NSCHED$COM:JOB$MANAGER$STARTUP_nodename.COM where nodename is the name of the local system. Cluster-wide logical names are placed in the file NSCHED$COM:JOB$MANAGER$STARTUP_SYSCLUSTER.COM. If a logical name is defined in both the system table and in the cluster table, only the value in the system table is preserved.
The startup procedure SYS$STARTUP:JOB$MANAGER$STARTUP.COM will run these two files if they are present.
Job Management Manager logical names which are defined in LNM$SYSTEM_TABLE at the time of product installation are provided in the system specific custom startup file; logical names found in LNM$SYSCLUSTER_TABLE are provided in the cluster common custom startup file. Any logical name present in the template file but not defined in any of the above mentioned LNM tables is copied as is (commented out) to the system specific custom file, along with related comments. The operator can review and amend these definitions by editing the custom file.
Upon upgrade, the installation procedure detects and saves any existing application startup file that might have been customized. The following table summarizes the files that are saved, where from, and where to. The last column indicates what happens to the original file, at the end of the installation.
| File name | Original directory | Saved to directory | Original file is... |
|---|---|---|---|
| SCHEDULER$STARTUP.COM | SYS$STARTUP: | NSCHED$DATA: | Removed |
| JOB$MANAGER$STARTUP.COM | SYS$STARTUP: | NSCHED$DATA: | Superseded |
| SCHEDULER$STARTUP_SPECIFIC.COM | NSCHED$: | NSCHED$COM: | Removed |
| JOB$MANAGER$STARTUP_SPECIFIC.COM | NSCHED$: | NSCHED$COM: | Removed |
| JOB$MANAGER$STARTUP_nodename.COM | NSCHED$COM: | NSCHED$COM: | Superseded |
| JOB$MANAGER$STARTUP_SYSCLUSTER.COM | NSCHED$COM: | NSCHED$COM: | Superseded |
Notes:
When the installation procedure backs up a file, the name is changed; the characters _yyyymmdd are appended to each file name. This represents the last modification date of the file being backed up.
The files that are valid for the current release are backed up to the NSCHED$COM: directory. Any legacy files that are not to be reused for the current release are backed up to NSCHED$DATA:.
The startup template file JOB$MANAGER$STARTUP_LOCAL.TEMPLATE is placed in NSCHED$COM: for future reference.
2.4. Error Recovery
If errors occur during the installation, the procedure displays failure messages. If the installation fails, the following message is generated:
%VMSINSTAL-E-INSFAIL, The installation of VJM V3.1 has failed.
Errors can occur during the installation if any of the following conditions exist:
Incorrect operating system version
Incorrect version of some prerequisite software
The account used for the installation does not have the required quotas for a successful installation
System parameter values are insufficient for a successful installation
OpenVMS help library is currently in use
Insufficient disk space
For descriptions of the error messages that these conditions generate, see the OpenVMS documentation on system messages, recovery procedures, and OpenVMS software installation. If you are notified that any of these conditions exist, take the appropriate action as described in the message. You may need to change a system parameter or increase an authorized quota value.
For information on installation requirements, see Chapter 1, Preparing to Install Job Management Manager.
Chapter 3. After Installing Job Management Manager
After you install Job Management Manager (the manager) on OpenVMS, perform the following procedures:
Start the manager on the nodes on which you installed it (if it was not started as part of the installation procedure).
Edit your system startup and shutdown files.
Reboot your system to verify your edits (optional).
Ensure that your system account has the required minimum quotas.
Ensure that the accounts managing the manager have the required privileges.
Customize your manager installation by modifying the logical name definitions in the custom startup files. For more information, see Section 3.9.1.2, “Custom Startup Files”.
Enter proxies for each user account.
3.1. Starting Job Management Manager
If you chose the option to start or restart the manager, the installation procedure has already started the manager.
If you did not, you can use one of the following methods to start or restart the manager after the installation procedure is complete:
Enter the following command to run the command procedure SYS$STARTUP:JOB$MANAGER$STARTUP.COM from the SYSTEM account:
$ @SYS$STARTUP:JOB$MANAGER$STARTUP.COM
If you are not currently logged into the SYSTEM account, enter this command:
$ SUBMIT/USER=SYSTEM SYS$STARTUP:JOB$MANAGER$STARTUP.COM
3.2. Editing the System Startup and Shutdown Files
You must edit your operating system startup and shutdown files, as described below.
3.2.1. System startup command file
Edit your system startup command file to cause an automatic startup of the manager when your system is rebooted.
Add the following command line to the system startup file, SYS$MANAGER:SYSTARTUP_VMS.COM:
$ @SYS$STARTUP:JOB$MANAGER$STARTUP.COM
Place this new command line after the line that invokes the network startup command procedure. For example:
$ @SYS$MANAGER:STARTNET.COM
$ @SYS$STARTUP:JOB$MANAGER$STARTUP.COM
You may want to edit the local startup files to change default values for Max_jobs, to enable load balancing, or to perform any other customization. For more information, see Section 3.9.1.2, “Custom Startup Files”. Users who are currently logged in must log out and then back in again to gain access to the Job Management Manager DCL command interface.
In addition, if you have a supported TCP/IP stack installed and you asked for remote Agent command execution support during the installation, you must place the Job Management Manager startup command after the line that invokes the TCP/IP Startup command procedure. For example, when using TCPIP Services:
$ @SYS$STARTUP:TCPIP$STARTUP.COM
   .
   .
   .
$ @SYS$STARTUP:JOB$MANAGER$STARTUP.COM
3.2.2. System shutdown command file
Edit your system shutdown file so that the manager will shut down properly when your system performs an orderly shutdown.
Add the following command line to the system shutdown file, SYS$MANAGER:SYSHUTDWN.COM:
$ @SYS$STARTUP:JOB$MANAGER$SHUTDOWN.COM
3.3. Rebooting the System
You can reboot your system after you have installed the manager and edited your system startup command file. A system reboot verifies that the manager is ready for use and ensures that the edits to the system startup command file are correct.
Rebooting the system is an optional step and not required for using the manager.
3.4. Starting Job Management Manager on an OpenVMS Cluster
If you have installed the manager on an OpenVMS Cluster, you must start the manager on all the nodes that will run Job Management Manager jobs.
You can start the manager on a node in three ways:
- Automatically
Edit the node’s startup command file to start the manager automatically. For instructions, see the "Editing the System Startup Command File" section in this guide.
- Interactively
Log into the account from which the manager will run (normally the SYSTEM account), and enter the following command:
$ @SYS$STARTUP:JOB$MANAGER$STARTUP.COM
- From batch
Enter the following command:
$ SUBMIT/USER=SYSTEM SYS$STARTUP:JOB$MANAGER$STARTUP.COM
On nodes that will not run jobs, you can install just the DCL interface. To install the interface on a node without starting the manager, enter the following commands. You can enter the commands interactively or place them in the system startup file:
$ @SYS$STARTUP:JOB$MANAGER$STARTUP.COM LOGICAL_NAMES
$ DEFINE/SYSTEM/EXEC NSCHED$ NSCHED$DATA,NSCHED$COM,NSCHED$EXE
$ @NSCHED$:INSTALL_INTERFACE.COM
If necessary, substitute the appropriate device for SYS$COMMON.
3.5. Shutting Down Job Management Manager
If you chose the option to shut down the manager, the installation procedure has already shut down the manager.
If you did not, you can use one of the following methods to shut down the manager after the installation procedure is complete:
Enter the following command to run the command procedure SYS$STARTUP:JOB$MANAGER$SHUTDOWN.COM from the SYSTEM account:
$ @SYS$STARTUP:JOB$MANAGER$SHUTDOWN.COM
If you are not currently logged into the SYSTEM account, enter this command:
$ SUBMIT/USER=SYSTEM SYS$STARTUP:JOB$MANAGER$SHUTDOWN.COM
3.6. Checking SYSGEN Parameters and Minimum Quotas
The following SYSGEN parameters and system quotas are checked during the installation and are strongly recommended for a system in a production clustered environment. Warning messages are generated during installation if these parameters and quotas are lower than recommended, but it is your responsibility to set them before starting the manager.
These settings will ensure maximum effectiveness of the manager. You should ensure that SYSGEN parameters meet or exceed the following minimum values:
FREE_GBLSECTS=20
GBLPAGES=900
CHANNELCNT=256
RESHASHTBL=4096
LOCKIDTBL=3840
However, your own configuration may differ and may not require settings as high as these recommended parameters. In particular, RESHASHTBL requirements may vary broadly for a given configuration. If lower settings are used, monitor them closely.
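To raise any of these parameters persistently, the usual OpenVMS practice is to add minimum values to SYS$SYSTEM:MODPARAMS.DAT and then run AUTOGEN. A minimal sketch, using values from the table in Section 1.6.4:
$ ! Add lines such as these to SYS$SYSTEM:MODPARAMS.DAT:
$ !     MIN_GBLPAGES = 900
$ !     MIN_CHANNELCNT = 256
$ !     MIN_RESHASHTBL = 4096
$ !     MIN_LOCKIDTBL = 3840
$ ! Then apply them (some parameters may require a reboot to take effect):
$ @SYS$UPDATE:AUTOGEN GETDATA SETPARAMS NOFEEDBACK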
Note
The SYSGEN parameter values listed in this section are the recommended values for a clustered environment. The optimum values for your particular system may vary from the recommended values, particularly if the system on which you have installed the manager is not part of a cluster.
3.7. Global Sections and Pages
Job Management Manager for OpenVMS does not install any shared images. Therefore, the images do not take up global sections or pages. However, the installation procedure modifies the DCL tables, and DCL tables are shared. During an installation, the current tables are modified and reinstalled. As long as users are logged on who are mapped to the previous version of DCL tables, both versions will be mapped and will require both sections and pages. Repeated installations of products while many people are logged in may eventually exhaust global memory.
3.8. Checking the Minimum Privileges for Job Management Manager
If you do not choose to start the manager under the SYSTEM account, the manager must start under an account that has the following privileges enabled by default:
SYSPRV
SYSLCK
SYSNAM
CMKRNL
CMEXEC
WORLD
DETACH
To use the manager, individual accounts must have at least the TMPMBX and NETMBX privileges. The startup procedure will fail to execute for an account without sufficient privileges. Use the OpenVMS Authorize Utility to determine whether users have the privileges that they require.
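For example, to review the default and authorized privileges of the account SMITH (used here as a placeholder), run:
$ SET DEF SYS$SYSTEM
$ RUN AUTHORIZE
UAF> SHOW SMITH
UAF> EXIT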
3.9. Customizing Your System
3.9.1. Job Management Manager Startup
3.9.1.1. Access Restriction to the SYSTEM Account
The Job Management Manager SYS$STARTUP:JOB$MANAGER$STARTUP.COM procedure allows users outside the SYSTEM account to start the manager. If this is an undesirable feature for your installation, insert the following line in your JOB$MANAGER$STARTUP_SYSCLUSTER.COM procedure:
$ UIC == "[1,4]"
This ensures that the manager always runs under the SYSTEM account.
3.9.1.2. Custom Startup Files
The product startup can be customized according to instructions delivered within the template file NSCHED$COM:JOB$MANAGER$STARTUP_LOCAL.TEMPLATE.
However, such customizations affect how the manager runs and what it does, so before attempting any customization of the Job Management Manager startup procedure, read the sections referenced below.
The following files contain several logical name definitions that you can modify to customize Job Management Manager for your particular system or cluster:
NSCHED$COM:JOB$MANAGER$STARTUP_nodename.COM, where nodename is the name of the system.
NSCHED$COM:JOB$MANAGER$STARTUP_SYSCLUSTER.COM
Change the values of the logical names as needed, following the instructions within the comments in the custom startup files. For a list of these logical names, see Section 8.1, “Job Management Manager Logical Names”. If either of these files is not present in NSCHED$COM after upgrading a noncustomized version of Job Management Manager, or after performing a new installation, you can copy the template file NSCHED$COM:JOB$MANAGER$STARTUP_LOCAL.TEMPLATE to the file names listed above, then customize it as required.
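For example, on a node named VMS100 (a placeholder), the node-specific custom file could be created from the template and then edited:
$ COPY NSCHED$COM:JOB$MANAGER$STARTUP_LOCAL.TEMPLATE -
_$ NSCHED$COM:JOB$MANAGER$STARTUP_VMS100.COM
$ EDIT NSCHED$COM:JOB$MANAGER$STARTUP_VMS100.COM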
3.9.1.3. Process Priority
Note
The upgrade installation procedure processes any relevant logical name found on the system and automatically rebuilds the custom startup files accordingly.
The NSCHED process priority is set to 6 by default. You can carefully alter this priority by editing the value of the symbol NSCHED_PRIORITY within the Job Management Manager main startup procedure SYS$STARTUP:JOB$MANAGER$STARTUP.COM. Job Management jobs run at a default priority of 4, adjustable using the logical name NSCHED$DEFAULT_JOB_PRI found in the custom startup procedures.
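For example, to lower the default job priority, a definition along these lines could be placed in the appropriate custom startup file (the value 3 is illustrative only):
$ DEFINE/SYSTEM/EXEC NSCHED$DEFAULT_JOB_PRI 3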
3.9.1.4. Privilege Requirements
The following privileges are required to start the manager (to run the JOB$MANAGER$STARTUP.COM file):
Default privileges: SYSPRV + DETACH
Authorize privileges: SETPRV, or ALTPRI + CMEXEC + CMKRNL + DETACH + SYSLCK + SYSNAM + SYSPRV
3.9.2. Server Process Logfile
The file NSCHED$:nodename.LOG is always created by the Job Management Manager’s server process "NSCHED". The startup procedure command file contains the command:
PURGE NSCHED$:*.LOG/KEEP=2
3.9.3. Job Management Manager Shutdown
The SYS$STARTUP:JOB$MANAGER$SHUTDOWN.COM procedure allows the passing of a parameter that controls the type of shutdown performed. The parameter values accepted are:
- FAST
This P1 parameter shuts down the manager without checking for any currently running VSI Job Management Manager jobs. Any jobs currently running are stopped and requeued.
- NOWAIT
This P1 parameter checks for any currently running Job Management Manager jobs. If any jobs are found, you are notified and given the option of reviewing a list of currently running jobs and terminating the shutdown, if desired. If you do not respond within 30 seconds, the shutdown procedure continues, and any jobs currently running are stopped and requeued. This is the SYS$STARTUP:JOB$MANAGER$SHUTDOWN.COM default setting.
- WAIT
This P1 parameter checks for any currently running Job Management Manager jobs. If any jobs are found, you are notified and given the option of reviewing a listing of currently running jobs and terminating the shutdown, if desired. The shutdown will not continue until a response is given.
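For example, to shut down immediately without checking for running jobs, pass FAST as the P1 parameter:
$ @SYS$STARTUP:JOB$MANAGER$SHUTDOWN.COM FAST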
3.10. Entering a Proxy for Each User Account
To enable WAN support via DECnet, you need to perform these three procedures:
Create a network object
Add an account for the network object
Add a proxy for each cluster account or local account that will be using the manager
For the manager to function correctly with WAN support, both the network object and the corresponding account must be added, as described below. If they are not, the environment is at risk and may not function correctly.
The passwords for the SCHED_DECNET object and the SCHED$DECNET UAF account must match to enable proper communication between manager instances across the WAN. The commands to synchronize them are as follows:
$ MCR NCP
NCP> DEFINE OBJECT SCHED_DECNET PASSWORD yourpassword
NCP> SET OBJECT SCHED_DECNET PASSWORD yourpassword
NCP> EXIT
$ SET DEF SYS$SYSTEM
$ RUN AUTHORIZE
UAF> MOD SCHED$DECNET/password=yourpassword
UAF> exit
These commands will properly synchronize the passwords between the object and the account.
To create the network object, enter these commands:
$ MCR NCP
NCP> SHOW EXEC CHAR
NCP> SET EXEC INCOMING TIMER 120
NCP> SET EXEC OUTGOING TIMER 120
NCP> SHOW KNOWN OBJECT
NCP> DEFINE OBJECT SCHED_DECNET NUMBER 0 -
FILE SYS$COMMON:[SYSEXE]SCHED_DECNET.COM -
USER SCHED$DECNET -
ALIAS INCOMING ENABLED -
ALIAS OUTGOING ENABLED
NCP> SET OBJECT SCHED_DECNET NUMBER 0 -
FILE SYS$COMMON:[SYSEXE]SCHED_DECNET.COM -
USER SCHED$DECNET -
ALIAS INCOMING ENABLED -
ALIAS OUTGOING ENABLED
NCP> SET NODE local-node ACCESS BOTH
NCP> EXIT
To add the account for the network object, first determine a UIC that is in the same group as the default DECnet account, or is in a group by itself. Then, run AUTHORIZE as follows:
$ RUN AUTHORIZE
UAF> ADD SCHED$DECNET/FLAGS=DISUSER/UIC=uic-spec
UAF> EXIT
Enter these commands to add a proxy for each account that will be using the manager:
$ RUN AUTHORIZE
UAF> ADD/PROXY local_node::local_user local_user/DEFAULT
UAF> EXIT
Note
On cluster member nodes, this proxy needs to be added for each cluster account only, and not for each cluster node account. This configuration allows DECnet to operate in cases such as use of the following command:
$ SCHEDULE MODIFY remote-node::job1/SYNC=(local-node::job2)
Refer to https://docs.vmssoftware.com/vsi-console-management-administration-guide/ for standard proxy needs and setup. The proxies specified here are in addition to those specified in the Administrator Guide; both sets of instructions must be followed.
3.11. Setting Up a Proxy in AUTHORIZE database
For job synchronization on remote nodes, an additional proxy must exist in the AUTHORIZE database of the dependent job’s node to allow that job to receive job completion synchronization messages from its predecessor jobs. You need to set up only one proxy for each node that must receive job completion messages. If the proxies are not set up, jobs with the status DEP WAIT will never run.
The proxy must define the predecessor job’s node and the account on that node under which the manager was started (usually the SYSTEM account). The proxy must be set up to access the dependent job’s node through the account on that node under which the manager was started (also usually the SYSTEM account).
To add the proxy, run AUTHORIZE as follows:
$ RUN AUTHORIZE
UAF> ADD/PROXY predecessor_job_node::pred_node_startup_account -
_UAF> dependent_job_node_startup_account/DEFAULT
For example, if the manager was started under the SYSTEM account on both nodes, type:
$ RUN AUTHORIZE
UAF> ADD/PROXY predecessor_job_node::SYSTEM SYSTEM/DEFAULT
If the manager was started under the XYZ account on the predecessor job node, type:
$ RUN AUTHORIZE
UAF> ADD/PROXY predecessor_job_node::XYZ SYSTEM/DEFAULT
If the manager was started under the account XYZ on the predecessor job node and also started under the account SMITH on the dependent job node, type:
$ RUN AUTHORIZE
UAF> ADD/PROXY predecessor_job_node::XYZ SMITH/DEFAULT
Chapter 4. Installing VSI Job Management Agent
This section contains instructions for installing Job Management Agent (the agent) on nodes running the OpenVMS operating system.
4.1. Before Installing Any Agent Component
Before installing the agent, perform the following procedures:
- Back up your system disk
VSI recommends that you back up your system disk before installing any software. For information about backing up your system disk, see the documentation for the operating system involved.
- Verify your hardware and software requirements
Your OpenVMS x86-64 server should satisfy the general requirements outlined in Section 1.1, “Hardware and Software Requirements” in Chapter 1 of this guide.
- Check your system quotas
The system you install the manager on must have sufficient quotas to enable you to perform the installation and start the software.
The following minimum values are recommended for the system parameters listed below:

| SYSGEN parameter | Minimum Value |
|---|---|
| FREE_GBLSECTS | 20 |
| GBLPAGES | 900 |
| CHANNELCNT | 256 |
| RESHASHTBL | 4096 |
| LOCKIDTBL | 3840 |
Note
These are minimum quotas only and intended for systems with VSI Job Management Agent-only installations. Higher values may be needed for other applications.
4.2. Installing the VSI Job Management Agent
The installation procedure provides instructions to install the agent on an OpenVMS node that is accessible by an OpenVMS computer where Job Management Manager (the manager) is installed, or will be installed. Installation time for the agent is 5 to 10 minutes.
User requirement: The installation procedure requires that the user be familiar with system-level OpenVMS commands and procedures.
The node on which you are installing the agent must have TCP/IP installed. In addition, TCP/IP must be running on the default Job Management Manager node.
Licensing: License keys for the agent are installed on the nodes running Job Management Manager from which you will submit remote jobs. You will not be prompted for anything that is licensing related when installing Job Management Agent.
4.2.1. Installation Procedure
Before installing the agent on your system, stop any previous version that you have running on that system. If there are active processes, the installation procedure displays them and prints the following message:
%VMSINSTAL-W-ACTIVE, The following processes are still active:
Then you will be asked the following question:
* Do you want to continue anyway [NO]?
- YES
Continues the installation procedure.
- NO
Exits the installation procedure immediately.
4.3. How to Install the Job Management Agent
The installation consists of a number of steps designed to check your system, install the Job Management Agent, and then initialize it.
For a sample installation, see Chapter 7, Examples of New Installations.
Note
Default answers to the installation questions below are provided in brackets. For example, [YES] indicates the default answer is YES. You can accept the default by pressing Enter. For answers other than the default, type your answer and press Enter.
Decompress the zip file
Extract the installation kit from the installation package zip file. You must have OpenVMS UNZIP present on the system and you must have the UNZIP foreign command defined:
$ UNZIP X86VMS-JOBMGT-V0301
Make note of the directory path where the installation kit was extracted to; you will need it for the next step.
Run the Installation Procedure
$ @SYS$UPDATE:VMSINSTAL VJA031 <directory-path-for-the-installation-kit>
Check Your System Backup
You should always back up your system disk before installing any new software. If you need to restore your former system settings, you want the most current information saved. To ensure you have a good backup, the installation asks if you are satisfied with your backup. Select one of the following responses:
- YES
If you are satisfied with the backup of your system disk, press Return to accept the default YES.
- NO
Type
NO
and press Return to stop the installation. Back up your system disk, and then restart the installation.
Check for a Product Authorization Key
The installation procedure prompts for the presence of a valid Product Authorization Key (PAK). Select one of the following responses:
- YES
The installation verifies if a PAK has been registered and loaded. If a PAK is found the installation will continue. If a PAK is not found the installation will continue, however, the Installation Verification Procedure will not be run.
- NO
The installation continues; however, the Installation Verification procedure will not be run.
Run the Installation Verification Procedure (optional)
After the installation, the Installation Verification Procedure (IVP) checks to ensure that the installation was successful. It starts the application and performs function tests. VSI recommends that you run the IVP, so the installation gives you the opportunity by asking if you want to run it after completing the installation. Select one of the following responses:
- YES
The IVP runs after the installation completes and automatically starts the product.
- NO
The IVP does not run after the installation completes and the startup prompts.
Note
If you choose not to run the IVP during the installation, you can run it at any time after the installation completes by entering the following command:
$ @SYS$TEST:JOB$AGENT$IVP.COM
Purge Previous Version Files
You can purge files from previous versions of the product that are superseded by this installation. VSI recommends that you purge your old files; however, if you need to keep files from a previous version you can choose to not purge your files. The installation asks if you want to purge files replaced by this installation. Select one of the following responses:
- YES
The files related to earlier versions of the product are purged after the installation completes.
- NO
The files related to earlier versions of the product are left on the system after the installation completes.
Chapter 5. After Installing the VSI Job Management Agent
After installing VSI Job Management Agent (the agent), you can perform the tasks described in the sections below.
5.1. Starting VSI Job Management Agent
To start the agent, use one of these two methods:
If you are logged into the SYSTEM account, enter the following to run the command procedure SYS$STARTUP:JOB$AGENT$STARTUP.COM:
$ @SYS$STARTUP:JOB$AGENT$STARTUP.COM
If you are not currently logged into the SYSTEM account, you can start the agent by entering:
$ SUBMIT/USER=SYSTEM SYS$STARTUP:JOB$AGENT$STARTUP.COM
5.2. Shutting Down VSI Job Management Agent
To shut down the agent, use one of these two methods:
If you are logged into the SYSTEM account, enter the following to run the command procedure SYS$STARTUP:JOB$AGENT$SHUTDOWN.COM:
$ @SYS$STARTUP:JOB$AGENT$SHUTDOWN.COM
If you are not currently logged into the SYSTEM account, you can stop the agent by entering:
$ SUBMIT/USER=SYSTEM SYS$STARTUP:JOB$AGENT$SHUTDOWN.COM
5.3. Editing the System Startup Command File
You can edit your system startup command file so that the agent is automatically started when your system is rebooted. To do this, add the following command line to the system startup file, SYS$MANAGER:SYSTARTUP_VMS.COM:
$ @SYS$STARTUP:JOB$AGENT$STARTUP.COM
Place this new command line after the line that invokes the network startup command procedure. For example:
$ @SYS$MANAGER:STARTNET.COM
   .
   .
   .
$ @SYS$STARTUP:JOB$AGENT$STARTUP.COM
In addition, place the VSI Job Management Agent startup command after the line that invokes the TCP/IP services startup command procedure. For example:
$ @SYS$STARTUP:TCPIP$STARTUP.COM
   .
   .
   .
$ @SYS$STARTUP:JOB$AGENT$STARTUP.COM
5.4. Editing the System Shutdown Command File
You can edit your system shutdown file so that the agent shuts down properly when your system performs an orderly shutdown. To do this, add the following command line to the system shutdown file, SYS$MANAGER:SYSHUTDWN.COM:
$ @SYS$STARTUP:JOB$AGENT$SHUTDOWN.COM
Chapter 6. Re-installing VSI Job Management
This chapter considers reinstallation requirements and processes.
The JM Manager must be reinstalled after an OpenVMS upgrade. For example, if you installed JM Manager on OpenVMS V8.2 and then upgraded to OpenVMS V8.3, you must reinstall JM Manager. The JM Manager installation links executables against the current system libraries, and those executables might not work properly after an OpenVMS upgrade.
When reinstalling in a cluster, you can perform a single installation for all nodes that have the same architecture and OpenVMS version. Those nodes must share the node-specific directory NSCHED$EXE so that the reinstalled executables take effect on all of them. Run the installation on any one of those nodes, and be sure to specify the same node-specific directory as the existing one. If, however, you decide to specify a different node-specific directory, you must edit SYS$STARTUP:JOB$MANAGER$STARTUP.COM on ALL nodes that share this directory and change the definition of the NSCHED$EXE logical name, as shown in the sketch below.
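For illustration only, the altered definition in JOB$MANAGER$STARTUP.COM might look like the following; the device and directory are placeholder values, and the exact form of the DEFINE line in your copy of the procedure may differ:
$ ! Hypothetical example: point NSCHED$EXE at the new node-specific [EXE] directory
$ DEFINE/SYSTEM/EXECUTIVE_MODE NSCHED$EXE $1$DGA0:[NSCHED.X86.EXE]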
Chapter 7. Examples of New Installations
This chapter provides sample installations for the two components of VSI Job Management for OpenVMS – the manager and the agent.
7.1. Job Management Manager OpenVMS Installation
The following example shows the log file from a sample installation of the manager.
$ @SYS$UPDATE:VMSINSTAL VJM031 SYS$SYSROOT:[SYSMGR]

        OpenVMS Software Product Installation Procedure V9.2-3

It is 31-DEC-2024 at 21:31.
Enter a question mark (?) at any time for help.

%VMSINSTAL-W-ACTIVE, The following processes are still active:
      TCPIP$FTP_1
* Do you want to continue anyway [NO]? YES
* Are you satisfied with the backup of your system disk [YES]?

The following products will be processed:
  VJM V3.1

        Beginning installation of VJM V3.1 at 21:31

**********************************************************************

%VMSINSTAL-I-VALSIGN, Performing product kit validation of signed kits ...
Success
%VMSINSTAL-I-VALPASSED, validation of SYS$SYSROOT:[SYSMGR]VJM031.A_VNC succeeded
Success
%VMSINSTAL-I-VALPASSED, validation of SYS$SYSROOT:[SYSMGR]VJM031.E_VNC succeeded
Success
%VMSINSTAL-I-VALPASSED, validation of SYS$SYSROOT:[SYSMGR]VJM031.Y_VNC succeeded
%VMSINSTAL-I-RESTORE, Restoring product save set A ...
%VMSINSTAL-I-RESTORE, Restoring product save set Y ...

Copyright (c) 2024, 2025 VMS Software, Inc. All rights reserved.

Product:      VSI-JOBMGTMGR
Producer:     VSI
Version:
Release Date:

* Does this product have an authorization key registered and loaded? YES
* Do you want to run the IVP after the installation (Product Startup Required) [YES]?

The product will be started and the IVP will run automatically at the end of
this install.

* Do you want to purge files replaced by this installation [YES]?

The DECwindows/MOTIF components of this software are optional

* Do you want to install the DECwindows/MOTIF components [YES]?

This product needs a TCP/IP network stack installed and started on this node.
You can use VSI TCP/IP Services for OpenVMS x86_64.

The TCP/IP stack TCP/IP Services is up and running...

Support for remote agent command execution is an available option.

* Do you want support for remote Agent command execution [YES]?

The installation procedure will prompt you for pathnames to two distinct
areas: cluster-common files area and node-specific files area. When prompted,
enter the full pathname of selected root directories, including device names.

o Cluster Common files area

  Contains data files and scripts which are shared by all cluster nodes.
  It must be on a cluster-common non-DFS device. Default value
  SYS$COMMON:[NSCHED] is intended for homogeneous clusters and stand-alone
  nodes. It will NOT work for clusters that have different system disks for
  different nodes. Directories [DATA] and [COM] will be created under the
  path you specify. Logical names NSCHED$DATA and NSCHED$COM will be
  associated accordingly.

o Machine architecture specific files area

  Contains executables and other files which are specific to the hardware
  architecture and/or OpenVMS version of the node. A single set of
  executables can be shared for nodes with the same architecture and OpenVMS
  version, to save disk space. For other cases be sure to specify a directory
  that is specific to the local node. The default value SYS$COMMON:[NSCHED]
  is a valid choice for stand-alone nodes and clustered nodes with a shared
  system disk. Directory [EXE] will be created under the path you specify.
  Logical name NSCHED$EXE will be associated with this directory.

o Disk space required

  Initial cluster-common NSCHED$ area requires a total of 5,000 blocks of
  disk space. Node architecture specific area requires about 40,000 blocks.
  If this installation is an upgrade from a previous version, additional
  disk space (equal to the current total size of the NSCHED$ directory) will
  be required to back up your current files.

If you do not have sufficient space, abort the installation by pressing
CTRL-Y at this time.

* Enter the full pathname for VSI Job Management - (cluster) common files
  [SYS$COMMON:[NSCHED.]]: $1$DGA0:[NSCHED.COMMON]
  Selected pathname: $1$DGA0:[NSCHED.COMMON]
* Is that correct [Y]?
* Enter the full pathname for VSI Job Management - node specific files
  [SYS$COMMON:[NSCHED.]]: $1$DGA0:[NSCHED.X86]
  Selected pathname: $1$DGA0:[NSCHED.X86]
* Is that correct [Y]?

All questions regarding this installation have been asked.
The installation will run for approximately 2 to 5 minutes.

%VMSINSTAL-I-RESTORE, Restoring product save set E ...

Linking Job Management Manager images...
  Linking NSCHED.EXE ...
  Linking SCHED_DECNET...
  Linking INTERFACE...
  Linking RETRY...
  Linking MANAGER...
  Linking SHELL_INTERFACE...
  Linking SUMMARIZE_LOG...
  Linking DOO_COMMAND...
  Linking DB_UTILITY...
  Linking VSS_REPORTS...
  Linking SCHED$LISTENER...
  Linking SCHED$GET_BEST_NODE...
  Linking MOTIF...
  Linking Config Utility...
  Linking CPU Utility...
  Linking Convert Utility...

Providing files...
  Providing DCL interface, utilities, and HELP...
  Providing Wide Area Network capabilities...
  Providing Remote Executor capabilities for Agent...
  Providing DECWindows Interface...
  Providing Callable Application Programming Interface (API)...
  Providing Startup, Shutdown, Installation Verification and Deinstallation
  procedures ...

--------------------------------------------------------------------------
 Product Management Command Files
--------------------------------------------------------------------------
 Startup:    $ @SYS$STARTUP:JOB$MANAGER$STARTUP.COM
 Shutdown:   $ @SYS$STARTUP:JOB$MANAGER$SHUTDOWN.COM
 IVP:        $ @SYS$TEST:JOB$MANAGER$IVP.COM
 Deinstall:  $ @SYS$UPDATE:JOB$MANAGER$DEINSTALL.COM
--------------------------------------------------------------------------

Note: A call to the product startup procedure or common startup procedure
should be inserted manually in SYS$STARTUP:SYSTARTUP_VMS.COM in order to
start the product automatically at system boot time. Similarly, a call to
the product or common shutdown procedure should be inserted in the system
shutdown procedure, SYS$MANAGER:SYSHUTDWN.COM

%VMSINSTAL-I-MOVEFILES, Files will now be moved to their target directories...

DECwindows interface installation has completed successfully.
The DECwindows interface may be accessed by typing the command:

$ schedule/interface=decwindows

You may want to edit the startup file JOB$MANAGER$STARTUP.COM to change the
default value for Max_jobs, or to enable load balancing...

Users who are currently logged on must log off and then back on again to
gain access to the Job Management Manager command line interface.

IMPORTANT

For each Job Management Manager user on this node set up a proxy to self,
using AUTHORIZE as follows:

$ RUN AUTHORIZE
UAF> ADD/PROXY <local-node>::<local-user> <local-user>/DEFAULT

An additional proxy is needed to support job execution synchronization.
This proxy must exist in AUTHORIZE on the DEPENDENT JOB NODE. If both Job
Management Managers are started under the SYSTEM account, then add:

UAF> ADD/PROXY <predecessor-job-node>::SYSTEM SYSTEM/DEFAULT

Finally, add the account for the network object: first determine a UIC
which is either in the same group as the default DECNET account, or in a
group by itself. Then, run AUTHORIZE as follows:

UAF> ADD SCHED_DECNET/FLAGS=DISUSER/UIC=<uic-spec>
UAF> EXIT

Please refer to the Job Management Installation Guide for more details.

%RUN-S-PROC_ID, identification of created process is 0000042A
%RUN-S-PROC_ID, identification of created process is 0000042B

Beginning the Job Management Manager for OpenVMS Installation Verification
Procedure.

%NSCHED-I-RQSTSUCCSS, Job 2 - Created
%NSCHED-I-FLAGSET, Job IVP_JOB - DELETE Requested

Job Management Manager for OpenVMS has been successfully installed.

        Installation of VJM V3.1 completed at 21:33

    Adding history entry in VMI$ROOT:[SYSUPD]VMSINSTAL.HISTORY

    Creating installation data file: VMI$ROOT:[SYSUPD]VJM031.VMI_DATA

        VMSINSTAL procedure done at 21:33
7.2. Job Management Agent OpenVMS Installation
The following example shows the log file from a sample installation of the agent.
$ @sys$update:vmsinstal vja031 sys$manager:

        OpenVMS Software Product Installation Procedure V9.2-3

It is 31-DEC-2024 at 14:41.
Enter a question mark (?) at any time for help.

%VMSINSTAL-W-ACTIVE, The following processes are still active:
      TCPIP$FTP_1
      SSHD22_BG113
* Do you want to continue anyway [NO]? YES
* Are you satisfied with the backup of your system disk [YES]?

The following products will be processed:
  VJA V3.1

        Beginning installation of VJA V3.1 at 14:41

**********************************************************************

%VMSINSTAL-I-VALSIGN, Performing product kit validation of signed kits ...
Success
%VMSINSTAL-I-VALPASSED, validation of SYS$SYSROOT:[SYSMGR]VJA031.A_VNC succeeded
Success
%VMSINSTAL-I-VALPASSED, validation of SYS$SYSROOT:[SYSMGR]VJA031.E_VNC succeeded
Success
%VMSINSTAL-I-VALPASSED, validation of SYS$SYSROOT:[SYSMGR]VJA031.Y_VNC succeeded
%VMSINSTAL-I-RESTORE, Restoring product save set A ...
%VMSINSTAL-I-RESTORE, Restoring product save set Y ...

Copyright (c) 2024, 2025 VMS Software, Inc. All rights reserved.

Product:      VMS-JOBMGTAGT
Producer:     VSI
Version:
Release Date:

* Does this product have an authorization key registered and loaded? YES
* Do you want to run the IVP after the installation (Product Startup Required) [YES]?

The product will be started and the IVP will run automatically at the end of
this install.

* Do you want to purge files replaced by this installation [YES]?

This product needs a TCP/IP network stack installed and started on this node.
You can use VSI TCP/IP Services for OpenVMS x86_64.

The TCP/IP stack TCP/IP Services is up and running...

All questions regarding this installation have been asked.
The installation will run for approximately 2 to 5 minutes.

%VMSINSTAL-I-RESTORE, Restoring product save set E ...

Linking Agent...
  Linking NSCHED$AGENT.EXE ...
  Linking NSCHED$AGENT_IVP.EXE ...
  Linking NSCHED$AGENT_SHUTDOWN.EXE ...

Providing files...
  Providing NSCHED$AGENT.EXE ...
  Providing NSCHED$AGENT_SHUTDOWN.EXE ...
  Providing NSCHED$AGENT_IVP.EXE ...
  Providing NSCHED$AGENT_DO_COMMAND.COM ...
  Providing NSCHED$AGENT_RUN.COM ...
  Providing Startup, Shutdown, Installation Verification and Deinstallation
  procedures ...

--------------------------------------------------------------------------
 Product Management Command Files
--------------------------------------------------------------------------
 Startup:    $ @SYS$STARTUP:JOB$AGENT$STARTUP.COM
 Shutdown:   $ @SYS$STARTUP:JOB$AGENT$SHUTDOWN.COM
 IVP:        $ @SYS$TEST:JOB$AGENT$IVP.COM
 Deinstall:  $ @SYS$UPDATE:JOB$AGENT$DEINSTALL.COM
--------------------------------------------------------------------------

Note: A call to the product startup procedure or common startup procedure
should be inserted manually in SYS$STARTUP:SYSTARTUP_VMS.COM in order to
start the product automatically at system boot time. Similarly, a call to
the product or common shutdown procedure should be inserted in the system
shutdown procedure, SYS$MANAGER:SYSHUTDWN.COM

%VMSINSTAL-I-MOVEFILES, Files will now be moved to their target directories...
%DCL-I-TABNOTFND, previous table LNM$SCHED_AGENT_TABLE was not found - new table created
%RUN-S-PROC_ID, identification of created process is 0000042F

Beginning the Job Management Agent IVP

Job Management Agent, 0 jobs running, 64 jobs max

The Job Management Agent IVP procedure has completed successfully.

        Installation of VJA V3.1 completed at 14:42

    Adding history entry in VMI$ROOT:[SYSUPD]VMSINSTAL.HISTORY

    Creating installation data file: VMI$ROOT:[SYSUPD]VJA031.VMI_DATA

        VMSINSTAL procedure done at 14:42
Chapter 8. VSI Job Management Manager Logical and Kit Names
The following tables list the logical names and kit names used in Job Management Manager (the manager).
8.1. Job Management Manager Logical Names
For a list of Job Management Manager logical names, refer to Appendix B of the VSI Console Management Administration Guide at https://docs.vmssoftware.com/vsi-console-management-administration-guide/.
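To inspect the Job Management logical names defined on your running system, you can use a wildcard lookup. This is a minimal sketch, assuming the names share the NSCHED$ prefix used throughout this chapter:
$ ! Display all logical names that begin with NSCHED$
$ SHOW LOGICAL NSCHED$*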
8.2. Job Management Manager Installation Kit Files
The following tables list the installation kit files for the Job Management Manager software. The kit files fall into five categories:
The Job Management Manager core files
Startup and miscellaneous control files
VSI Job Management Agent (the agent) files
DCL interface files
Motif interface files
All files reside in the NSCHED$ directory, unless the table indicates otherwise.
8.2.1. Job Management Manager Core Files
The following table describes Job Management Manager core files.
| File | Description |
|---|---|
| CPU_UTILITY.EXE | Shows the CPU rating of the local node, if available. |
| DB_UTILITY.EXE | Database compression utility. |
| NSCHED.EXE | Job Management Manager program image. |
| RETRY.EXE | A program image that retries the manager’s remote operations. |
| SCHED$LISTENER.EXE | Detached process listening for termination messages from remote Agents. |
| SCHEDULER$DOO_COMMAND.EXE | Run by jobs that are created by the manager. Protection must be set to W:RE. |
| SCHEDULER$SHELL.COM | A command shell used to execute jobs. |
| SCHEDULER$SUMMARIZE_LOG.EXE | Used to generate a log file summary. |
| VSS.DAT | The job database. |
| VSS_REPORTS.EXE | A log file reader/reporter utility. |
8.2.2. Startup and Miscellaneous Control Files
The following table describes the manager’s startup and miscellaneous control files.
| File | Description |
|---|---|
| DEPENDENCY.DAT | File for dependencies, created or updated by the manager when you create a dependency. |
| INSTALL_DO.COM | Installs the job command Agent SCHEDULER$DOO_COMMAND.EXE. |
| INSTALL_INTERFACE.COM | Installs all Job Management Manager user interfaces. |
| LBAL_ROUND_ROBIN.COM | Alternative load-balancing procedure that performs round-robin load balancing. |
| LOAD_BALANCE.COM | Load-balancing command procedure. Performs dynamic load balancing within an OpenVMS Cluster. You can customize this command procedure to meet specific user requirements. |
| SCHED$GET_BEST_NODE.EXE | Load balancing file. |
| SCHED$LISTENER.COM | Runs the manager’s Remote Agent Listener. |
| SCHED$SD_CLASSES.DAT | File for Special Day Class definitions, created or updated by the manager when you create a Special Day Class. |
| SCHED$SD_RESTRICTIONS.DAT | File for Special Day Restrictions, created to record the Day Class definitions (that is, user-defined calendars associated with jobs that run only on specified dates). |
| SCHED_DECNET.EXE | Detached process listening for communication requests from remote OpenVMS machines running the manager. |
| SCHED_RUN.COM | Runs the manager’s program image. |
| SYS$SYSTEM:SCHED_DECNET.COM | Runs the manager’s remote network object. |
| TEMP_PARMS.DAT | File that contains parameter override values, created when a RUN or SET command is executed with parameter override values. |
| JOB$MANAGER$IVP.COM | The manager’s installation verification procedure. Resides in the SYS$TEST directory. |
| JOB$MANAGER$SHUTDOWN.COM | The Job Management Manager shutdown procedure that the SYS$MANAGER:SYSHUTDWN.COM procedure should call. |
| JOB$MANAGER$STARTUP.COM | Installs the manager’s images and starts them running. Resides in the SYS$STARTUP directory. |
| JOB$MANAGER$STARTUP_LOCAL.TEMPLATE | Provides the full list of customizable logical names, as well as usage information, to alter Job Management Manager behavior at startup time. |
8.2.3. VSI Job Management Agent Files
The following table describes VSI Job Management Agent files.
| File | Description |
|---|---|
| NSCHED$AGENT.EXE | SYS$SYSTEM Agent image. |
| NSCHED$AGENT_DO_COMMAND.COM | Same as DO_COMMAND, but for the agent. |
| NSCHED$AGENT_RUN.COM | Same as SCHED_DECNET.COM, but for the agent. |
| JOB$AGENT$IVP.COM | SYS$TEST IVP for the agent. |
| JOB$AGENT$SHUTDOWN.COM | SYS$STARTUP shutdown file for the agent. |
| JOB$AGENT$STARTUP.COM | SYS$STARTUP startup file for the agent. |
8.2.4. DCL Interface Files
The following table describes the manager’s DCL files.
| File | Description |
|---|---|
| SCHEDULER$CONFIG.EXE | The DCL interface program image for the Load Balance Group database subsystem. |
| SCHEDULER$INTERFACE.EXE | The DCL interface program image for general command users. |
| SCHEDULER$MANAGER.EXE | The DCL interface program image for privileged command users. |
| SCHEDULER$SHELL_INTERFACE.EXE | The DCL interface shell for the Job Management Manager subsystem. |
| SCHEDULER.CLD | The manager’s command language definitions file for the DCL interface. |
| SCHEDULER.HLB | The manager’s help file. |
8.2.5. Motif Interface Files
The following table describes Job Management Manager Motif Interface files.
| File | Description |
|---|---|
| SCHEDULER$MOTIF.DAT | Same as the file SCHEDULER$MOTIF_COLOR.DAT; this file is moved to the DECW$SYSTEM_DEFAULTS directory. |
| SCHEDULER$MOTIF.EXE | The Motif interface image for the manager. |
| SCHEDULER$MOTIF.UID | The user interface database for the Motif interface. |
| SCHEDULER$MOTIF_BW.DAT | Motif resource file used by the manager for single-plane displays; copy it to DECW$USER_DEFAULTS:SCHEDULER$MOTIF.DAT. |
Chapter 9. Uninstalling
Uninstall scripts enable you to easily remove VSI Job Management for OpenVMS components. The command files to execute the scripts are located in the SYS$UPDATE directory.
9.1. Uninstall Scripts
The following table lists the uninstall scripts.
| Product | Uninstall Script |
|---|---|
| Job Management Agent | JOB$AGENT$DEINSTALL.COM |
| Job Management Manager | JOB$MANAGER$DEINSTALL.COM |
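To run an uninstall script, invoke it from the SYS$UPDATE directory from a suitably privileged account. For example, to remove the manager:
$ @SYS$UPDATE:JOB$MANAGER$DEINSTALL.COM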
Chapter 10. Installation FAQs
This chapter provides answers to the following frequently asked questions about Job Management Manager (the manager) installation.
- If I install a new version of Job Management Manager, what happens to the old version that was running?
The old version of the manager is superseded by the new version. During the installation, you may choose to stop the old manager and restart the new manager. From the DCL prompt, you may stop the manager by entering the following command:
SCHEDULE> STOP
The old job database is preserved across manager versions. To start the manager, run the startup procedure in the file SYS$STARTUP:JOB$MANAGER$STARTUP.COM.
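For example:
$ @SYS$STARTUP:JOB$MANAGER$STARTUP.COM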
- After installing the manager, how do I define the SCHEDULE command?
When you install the manager on a node, only that node has the updated DCL table. The tables on the other nodes are updated when those nodes are rebooted.
You can also update the DCL tables by entering the following command from an account that has the CMKRNL privilege:
$ INSTALL REPLACE SYS$LIBRARY:DCLTABLES.EXE
You can use SYSMAN to perform this task automatically on every node in your OpenVMS Cluster, as shown in the sketch below. If any users are currently logged in, they must log out and log in again for the new DCL tables to take effect.
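The following is a minimal sketch of the SYSMAN approach, run from a privileged account; SET ENVIRONMENT/CLUSTER directs the subsequent DO command to all nodes in the cluster:
$ RUN SYS$SYSTEM:SYSMAN
SYSMAN> SET ENVIRONMENT/CLUSTER
SYSMAN> DO INSTALL REPLACE SYS$LIBRARY:DCLTABLES.EXE
SYSMAN> EXIT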
- When I install a new version of the manager, do I have to reenter all the jobs that are in the current database?
No. You should be able to use your current database. In the installation procedure, simply specify the same installation area as before; to find the current location, examine the translation of the logical name NSCHED$DATA, as shown below.
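For example:
$ SHOW LOGICAL NSCHED$DATA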
- Will the manager operate in a heterogeneous cluster?
Yes. To run the manager in any cluster, follow the installation instructions to assign files to cluster common devices and to platform-specific devices.