COPT Compute Cluster Service
The Cardinal Optimizer provides the COPT compute cluster service on all supported platforms, which allows you to offload optimization computations from COPT client applications over a local network.
Once a COPT compute cluster server is running on the local network, any COPT client application with a matching COPT version can connect to the server and offload optimization computations. That is, COPT compute cluster clients can do modelling locally, execute optimization jobs remotely, and then obtain results interactively.
Note that the more computing power the server has, the more optimization jobs can run simultaneously. Furthermore, the COPT compute cluster service can cluster multiple servers together and therefore serve more COPT compute cluster clients over the local network.
Server Setup
The COPT compute cluster service includes the copt_cluster executable and a configuration file cls.ini. The very first thing the cluster server does when it starts is to verify the cluster license locally, whose path is specified in cls.ini.
If local validation passes, the cluster server may connect remotely to the COPT licensing server for further validation, such as verifying the machine IP, which is supposed to match the IP range that the user provided during registration. This means the server running the COPT compute cluster service should have internet access in the specified area, such as a campus network.
For details, please see the descriptions below or refer to
How to obtain and setup license.
Installation
The Cardinal Optimizer provides a separate package for remote services, which includes the COPT compute cluster. Users may apply for the remote package from customer service. Afterwards, unzip the remote package and move it to any folder on your computer. The software is portable and does not change anything on the system it runs on. Below are the installation details.
Windows
Please unzip the remote package and move it to any folder. It is common, though, to move it to a folder under C:\Program Files.
Linux
To unzip the remote package, enter the following command in a terminal:
tar -xzf CardinalOptimizer-Remote-7.1.1-lnx64.tar.gz
Then, the following command moves the folder copt_remote71 in the current directory to another path. For example, an admin user may move it to a folder under /opt and a standard user may move it to $HOME.
sudo mv copt_remote71 /opt
Note that root privilege is required to execute this command.
MacOS
To unzip the remote package, enter the following command in a terminal:
tar -xzf CardinalOptimizer-Remote-7.1.1-universal_mac.tar.gz
Then, the following command moves the folder copt_remote71 in the current directory to another path. For example, an admin user may move it to a folder under /Applications and a standard user may move it to $HOME.
mv copt_remote71 /Applications
If you see the errors below, or a similar signature problem with the COPT lib, during installation,
"libcopt.dylib" cannot be opened because the developer cannot be verified.
macOS cannot verify that this app is free from malware.
run the following command as the root user to bypass the check when loading the dynamic lib on MacOS.
xattr -d com.apple.quarantine CardinalOptimizer-Remote-7.1.1-universal_mac.tar.gz
or
xattr -dr com.apple.quarantine /Applications/copt_remote71
Cluster License
After installing the COPT remote package, a cluster license is required to run it. It is preferred to save the cluster license files, license.dat and license.key, to the cluster folder in the remote package path. The following explains how to obtain the license files via the copt_licgen tool and the license credential key on different systems.
Note
If the user has already obtained the two license files license.dat and license.key, there is no need to obtain them again.
You can skip the following steps and refer to Configuration directly.
Windows
If the COPT remote package is installed under "C:\Program Files", execute the following command to enter the cluster folder in the remote package path.
cd "C:\Program Files\copt_remote71\cluster"
Note that the tool copt_licgen, which creates license files, resides under the tools folder in the remote package path. The following command creates cluster license files in the current directory, given a cluster license key, such as 7483dff0863ffdae9fff697d3573e8bc.
..\tools\copt_licgen -key 7483dff0863ffdae9fff697d3573e8bc
Linux and MacOS
If the COPT remote package is installed under "/Applications" on a MacOS system, execute the following command to enter the cluster folder in the remote package path.
cd /Applications/copt_remote71/cluster
The following command creates cluster license files in the current directory, given a cluster license key, such as 7483dff0863ffdae9fff697d3573e8bc.
../tools/copt_licgen -key 7483dff0863ffdae9fff697d3573e8bc
In addition, if users run the above command from a directory different from the cluster folder in the remote package path, it is preferred to move the license files to cluster. The following command does so.
mv license.* /Applications/copt_remote71/cluster
Configuration
Below is a typical configuration file, cls.ini, of the COPT compute cluster.
[Main]
Port = 7878
# Number of total tokens, i.e., how many COPT jobs can run simultaneously.
NumToken = 3
# Password is case-sensitive and default is empty;
# It applies to both copt clients and cluster nodes.
PassWd =
# Data folder of cluster relative to its binary folder,
# where multiple versions of copt libraries and related licenses reside.
DataFolder = ./data
[SSL]
# Needed if connecting using SSL/TLS
CaFile =
CertFile =
CertkeyFile =
[Licensing]
# If empty or default license name, it is from binary folder;
# To get license files from cwd, add prefix "./";
# Full path is supported as well.
LicenseFile = license.dat
PubkeyFile = license.key
[WLS]
# WebServer has a default host and no need to edit it in most scenarios
# Must specify WebLicenseId and WebAccesskey to trigger web licensing
WebServer =
WebLicenseId =
WebAccessKey =
WebTokenDuration = 300
[Cluster]
# Host name and port of parent node in cluster.
# Default is empty, meaning not connecting to other node.
Parent =
PPort = 7878
[Filter]
# default policy 0 indicates accepting all connections, except for ones in blacklist
# otherwise, denying all connections except for ones in whitelist
DefaultPolicy = 0
UseBlackList = true
UseWhiteList = true
FilterListFile = clsfilters.ini
The Main section specifies the port number, through which COPT compute cluster clients connect to the server; the token number, which is the maximum number of optimization jobs the server can run simultaneously; and the password string, which, if specified, cluster clients must send when requesting service.
The COPT compute cluster may install multiple versions of COPT into subfolders of DataFolder.
Only clients with a matching version (major and minor) get approved and can then offload optimization jobs to the server. Note that the COPT compute cluster pre-installs a COPT solver of the same version as the server itself, which illustrates how to install other versions of COPT.
For instance, suppose the COPT compute cluster has the default COPT v7.1.1 installed and a user plans to install another COPT version, v4.0.7. The user may create a folder ./data/copt/4.0.7/ and copy a COPT C library of the same version to it. Specifically, on the Linux platform, copy the C dynamic library libcopt.so from the binary folder $COPT_HOME/lib/ of COPT v4.0.7 to the subfolder ./data/copt/4.0.7/ of the COPT compute cluster.
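The steps above can be sketched as follows. This is a sandboxed illustration: the temporary directories stand in for the real cluster folder and a real COPT v4.0.7 installation, and the touched libcopt.so is only a placeholder.

```shell
# Sandbox stand-ins for the real paths (illustrative only).
SANDBOX=$(mktemp -d)
CLUSTER_DIR="$SANDBOX/copt_remote71/cluster"   # stands in for the cluster folder
COPT_HOME="$SANDBOX/copt407"                   # stands in for a COPT v4.0.7 install
mkdir -p "$CLUSTER_DIR" "$COPT_HOME/lib"
touch "$COPT_HOME/lib/libcopt.so"              # placeholder for the real library

# The actual installation steps: create the version subfolder in the
# data folder and copy the matching C dynamic library into it.
mkdir -p "$CLUSTER_DIR/data/copt/4.0.7"
cp "$COPT_HOME/lib/libcopt.so" "$CLUSTER_DIR/data/copt/4.0.7/"
```

In a real deployment, replace the sandbox paths with the actual cluster folder and the lib folder of the COPT version being installed.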
Furthermore, users may install a newer version of COPT than the cluster server version, such as COPT v7.5.0. To do so, follow the same step of copying a C library of COPT v7.5.0 to ./data/copt/7.5.0/. In addition, users need a personal license of v7.5.0 to load the C library of COPT v7.5.0 on the server side; that is, copy valid personal license files to the folder ./data/copt/7.5.0/ as well. However, this simple procedure may break if the newer COPT solver has significant updates. In that case, it is necessary to upgrade the COPT compute cluster itself to the newer version, v7.5.0.
Below is an example of the directory structure of a cluster server on the Linux platform. It includes the pre-installed COPT v7.1.1, the previous version v4.0.7, and the newer version v7.5.0.
~/copt_remote71/cluster
│   cls.ini
│   copt_cluster
│   license.dat -> cluster license v7.1.1
│   license.key
│
└─data
   └─copt
      ├─4.0.7
      │    libcopt.so
      ├─7.1.1
      │    libcopt.so
      └─7.5.0
           libcopt.so
           license.dat -> license v7.5.0
           license.key
The Licensing section specifies the location of the cluster license files.
As described in the comments above, if an empty string or the default license file name is specified, the cluster license files are read from the binary folder where the cluster executable resides.
It is possible to run the COPT compute cluster service even if the cluster license files do not exist in the same folder as the cluster executable.
One option is to set LicenseFile = ./license.dat and PubkeyFile = ./license.key.
By doing so, the COPT compute cluster reads the cluster license files from the current working directory. That is, the user can run the service from the path where the cluster license files exist.
The other option is to set the full path of the license files in the configuration.
As mentioned before, the Cardinal Optimizer allows users to set the environment variable COPT_LICENSE_DIR for license files. For details, please refer to How to install Cardinal Optimizer.
If the user prefers the environment variable approach, cls.ini should contain the full path to the cluster license files.
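For instance, the two licensing keywords can be rewritten to full paths with a couple of sed commands. The sketch below runs in a sandbox, the license directory is illustrative, and GNU sed is assumed for sed -i:

```shell
# Sandbox standing in for the cluster folder (illustrative only).
WORK=$(mktemp -d)
LICDIR="$WORK/copt_licenses"           # illustrative license directory
mkdir -p "$LICDIR"
printf 'LicenseFile = license.dat\nPubkeyFile = license.key\n' > "$WORK/cls.ini"

# Rewrite the two licensing keywords to full paths.
sed -i "s|^LicenseFile = .*|LicenseFile = $LICDIR/license.dat|" "$WORK/cls.ini"
sed -i "s|^PubkeyFile = .*|PubkeyFile = $LICDIR/license.key|" "$WORK/cls.ini"
```

In a real deployment, point LICDIR at the directory actually holding license.dat and license.key, and edit the real cls.ini in the cluster folder.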
The Cluster section specifies the IP and port number of the parent node. By connecting to the parent node, this server joins a cluster of servers running the COPT compute cluster service. The default value is empty, which means this server does not join any other cluster group.
In the Filter section, DefaultPolicy has a default value of 0, meaning all connections are accepted except for those in the black list; if it is set to a non-zero value, then all connections are blocked except for those in the white list. In addition, the black list is enabled if UseBlackList is true and the white list is enabled if UseWhiteList is true. The filter configuration file is specified by FilterListFile. Below is an example of the filter configuration file.
[BlackList]
# 127.0.*.* + user@machine*
[WhiteList]
# 127.0.1.2/16 - user@machine*
[ToolList]
# only tool client at server side can access by default
127.0.0.1/32
The file has three sections and each section has its own rules. In the BlackList section, one may add rules to block others from connecting. In the WhiteList section, one may add rules to grant connections, even if the default policy is to block all connections. Only users listed in the ToolList section are able to connect to the cluster server via the Cluster Managing Tool (see below for details).
Specifically, rules in the filter configuration start with an IP address.
To specify an IP range, you may include a wildcard (*) in the IP address, or use CIDR notation, that is, an IPv4 address and its associated network prefix.
In addition, a rule may include (+) or exclude (-) a given user at a given machine, such as 127.0.1.2/16 - user@machine. Here, user refers to the username, which can be queried by whoami on Linux/MacOS platforms; machine refers to the computer name, which can be queried by hostname on Linux/MacOS platforms.
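Putting the rule format together, a clsfilters.ini with concrete rules might look as follows; all addresses and names here are illustrative:

```ini
[BlackList]
# block every host in the 10.1.*.* range
10.1.*.*
[WhiteList]
# allow the 192.168.0.0/16 range, excluding user eleven on hosts lab-pc*
192.168.1.2/16 - eleven@lab-pc*
[ToolList]
# only the managing tool on the server itself may connect
127.0.0.1/32
```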
Web License for Compute Cluster
Besides the local cluster license above, users may use a web license for the compute cluster to run the compute cluster service. This requires that machines running the compute cluster server have internet access. However, hardware info is no longer required. That is, users are free to deploy compute cluster servers to any cloud machine or container, as long as it has internet access. Please refer to COPT Web Licenses for details.
Below are the brief steps:
Follow the steps to register an account and apply for a trial of the web license for compute cluster.
Once approved, a Web License ID is generated for the user.
On the API Keys page, create a Web Access Key using the given Web License ID.
Afterwards, edit the configuration file cls.ini and add the values of both the Web License ID and the Web Access Key to the related keywords in the WLS section. For instance,
[Main]
Port = 7878
# Number of total tokens, i.e., how many COPT jobs can run simultaneously.
NumToken = 3
# Password is case-sensitive and default is empty;
# It applies to both copt clients and cluster nodes.
PassWd =
# Data folder of cluster relative to its binary folder,
# where multiple versions of copt libraries and related licenses reside.
DataFolder = ./data
[SSL]
# Needed if connecting using SSL/TLS
CaFile =
CertFile =
CertkeyFile =
[Licensing]
# If empty or default license name, it is from binary folder;
# To get license files from cwd, add prefix "./";
# Full path is supported as well.
LicenseFile = license.dat
PubkeyFile = license.key
[WLS]
# WebServer has a default host and no need to edit it in most scenarios
# Must specify WebLicenseId and WebAccesskey to trigger web licensing
WebServer =
WebLicenseId = 1ed6da0c781d26ac4fc9233718b8eb64
WebAccessKey = 1f6fcb3b68d94e2bb5185bd05c87b93f
WebTokenDuration = 300
[Cluster]
# Host name and port of parent node in cluster.
# Default is empty, meaning not connecting to other node.
Parent =
PPort = 7878
As of now, the compute cluster server talks to COPT Web Licenses for licensing. Users can monitor its token usage and other information online.
Example Usage
Suppose that the cluster license files exist in the same folder where the cluster executable resides. To start the COPT compute cluster, just execute the following command from any directory in a Windows console or Linux/Mac terminal.
copt_cluster
If you see log information as follows, the COPT compute cluster has started successfully. The server monitors connections from COPT compute cluster clients, and manages queued client requests as well as approved clients. The user may stop the cluster server at any time by entering q or Q.
> copt_cluster
[ Info] start COPT Compute Cluster, COPT v7.1.1 20240304
[ Info] [NODE] node has been initialized
[ Info] server started at port 7878
If local cluster license verification fails, or something is wrong on the remote COPT licensing server, you might see error logs as follows.
> copt_cluster
[ Info] start COPT Compute Cluster, COPT v7.1.1 20240304
[Error] Invalid signature in public key file
[Error] Fail to verify local license
and
> copt_cluster
[ Info] start COPT Compute Cluster, COPT v7.1.1 20240304
[Error] Error to connect license server
[Error] Fail to verify cluster license by server
Client Setup
The COPT compute cluster client can be the COPT command-line tool, or any application that solves problems through the COPT API, such as the COPT cpp/java/csharp/python interfaces. The COPT compute cluster service is a better approach in terms of flexibility and efficiency. Any COPT compute cluster client can legally run the Cardinal Optimizer without a local license.
Configuration
Before running COPT as a cluster client, please make sure that you have installed COPT locally. For details, please refer to
How to install Cardinal Optimizer.
Users can skip obtaining local licenses by adding a cluster configuration file, client.ini.
Below is a typical configuration file, client.ini, of a COPT compute cluster client.
Cluster = 192.168.1.11
Port = 7878
QueueTime = 600
Passwd =
As configured above, the COPT compute cluster client tries to connect to 192.168.1.11 at port 7878, with a waiting time in queue of up to 600 seconds. Here, the default value of Cluster is localhost; QueueTime (or WaitTime) is set to 0 if empty or not specified. Specifically, an empty QueueTime means the client does not wait and quits immediately if the COPT compute cluster has no more tokens available. The Port number must be greater than zero if specified and should be the same as that specified in the cluster configuration file cls.ini.
Note that keywords in the configuration file are case insensitive.
To run as a COPT compute cluster client, an application must have the configuration file client.ini in one of the following three locations: the current working directory, the environment directory given by COPT_LICENSE_DIR, or the binary directory where the COPT executable resides.
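As a sketch, the client configuration can be put into the environment directory like so; the server address 192.168.1.11 and the use of a temporary directory as the environment directory are illustrative:

```shell
# Use a sandbox directory as the environment directory (illustrative).
CFG_DIR=$(mktemp -d)
export COPT_LICENSE_DIR="$CFG_DIR"

# Minimal client configuration; the server address is illustrative.
cat > "$COPT_LICENSE_DIR/client.ini" <<'EOF'
Cluster = 192.168.1.11
Port = 7878
QueueTime = 600
Passwd =
EOF
```

Any COPT application started afterwards in this environment can find client.ini via COPT_LICENSE_DIR.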
By design, a COPT application reads local license files instead of client.ini if both exist in the same location. However, if the local license files are under the environment directory, the user can still connect to the cluster server by simply adding a configuration file, client.ini, under the current working directory (different from the environment directory).
If a COPT application calls the COPT API to solve problems, such as via the COPT python interface, the license is checked as soon as the COPT environment object is created. If only a proper configuration file, client.ini, exists, the application works as a COPT compute cluster client and obtains a token to offload optimization jobs. As soon as the COPT environment object is destroyed, the COPT compute cluster server is notified to release the token and thus approve more requests waiting in the queue.
Example Usage
Suppose that we’ve set the configuration file client.ini properly and have no local license. Below is an example of connecting to the cluster server with the COPT command-line tool copt_cmd. Execute the following command in a Windows console or Linux/Mac terminal.
copt_cmd
If you see log information as follows, the COPT compute cluster client, copt_cmd, has connected to the cluster server successfully. The COPT command-line tool is ready to do modelling locally and then offload optimization jobs to the server.
> copt_cmd
Cardinal Optimizer v7.1.1. Build date Mar 04 2024
Copyright Cardinal Operations 2024. All Rights Reserved
[ Info] initialize cluster client with ./client.ini
[ Info] wait for server in 0 / 39 secs
[ Info] connecting to cluster server 192.168.1.11:7878
COPT>
If you see log information as follows, the COPT compute cluster client, copt_cmd, has connected to the cluster server. However, due to the limited number of tokens, it waits in a queue of size 5 until it times out.
> copt_cmd
Cardinal Optimizer v7.1.1. Build date Mar 04 2024
Copyright Cardinal Operations 2024. All Rights Reserved
[ Info] initialize cluster client with ./client.ini
[ Info] wait for server in 0 / 39 secs
[ Info] connecting to cluster server 192.168.1.11:7878
[ Warn] wait in queue of size 5
[ Info] wait for license in 2 / 39 secs
[ Info] wait for license in 4 / 39 secs
[ Info] wait for license in 6 / 39 secs
[ Info] wait for license in 8 / 39 secs
[ Info] wait for license in 10 / 39 secs
[ Info] wait for license in 20 / 39 secs
[ Info] wait for license in 30 / 39 secs
[Error] timeout at waiting for server approval
[Error] Fail to initialize copt command-line tool
If you see log information as follows, the COPT compute cluster client, copt_cmd, has connected to the cluster server but refuses to wait in the queue, as QueueTime was set to 0. Therefore, the client quits with an error immediately.
> copt_cmd
Cardinal Optimizer v7.1.1. Build date Mar 04 2024
Copyright Cardinal Operations 2024. All Rights Reserved
[ Info] initialize cluster client with ./client.ini
[ Info] wait for server in 0 / 9 secs
[ Info] connecting to cluster server 192.168.1.11:7878
[ Warn] server error: "no more token available", code = 129
[Error] Fail to initialize copt command-line tool
If you see log information as follows, the COPT compute cluster client, copt_cmd, fails to connect to the cluster server and finally quits after a timeout.
> copt_cmd
Cardinal Optimizer v7.1.1. Build date Mar 04 2024
Copyright Cardinal Operations 2024. All Rights Reserved
[ Info] initialize cluster client with ./client.ini
[ Info] wait for server in 0 / 39 secs
[ Info] connecting to cluster server 192.168.1.11:7878
[ Info] wait for license in 2 / 39 secs
[ Info] wait for license in 4 / 39 secs
[ Info] wait for license in 6 / 39 secs
[ Info] wait for license in 8 / 39 secs
[ Info] wait for license in 10 / 39 secs
[ Info] wait for license in 20 / 39 secs
[ Info] wait for license in 30 / 39 secs
[Error] timeout at waiting for server approval
[Error] Fail to initialize copt command-line tool
COPT Cluster Managing Tool
The COPT cluster service ships with a tool, copt_clstool, for retrieving information and tuning parameters of cluster servers on the fly.
Tool Usage
Execute the following command in a Windows console, or a Linux or MacOS terminal:
> copt_clstool
Below are the help messages of this tool:
> copt_clstool
COPT Cluster Managing Tool
copt_clstool [-s server ip] [-p port] [-x passwd] command <param>
commands are: addblackrule <127.0.0.1/20[-user@machine]>
addwhiterule <127.0.*.*[+user@machine]>
getfilters
getinfo
getnodes
reload
resetfilters
setparent <xxx:7878>
setpasswd <xxx>
settoken <num>
toggleblackrule <n-th>
togglewhiterule <n-th>
writefilters
If the -s and -p options are present, the tool connects to the cluster server with the given server IP and port. Otherwise, the tool connects to localhost and the default port 7878. If the cluster server sets a password, the tool must provide the password string after the -x option.
This tool provides the following commands:
AddBlackRule: Add a new rule to the black filters. Each rule starts with a non-empty IP address, which may contain wildcards to match IPs in a range. Optionally, it may be followed by an including (+) or excluding (-) user name at machine name.
AddWhiteRule: Add a new rule to the white filters. Note that a white rule has the same format as a black rule.
GetFilters: Get all rules of the black filters, white filters, and tool filters, along with their sequence numbers, which are the parameters for the commands ToggleBlackRule and ToggleWhiteRule.
GetInfo: Get general information about the cluster server, including token usage, connected clients, and all supported COPT versions.
GetNodes: Get information about nodes in the cluster, including the parent address and status, and all child nodes.
Reload: Reload the available token information of all child nodes, in case it is inconsistent for various reasons.
ResetFilters: Reset the filter lists in memory to those in the filter config file.
SetParent: Change the parent node address on the fly and then connect to the new parent. In this way, it avoids a draining operation when stopping a node for maintenance.
SetPasswd: Update the password of the target cluster server on the fly.
SetToken: Change the token number of the target cluster server on the fly.
ToggleBlackRule: Toggle between enabling and disabling a black rule, given its sequence number from GetFilters.
ToggleWhiteRule: Toggle between enabling and disabling a white rule, given its sequence number from GetFilters.
WriteFilters: Write the filter lists in memory to the filter config file.
Example Usage
The following command lists general information of the cluster server on the local machine.
> copt_clstool GetInfo
[ Info] COPT Cluster Managing Tool, COPT v7.1.1 20240304
[ Info] connecting to localhost:7878
[ Info] [command] wait for connecting to cluster
[ Info] [cluster] general info
        # of available tokens is 3 / 3, queue size is 0
        # of active clients is 0
        # of installed COPT versions is 1
            COPT v7.1.1
To run the managing tool on another machine, its IP should be added to a rule in the ToolList section of the filter configuration file clsfilters.ini.
The following command, run from another machine, lists the cluster information of server 192.168.1.11.
> copt_clstool -s 192.168.1.11 GetNodes
[ Info] COPT Cluster Managing Tool, COPT v7.1.1 20240304
[ Info] connecting to 192.168.1.11:7878
[ Info] [command] wait for connecting to cluster
[ Info] [cluster] node info
        [Parent] (null):7878 (Lost)
        [Child] Node_192.168.1.12:7878_N0001, v2.0=3
        Total num of child nodes is 1
The following command changes the token number of server 192.168.1.11 from 3 to 0.
> copt_clstool -s 192.168.1.11 SetToken 0
[ Info] COPT Cluster Managing Tool, COPT v7.1.1 20240304
[ Info] connecting to 192.168.1.11:7878
[ Info] [command] wait for connecting to cluster
[ Info] [cluster] total token was 3 and now set to 0
The following command shows all filter lists of server 192.168.1.11, including those in the BlackList, WhiteList, and ToolList sections.
> copt_clstool -s 192.168.1.11 GetFilters
[ Info] COPT Cluster Managing Tool, COPT v7.1.1 20240304
[ Info] connecting to 192.168.1.11:7979
[ Info] [command] wait for connecting to cluster
[ Info] [cluster] filters info
        [BlackList]
        [WhiteList]
        [ToolList]
        [1] 127.0.0.1
The following command adds the user with IP 192.168.3.133 to the black list.
> copt_clstool -s 192.168.1.11 AddBlackRule 192.168.3.133
[ Info] COPT Cluster Managing Tool, COPT v7.1.1 20240304
[ Info] connecting to 192.168.1.11:7979
[ Info] [command] wait for connecting to cluster
[ Info] [cluster] server added new black rule (succeeded)
The following command shows that a new rule in the BlackList section has been added.
> copt_clstool -s 192.168.1.11 GetFilters
[ Info] COPT Cluster Managing Tool, COPT v7.1.1 20240304
[ Info] connecting to 192.168.1.11:7979
[ Info] [command] wait for connecting to cluster
[ Info] [cluster] filters info
        [BlackList]
        [1] 192.168.3.133
        [WhiteList]
        [ToolList]
        [1] 127.0.0.1
The following command disables a rule in the BlackList section.
> copt_clstool -s 192.168.1.11 ToggleBlackRule 1
[ Info] COPT Cluster Managing Tool, COPT v7.1.1 20240304
[ Info] connecting to 192.168.1.11:7979
[ Info] [command] wait for connecting to cluster
[ Info] [cluster] server toggle black rule [1] (succeeded)
Running as service
To run the COPT compute cluster server as a system service, follow the steps described in readme.txt under the cluster folder, and set the config file copt_cluster.service properly.
Below is readme.txt, which lists the installation steps for both Linux and MacOS platforms.
[Linux] To run copt_cluster as a service with systemd
Add a systemd file
copy copt_cluster.service to /lib/systemd/system/
sudo systemctl daemon-reload
Enable new service
sudo systemctl start copt_cluster.service
or
sudo systemctl enable copt_cluster.service
Restart service
sudo systemctl restart copt_cluster.service
Stop service
sudo systemctl stop copt_cluster.service
or
sudo systemctl disable copt_cluster.service
Verify service is running
sudo systemctl status copt_cluster.service
[MacOS] To run copt_cluster as a service with launchctl
Add a plist file
copy copt_cluster.plist to /Library/LaunchAgents as current user
or
copy copt_cluster.plist to /Library/LaunchDaemons with the key 'UserName'
Enable new service
sudo launchctl load -w /Library/LaunchAgents/copt_cluster.plist
or
sudo launchctl load -w /Library/LaunchDaemons/copt_cluster.plist
Stop service
sudo launchctl unload -w /Library/LaunchAgents/copt_cluster.plist
or
sudo launchctl unload -w /Library/LaunchDaemons/copt_cluster.plist
Verify service is running
sudo launchctl list shanshu.copt.cluster
Linux
Below are detailed steps on how to run the COPT compute cluster server as a system service on the Linux platform.
For instance, assume that the COPT remote service is installed under '/home/eleven'.
In your terminal, type the following command to enter the root directory of the cluster service.
cd /home/eleven/copt_remote71/cluster
Modify the template of the service config file copt_cluster.service in text format:
[Unit]
Description=COPT Compute Cluster Server
[Service]
WorkingDirectory=/path/to/service
ExecStart=/path/to/service/copt_cluster
Restart=always
RestartSec=1
[Install]
WantedBy=multi-user.target
That is, update the template paths in the keywords WorkingDirectory and ExecStart to the actual path where the cluster service exists.
[Unit]
Description=COPT Compute Cluster Server
[Service]
WorkingDirectory=/home/eleven/copt_remote71/cluster
ExecStart=/home/eleven/copt_remote71/cluster/copt_cluster
Restart=always
RestartSec=1
[Install]
WantedBy=multi-user.target
Afterwards, copy copt_cluster.service to the system service folder /lib/systemd/system/ (see below).
sudo cp copt_cluster.service /lib/systemd/system/
The following command may be needed if you add or update the service config file. It is not needed if the service unit has been loaded before.
sudo systemctl daemon-reload
The following command starts the new cluster service.
sudo systemctl start copt_cluster.service
To verify that the cluster service is actually running, type the following command
sudo systemctl status copt_cluster.service
If you see logs similar to below, COPT compute cluster server is running successfully as a system service.
copt_cluster.service - COPT Cluster Server
   Loaded: loaded (/lib/systemd/system/copt_cluster.service; enabled; vendor preset: enabled)
   Active: active (running) since Sat 2021-08-28 11:46:10 CST; 3s ago
 Main PID: 3054 (copt_cluster)
    Tasks: 6 (limit: 4915)
   CGroup: /system.slice/copt_cluster.service
           └─3054 /home/eleven/copt_remote71/cluster/copt_cluster

eleven-ubuntu systemd[1]: Started COPT Cluster Server.
eleven-ubuntu COPTCLS[3054]: LWS: 4.1.4-b2011a00, loglevel 1039
eleven-ubuntu COPTCLS[3054]: NET CLI SRV H1 H2 WS IPv6-absent
eleven-ubuntu COPTCLS[3054]: server started at port 7878
eleven-ubuntu COPTCLS[3054]: LWS: 4.1.4-b2011a00, loglevel 1039
eleven-ubuntu COPTCLS[3054]: NET CLI SRV H1 H2 WS IPv6-absent
eleven-ubuntu COPTCLS[3054]: [NODE] node has been initialized
To stop the cluster service, type the following command
sudo systemctl stop copt_cluster.service
MacOS
Below are detailed steps on how to run the COPT compute cluster server as a system service on the MacOS platform.
For instance, assume that the COPT remote service is installed under "/Applications".
In your terminal, type the following command to enter the root directory of the cluster service.
cd /Applications/copt_remote71/cluster
Modify the template of the service config file copt_cluster.plist in XML format:
<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
<key>Label</key>
<string>shanshu.copt.cluster</string>
<key>Program</key>
<string>/path/to/service/copt_cluster</string>
<key>RunAtLoad</key>
<true/>
<key>KeepAlive</key>
<true/>
</dict>
</plist>
That is, update the template path in the Program tag to the actual path where the cluster service exists.
<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>shanshu.copt.cluster</string>
    <key>Program</key>
    <string>/Applications/copt_remote71/cluster/copt_cluster</string>
    <key>RunAtLoad</key>
    <true/>
    <key>KeepAlive</key>
    <true/>
</dict>
</plist>
Afterwards, copy copt_cluster.plist to the system service folder /Library/LaunchAgents (see below).
sudo cp copt_cluster.plist /Library/LaunchAgents
The following command starts the new cluster service.
sudo launchctl load -w /Library/LaunchAgents/copt_cluster.plist
To verify that the cluster service is actually running, type the following command
sudo launchctl list shanshu.copt.cluster
If you see logs similar to below, COPT compute cluster server is running successfully as a system service.
{
    "LimitLoadToSessionType" = "System";
    "Label" = "shanshu.copt.cluster";
    "OnDemand" = false;
    "LastExitStatus" = 0;
    "PID" = 16406;
    "Program" = "/Applications/copt_remote71/cluster/copt_cluster";
};
To stop the cluster service, type the following command
sudo launchctl unload -w /Library/LaunchAgents/copt_cluster.plist
If the cluster service should be run by a specific user, add a UserName tag to the config file. Below adds a user eleven, who has the privilege to run the cluster service.
<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>shanshu.copt.cluster</string>
    <key>Program</key>
    <string>/Applications/copt_remote71/cluster/copt_cluster</string>
    <key>UserName</key>
    <string>eleven</string>
    <key>RunAtLoad</key>
    <true/>
    <key>KeepAlive</key>
    <true/>
</dict>
</plist>
Then copy the new copt_cluster.plist to the system service folder /Library/LaunchDaemons (see below).
sudo cp copt_cluster.plist /Library/LaunchDaemons
The following command starts the new cluster service.
sudo launchctl load -w /Library/LaunchDaemons/copt_cluster.plist
To stop the cluster service, type the following command
sudo launchctl unload -w /Library/LaunchDaemons/copt_cluster.plist