Small Lab Setup, or how to convert a CCP4 setup into a CCP4 Cloud server¶
Read this article if you are thinking about running CCP4 Cloud on your home or laboratory network. A home network comes readily with your router, which your internet provider normally supplies as part of your internet subscription. All your domestic devices connected to the router, whether via cable or wirelessly, are on the same network and can communicate not only with the Internet but also with each other. In your workplace, laboratory computers are often connected to a LAN (Local Area Network) and can talk to each other in just the same way as if they were connected to your home router.
Having a home or local network makes it possible to set up CCP4 Cloud on a single machine (for example, a powerful Linux workstation) and work with it from any other machine on the network, be it Linux, Mac, Windows or even a tablet or smartphone.
Doing this requires only access to a root or sudo-enabled account on the computer designated to be the CCP4 Cloud server.
Basic idea¶
CCP4 Setup series 8.0 and higher comes with a CCP4 Cloud server running on the machine’s internal network, known as localhost. This server is already known to you as CCP4 Cloud Desktop Mode. By default, a server running on localhost cannot be accessed from other machines, because they are on a different network from localhost.
We can, however, use a technique called reverse proxying to forward requests between the networks. The technique is used to expose a particular URL on one network as another URL which can be accessed on another network. Reverse proxying is commonly provided by HTTP servers such as Apache, Tomcat, NGINX and similar. In this article, we will use Apache, which comes pre-installed on most modern Linux and Mac systems, or can be easily installed from system repositories.
Step-by-step setup procedure¶
Choose a computer that will run the CCP4 Cloud server. A powerful Linux workstation would be a good choice. Although any high-end laptop would do, you probably want a desktop with at least 16GB RAM, 16 cores and a few TB of disk space for user projects (this would be suitable for a small lab with 1-3 users, focusing on wet-lab, rather than computational, aspects).
As this machine will be a server, expect it to run 24/7; CCP4 Cloud will need to be restarted each time the machine is rebooted. The machine must have either a DNS-resolved network name or a fixed IP address. As an example, you have the DNS-resolved URL lab.uni.ac.uk if you can ping that machine like this:
$ ping lab.uni.ac.uk
A fixed IP address can be found in the network settings of your machines. For home routers, it will be something like 192.168.8.142, so pinging such an address will produce something like:
$ ping 192.168.8.142
PING 192.168.8.142 (192.168.8.142): 56 data bytes
64 bytes from 192.168.8.142: icmp_seq=0 ttl=64 time=0.081 ms
64 bytes from 192.168.8.142: icmp_seq=1 ttl=64 time=0.093 ms
64 bytes from 192.168.8.142: icmp_seq=2 ttl=64 time=0.074 ms
64 bytes from 192.168.8.142: icmp_seq=3 ttl=64 time=0.172 ms
64 bytes from 192.168.8.142: icmp_seq=4 ttl=64 time=0.088 ms
Find out the local network address of the machine designated for CCP4 Cloud and verify it by pinging as above. In what follows, we will use the IP address, which can be replaced with a DNS-resolved URL if you have one.
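On Linux, the local network address can also be listed directly from a terminal; a minimal sketch (hostname -I and ip are common but not universal, so use whichever your distribution provides):

```shell
# Print this machine's IPv4 address(es) on the local network;
# falls back to listing all interfaces if 'hostname -I' is unavailable
hostname -I 2>/dev/null || ip -4 addr show
```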
Have CCP4 8.0+ installed on the CCP4 Cloud-designated machine as usual, for example, in /Xtal/ccp4-8.0. It is a good idea to install CCP4 in a protected account, different from the one that will be running CCP4 Cloud.
Create an unprivileged user account for running the CCP4 Cloud server, for example, ccp4cloud. It is equally possible, but not advisable, to run CCP4 Cloud with external access from one of the users’ accounts.
Note
DO NOT run the CCP4 Cloud server in a privileged (root or sudo-enabled) account!
Log in to the CCP4 Cloud account and prepare CCP4 Cloud storage for users, for example, in /home/ccp4cloud:
cd ~
mkdir ccp4cloud-data
mkdir ccp4cloud-data/cofe-users
mkdir ccp4cloud-data/cofe-projects
mkdir ccp4cloud-data/cofe-nc-storage
mkdir ccp4cloud-data/cofe-client-storage
mkdir ccp4cloud-data/cofe-facilities
mkdir ccp4cloud-data/cofe-archive
mkdir ccp4cloud-data/tutorials
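The same directory tree can be created with a short loop; a sketch equivalent to the mkdir sequence above:

```shell
#!/bin/sh
# Create the CCP4 Cloud data tree under the account's home directory
base="$HOME/ccp4cloud-data"
for d in cofe-users cofe-projects cofe-nc-storage cofe-client-storage \
         cofe-facilities cofe-archive tutorials
do
    mkdir -p "$base/$d"
done
```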
In this Section, we describe the most popular CCP4 Cloud setup, comprising one Front-End Server (FE) and one or more Number Crunchers (NCs).
Prerequisites and general notes¶
The setup procedure is described in terms of servers and file systems, rather than machines (hardware hosts). Allocation of servers to hosts, virtual or real, should be done according to your requirements and resources. In the simplest case, all servers can be placed on the same host. A few points should be taken into account:
- FE does not perform computations, but should be efficient for file operations and network communication.
- NC(s) must be able to submit jobs to your computational infrastructure (Queue(s)), but they do not require significant CPU power by themselves. However, if a Queue represents a queue-less shell (cf. below), jobs will run on the NC’s host machine, in which case a more powerful machine will be needed.
- we recommend having at least two NCs in CCP4 Cloud, so that jobs can run even if one NC is temporarily down. Duplicate NCs can be placed on the same host machine and work with the same Queue. Should you wish to install only one NC at the beginning, simply ignore all instructions and configurations related to NC2 in this document.
For setting up CCP4 Cloud, you will require:
Note
in what follows, /path/to/ does not stand for any particular path. Rather, it means any path of your choice, specific to the context, including the host machine. Therefore, assume using ssh login@host or scp instead of cp where necessary.
Computational infrastructure (Queue(s)), such as an SGE or SLURM cluster. If clusters are not available, jobs can run in a Linux shell on the NC’s host machine.
CCP4 Setup version 7.1 or higher (download link). All FE, NC(s) and Queue(s) must have access to the CCP4 Setup. For convenience, we assume that CCP4 will be installed in the following location, visible from all servers and Queue(s):
/path/to/CCP4/ccp4-8.0
Note
having different CCP4 Setups for different servers or hosts is possible but inconvenient from the maintenance point of view.
Note
we always recommend using the latest CCP4 release; upgrading CCP4 in a CCP4 Cloud setup involves only a minor adjustment of configuration files and start scripts.
Directories for FE and NC(s) on the respective hardware hosts:
/path/to/FE
/path/to/NC1
/path/to/NC2
Disk area(s) for keeping user data and projects. They need to be visible only to the FE:
/path/to/disk1
/path/to/disk2
(All optional) disk area(s) for keeping X-ray diffraction images, other data, tutorials and safe for failed jobs:
/path/to/images
/path/to/pdb
/path/to/job_safe
/path/to/data
/path/to/tutorials
Note
the images, pdb (read-only) and job_safe (read/write) areas must be accessible from all FE, NC(s) and Queue(s). Other areas need read-only access from the FE only.
- Apache servers running on each host machine in the setup (excluding Queue(s)).
- A CCP4 Cloud account on each host machine. This account does not need elevated privileges, but it must have access to the disk areas as described above.
Note
FE, NC(s) and computational jobs will run in the CCP4 Cloud account. For security reasons, do not use a personal account for running CCP4 Cloud, and restrict the designated account so that no changes or unauthorised access can be made to sensitive parts of your system and personal disk areas.
Setup procedure¶
All actions, except steps 1 (CCP4 installation) and 9 (Apache configuration), should be done in the designated CCP4 Cloud account on the respective hosts.
Install CCP4 (see details in Appendix A).
Download and unpack the CCP4 Cloud setup tarball (choose a convenient disk area visible from all servers, or repeat the following commands on every host):
mkdir -p /path/to/setup-tmp
cd /path/to/setup-tmp
curl http://ccp4serv6.rc-harwell.ac.uk/jscofe-dev/ccp4cloud-setup.tar.gz > ccp4cloud-setup.tar.gz
tar -xvzf ccp4cloud-setup.tar.gz
Create server directories and copy the prototype contents from the CCP4 Cloud setup tarball (commands to be executed on the respective hosts):
mkdir -p /path/to/FE
cp -r /path/to/setup-tmp/ccp4cloud-setup/FE/* /path/to/FE/
mkdir -p /path/to/NC1
cp -r /path/to/setup-tmp/ccp4cloud-setup/NC/* /path/to/NC1/
mkdir -p /path/to/NC2
cp -r /path/to/setup-tmp/ccp4cloud-setup/NC/* /path/to/NC2/
Create directories for user projects and miscellaneous items:
mkdir -p /path/to/disk1/users
mkdir -p /path/to/disk1/projects
mkdir -p /path/to/disk1/facilities
mkdir -p /path/to/disk2/projects
At least one disk must be allocated for user and project data. Additional disks (disk2 in this example) may be added later.
Make necessary changes in NC configuration files (see details in Appendix B):
vi /path/to/NC1/config.json
vi /path/to/NC2/config.json
Make necessary changes in FE configuration file (see details in Appendix C):
vi /path/to/FE/config.json
Note
this step uses data from NC configuration files.
Make necessary changes in NC(s) start script(s) (see details in Appendix D):
vi /path/to/NC1/start-nc.sh
vi /path/to/NC2/start-nc.sh
Make necessary changes in FE start script (see details in Appendix E):
vi /path/to/FE/start-fe.sh
Modify the Apache configuration on every host (see details in Appendix F).
Start CCP4 Cloud servers:
/path/to/NC1/start-nc.sh
/path/to/NC2/start-nc.sh
/path/to/FE/start-fe.sh
Note
you may receive a confusing message ‘’configuration file not found’’ here. This may indeed be due to a typo in the configuration file paths within the scripts, but also because of malformed configuration file(s). Typically, extra commas between JSON items are introduced, or commas get omitted, during manual editing.
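Such JSON mistakes can be caught before starting the servers by running the configuration files through any JSON parser; a sketch assuming python3 (or ccp4-python) is on the PATH:

```shell
# Validate CCP4 Cloud configuration files; a parse error reports the
# offending line and column, while a valid file prints 'OK'
for f in /path/to/FE/config.json /path/to/NC1/config.json /path/to/NC2/config.json
do
    python3 -m json.tool "$f" > /dev/null && echo "$f: OK"
done
```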
Perform checks and tests (see details in Appendix G).
Create a CCP4 Cloud user with administrative privileges (see details in Appendix H).
Delete the temporary setup directory (optional):
rm -rf /path/to/setup-tmp
The End.
Configuration of CCP4 Cloud Clients¶
It is recommended that users access CCP4 Cloud via the CCP4 Cloud Client, rather than a direct web link in the browser. Using the CCP4 Cloud Client allows the use of interactive graphical software, such as Coot, with the remote CCP4 Cloud instance.
The CCP4 Cloud Client is launched by clicking on the icon with the CCP4 diamond, cloud and Wi-Fi sign, found in CCP4 Setup 8.0 on user machines. By default, it connects to the CCP4 Cloud run by CCP4 at Harwell, and should be re-configured to use your own CCP4 Cloud instance. For that, launch the CCP4 Cloud Client Configurator by clicking on the icon with the CCP4 diamond, cloud and gear sign, and change the URL in the field provided. In the terms used in this document, the URL will be
https://www.mysite.com/ccp4cloud/
Note
- the trailing slash is significant
- this operation needs to be performed only once by every user of your new CCP4 Cloud instance
Maintenance and Updates¶
CCP4 Cloud can be started and restarted as below:
/path/to/NC1/start-nc.sh
/path/to/NC2/start-nc.sh
/path/to/FE/start-fe.sh
You may find it more convenient to write a single script for (re)starting CCP4 Cloud, possibly using ssh access to all hosts involved.
CCP4 software is regularly updated (every 2-4 weeks). The CCP4 Cloud maintainer will be informed by e-mail when a new update is issued; in addition, CCP4 Cloud users may see a notification of new updates when working via the CCP4 Cloud Client. Updates may be applied with the following commands, which can, again, be put into a single convenience script:
/path/to/CCP4/ccp4-8.0/bin/ccp4um -auto
/path/to/NC1/start-nc.sh
/path/to/NC2/start-nc.sh
/path/to/FE/start-fe.sh
Note
CCP4 must be updated in the account used for its installation, and CCP4 Cloud servers must be restarted in the CCP4 Cloud accounts on the respective hosts.
Contact¶
In case of problems with your CCP4 Cloud setup, or questions on further customisation of your CCP4 Cloud instance, do not hesitate to contact CCP4 at ccp4@ccp4.ac.uk.
Appendix A. Installation of CCP4¶
The latest CCP4 can be conveniently installed with the following script:
# Installing the latest CCP4 version series 7.1 for Linux (minimal configuration)
cd /path/to/CCP4
h=http://series-71.fg.oisin.rc-harwell.ac.uk/downloads/packages_others
x=$(curl -s ${h}/md5sums.txt | grep linux64)
x=${x/* }
echo ${x} # on 26 Oct 2020 prints ccp4-7.1.006-linux64.tar.gz
curl -O ${h}/${x}
tar -zxf ${x}
rm ${x}
cd ccp4-7.1
./BINARY.setup
bin/ccp4um -m 99 # tells that ccp4 is up to date; can be used for updating later on
exit
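Since the script already fetches md5sums.txt, the download can also be verified before unpacking; a sketch reusing ${h} and ${x} from the script above (it assumes each line of md5sums.txt starts with the MD5 hash, as the filename extraction above implies):

```shell
# Compare the local tarball's MD5 against the published checksum;
# run after 'curl -O ${h}/${x}' and before 'tar -zxf ${x}'
local_md5=$(md5sum ${x} | cut -d' ' -f1)
remote_md5=$(curl -s ${h}/md5sums.txt | grep linux64 | cut -d' ' -f1)
[ "${local_md5}" = "${remote_md5}" ] && echo "checksum OK"
```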
Appendix B. Adjusting configuration file for Number Cruncher Servers¶
Below is an excerpt of the NC configuration settings that must be revised. Other settings are for fine tuning and development, and can be left as they are in most cases. Please refer to the CCP4 Cloud configuration reference for more details:
{
"NumberCrunchers" : [
{
"name" : "server-name",
"port" : 8086,
"externalURL" : "https://www.mysite.com/ccp4cloud-nc-01",
"capacity" : 16,
"max_nproc" : 4,
"storage" : "/path/to/NC/nc-storage",
"jobs_safe" : {
"path" : "/path/to/job_safe"
},
"exeType" : "SGE",
"exeData" : ["-cwd","-V","-b","y","-q","all.q","-notify"],
"logflow" : {
"log_file" : "/path/to/NC/logs/node_nc"
}
}
],
"Emailer" : {
"type" : "telnet",
"emailFrom" : "name@mysite.com",
"maintainerEmail" : "name@mysite.com",
"host" : "mail.server.at.mysite.com",
"port" : 25,
"headerFrom" : "CCP4 Cloud <name@mysite.com>"
}
}
- name
- NC name, which is used exclusively in report and log pages for NC identification
- port
- port number on localhost. The port should be used exclusively for the given NC
- externalURL
- a DNS-resolved URL for accessing the NC from other hosts. If the NC shares a host with the FE, put an empty string ""
- capacity
- an estimate of the number of jobs that the given NC can run simultaneously
- max_nproc
- the maximum number of cores that a job can use on the given NC
- storage
- path to the disk area used for making jobs’ working directories. The area is self-cleaning; working directories are deleted when a job is finished and sent back to the FE. Do not confuse this area with temporary disk space on individual cluster nodes. In general, this area can be located anywhere; the template distribution tarball assumes that it is within the NC directory. E.g., for the 1st NC, put /path/to/NC1/nc-storage
- jobs_safe
- the disk area for retaining working directories of failed jobs; only a limited number of the latest failed jobs is captured. In general, this area can be located anywhere; however, it is convenient to have it on a file system shared between the FE and NC(s). If such a file system is not available, a directory within the NC’s directory can be used, for example, /path/to/NC1/job_safe. In any case, this directory must exist before starting CCP4 Cloud.
- exeType
- the type of Queue used, which may be one of SHELL, SGE, SLURM or SCRIPT. The template provided assumes Sun Grid Engine (SGE). Change it to SHELL to run jobs on the NC without any queue management (recommended only for a relatively low job throughput).
- exeData
- configuration options for the Queue. The template shows some keywords for Sun Grid Engine (the SGE type). For the SHELL execution type, put an empty string "".
- log_file
- an optional but useful configuration, which is used to split excessively long log files into chunks.
- Emailer
- the NC may send e-mail messages to the maintainer in case of malfunction. This may be configured in several ways. The template shows the e-mailer configuration for the telnet type of e-mailer. Other possible configurations include nodemailer (GMail-based example):

"Emailer" : {
  "type" : "nodemailer",
  "emailFrom" : "CCP4 Cloud <name@gmail.com>",
  "maintainerEmail" : "name@gmail.com",
  "host" : "smtp.gmail.com",
  "port" : 465,
  "secure" : true,
  "auth" : {
    "user" : "name@gmail.com",
    "pass" : "insecure-password",
    "file" : "path-to-file-with-userId-and-password-space-separated"
  }
}
where auth should contain either user and pass or file but not both.
Configuration
"Emailer" : { "type" : "desktop" }
will put the content of e-mails in the NC’s log files and display it in the user’s browser (when appropriate). The configuration
"Emailer" : { "type" : "none" }
will switch the e-mailer off.
Appendix C. Adjusting configuration file for the Front-End Server¶
Below is an excerpt of the FE configuration settings that must be revised. Other settings are for fine tuning and development, and can be left as they are in most cases. Please refer to the CCP4 Cloud configuration reference for more details:
{
"FrontEnd" : {
"description" : {
"id" : "ccp4-cloud-instance-id",
"name" : "CCP4 Cloud at my site",
"icon" : "images_com/setup-harwell.png"
},
"port" : 8085,
"externalURL" : "https://www.mysite.com/ccp4cloud",
"reportURL" : "https://www.mysite.com/ccp4cloud/",
"userDataPath" : "/path/to/disk1/users",
"storage" : "/path/to/disk1/projects",
"projectsPath" : {
"disk1" : { "path" : "/path/to/disk1/projects" },
"disk2" : { "path" : "/path/to/disk2/projects" }
},
"jobs_safe" : {
"path" : "/path/to/job_safe"
},
"facilitiesPath" : "/path/to/disk1/facilities",
"regMode" : "admin",
"cloud_mounts" : {
"xtal-data" : "/path/to/data",
"tutorial-data" : "/path/to/tutorials"
},
"logflow" : {
"log_file" : "/path/to/FE/logs/node_fe"
}
},
"NumberCrunchers" : [
{
"serNo" : 0
},
{
"serNo" : 1
}
],
"Emailer" : {
"type" : "telnet",
"emailFrom" : "name@mysite.com",
"maintainerEmail" : "name@mysite.com",
"host" : "mail.server.at.mysite.com",
"port" : 25,
"headerFrom" : "CCP4 Cloud <name@mysite.com>"
}
}
Note
have all NC(s) configured first, copy-paste their configurations into the NumberCrunchers list (cf. the template provided), and then change only the serNo fields as shown above
- description
- this identifies your CCP4 Cloud instance. id is reserved for possible future use, just put something unique; name is used to decorate some output pages; icon can specify a path to a custom setup icon. The path provided in the template can also be used.
- port
- port number on localhost. The port should be used exclusively for the FE
- externalURL
- a DNS-resolved URL for accessing the FE, used by both users and NC(s)
- reportURL
- in most cases, this should coincide with externalURL (however, note the trailing slash). A different reportURL is used in rare instances when CCP4 Cloud servers are accessed through layers of proxies and redirections
- userDataPath
- path to the directory for user data (in this document referenced as /path/to/disk1/users)
- storage
- path to the directory for miscellaneous items, which must coincide with the projects directory on one of the disks (/path/to/disk1/projects)
- projectsPath
- at least one disk for user projects must be configured:

"projectsPath" : {
  "disk1" : {
    "path" : "/path/to/disk1/projects",
    "type" : "volume",
    "diskReserve" : 10000
  }
}

Here, "disk1" is the logical disk name, which can be chosen arbitrarily. Disk names cannot be changed once user accounts are created.
- jobs_safe
- the disk area for retaining working directories of failed jobs. In the FE configuration, it should be given only if the NC configurations place it on a shared file system
- facilitiesPath
- this is a rudimentary item but still needed. Put this directory on "disk1":

"facilitiesPath" : "/path/to/disk1/facilities"
- regMode
- can be either "admin" or "email". In "admin" mode, new users can be registered only by the CCP4 Cloud administrator. In "email" mode, new users can register by themselves, using their e-mail for verification.
- cloud_mounts
- this optional configuration sets logical names for directories with read-only data for user projects. In the following example:

"cloud_mounts" : {
  "xtal-data" : "/path/to/data"
}

users will see files in /path/to/data as /xtal-data. The configuration may be made user-specific. For example, in the case of

"cloud_mounts" : {
  "xtal-data" : "/path/to/$LOGIN/data"
}

a user with login ccp4cat will see directory /path/to/ccp4cat/data as /xtal-data.
- log_file
- an optional but useful configuration, which is used to split excessively long log files into chunks.
- Emailer
- this configuration is the same as in case of NC.
Appendix D. Adjusting start script for Number Cruncher Servers¶
Replace paths in the following lines of the provided NC start script (/path/to/NC/start-nc.sh):
ccp4_dir=/path/to/CCP4/ccp4-7.1
nc_dir=/path/to/NC
export PDB_DIR=/path/to/pdb
export PDB_SEQDB=/path/to/pdb/derived_data/pdb_seqres.txt
export AFDB_SEQDB=/path/to/afdb/sequences.fasta
PDB_DIR is the local location of a PDB mirror; PDB_SEQDB is the sequence listing for the full set of PDB entries (available for download from the PDB ftp area); and AFDB_SEQDB is the sequence listing for the full set of entries in the EBI AlphaFold Database (also available for download from the EBI-AFDB ftp area).
Appendix E. Adjusting start script for the Front-End Server¶
Replace paths in the following lines of the provided FE start script (/path/to/FE/start-fe.sh):
ccp4_dir=/path/to/CCP4/ccp4-7.1
fe_dir=/path/to/
export PDB_DIR=/path/to/pdb
export PDB_SEQDB=/data1/opt/db/pdb_derived_data/pdb_seqres.txt
export AFDB_SEQDB=/data1/opt/db/afdb/sequences.fasta
Appendix F. Adjusting Apache configuration on host machines¶
CCP4 Cloud’s Front-End Server listens on the specified port of localhost on its host machine. External requests from users must be redirected to that port. In order to do that:
Note the port number and externalURL’s path in the FE configuration. In this document, the suggested values are 8085 and <path>=ccp4cloud, respectively.

Identify the site directory of your Apache setup. On Debian-based systems, this is typically apache_site_dir=/etc/apache2/sites-enabled, and on Redhat-based systems apache_site_dir=/etc/httpd/conf.d
Edit and install the provided template configuration module for Apache:
cp /path/to/setup-tmp/ccp4cloud-setup/apache.conf /path/to/setup-tmp/<path>.conf
# in <path>.conf, replace all occurrences of:
#   '0000' with the selected port number
#   'path' with the chosen URL path ('<path>')
vi /path/to/setup-tmp/<path>.conf
sudo cp /path/to/setup-tmp/<path>.conf ${apache_site_dir}/
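The two manual replacements can also be scripted with sed; a sketch assuming the template uses the literal placeholders '0000' and 'path', as described in the comments above:

```shell
# Generate the Apache configuration module from the template
port=8085
urlpath=ccp4cloud
sed -e "s/0000/${port}/g" -e "s/path/${urlpath}/g" \
    /path/to/setup-tmp/ccp4cloud-setup/apache.conf \
    > /path/to/setup-tmp/${urlpath}.conf
```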
For example, for the port number and FE URL path used in this document, apache_site_dir should receive the file ccp4cloud.conf with the following content:
<Proxy http://127.0.0.1:8085/*>
Allow from all
</Proxy>
ProxyRequests Off
ProxyPass /ccp4cloud http://localhost:8085
SetOutputFilter INFLATE;proxy-html;DEFLATE
ProxyHTMLURLMap http://localhost:8085 /ccp4cloud
LogLevel Info
ProxyPassReverse /ccp4cloud http://localhost:8085
Restart Apache:
sudo apachectl stop
sudo apachectl start
Note
at this point, Apache may refuse to start if modules required for the redirection were not installed. Should this happen, inspect the error message and install the missing modules. Typically, mod_proxy and mod_proxy_http are missing.
Apache configuration for Number Cruncher Server
If an NC is placed on a host machine different from the FE’s host, then a DNS-resolved externalURL should be specified in its configuration, and the corresponding redirection module provided for it, in exactly the same fashion as above.
Appendix G. Tests and checks¶
After starting CCP4 Cloud and Apache servers:
Check the FE and NC log files:
cat /path/to/FE/logs/node_fe.err
cat /path/to/FE/logs/node_fe.log
cat /path/to/NC1/logs/node_nc.err
cat /path/to/NC1/logs/node_nc.log
cat /path/to/NC2/logs/node_nc.err
cat /path/to/NC2/logs/node_nc.log
Normal content of *.err logs may contain

/bin/sh: 1: kill: No such process

anything else indicates a problem. A message similar to
Error: listen EADDRINUSE 127.0.0.1:8085
may mean that the selected localhost port is used by another process running on the host machine. Messages similar to
Error: EBUSY: resource busy or locked
Error: ENOENT: no such file or directory
indicate problems with file systems or with directory specifications in CCP4 Cloud’s configuration files and/or start scripts. Errors are complemented with a code trace, which may help a developer identify the exact problem. Therefore, please include code traces in the respective communications.
If the FE server starts normally, its *.log file should start with messages similar to the following:
/bin/sh: line 0: kill: (2982) - No such process
[2020-10-28T12:17:30.226Z] 14-001 +++ cannot kill process pid=2982
/bin/sh: line 0: kill: (2994) - No such process
[2020-10-28T12:17:30.232Z] 14-001 +++ cannot kill process pid=2994
/bin/sh: line 0: kill: (2995) - No such process
[2020-10-28T12:17:30.237Z] 14-001 +++ cannot kill process pid=2995
/bin/sh: line 0: kill: (2982) - No such process
[2020-10-28T12:17:30.241Z] 14-001 +++ cannot kill process pid=2982
[2020-10-28T12:17:30.250Z] 03-005 ... python version: ccp4-python not found
[2020-10-28T12:17:30.295Z] 03-001 ... FE: url=http://localhost:8081
[2020-10-28T12:17:30.296Z] 03-001 ... FE-Proxy: url=http://localhost:8085
[2020-10-28T12:17:30.296Z] 03-002 ... NC[0]: name=local-nc type=SHELL url=http://localhost:8083
[2020-10-28T12:17:30.296Z] 03-002 ... NC[1]: name=client type=CLIENT url=http://localhost:8084
[2020-10-28T12:17:30.299Z] 03-003 ... configuration written to /var/folders/zf/9_j6y85s4l743fzs2py3gztc0000gn/T/tmp-8053T07Ej39eTIQ7
[2020-10-28T12:17:30.302Z] 23-003 ... server local-nc started, pid=8063
[2020-10-28T12:17:30.303Z] 23-003 ... server client started, pid=8064
[2020-10-28T12:17:30.304Z] 00-001 ... FE: url=http://localhost:8081
[2020-10-28T12:17:30.304Z] 00-002 ... NC[0]: type=SHELL url=http://localhost:8083
[2020-10-28T12:17:30.304Z] 00-002 ... NC[1]: type=CLIENT url=http://localhost:8084
[2020-10-28T12:17:30.304Z] 00-003 ... Emailer: desktop
[2020-10-28T12:17:30.307Z] 00-005 ... front-end started, listening to http://localhost:8081 (non-exclusive)
[2020-10-28T12:17:30.307Z] 22-001 ... setting up proxy for http://localhost:8081 localhost
[2020-10-28T12:17:30.308Z] 22-002 ... setting up proxy for http://localhost:8084 localhost
[2020-10-28T12:17:30.308Z] 22-003 ... front-end proxy started, listening to http://localhost:8085 (exclusive)
[2020-10-28T12:17:30.310Z] 23-005 ... client application "/bin/bash -c 'open -a Opera http://localhost:8085'" started, pid=8065
[2020-10-28T12:17:30.425Z] 23-006 ... client application "/bin/bash -c 'open -a Opera http://localhost:8085'" quit with code 0
and a normal NC log starts with
[2020-10-28T12:17:30.722Z] 01-001 ... NC[0]: type=SHELL url=http://localhost:8083
[2020-10-28T12:17:30.724Z] 01-002 ... Emailer: desktop
[2020-10-28T12:17:30.726Z] 11-031 ... total unassigned job tokens removed: 0
[2020-10-28T12:17:30.727Z] 11-033 ... total abandoned job directories removed: 0
[2020-10-28T12:17:30.743Z] 01-007 ... number cruncher #0 started, listening to http://localhost:8083 (exclusive)
[2020-10-28T12:17:30.743Z] 03-005 ... python version: ccp4-python not found
In these logs, triple dots (...) denote messages that are merely informative, triple stars (***) highlight warnings, and triple pluses (+++) indicate errors (which are not fatal in most cases; fatal errors are usually found in *.err logs). The above logs are copied from a perfectly functional system, despite the confusing errors at the very top and the unspecified python versions.
Poll all localhost ports from their host machines, for example:
curl http://localhost:8085/whoareyou
For the Front-End, the answer should be something like
CCP4 Cloud FE 1.6.016 [26.10.2020] CCP4-7.1.006
and Number Crunchers should reply
CCP4 Cloud NC-0 (local-nc) 1.6.016 [26.10.2020] 0
If the localhost ports reply as above, repeat the same requests using the respective externalURLs from the configuration files, e.g.,

curl https://www.mysite.com/ccp4cloud/whoareyou
If all replies are sensible, open the CCP4 Cloud login page in a browser, using the reportURL from the FE configuration, e.g.,

firefox https://www.mysite.com/ccp4cloud/
Note
trailing slash is significant
- Use login name devel and password devel for the first login. You should land in an empty project list, where you can create a first project and run a few jobs (e.g., Data Import) in it.
Appendix H. Creating 1st user with administrative privileges¶
An admin user account is essential for CCP4 Cloud maintenance. By default, CCP4 Cloud is installed with the devel user, which cannot be removed. Admin users can grant admin privileges to any other user, but the first admin must be created manually:
cp /path/to/disk1/users/devel.user /path/to/disk1/users/admin.user
cp -r /path/to/disk1/projects/devel.projects /path/to/disk1/projects/admin.projects
# in /path/to/disk1/users/admin.user, edit:
# "name" : "Admin"
# "email" : "your@email"
# "login" : "admin"
# "role" : "admin"
vi /path/to/disk1/users/admin.user
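If you prefer to script these edits, and provided your .user files are plain JSON (this is an assumption; inspect devel.user to confirm before relying on it), a sketch using python3:

```shell
# Set the four fields of the copied admin.user file in place
# (assumes .user files are plain JSON -- verify against devel.user first)
python3 - /path/to/disk1/users/admin.user <<'EOF'
import json, sys

fname = sys.argv[1]
with open(fname) as f:
    u = json.load(f)
u["name"]  = "Admin"
u["email"] = "your@email"
u["login"] = "admin"
u["role"]  = "admin"
with open(fname, "w") as f:
    json.dump(u, f, indent=2)
EOF
```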
After this, go to the CCP4 Cloud login page and log in as admin with password devel. After logging in, proceed to My Account and change the password to your liking.
Note
for security purposes, change the password for the devel user promptly.