Astro Documentation

2021-03-18

1.  Hardware Specification

1.1 Login node

  • 1 login node (mgmt)
    • 2x Intel Xeon Silver 4210R CPUs (2.4 GHz, 10 cores each)
    • 128 GB memory
    • Host address: astro.tdli.sjtu.edu.cn

1.2 Computing nodes

  • 16 computing nodes: n30 - n45
    • each node has 2x Intel Xeon Gold 6240R CPUs (2.4 GHz, 24 cores), 48 cores in total
    • each node has 192 GB memory (4 GB per core)
  • 1 fat node with 1 GPU card
    • 2x Intel Xeon Gold 6240R CPUs (2.4 GHz, 24 cores), 48 cores in total
    • 1 TB memory
    • 1x NVIDIA Tesla V100S PCIe 32 GB card

1.3 Storage

  • 1 storage node
    • 2x Intel Xeon Silver 4210R CPUs (2.4 GHz, 10 cores each)
    • 128 GB memory
    • 36x 16 TB HDDs, usable capacity 465 TB (RAID 6)
    • I/O speed ~900 MB/s

1.4 Network

  • Computation and storage network: Mellanox InfiniBand HDR, configured at 100 Gb/s.
  • Management network: 1000 Mb/s connection.
  • Internet: 10 Gb/s connection to the internet through the campus network.

2.  Available Software

Astro provides a module environment to manage frequently used software and libraries.

Use module avail to list the available software and libraries.

 

user@Astro: ~$ module avail

------------------------ /opt/app/modules ---------------------------

miniconda3  parallel_studio_xe/2015  parallel_studio_xe/2018  parallel_studio_xe/2020

------------------- /usr/share/modules/modulefiles ------------------

dot  module-git  module-info  modules  null  use.own
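
To use one of these packages, load the corresponding module into your shell environment. A minimal example is shown below, using module names taken from the listing above; module list and module unload manage what is currently loaded.

user@Astro: ~$ module load miniconda3                # make miniconda3 available in this shell
user@Astro: ~$ module load parallel_studio_xe/2020   # load the Intel compilers and MPI
user@Astro: ~$ module list                           # show the currently loaded modules
user@Astro: ~$ module unload miniconda3              # remove a module from the environment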

 

3.  Accounts

3.1 Apply for an account

Please open the link and fill out the registration form: https://wj.sjtu.edu.cn/q/qHbToMaY

3.2 Key

Users log in to the cluster via SSH. For security reasons, password login is disabled, so users must provide their SSH public key to the administrator when applying for an account (see the example below).
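
For example, a key pair can be generated on the local machine as sketched below (the file name id_ed25519_astro and the user name username are placeholders). The content of the .pub file is what should be sent to the administrator; the private key never leaves the local machine.

### On the local machine, not on the cluster
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_astro     # generate a key pair
cat ~/.ssh/id_ed25519_astro.pub                      # this public key goes to the administrator

### After the account is created, log in with the matching private key
ssh -i ~/.ssh/id_ed25519_astro username@astro.tdli.sjtu.edu.cn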

4.  Slurm

4.1 Brief of Slurm

Slurm is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small Linux clusters.

4.2 Frequently used commands of Slurm

sacct Displays accounting data for all jobs and job steps in the Slurm job accounting log or Slurm database.
salloc Obtain a Slurm job allocation (a set of nodes), execute a command, and then release the allocation when the command is finished.
sbatch Submit a batch script to Slurm.
scancel Used to signal jobs or job steps that are under the control of Slurm.
scontrol View or modify Slurm configuration and state.
sinfo View information about Slurm nodes and partitions.
squeue View information about jobs located in the Slurm scheduling queue.
srun Run parallel jobs.

Each command also accepts --help, which prints a brief summary of its options. Note that the options are case sensitive. For more information, please refer to the official Slurm documentation.
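
As a quick illustration of srun and salloc (the partition, node, and task counts below are only placeholders):

### Run a command directly on a compute node with 4 tasks
user@Astro: ~$ srun --partition=normal --nodes=1 --ntasks=4 hostname

### Allocate one node interactively, work inside the allocation, then release it
user@Astro: ~$ salloc --partition=normal --nodes=1 --time=1:00:00
user@Astro: ~$ srun hostname    # runs inside the allocation
user@Astro: ~$ exit             # releases the allocation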

4.3 sinfo: view partition and node information

root@mgmt:/home/testy# sinfo

PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST

normal*      up   infinite     16   idle n[30-45]

high         up   infinite     16   idle n[30-45]

super        up   infinite     16   idle n[30-45]

gpu          up   infinite      1   idle gpu1

 

Field descriptions:
PARTITION Name of a partition. Note that the suffix "*" identifies the default partition.
AVAIL Partition state. Can be either up, down, drain, or inact (for INACTIVE).
TIMELIMIT Maximum time limit for any user job in days-hours:minutes:seconds. infinite is used to identify partitions without a job time limit.
NODES Count of nodes with this particular configuration.
STATE State of the nodes. Possible states include: allocated, completing, down, drained, draining, fail, failing, future, idle, maint, mixed, perfctrs, planned, power_down, power_up, reserved, and unknown. Their abbreviated forms are: alloc, comp, down, drain, drng, fail, failg, futr, idle, maint, mix, npc, plnd, pow_dn, pow_up, resv, and unk respectively. NOTE: The suffix "*" identifies nodes that are presently not responding.
NODELIST Names of nodes associated with this particular configuration.
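
To restrict the output to a single partition or to get a per-node view, the standard sinfo options can be used, for example:

user@Astro: ~$ sinfo -p gpu    # show only the gpu partition
user@Astro: ~$ sinfo -N -l     # one line per node, long format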

4.4 squeue: reports the state of jobs or job steps

root@mgmt:/home/testy# squeue

JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)

   64    normal interact     root  R       0:02      1 n30

 

Field descriptions:
JOBID A unique value for each element of job arrays and each component of heterogeneous jobs.
PARTITION Partition of the job or job step.
NAME The name of the job. The default is the name of the submitted script.
USER The username that submitted the job.
ST Job state. PD: Pending, R: Running, S: Suspended, CG: Completing
TIME The time that the job has been running.
NODES The number of nodes occupied by the job.
NODELIST(REASON) A list of nodes occupied by the job. If the job is still queued, the reason for queuing is given instead.
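
squeue can also be limited to your own jobs or to a single job, for example (JOBID 64 is taken from the output above):

user@Astro: ~$ squeue -u $USER    # show only your own jobs
user@Astro: ~$ squeue -j 64       # show a specific job by JOBID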

4.5 scancel: cancel a pending or running job or job step

Cancel the job by entering the scancel JOBID command.
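
For example, using the JOBID reported by squeue above (and, if needed, cancelling all of your own jobs at once):

user@Astro: ~$ scancel 64          # cancel the job with JOBID 64
user@Astro: ~$ scancel -u $USER    # cancel all jobs belonging to the current user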

4.6 Submit jobs with Slurm

The command sbatch submits a batch script to Slurm. 

Sample script (note that the #!/bin/bash line must be the first line of the batch script):

#!/bin/bash

### The job will be submitted to the partition named "normal"
#SBATCH --partition=normal

### Sets the name of the job
#SBATCH --job-name=JOBNAME

### Specifies that the job requires two nodes
#SBATCH --nodes=2

### The number of processes running per node is 40
#SBATCH --ntasks-per-node=40

### The maximum running time of the job, after which its resources are reclaimed by Slurm
#SBATCH --time=2:00:00

### The execution command of the program
mpirun hostname

 

After a job is submitted through sbatch, the system returns the ID of the job. The squeue command can then be used to check the status of the job. When the job is completed, its output is written to the file slurm-JOBID.out by default.
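
A typical submission cycle looks like the following sketch, assuming the script above is saved as job.sh (the JOBID value 64 is illustrative):

user@Astro: ~$ sbatch job.sh
Submitted batch job 64
user@Astro: ~$ squeue -j 64        # check the status of the job
user@Astro: ~$ cat slurm-64.out    # inspect the output after the job has finished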

If the job needs a GPU, the option --gres=gpu:<number of cards> needs to be added to the script. For example, --gres=gpu:1 means that the job needs 1 GPU card.

Sample script:

#!/bin/bash

### Sets the name of the job
#SBATCH --job-name=gpu-example

### The job will be submitted to the partition named "gpu"
#SBATCH --partition=gpu

### Specifies that the job requires one node
#SBATCH --nodes=1

### 16 CPU cores are needed to run the job
#SBATCH --ntasks=16

### 1 GPU card is needed to run the job
#SBATCH --gres=gpu:1

### The execution command of the program
python test.py

 

Common options for sbatch

--partition=<name> Request a specific partition for the resource allocation.
--job-name=<name> Specify a name for the job allocation.
--nodes=<n> Request that a minimum of <n> nodes be allocated to this job.
--ntasks-per-node=<ntasks> Request that <ntasks> tasks be invoked on each node.
--ntasks=<n> Set the maximum number of tasks.
--cpus-per-task=<ncpus> Set the number of CPU cores required for each task. If not specified, each task is assigned one CPU core by default. This option is generally needed for multi-threaded programs such as OpenMP, but not for ordinary MPI programs (see the example below).
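
As a sketch of how --cpus-per-task combines with the other options, the following hypothetical script runs an OpenMP program with one task and 16 threads on a single node (./omp_program is a placeholder for the actual executable):

#!/bin/bash
#SBATCH --partition=normal
#SBATCH --job-name=openmp-example
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=16
#SBATCH --time=1:00:00

### Use as many OpenMP threads as CPU cores allocated to the task
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

### Placeholder for the actual OpenMP executable
./omp_program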