Self help
See below our detailed self-help guides to help you get started.
Logging in
Adelaide's Phoenix cluster can be accessed via the SSH protocol. The server name is phoenix.adelaide.edu.au and the port used is the default SSH port, 22.
Depending on the operating system (OS), there are a few ways to connect.
Linux and Mac Open a command line (an SSH client should be installed by default on both OSs) and use the following command to connect to the Phoenix cluster using your University of Adelaide credentials: ssh <userid>@phoenix.adelaide.edu.au
Here, <userid> refers to your University of Adelaide identification number. Once a connection is established, a prompt asking for your UoA password should appear.
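For example, with a hypothetical ID of a1234567, the command would be:
$ ssh a1234567@phoenix.adelaide.edu.au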
Windows There are various programs that can be used to connect to Phoenix.
Windows XP/7/8 The recommended software to use under Windows XP/7/8 is called PuTTY (download and install PuTTY). Once you have your copy of PuTTY, you can follow the PuTTY set-up guide to get it configured. An alternative to PuTTY is also available.
Windows 10 The recommended software is still PuTTY (see previous section).
However, a default Windows 10 installation usually offers an alternative. A pre-release version of OpenSSH for Windows exists. If you wish to install the package, open PowerShell with Administrator rights.
1. Allow running scripts with Set-ExecutionPolicy RemoteSigned
2. Install Chocolatey (a package manager): iwr https://chocolatey.org/install.ps1 -UseBasicParsing | iex
3. Install OpenSSH via choco install openssh
In a similar way to Linux and Mac, one can now connect to Phoenix using PowerShell: ssh.exe <userid>@phoenix.adelaide.edu.au
Cygwin If you have Cygwin installed, you can open a Cygwin terminal and follow the Linux and Mac section above. If you have never heard of Cygwin, you can safely ignore this note.
Transferring files
Before you run your application, you will need to upload your data and/or program code from your computer to your directory on the Phoenix system. Fortunately, this step is very easy and there are a number of ways to do this.
Using a file transfer client For Windows users, WinSCP is an ideal place to start; download the installer, and follow the WinSCP set-up guide if you need help configuring it.
For Mac users, Cyberduck is an option. Use SFTP to establish the connection.
Fetch is another alternative for Mac.
If you need help to set up Cyberduck or Fetch, step-by-step guides are available for each.
Using terminal commands Using the command line interface may seem like a challenge to start with, but becomes easy with practice. The two common commands for transferring files using the terminal are scp and sftp.
scp uses a similar syntax to the local cp command by specifying the file or directory to copy and the destination:
scp -r myfile.txt aXXXXXXX@phoenix.adelaide.edu.au:~/fastdir/MyPhoenixFolder
(The source can be a file or a directory; the -r flag is needed when copying directories.) Remote file locations are specified by prepending the file path with the user and hostname, as in the example above. For further details, you may wish to refer to the scp manual page (man scp).
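Transfers in the other direction use the same syntax with the source and destination swapped; a quick sketch (the file name is illustrative):
scp aXXXXXXX@phoenix.adelaide.edu.au:~/fastdir/MyPhoenixFolder/results.txt .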
sftp (secure ftp) provides a similar interface to the ftp command. First, navigate to the local directory where your files reside. You can then initiate an sftp session to Phoenix with the following command:
sftp aXXXXXXX@phoenix.adelaide.edu.au
Once the sftp session has started, you can use put and get to upload and download files to/from the remote computer. There are other commands available in the sftp protocol; to learn more, see the sftp manual page (man sftp).
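A brief session might look like this (file names are illustrative):
sftp> put mydata.csv
sftp> get results.txt
sftp> exit
Here put uploads mydata.csv to the current remote directory, and get downloads results.txt to the current local directory.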
If you need to transfer files from Tizard to Phoenix, you need to know your Tizard credentials, the path to the folders on Tizard to copy from, and the path on Phoenix to copy to.
scp -r USER@TIZARD:/path/to/files $FASTDIR/somepath
Interacting with the Phoenix System: Mastering Linux Basics
Like most HPC systems, Phoenix uses Linux as its operating system. Interacting with the system is easier if you are familiar with the basics of the Linux command line interface. As a starting point to learn about Linux basics and discover the most useful commands for using Phoenix, you can refer to our guide.
Loading software packages
Software packages on Phoenix are organised into modules; documentation for some of the currently available packages is available online.
In most cases, your required software is not loaded by default on the Phoenix system. After logging in, you will need to load your required software before you can perform any calculations. Phoenix uses the module system to manage the software environment. To see a list of the available software, use the module avail command as in the example below. If you cannot find your required software in this list, there is a good chance we can make it available to you; contact us via email to make a software installation request.
$ module avail
----------------------------------------------------- /usr/share/Modules/modulefiles ------------------------------------------------------
dot          module-git   module-info  modules      null         use.own
------------------------------------------------------------ /etc/modulefiles -------------------------------------------------------------
cuda/6.0   cuda/7.0    gnu-parallel/20150322   intelmpi/5.0.3.048   openmpi/gnu/1.8.4     subread/1.4.6-p2
cuda/6.5   gcc/4.8.4   intel/13.1.3.174        matlab/2014a         openmpi/intel/1.8.1
To load a software package, use the module load command:
$ module load cuda/6.5
To unload a software package, use the module unload command:
$ module unload gcc/4.8.4
To swap between software packages, use the module swap command:
$ module swap cuda/6.5 cuda/7.0
For advanced users:
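A few further module commands are often useful; these are standard environment-modules commands, shown here as a general sketch rather than Phoenix-specific guidance:
$ module list # show currently loaded modules
$ module show cuda/7.0 # display what a module sets in your environment
$ module purge # unload all loaded modules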
Preparing a job script
There are two components required to submit a job for analysis in the Phoenix system.
- The software you wish to run (and any associated data files)
- A job script that requests system resources
To ensure a fair and optimised system for all Phoenix users, we use a resource management tool, SLURM, for job scheduling. In order to submit a job to Phoenix, you must create a SLURM job script and save it, along with your program files, in your directory on Phoenix. Below are sample job scripts called <my_job.sh> for each of the two common job types, namely simple jobs and parallel (MPI) jobs. For each job type, a downloadable version is provided for you to use. Please configure your job script according to whichever of the following best suits your requirements.
Creating a simple job script A job script is a text file that specifies the resources your code needs to run. The job scheduler then uses these requests to determine when to run your job. Let's have a look at a simple job script example for some sequential code (that runs on only 1 CPU core):
#!/bin/bash
#SBATCH -p batch # partition (this is the queue your job will be added to)
#SBATCH -N 1 # number of nodes (use a single node)
#SBATCH -n 1 # number of cores (sequential job uses 1 core)
#SBATCH --time=01:00:00 # time allocation, which has the format (D-HH:MM:SS), here set to 1 hour
#SBATCH --mem=4GB # memory pool for all cores (here set to 4 GB)

# Executing script (example here is a sequential script)
./my_sequential_program # your software with any arguments

We'll begin by explaining the purpose of each line of the script example:
The header line #!/bin/bash simply tells the scheduler which shell language is going to be used to interpret the script. The default shell on Phoenix is bash.
The next set of lines all begin with the prefix #SBATCH. This prefix is used to indicate that we are specifying a resource request for the scheduler. The scheduler divides the cluster workload into partitions, or work queues. Different partitions are used for different types of compute job. Each compute job must select a partition with the -p option. To learn more about the different partitions available on Phoenix, see <reference>.
The Phoenix cluster is a collection of compute nodes, where each node has multiple CPU cores. Each job must specify the CPU resources required, using the -N option to request the number of nodes and the -n option to request the total number of cores. See <reference>
Each compute job needs to specify an estimate of the amount of time it needs to complete. This is commonly referred to as the walltime, specified with the --time option. The estimated walltime needs to be larger than the actual time needed by the job; otherwise the scheduler will terminate the job for exceeding its requested time.
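As an illustration, --time accepts a days-hours:minutes:seconds format, so the following are both valid requests (the values are examples only):
#SBATCH --time=00:30:00 # 30 minutes
#SBATCH --time=1-12:00:00 # 1 day and 12 hours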
Dedicated memory (RAM) is allocated for each job when it runs, and the amount of memory required per node must be specified with the --mem option.
A simple job is one in which the computational process is sequential and is carried out by a single node. (Note: if your program does not use MPI or MPI-enabled libraries, your job belongs to this category.) Depending on your computational needs, you may need to use either CPU or GPU-accelerated compute nodes; if you need help choosing between them, please contact the team to discuss.

Below is a sample job script for simple CPU jobs. You will need to create an .sh file in your directory on Phoenix, and you can copy and paste the script below into that file. Please remember that you must then configure the job script to your needs. The most common fields that need modification are the number of nodes and cores you wish to use, the duration for which you wish to run the job, and the email address to which notifications should be sent (i.e. your email address).
#!/bin/bash
#SBATCH -p cpu # partition (this is the queue your job will be added to)
#SBATCH -N 1 # number of nodes (sequential jobs use a single node)
#SBATCH -n 2 # number of cores (here 2 are used)
#SBATCH --time=01:00:00 # time allocation, which has the format (D-HH:MM:SS), here set to 1 hour
#SBATCH --mem=4GB # memory pool for all cores (here set to 4 GB)

# Notification configuration
#SBATCH --mail-type=END # send an email when the job completes
#SBATCH --mail-type=FAIL # send an email if the job fails to complete
#SBATCH --mail-user=firstname.lastname@adelaide.edu.au # email to which notifications will be sent

# Executing script (a bash script is used here for demonstration purposes; select the appropriate command for your program)
bash ./my_program.sh

For simple GPU jobs, the following example job script can be copied and pasted into a new .sh file in your Phoenix directory:
#!/bin/bash
# Configure the resources required
#SBATCH -p gpu # partition (this is the queue your job will be added to)
#SBATCH -n 8 # number of cores (here 8 are used; up to 32 cores are permitted)
#SBATCH --time=01:00:00 # time allocation, which has the format (D-HH:MM:SS), here set to 1 hour
#SBATCH --gres=gpu:4 # generic resource required (here requires 4 GPUs)
#SBATCH --mem=16GB # memory pool for all cores (here set to 16 GB)

# Configure notifications
#SBATCH --mail-type=END # send an email when the job completes
#SBATCH --mail-type=FAIL # send an email if the job fails to complete
#SBATCH --mail-user=my_email@adelaide.edu.au # email to which notifications will be sent

# Execute your script (a bash script is used here for demonstration purposes; select the appropriate command for your program)
bash ./my_program.sh

Creating an MPI Job Script A parallel (MPI) job is one that harnesses the computational power of multiple nodes, which are networked together and perform related calculations or processes simultaneously. This can allow highly complex computational processes to be completed in much shorter time frames. To enable parallel computing, the program you use will need to be MPI-enabled or incorporate an MPI-enabled library. If you do need to run a parallel job on CPUs, the following job script is an example:
#!/bin/bash
#SBATCH -p cpu # partition (this is the queue your job will be added to)
#SBATCH -N 2 # number of nodes (here 2 are used)
#SBATCH -n 64 # number of cores (here 64 are requested)
#SBATCH --time=01:00:00 # time allocation, which has the format (D-HH:MM:SS), here set to 1 hour
#SBATCH --mem=32GB # memory pool for all cores (here set to 32 GB)

mpirun -np 64 ./my_program
For jobs that use GPU accelerators, <my_job.sh> will look something like the example below:
#!/bin/bash
#SBATCH -p gpu # partition (this is the queue your job will be added to)
#SBATCH -n 2 # number of cores (here 2 are requested)
#SBATCH --time=01:00:00 # time allocation, which has the format (D-HH:MM:SS), here set to 1 hour
#SBATCH --gres=gpu:1 # generic resource required (here requires 1 GPU)
#SBATCH --mem=8GB # memory pool for all cores (here set to 8 GB)

mpirun -np 2 ./my_program # the -np value should match the number of cores requested with -n
Other commonly used options, which can be added as #SBATCH lines, include:
#SBATCH --mail-type=END # send an email when the job completes
#SBATCH --mail-type=FAIL # send an email if the job fails to complete
#SBATCH --mail-user=my_email@adelaide.edu.au # email to which notifications will be sent

SLURM is very powerful and allows detailed tailoring to fit your specific needs. If you want to explore all available SLURM parameters, simply type the following at the command line:
man sbatch
Debiting compute time: Multiple associations The -A association argument specifies the association from which you wish the compute time to be debited. Note that you normally only need to specify an association if you have access to multiple allocations; otherwise the scheduler will debit the resources used from your default association.
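For example, to debit a specific association at submission time (the association name here is a placeholder):
$ sbatch -A myproject my_job.sh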
Selecting a job queue
Standard compute jobs should select the batch queue. For further details and examples, please refer to the job requirements documentation.
Submitting a job To submit a job script to the queue, use the sbatch command:
$ sbatch my_job.sh
If your job script requires additional variables you can define these with the --export option to sbatch:
$ sbatch --export=ALL,var1=val1,var2=val2 my_job.sh
Be sure to include the ALL option to --export to ensure your job runs correctly.
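Inside the job script, variables passed this way are then available as ordinary environment variables; a minimal sketch, using the var1 name from the --export example above:
echo "var1 is $var1" # prints the value supplied via --export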
Monitoring the queue You can view your job's progress through the queue with the squeue command:
$ squeue
JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
2916 batch my_job. a1234567 PD 0:00 1 (Resources)
2915 batch my_job. a1234567 R 01:03 2 phoenixcs[1-2]
2914 batch my_job. a1234567 R 00:21 1 phoenixg1
The fifth column gives the status of the job: R - running, PD - pending, F - failed, ST - stopped, TO - timeout.

Running squeue without arguments will list all currently running jobs. However, if the list displayed is too long for you to easily locate your job, you can limit the search to your own jobs with the -u argument:
squeue -u aXXXXXXX
where aXXXXXXX is your UofA ID number.
Cancelling a job To cancel a job you own, use the scancel command followed by the SLURM job ID:
$ scancel 2914
To cancel all jobs you own in a particular queue, use the -p argument:
$ scancel -p batch
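To cancel all of your jobs at once, regardless of queue, scancel also accepts a user ID with the -u argument:
$ scancel -u aXXXXXXX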
Example pool
Sometimes it is hard to start with an empty script file. We have prepared a pool of examples to help users start using Phoenix. This pool contains many fully functional scripts for the most commonly used software on Phoenix.
To access the pool, just navigate into /apps/examples and pick any example of interest (see the copying sketch after the list below). At the moment we provide examples of:
- Abaqus
- Ansys
- Array jobs
- CST Studio
- Matlab
- OpenFOAM
- Picrust
- R
- Theano
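To experiment with one of these, it is best to copy it into your own /fast directory first so the originals stay untouched; a sketch, assuming the example folders are named as listed above:
$ cp -r /apps/examples/Matlab $FASTDIR/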
Data management
There are several file system structures attached to the Phoenix High Performance Computing service. The most important ones are:
- /home
- /fast
- /data
- /uofaresstor
Each system plays a critical role in the Phoenix HPC workflow. Using them correctly will significantly improve job performance.
/home When you first login to Phoenix, your user shell is located in your personal home directory /home/<userid> ($HOME). The /home file system is backed up, and is solely intended for the files that define your user environment and irrecoverable data such as source code. No user should have more than 5 GB of data in their $HOME directory.
- $HOME directory is located at /home/<userid>
- Don't use $HOME for launching jobs or active job data!
- You can always get back to your home directory by typing cd
The /home file system hardware is not designed to support the intensive file access generated by the many hundreds of jobs that run on the Phoenix compute cores; you should use the /fast file system for job input/output instead.
/fast Your personal /fast directory $FASTDIR is located at /fast/users/<userid>
A symbolic link to that directory can be found in your $HOME directory, i.e. ~/fastdir.
Hence, changing into your personal /fast directory can be achieved through:
- cd ~/fastdir
- cd /fast/users/<userid>
- cd $FASTDIR
Migrating from Torque to Slurm
Torque is the scheduler used by many HPC facilities, for example Tizard. Phoenix uses Slurm as its scheduler. This migration guide helps you convert a Torque job to a Slurm job; a complete guide to migrating from Torque to Slurm is also available.
Torque template script:
#!/bin/bash
#PBS -N jobname # JOB NAME
#PBS -l nodes=1:ppn=1 # Nodes and Cores
#PBS -l walltime=10:0:0 # Time
#PBS -l mem=4000mb # Memory
#PBS -o job.out # Name of job output file

<YOUR APPLICATION EXECUTION HERE>
Slurm equivalence:
#!/bin/bash
#SBATCH -J jobname # JOB NAME
#SBATCH --nodes=1 # Nodes
#SBATCH --ntasks=1 # Cores
#SBATCH --time=10:0:0 # Time
#SBATCH --mem=4000mb # Memory
#SBATCH --output=job.out # Name of job output file

<YOUR APPLICATION EXECUTION HERE>
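Beyond the script directives, the everyday queue commands also have direct equivalents; these mappings are standard between Torque and Slurm:
qsub my_job.sh -> sbatch my_job.sh # submit a job
qstat -> squeue # inspect the queue
qdel <jobid> -> scancel <jobid> # cancel a job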