Supercomputers that are shared among many users typically use a job management system such as Torque to manage the submission and execution of jobs. Jobs are not run directly from the command line: the user creates a job script that specifies both the required compute resources and the job to be run. The script is submitted to the job management system (or queueing system), and if the requested resources (processors, memory, etc.) are available on the system, the job will be run. If not, it is placed in a queue until the resources become available. To provide a fair share of the resources among users, the priority of jobs in the queue may be varied based on how much of the resources each user has already consumed, so jobs may not run in the order in which they were submitted.
Users of the supercomputers therefore need to understand how to create a job submission script, submit it to the queuing system, check a job's progress, and delete a job from the queue.
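As a quick orientation, a minimal session looks like the sketch below; the script name job.pbs and the job ID 12345 are illustrative placeholders:
qsub job.pbs       # submit the job script; prints the assigned job ID, e.g. 12345.server
qstat -u $USER     # check the status of your own jobs
qdel 12345         # delete the job if it is no longer needed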
See the PBS vs. SGE command comparison below.
Job Status | Commands |
---|---|
qstat -q | list all queues |
qstat -a | list all jobs |
qstat -u userid | list jobs for userid |
qstat -r | list running jobs |
qstat -f job_id | list full information about job_id |
qstat -Qf queue | list full information about queue |
qstat -B | list summary status of the job server |
pbsnodes -a | list status of all compute nodes -- similar to qhost in SGE |
Example:
pbsnodes -a | egrep '^[a-z]|state'    # show only the node names and their states
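A few more examples based on the table above; the user name alice and the job ID 12345 are hypothetical:
qstat -q           # list all queues
qstat -u alice     # list jobs belonging to user alice
qstat -f 12345     # full information about job 12345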
Action | TORQUE command | SGE command |
---|---|---|
Job submission | qsub [scriptfile] | qsub [scriptfile] |
Job deletion | qdel [job_id] | qdel [job_id] |
Job status (for user) | qstat -u [username] | qstat [-j job_id] |
Extended job status | qstat -f [job_id] | qstat -f [-j job_id] |
Hold a job temporarily | qhold [job_id] | qhold [job_id] |
Release job hold | qrls [job_id] | qrls [job_id] |
List of usable queues | qstat -Q | qconf -sql |
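For example, listing the usable queues requires different commands on the two systems:
qstat -Q           # TORQUE
qconf -sql         # SGE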
Note: TORQUE (Big Red II, Karst, and Mason) relies on Moab to dispatch jobs; SGE (Rockhopper) does not. For a list of useful Moab commands, see Common Moab scheduler commands.
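For reference, a few commonly used Moab commands; availability depends on the site's installation, and the job ID is hypothetical:
showq              # show all jobs known to the Moab scheduler
checkjob 12345     # detailed status of a single job
showstart 12345    # estimated start time of a queued job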
Commands | Function | Basic Usage | Example |
---|---|---|---|
qsub | submit a pbs job | qsub [script] | $ qsub job.pbs |
qdel | delete pbs batch job | qdel [job_id] | $ qdel 123456 |
qhold | hold pbs batch jobs | qhold [job_id] | $ qhold 123456 |
qrls | release hold on pbs batch jobs | qrls [job_id] | $ qrls 123456 |
A typical job script looks like this:
#!/bin/bash
#PBS -l nodes=1:ppn=16
#PBS -l walltime=48:00:00
#PBS -N jobname
#PBS -o ${PBS_JOBNAME}.o${PBS_JOBID}
#PBS -e ${PBS_JOBNAME}.e${PBS_JOBID}
#PBS -m ae -M [email protected]
cd $PBS_O_WORKDIR
module use /data003/GIF/software/modules/
module load yourmodule
your_commands_go_here
Lines starting with #PBS are directives for the Torque resource manager, requesting resources from the HPC system. Some important options are as follows:
Option | Examples | Description |
---|---|---|
-l | #PBS -l nodes=1:ppn=16 | Number of nodes and processors per node |
-l | #PBS -l walltime=HH:MM:SS | Total time requested for your job |
-q | #PBS -q queue-name | Queue name (note: the job may be auto-redirected depending on your resource request) |
-o | #PBS -o filename | Send STDOUT to a file |
-e | #PBS -e filename | Send STDERR to a file |
-m a|b|e|n | #PBS -m abe | Email notification: a=aborts, b=begins, e=ends |
-m n | #PBS -m n | No notifications |
-M | #PBS -M [email protected] | Email address to send notifications to |
-N | #PBS -N jobname | Provide a useful job name for your script |
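Putting these options together, here is a sketch of a complete submission script; the queue name workq, the email address, and my_program are placeholders to adapt to your site:
#!/bin/bash
#PBS -N myjob                    # descriptive job name
#PBS -q workq                    # queue name (site-specific placeholder)
#PBS -l nodes=1:ppn=4            # one node, four processors
#PBS -l walltime=02:00:00        # two hours of wall-clock time
#PBS -o myjob.out                # file for STDOUT
#PBS -e myjob.err                # file for STDERR
#PBS -m abe                      # mail on abort, begin, and end
#PBS -M [email protected]     # where notifications are sent
cd $PBS_O_WORKDIR                # start in the submission directory
./my_program                     # placeholder for the real command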
To start an interactive session, use the option -I together with all the other options you would normally put in a PBS script file. For example:
qsub -I -N stdin -l nodes=1:ppn=16 -l walltime=1:00:00
qsub | Submits a job script. qsub followed by a filename reads the job script from that file; this is the preferred method. Without a file name it reads the job script from standard input, so a job can also be specified on the command line. |
qstat | Queries the current job queue and lists its contents. Useful options include: -a lists all jobs; -au user lists all jobs for user; -f gives a full listing of queued jobs, including information on why a job is not currently running; -an lists the allocated nodes for running jobs. See the qstat man page for a description of all options. |
qdel jobid | Deletes a running or queued PBS job identified by jobid, the identifier returned by qsub and listed in qstat. If the job will not delete - for example, returning the message "failed to communicate with the MOM" - you can purge the job from the queue with qdel -p jobid. |
pbsnodes -a | Lists all the nodes that PBS can send jobs to, together with the attributes of those nodes. Attributes are arbitrary keys listed in the properties field; they can be used to control which set of nodes a job executes on. The node name (for example tizard01) is also an attribute that can be specified; specifying node names as attributes requests that a job be run on those specific nodes. For example, to run a job on the specific nodes tizard01 and tizard02, specify them in the -l resource list to qsub, e.g. qsub -I -q tizard -l nodes=tizard01:ppn=4+tizard02:ppn=4 |
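As a quick sketch, you can combine pbsnodes with grep to find free nodes; this assumes the usual "state = free" line in the pbsnodes output:
pbsnodes -a | grep -B1 'state = free'    # each free node's name followed by its state line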
See the HPC quick start guide for how to connect to the HPC machines. Once connected, create a directory for the experiment and copy the template job script into it:
mkdir MyFirstExperiment
cd MyFirstExperiment
cp ~/.templates/tizard.sub myexperiment-1.sub
Edit myexperiment-1.sub. Change MyJobName to give the job a descriptive name so you can recognise the output files:
### Job name
#PBS -N MyJobName
Change Your-email-Address to your actual email address:
### email address for user
#PBS -M Your-email-Address
Set the resource requests nodes, ppn, mem, vmem, and walltime:
### Request Resources
#PBS -l nodes=1:ppn=Y
#PBS -l mem=Xgb,vmem=Xgb
#PBS -l walltime=HH:MM:SS
Load any required module(s) and replace MyProgram+Arguments with your executable and its arguments:
#Load module(s) if required
module load application_module
# Run the executable
MyProgram+Arguments
You can compare your edited script against the template with sdiff, which marks lines with | when they differ:
[demouser@tizard1 ~/MyFirstExperiment]$ sdiff myexperiment-1.sub ~/.templates/tizard.sub
#!/bin/csh #!/bin/csh
#PBS -V #PBS -V
### Job name ### Job name
#PBS -N MyFirstExperiment | #PBS -N MyJobName
### Join queuing system output and error files into a single ### Join queuing system output and error files into a single
#PBS -j oe #PBS -j oe
### Send email to user when job ends or aborts ### Send email to user when job ends or aborts
#PBS -m ae #PBS -m ae
### email address for user ### email address for user
#PBS -M [email protected] | #PBS -M Your-email-Address
### Queue name that job is submitted to ### Queue name that job is submitted to
#PBS -q tizard #PBS -q tizard
### Request nodes NB THIS IS REQUIRED ### Request nodes NB THIS IS REQUIRED
#PBS -l nodes=1:ppn=2 | #PBS -l nodes=1:ppn=Y
#PBS -l mem=2gb,vmem=4gb | #PBS -l mem=Xgb,vmem=Xgb
#PBS -l walltime=00:10:00 | #PBS -l walltime=HH:MM:SS
# This job's working directory # This job's working directory
echo Working directory is $PBS_O_WORKDIR echo Working directory is $PBS_O_WORKDIR
cd $PBS_O_WORKDIR cd $PBS_O_WORKDIR
echo Running on host `hostname` echo Running on host `hostname`
echo Time is `date` echo Time is `date`
#Load module(s) if required #Load module(s) if required
module load intel/11.9.293 openmpi/intel64/1.4.3 | module load application_module
# Run the executable # Run the executable
mpirun -np 2 hostname | MyProgram+Arguments
#SLEEP for 60 seconds - so you get time to see the job in the queue
sleep 60
Submit the job with qsub; it prints the assigned job ID:
qsub myexperiment-1.sub
92853.tizard1
Check the queue with qstat -a:
qstat -a
tizard1:
Req'd Req'd Elap
Job ID Username Queue Jobname SessID NDS TSK Memory Time S Time
-------------------- ----------- -------- ---------------- ------ ----- ------ ------ ----- - -----
92853.tizard1 demouser tizard MyFirstExperimen -- 1 2 2gb 00:10 Q --
Once the job has started, let's look again with qstat -an; the -n option adds a line under each running job showing the allocated nodes and cores:
qstat -an
tizard1:
Req'd Req'd Elap
Job ID Username Queue Jobname SessID NDS TSK Memory Time S Time
-------------------- ----------- -------- ---------------- ------ ----- ------ ------ ----- - -----
92853.tizard1 demouser tizard MyFirstExperimen 986 1 2 2gb 00:10 R --
tizard20/1+tizard20/0
For full details about a job, use qstat -f. Below is a snippet of its output:
qstat -f 92853
[demouser@tizard1 ~/MyFirstExperiment]$ qstat -f 92853
Job Id: 92853.tizard1
Job_Name = MyFirstExperiment
Job_Owner = demouser@tizard1
job_state = R
queue = tizard
server = tizard1
...
Run the qstat -f command yourself to see the full output.
To delete the job:
qdel 92853
qdel prints nothing, but the job is gone the next time you look at qstat.
Let's submit four jobs and see them on the queue:
[demouser@tizard1 ~/MyFirstExperiment]$ qsub myexperiment-1.sub
92863.tizard1
[demouser@tizard1 ~/MyFirstExperiment]$ qsub myexperiment-1.sub
92864.tizard1
[demouser@tizard1 ~/MyFirstExperiment]$ qsub myexperiment-1.sub
92865.tizard1
[demouser@tizard1 ~/MyFirstExperiment]$ qsub myexperiment-1.sub
92866.tizard1
Now let's see only our own jobs with qstat -u username -an:
[demouser@tizard1 ~/MyFirstExperiment]$ qstat -u $USER -an
tizard1:
Req'd Req'd Elap
Job ID Username Queue Jobname SessID NDS TSK Memory Time S Time
-------------------- ----------- -------- ---------------- ------ ----- ------ ------ ----- - -----
92863.tizard1 demouser tizard MyFirstExperimen 25064 1 2 2gb 00:10 R --
tizard46/47+tizard46/46
92864.tizard1 demouser tizard MyFirstExperimen 1293 1 2 2gb 00:10 R --
tizard20/1+tizard20/0
92865.tizard1 demouser tizard MyFirstExperimen 1333 1 2 2gb 00:10 R --
tizard20/3+tizard20/2
92866.tizard1 demouser tizard MyFirstExperimen 1338 1 2 2gb 00:10 R --
tizard20/5+tizard20/4
[demouser@tizard1 ~/MyFirstExperiment]$ qdel 92865
[demouser@tizard1 ~/MyFirstExperiment]$ qstat -u $USER -an
tizard1:
Req'd Req'd Elap
Job ID Username Queue Jobname SessID NDS TSK Memory Time S Time
-------------------- ----------- -------- ---------------- ------ ----- ------ ------ ----- - -----
92863.tizard1 demouser tizard MyFirstExperimen 25064 1 2 2gb 00:10 R --
tizard46/47+tizard46/46
92864.tizard1 demouser tizard MyFirstExperimen 1293 1 2 2gb 00:10 R --
tizard20/1+tizard20/0
92866.tizard1 demouser tizard MyFirstExperimen 1338 1 2 2gb 00:10 R --
tizard20/5+tizard20/4
When the job completes, the queuing system writes the job's output to a file in the working directory. For this job the output file looks like this:
Warning: no access to tty (Bad file descriptor).
Thus no job control in this shell.
Working directory is /home/users/demouser/MyFirstExperiment
Running on host tizard20
Time is Tue Oct 30 16:49:32 CST 2012
tizard20
tizard20
.
Resources Requested:
=========================================
mem=2gb
neednodes=1:ppn=2
nodes=1:ppn=2
vmem=4gb
walltime=00:10:00
=========================================
.
Resouces Used:
=========================================
cput=00:00:00
mem=0kb
vmem=0kb
walltime=00:00:14
=========================================
.
Exit Status : [0]
Let's look at each part of this output. The first two lines,
Warning: no access to tty (Bad file descriptor).
Thus no job control in this shell.
come from csh: a batch job has no terminal attached, so there is no access to tty and no job control in this shell. The warning is harmless and can be ignored.
The next three lines,
Working directory is /home/users/demouser/MyFirstExperiment
Running on host tizard20
Time is Tue Oct 30 16:49:32 CST 2012
come from this part of the job script:
# This job's working directory
echo Working directory is $PBS_O_WORKDIR
cd $PBS_O_WORKDIR
echo Running on host `hostname`
echo Time is `date`
The two tizard20 lines are the output of the job itself (mpirun -np 2 hostname):
tizard20
tizard20
Finally, the queuing system epilogue summarises the resources requested and used, and the job's exit status:
Resouces Used:
=========================================
cput=00:00:00
mem=0kb
vmem=0kb
walltime=00:00:14
=========================================
Resources Requested:
=========================================
mem=2gb
neednodes=1:ppn=2
nodes=1:ppn=2
vmem=4gb
walltime=00:10:00
=========================================
Exit Status : [0]
The following are some MPI examples for you to try.
cp -ax /opt/shared/training/MPI-EXAMPLES .
cd MPI-EXAMPLES
more HOW-2-compile-MPI-Programs
module load intel/11.10.319 openmpi/intel64/1.6.2
cd HelloWorld
mpicc -o helloworld helloworld.c
cd ..
cd Matrix
mpicc -o matrix matrix.cc
cd ..
cd HelloWorld
cp ~/.templates/tizard.sub helloworld.sub
cd ..
cd Matrix
cp ~/.templates/tizard.sub matrix.sub
cd ..
For each qsub script, change the resources to four nodes with one cpu on each, 2GB of mem and vmem, and a walltime of 10 minutes:
#PBS -l nodes=4:ppn=1
#PBS -l mem=2gb,vmem=2gb
#PBS -l walltime=00:10:00
After the line
echo Time is `date`
add the following:
echo Using Nodes from PBS_NODEFILE:
cat $PBS_NODEFILE
(We add this so we can see which nodes PBS gave us to run our job.)
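A related pattern, shown here as a sketch rather than part of the original template, is to size mpirun from the node file instead of hard-coding the process count; my_program is a placeholder:
NP=$(wc -l < $PBS_NODEFILE)    # the node file contains one line per allocated core
mpirun -np $NP -machinefile $PBS_NODEFILE ./my_program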
In helloworld.sub, set the job name and the command to run:
### Job name
#PBS -N HelloWorld
# Run the executable
mpirun -np 4 ./helloworld_program
In matrix.sub, do the same:
### Job name
#PBS -N Matrix
# Run the executable
mpirun -np 4 ./matrix_program
Submit both jobs:
cd HelloWorld
qsub helloworld.sub
cd ..
cd Matrix
qsub matrix.sub
cd ..
For reference, the contents of /opt/shared/training/MPI-EXAMPLES are:
-rw-r--r-- 1 demouser demouser 2024 Oct 31 18:17 /opt/shared/training/MPI-EXAMPLES/HOW-2-compile-MPI-Programs
/opt/shared/training/MPI-EXAMPLES/Matrix:
total 112
-rw-r--r-- 1 demouser demouser 746 Oct 31 18:15 matrix-tizard.sub
-rwxrwxr-x 1 demouser demouser 101652 Oct 31 17:53 matrix_program
-rw-r--r-- 1 demouser demouser 4424 Oct 31 17:12 matrix.cc
drwxrwxr-x 4 demouser demouser 69 Oct 31 18:17 ..
drwxrwxr-x 2 demouser demouser 67 Oct 31 18:15 .
/opt/shared/training/MPI-EXAMPLES/HelloWorld:
total 108
-rw-r--r-- 1 demouser demouser 755 Oct 31 18:15 helloworld-tizard.sub
-rwxrwxr-x 1 demouser demouser 99054 Oct 31 17:54 helloworld_program
-rw-r--r-- 1 demouser demouser 951 Oct 31 17:47 helloworld.c
drwxrwxr-x 4 demouser demouser 69 Oct 31 18:17 ..
drwxrwxr-x 2 demouser demouser 78 Oct 31 18:15 .
When the HelloWorld job completes, its output file looks like this:
Warning: no access to tty (Bad file descriptor).
Thus no job control in this shell.
Working directory is /home/users/demouser/MyFirstExperiemnt/MPI-EXAMPLES/HelloWorld
Running on host tizard47
Time is Thu Nov 1 11:53:45 CST 2012
Using Nodes from PBS_NODEFILE:
tizard47
tizard47
tizard47
tizard47
Greetings from process 1 on host[tizard47]!
Greetings from process 2 on host[tizard47]!
Greetings from process 3 on host[tizard47]!
Greetings from the MASTER process [0] on host[tizard47]!
Resources Requested:
=========================================
mem=2gb
neednodes=4:ppn=1
nodes=4:ppn=1
vmem=2gb
walltime=00:10:00
=========================================
Resouces Used:
=========================================
cput=00:00:00
mem=4320kb
vmem=228364kb
walltime=00:00:03
=========================================
Exit Status : [0]
The Matrix job's output file:
Warning: no access to tty (Bad file descriptor).
Thus no job control in this shell.
Working directory is /home/users/demouser/MyFirstExperiemnt/MPI-EXAMPLES/Matrix
Running on host tizard47
Time is Thu Nov 1 11:53:26 CST 2012
Using Nodes from PBS_NODEFILE:
tizard47
tizard47
tizard47
tizard47
Preparing the matrix...
The matrix is:
0 0 0
0 0 0
0 0 0
Sending the matrix to the other nodes...
Waiting for responses...
The result is...
Level 0:
0 1 2
0 0 0
0 0 0
Level 1:
0 0 0
0 2 4
0 0 0
Level 2:
0 0 0
0 0 0
0 3 6
Collapsing results...
0 1 2
0 2 4
0 3 6
Resources Requested:
=========================================
mem=2gb
neednodes=4:ppn=1
nodes=4:ppn=1
vmem=2gb
walltime=00:10:00
=========================================
Resouces Used:
=========================================
cput=00:00:00
mem=0kb
vmem=0kb
walltime=00:00:08
=========================================