Note: mpirun, mpiexec, and orterun are all synonyms. Using any of the names will produce the same behavior. http://www.open-mpi.org/doc/v1.4/man1/mpiexec.1.php
The mpirun command controls several aspects of program execution in Open MPI. mpirun uses the Open Run-Time Environment (ORTE) to launch jobs. If you are running under distributed resource manager software, such as Sun Grid Engine or PBS, ORTE launches the resource manager for you.
If you are using rsh/ssh instead of a resource manager, you must use a hostfile or host list to identify the hosts on which the program will be run. When you issue the mpirun command, you specify the name of the hostfile or host list on the command line; otherwise, mpirun executes all the copies of the program on the local host, in round-robin sequence by CPU slot. For more information about hostfiles and their syntax, see Specifying Hosts By Using a Hostfile.
Both MPI programs and non-MPI programs can use mpirun to launch the user processes.
Some example programs are provided in the /opt/SUNWhpc/HPC8.1/examples directory for you to try to compile/run as sanity tests.
The following example shows the general single-process syntax for mpirun:
% mpirun [options] [program-name]
For a simple SPMD (Single Program, Multiple Data) job, the typical syntax is:
% mpirun -np x program-name
For jobs involving multiple programs, the command syntax appears similar to the following:
% mpirun [options] [program-name] : [options2] [program-name2] ...
For an MPMD (Multiple Program, Multiple Data) parallel application, the syntax follows this form:
% mpirun -np x program1 : -np y program2
This command starts x number of copies of the program program1, and then starts y copies of the program program2.
The options control the behavior of the mpirun command. They might or might not be followed by arguments.
Caution - If you do not specify an argument for an option that expects to be followed by an argument (for example, the --app <filename> option), that option will read the next option on the command line as its argument. This might result in inconsistent behavior.
The reference table at the end of this section lists the mpirun options in alphabetical order, with a brief description of each.
Use the -x args option (where args is the environment variable(s) you want to use) to specify any environment variable you want to pass during runtime. The -x option exports the variable specified in args and sets the value for args from the current environment. For example:
% mpirun -x LD_LIBRARY_PATH=/opt/SUNWhpc/HPC8.1/lib -np 4 a.out
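You can repeat -x to export more than one variable; a variable given without a value takes its value from the current environment. For instance (the second variable name here is only illustrative):
% mpirun -x LD_LIBRARY_PATH=/opt/SUNWhpc/HPC8.1/lib -x DISPLAY -np 4 a.out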
The mpirun command uses MCA (Modular Component Architecture) parameters to pass environment variables. To specify an MCA parameter, use the -mca option with the mpirun command, followed by the name of the parameter you want to pass as an environment variable and the value you want to set. For example:
% mpirun --mca mpi_show_handle_leaks 1 -np 4 a.out
This sets the MCA parameter mpi_show_handle_leaks to the value of 1 before running the program named a.out with four processes. In general, the format used on the command line is --mca parameter_name value.
Note - There are multiple ways to specify the values of MCA parameters. This chapter discusses how to use them from the command line with the mpirun command. MCA parameters are discussed in more detail in Chapter 7.
Open MPI supports the canceling of receive operations. However, the canceling of sends is not supported; therefore, a send will never be successfully canceled.
For more information about canceling send and receive operations, see the MPI_Cancel(3) man page.
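The following minimal C sketch (an illustration, not one of the product's example programs) posts a receive that is never matched, cancels it, and checks whether the cancellation succeeded. Compile it with mpicc and launch it with mpirun -np 1.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int buf = 0, cancelled = 0;
    MPI_Request req;
    MPI_Status status;

    MPI_Init(&argc, &argv);

    /* Post a receive that no process will ever satisfy. */
    MPI_Irecv(&buf, 1, MPI_INT, MPI_ANY_SOURCE, 99, MPI_COMM_WORLD, &req);

    /* Receives may be cancelled; sends may not (see the text above). */
    MPI_Cancel(&req);
    MPI_Wait(&req, &status);
    MPI_Test_cancelled(&status, &cancelled);

    printf("receive cancelled: %s\n", cancelled ? "yes" : "no");

    MPI_Finalize();
    return 0;
}
```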
The examples in this section show how to use the mpirun command options to specify how and where the processes and programs run.
The following table shows the process control options for the mpirun command. The procedures that follow the table explain how these options are used and show the syntax for each.
| Task | mpirun option |
|---|---|
| To run a program with default settings | (no need to specify an option) |
| To run multiple parallel processes | -c or -np |
| To display command help | -h or --help |
| To change the working directory | -wdir or --wdir |
| To specify the list of hosts on which to invoke processes (also known as the rankmap string) | -host, --host, or -H |
| To specify the list of hosts on which to execute the program (also known as the rankmap file) | -hostfile <filename> or --hostfile <filename> |
| To start up in debugging mode | -d, --debug, or -debugger/--debugger <sequence> |
| To specify verbose output | -v |
| To specify multiple executables | -np 2 exe1 : -np 6 exe2 |
To run the program with default settings, enter the command and program name, followed by any required arguments to the program:
% mpirun program-name
By default, an MPI program started with mpirun runs as one process.
To run the program as multiple processes, use the -np option:
% mpirun -np process-count program-name
When you request multiple processes, ORTE attempts to start the number of processes you request, regardless of the number of CPUs available to run those processes. For more information, see Oversubscribing Nodes.
You can use a type of text file (called an appfile) to direct mpirun. The appfile specifies the nodes on which to run, the number of processes to launch on each node, and the programs to execute in a parallel application. When you use the --app option, mpirun takes all its direction from the contents of the appfile and ignores any other nodes or processes specified on the command line.
For example, the following shows an appfile called my_appfile:
# Comments are supported; comments begin with #
# Application context files specify each sub-application in the
# parallel job, one per line. The first sub-application is the 2
# a.out processes:
-np 2 a.out
# The second sub-application is the 2 b.out processes:
-np 2 b.out
To use the --app option with the mpirun command, specify the name and path of the appfile on the command line. For example:
% mpirun --app my_appfile
This command produces the same results as running a.out and b.out from the command line.
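Based on the contents of my_appfile, the equivalent colon-separated command line (using the MPMD syntax shown earlier) would be:
% mpirun -np 2 a.out : -np 2 b.out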
When you issue the mpirun command from the command line, ORTE reads the number of processes to be launched from the -np option, and then determines where the processes will run.
To determine where the processes will run, ORTE considers the hosts that are available, the number of slots on each host, and the scheduling policy in effect (by slot or by node).
You specify the available hosts to Open MPI in three ways: in a hostfile, with the --host option on the command line, or through a resource manager such as Sun Grid Engine or PBS.
The hostfile lists each node, the available number of slots, and the maximum number of slots on that node. For example, the following listing shows a simple hostfile:
node0
node1 slots=2
node2 slots=4 max_slots=4
node3 slots=4 max_slots=20
In this example file, node0 is a single-processor machine and node1 has two slots. node2 and node3 both have 4 slots, but on node2 the values of slots and max_slots are the same (4), so the slots on node2 cannot be oversubscribed. The four slots on node3 can be oversubscribed, up to a maximum of 20 processes.
When you use this hostfile with the --nooversubscribe option (see Oversubscribing Nodes), mpirun assumes that the value of max_slots for each node in the hostfile is the same as the value of slots for each node. It overrides the values for max_slots set in the hostfile.
Unless a maximum is explicitly specified, Open MPI assumes that the maximum number of slots on a node is infinite. Resource managers also do not specify the maximum number of available slots.
Note - Open MPI includes a commented default hostfile at /opt/SUNWhpc/HPC8.1/etc/openmpi-default-hostfile. Unless you specify a hostfile at a different location, this is the hostfile that Open MPI uses. It is empty by default, but you may edit this file to add your list of nodes. See the comments in the hostfile for more information.
You can use the --host option to mpirun to specify the hosts you want to use on the command line in a comma-delimited list. For example, the following command directs mpirun to run a program called a.out on hosts a, b, and c:
% mpirun -np 3 --host a,b,c a.out
Open MPI assumes that the default number of slots on each host is one, unless you explicitly specify otherwise.
To Specify Multiple Slots Using the --host Option
To specify multiple slots with the --host option, repeat the host name on the command line once for each slot you want to use. For example:
% mpirun -host node1,node1,node2,node2 ...
If you are using a resource manager such as Sun Grid Engine or PBS, the resource manager maintains an accurate count of available slots.
You can also use the --host option in conjunction with a hostfile to exclude any nodes not explicitly specified on the command line. For example, assume that you have a hostfile called my_hosts that lists the four hosts a, b, c, and d.
Suppose you issue the following command to run program a.out:
% mpirun -np 1 --hostfile my_hosts --host c a.out
This command launches one instance of a.out on host c, but excludes the other hosts in the hostfile (a, b, and d).
Note - If you use these two options (--hostfile and --host) together, make sure that the host(s) you specify using the --host option also exist in the hostfile. Otherwise, mpirun exits with an error.
Scheduling more processes to run than there are available slots is referred to as oversubscribing. Oversubscribing a host is not recommended, as it might result in performance degradation.
mpirun has a --nooversubscribe option. This option implicitly sets the max_slots value (maximum number of available slots) to the same value as the slots value for each node, as specified in your hostfile. If the number of processes requested is greater than the slots value, mpirun returns an error and does not execute the command. This option overrides the value set for max_slots in your hostfile.
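For example, the following command (the hostfile name is illustrative) refuses to run if four slots are not available in the hostfile:
% mpirun -np 4 --nooversubscribe --hostfile my_hosts a.out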
For more information about oversubscribing, see the following URL:
http://www.open-mpi.org/faq/?category=running#oversubscribing
ORTE uses two types of scheduling policies when it determines where processes will run: by-slot scheduling and by-node scheduling.
By-slot scheduling is the default policy for Open MPI. If you do not specify a scheduling policy, this is the policy that is used.
In by-slot scheduling, Open MPI schedules processes on a node until all of its available slots are exhausted (that is, all slots are running processes) before proceeding to the next node. In MPI terms, this means that Open MPI tries to maximize the number of adjacent ranks in MPI_COMM_WORLD on the same host without oversubscribing that host.
To Specify By-Slot Scheduling
If you want to explicitly specify by-slot scheduling for some reason, there are two ways to do it:
1. Specify the --byslot option to mpirun. For example, the following command specifies the --byslot and --hostfile options:
% mpirun -np 4 --byslot --hostfile myfile a.out
The following example uses the -host option:
% mpirun -np 4 --byslot -host node0,node0,node1,node1 a.out
2. Set the MCA parameter rmaps_base_schedule_policy to the value slot. For example:
% mpirun --mca rmaps_base_schedule_policy slot -np 4 a.out
Note - The examples in this chapter set MCA parameters on the command line. For more information about the ways in which you can set MCA parameters, see Chapter 7. In addition, the Open MPI FAQ contains information about MCA parameters at the following URL: http://www.open-mpi.org/faq/?category=tuning#setting-mca-params
The following output example shows the contents of a simple hostfile called my-hosts and the results of the mpirun command using by-slot scheduling.
% cat my-hosts
node0 slots=2 max_slots=20
node1 slots=2 max_slots=20
% mpirun --hostfile my-hosts -np 8 --byslot hello | sort
Hello World I am rank 0 of 8 running on node0
Hello World I am rank 1 of 8 running on node0
Hello World I am rank 2 of 8 running on node1
Hello World I am rank 3 of 8 running on node1
Hello World I am rank 4 of 8 running on node0
Hello World I am rank 5 of 8 running on node0
Hello World I am rank 6 of 8 running on node1
Hello World I am rank 7 of 8 running on node1
In by-node scheduling, Open MPI schedules a single process on each node in a round-robin fashion (looping back to the beginning of the node list as necessary) until all processes have been scheduled. Nodes are skipped once their default slot counts are exhausted.
To Specify By-Node Scheduling
There are two ways to specify by-node scheduling:
1. Specify the --bynode option to mpirun. For example:
% mpirun -np 4 --bynode --hostfile my-hosts a.out
2. Set the MCA parameter rmaps_base_schedule_policy to the value node. For example:
% mpirun --mca rmaps_base_schedule_policy node -np 4 a.out
The following output example shows the contents of the same hostfile used in the previous example and the results of the mpirun command using by-node scheduling.
% cat my-hosts
node0 slots=2 max_slots=20
node1 slots=2 max_slots=20
% mpirun --hostfile my-hosts -np 8 --bynode hello | sort
Hello World I am rank 0 of 8 running on node0
Hello World I am rank 1 of 8 running on node1
Hello World I am rank 2 of 8 running on node0
Hello World I am rank 3 of 8 running on node1
Hello World I am rank 4 of 8 running on node0
Hello World I am rank 5 of 8 running on node1
Hello World I am rank 6 of 8 running on node0
Hello World I am rank 7 of 8 running on node1
In the examples in this section, node0 and node1 each have two slots. The diagrams show the differences in scheduling between the two methods.
By-slot scheduling for the two nodes can be represented as follows:

| node0 | node1 |
|---|---|
| 0 | 2 |
| 1 | 3 |
| 4 | 6 |
| 5 | 7 |
By-node scheduling for the same two nodes can be represented this way:

| node0 | node1 |
|---|---|
| 0 | 1 |
| 2 | 3 |
| 4 | 5 |
| 6 | 7 |
Open MPI directs UNIX standard input to /dev/null on all processes except the rank 0 process of MPI_COMM_WORLD. The MPI_COMM_WORLD rank 0 process inherits standard input from mpirun. The node from which you invoke mpirun need not be the same as the node where the MPI_COMM_WORLD rank 0 process resides. Open MPI handles the redirection of the mpirun standard input to the rank 0 process.
Open MPI directs UNIX standard output and standard error from remote nodes to the node that invoked mpirun, and then prints the information from the remote nodes on the standard output/error of mpirun. Local processes inherit the standard output/error of mpirun and transfer to it directly.
To Redirect Standard I/O
To redirect standard I/O for Open MPI applications, use the typical shell redirection procedure on mpirun. For example:
% mpirun -np 2 my_app < my_input > my_output
In this example, only the MPI_COMM_WORLD rank 0 process will receive the stream from my_input on stdin. The stdin on all the other nodes will be tied to /dev/null. However, the stdout from all nodes will be collected into the my_output file.
| To Perform This Task | Use This Option |
|---|---|
| To change the working directory | -wdir or --wdir |
| To display debugging output | -d |
| To display command help | -h |
Use the -wdir or --wdir option to specify the path of an alternative working directory to be used by the processes spawned when you run your program:
% mpirun --wdir working-directory program-name
Setting a path with --wdir does not affect where the runtime environment looks for executables. If you do not specify --wdir, the default is the current working directory. For example:
% mpirun --wdir /home/mystuff/bin a.out
The syntax above changes the working directory for a.out to /home/mystuff/bin.
Use the -d option to display debugging output. For example:
% mpirun -d a.out
The -d option shows the user-level debugging output for all of the ORTE modules used with mpirun. To see more information from a particular module, you can set additional MCA debugging parameters. The availability of the additional debugging information depends on how the module of interest is implemented.
For more information on MCA parameters, see Chapter 7. For more information about whether a module provides additional verbose or debug mode, run the ompi_info command on that module.
To display a list of mpirun options, use the -h option (alone). The following example shows the output from mpirun -h:
% ./mpirun -h
mpirun (Open MPI) 1.3r19845-ct8.1-b06a-r21
Usage: mpirun [OPTION]... [PROGRAM]...
Start the given program using Open RTE
-am <arg0>  Aggregate MCA parameter set file list
--app <arg0>  Provide an appfile; ignore all other command line options
-bynode|--bynode  Whether to allocate/map processes round-robin by node
-byslot|--byslot  Whether to allocate/map processes round-robin by slot (the default)
-c|-np|--np <arg0>  Number of processes to run
-cf|--cartofile <arg0>  Provide a cartography file
-d|-debug-devel|--debug-devel  Enable debugging of OpenRTE
-debug|--debug  Invoke the user-level debugger indicated by the orte_base_user_debugger MCA parameter
-debug-daemons|--debug-daemons  Enable debugging of any OpenRTE daemons used by this application
-debug-daemons-file|--debug-daemons-file  Enable debugging of any OpenRTE daemons used by this application, storing output in files
-debugger|--debugger <arg0>  Sequence of debuggers to search for when "--debug" is used
-default-hostfile|--default-hostfile <arg0>  Provide a default hostfile
-display-allocation|--display-allocation  Display the allocation being used by this job
-display-devel-allocation|--display-devel-allocation  Display a detailed list (mostly intended for developers) of the allocation being used by this job
-display-devel-map|--display-devel-map  Display a detailed process map (mostly intended for developers) just before launch
-display-map|--display-map  Display the process map just before launch
-do-not-launch|--do-not-launch  Perform all necessary operations to prepare to launch the application, but do not actually launch it
-do-not-resolve|--do-not-resolve  Do not attempt to resolve interfaces
-gmca|--gmca <arg0> <arg1>  Pass global MCA parameters that are applicable to all contexts (arg0 is the parameter name; arg1 is the parameter value)
-h|--help  This help message
-H|-host|--host <arg0>  List of hosts to invoke processes on
--hetero  Indicates that multiple app_contexts are being provided that are a mix of 32/64 bit binaries
-hostfile|--hostfile <arg0>  Provide a hostfile
-launch-agent|--launch-agent <arg0>  Command used to start processes on remote nodes (default: orted)
-leave-session-attached|--leave-session-attached  Enable debugging of OpenRTE
-loadbalance|--loadbalance  Balance total number of procs across all allocated nodes
-machinefile|--machinefile <arg0>  Provide a hostfile
-mca|--mca <arg0> <arg1>  Pass context-specific MCA parameters; they are considered global if --gmca is not used and only one context is specified (arg0 is the parameter name; arg1 is the parameter value)
-n|--n <arg0>  Number of processes to run
-nolocal|--nolocal  Do not run any MPI applications on the local node
-nooversubscribe|--nooversubscribe  Nodes are not to be oversubscribed, even if the system supports such operation
--noprefix  Disable automatic --prefix behavior
-npernode|--npernode <arg0>  Launch n processes per node on all allocated nodes
-ompi-server|--ompi-server <arg0>  Specify the URI of the Open MPI server, or the name of the file (specified as file:filename) that contains that info
-path|--path <arg0>  PATH to be used to look for executables to start processes
-pernode|--pernode  Launch one process per available node on the specified number of nodes [no -np => use all allocated nodes]
--prefix <arg0>  Prefix where Open MPI is installed on remote nodes
--preload-files <arg0>  Preload the comma separated list of files to the remote machines current working directory before starting the remote process.
--preload-files-dest-dir <arg0>  The destination directory to use in conjunction with --preload-files. By default the absolute and relative paths provided by --preload-files are used.
-q|--quiet  Suppress helpful messages
-rf|--rankfile <arg0>  Provide a rankfile file
-s|--preload-binary  Preload the binary on the remote machine before starting the remote process.
-server-wait-time|--server-wait-time <arg0>  Time in seconds to wait for ompi-server (default: 10 sec)
-slot-list|--slot-list <arg0>  List of processor IDs to bind MPI processes to (e.g., used in conjunction with rank files)
-tmpdir|--tmpdir <arg0>  Set the root for the session directory tree for orterun ONLY
-tv|--tv  Deprecated backwards compatibility flag; synonym for "--debug"
-v|--verbose  Be verbose
-V|--version  Print version and exit
-wait-for-server|--wait-for-server  If ompi-server is not already running, wait until it is detected (default: false)
-wd|--wd <arg0>  Synonym for --wdir
-wdir|--wdir <arg0>  Set the working directory of the started processes
-x <arg0>  Export an environment variable, optionally specifying a value (e.g., "-x foo" exports the environment variable foo and takes its value from the current environment; "-x foo=bar" exports the environment variable name foo and sets its value to "bar" in the started processes)
-xml|--xml  Provide all output in XML format
Report bugs to http://www.open-mpi.org/community/help/
There are two ways to submit jobs under Sun Grid Engine integration: interactive mode and batch mode. The instructions in this chapter describe how to submit jobs interactively. For information about how to submit jobs in batch mode, see Chapter 6.
A PE (parallel environment) needs to be defined for all the queues in the Sun Grid Engine cluster that are to be used as ORTE nodes. Each ORTE node should be installed as a Sun Grid Engine execution host. To allow ORTE to submit a job from any ORTE node, configure each ORTE node as a submit host in Sun Grid Engine.
Each execution host must be configured with a default queue. In addition, the default queue set must have the same number of slots as the number of processors on the hosts.
To Use PE Commands
To display a list of available PEs (parallel environments), type the following:
% qconf -spl
make

To define a new PE, you must have Sun Grid Engine manager or operator privileges. Use a text editor to modify a template for the PE. The following example creates a PE named orte.

% qconf -ap orte

To modify an existing PE, use this command to invoke the default editor:

% qconf -mp orte

To show a particular PE that has been defined, type this command:

% qconf -sp orte
pe_name            orte
slots              8
user_lists         NONE
xuser_lists        NONE
start_proc_args    /bin/true
stop_proc_args     /bin/true
allocation_rule    $round_robin
control_slaves     TRUE
job_is_first_task  FALSE
urgency_slots      min
The value NONE in user_lists and xuser_lists means allow everybody and exclude nobody.
The value of control_slaves must be TRUE; otherwise, qrsh exits with an error message.
The value of job_is_first_task must be FALSE or the job launcher consumes a slot. In other words, mpirun itself will count as one of the slots and the job will fail, because only n-1 processes will start.
To show all the defined queues, type the following command:
% qconf -sql all.q
The queue all.q is set up by default in Sun Grid Engine.
To associate the orte PE from the example in the previous section with the existing queue, type the following:
% qconf -mattr queue pe_list "orte" all.q
You must have Sun Grid Engine manager or operator privileges to use this command.
Before you submit a job, set your DISPLAY environment variable (if you have not already done so) so that the interactive window will appear on your desktop.
For example, if you are working in the C shell, type the following command:
setenv DISPLAY desktop:0.0
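If you use a Bourne-type shell such as sh, ksh, or bash instead, the equivalent command is:
export DISPLAY=desktop:0.0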
To Submit Jobs Interactively
1. Use the source command to set the Sun Grid Engine environment variables from a file:
mynode4% source /opt/sge/default/common/settings.csh
2. Use the qsh command to start the interactive X Windows session, and specify the parallel environment (in this example, ORTE) and the number of slots to use:
mynode4% qsh -pe orte 2 waiting for interactive job to be scheduled... Your interactive job 324 has been successfully scheduled.
3. On a different node in the cluster, use the cd command to switch to the directory where your executable is located.
mynode5% cd /workspace/joeuser/ompi/trunk/builds/sparc32-g/bin
4. Issue the mpirun command.
mynode5% /opt/SUNWhpc/HPC8.1/sun/bin/mpirun -np 4 hostname
In the above example, Sun Grid Engine starts the user executable hostname with 4 processes on the two Sun Grid Engine assigned slots. The following example shows the output from the mpirun command with the specified options.
mynode5% /opt/SUNWhpc/HPC8.1/sun/bin/mpirun -np 4 hostname
mynode5
mynode5
To Verify That Sun Grid Engine Is Running
The following is not required for normal operation, but if you want to verify that Sun Grid Engine is being used, add --mca ras_gridengine_verbose to the mpirun command line. For example:
% ./mpirun -np 4 -mca ras_gridengine_verbose 100 hostname
[mynode6:04234] ras:gridengine: JOB_ID: 28
[mynode6:04234] ras:gridengine: mynode6: PE_HOSTFILE shows slots=2
[mynode6:04234] ras:gridengine: mynode7: PE_HOSTFILE shows slots=2
mynode6
mynode6
mynode7
mynode7
%
To Start an Interactive Session Using qrsh
An alternate way to start an interactive session is by using qrsh instead of qsh. For example:
% qrsh -V -pe orte 8 mpirun -np 4 -byslot hostname
The instructions in this section explain how to get best results when starting Open MPI client/server applications.
1. Type the following command to launch the server application. Substitute the name of your MPI job’s universe for univ1:
% ./mpirun -np 1 --universe univ1 t_accept
2. Type the following command to launch the client application, substituting the name of your MPI job’s universe for univ1:
% ./mpirun -np 4 --universe univ1 t_connect
If the client and server jobs span more than 1 node, the first job (that is, the server job) must specify on the mpirun command line all the nodes that will be used. Specifying the node names allocates the specified hosts from the entire universe of server and client jobs.
For example, if the server runs on node0 and the client job runs on node1 only, the command to launch the server must specify both nodes (using the -host node0,node1 flag), even if it uses only one process on node0.
Assuming that the persistent daemon is started on node0, the command to launch the server would look like this:
node0% ./mpirun -np 1 --universe univ1 -host node0,node1 t_accept
The command to launch the client is:
node0% ./mpirun -np 4 --universe univ1 -host node1 t_connect
If you are planning on using name publishing, you must perform some additional tasks. You need to start an ompi-server process on your server so that both the clients and servers can exchange information using that server.
For information about how to start the ompi-server process, type the following command on your server:
% man ompi-server
If the MPI client/server job fails to start, you might see error messages similar to this:
node0% ./orted --persistent --seed --scope public --universe univ4 --debug
[node0:21760] procdir: (null)
[node0:21760] jobdir: (null)
[node0:21760] unidir: /tmp/openmpi-sessions-joeuser@node0_0/univ4
[node0:21760] top: openmpi-sessions-joeuser@node0_0
[node0:21760] tmp: /tmp
[node0:21760] orte_init: could not contact the specified universe name univ4
[node0:21760] [NO-NAME] ORTE_ERROR_LOG: Unreachable in file /opt/SUNWhpc/HPC8.1/sun/bin/orted/runtime/orte_init_stage1.c at line 221
These messages indicate that there is residual data left in the /tmp directory. This can happen if a previous client/server job has already run from the same node.
To empty the /tmp directory, use the orte-clean utility. For more information about orte-clean, see the orte-clean man page.
You might also need to run orte-clean if you see error messages similar to the following:
node0% ./orted --persistent --seed --scope public --universe univ4 --debug
[node0:21760] procdir: (null)
[node0:21760] jobdir: (null)
[node0:21760] unidir: /tmp/openmpi-sessions-joeuser@node0_0/univ4
[node0:21760] top: openmpi-sessions-joeuser@node0_0
[node0:21760] tmp: /tmp
[node0:21760] orte_init: could not contact the specified universe name univ4
[node0:21760] [NO-NAME] ORTE_ERROR_LOG: Unreachable in file /opt/SUNWhpc/HPC8.1/sun/bin/orted/runtime/orte_init_stage1.c at line 221
----------------------------------------------------------------
It looks like orte_init failed for some reason; your parallel process is
likely to abort. There are many reasons that a parallel process can
fail during orte_init; some of which are due to configuration or
environment problems. This failure appears to be an internal failure;
here’s some additional information (which may only be relevant to an
Open MPI developer):
orte_sds_base_contact_universe failed
--> Returned value -12 instead of ORTE_SUCCESS
----------------------------------------------------------------
[node0:21760] [NO-NAME] ORTE_ERROR_LOG: Unreachable in file /opt/SUNWhpc/HPC8.1/sun/bin/orted/runtime/orte_system_init.c at line 42
[node0:21760] [NO-NAME] ORTE_ERROR_LOG: Unreachable in file /opt/SUNWhpc/HPC8.1/sun/bin/orte/runtime/orte_init.c at line 52
Open RTE was unable to initialize properly. The error occured while
attempting to orte_init(). Returned value -12 instead of ORTE_SUCCESS.
This section provides a quick reference for the mpirun command options.
| Option | Description |
|---|---|
| -am list-name | Use the MCA parameter set file list called list-name. |
| --app appfile | Directs mpirun to use the appfile specified by appfile and to ignore other programs specified on the command line. |
| -bynode, --bynode | Allocates (maps) the specified processes in a round-robin scheme by node. -byslot is the default (see below). |
| -byslot, --byslot | Allocates (maps) the specified processes in a round-robin scheme by slot (processor). This is the default. |
| -c number | Same as the -np number option. Directs mpirun to run the number of copies (specified in number) of the specified program on the selected nodes. See the description of the -np option for more information. |
| -cf, --cartofile filename | Run using the cartography file filename. Cartography files describe the layout of and connections between components in a cluster. For more information about cartography files, see the mpirun(1) man page. |
| -debug, --debug | Invokes the user-level debugger specified in the MCA parameter orte_base_user_debugger. The default value for the MCA parameter is totalview. To change the specified debugger, change the value of the MCA parameter. (See Chapter 7 for more information.) |
| --debug-daemons | Enables debugging of any ORTE daemons used by this application. |
| -debug-daemons-file | Enables debugging of any OpenRTE daemons used by this application, storing output in files. |
| --debug-devel | Enables debugging of OpenRTE. |
| -debugger, --debugger sequence | Specifies the sequence of debuggers to search for when --debug is used. This option is a synonym for the orte_base_user_debugger MCA parameter and has the same default value. If you use this option, the value you specify overrides any value set in orte_base_user_debugger. |
| -default-hostfile filename | Run using the provided default hostfile filename. |
| -display-allocation | Displays the allocation being used by this job. |
| -display-devel-allocation | Intended for Open MPI/OpenRTE developers. Displays a detailed list of the allocation being used by this job. |
| -display-map | Displays the process map just before launch. |
| --display-devel-map | Intended for Open MPI/OpenRTE developers. Displays a detailed process map just before launch. |
| --do-not-launch | Performs all necessary operations to prepare to launch the application, but does not actually launch it. |
| -do-not-resolve | Does not attempt to resolve interfaces. |
| -gmca, --gmca param value | Specifies global MCA parameters. param is the name of the specified MCA parameter; value is the value for that parameter. |
| -h, --help | Displays help for the mpirun command. When this option is specified on the command line, it overrides any other options and displays the command help. |
| -H host1,host2,...,hostn | Specifies the list of hosts on which to invoke processes. This is a synonym for -host. |
| --hetero | Indicates that multiple app_contexts are being provided that are a mix of 32- and 64-bit binaries. |
| -host, --host host1,host2,...,hostn | Specifies the list of hosts on which to invoke processes. This is a synonym for -H. |
| -hostfile, --hostfile filename | Directs mpirun to use the specified hostfile. If -hostfile is specified without filename, mpirun uses the default hostfile located at /opt/SUNWhpc/HPC8.1/etc/openmpi-default-hostfile. |
| --launch-agent command-name | Command used to start processes on remote nodes (default: orted). |
| -leave-session-attached | Enables debugging of OpenRTE. |
| -loadbalance | Balances the total number of processes across all allocated nodes. |
| -machinefile, --machinefile filename | Synonymous with -hostfile. |
| -mca, --mca param value | Specifies an MCA parameter, where param is the name of the desired MCA parameter and value is the desired value for that parameter. These parameters and values are considered to be global parameters unless the -gmca option appears on the same command line. |
| -n, --n number | Specifies the number of processes to run. Synonymous with -np. |
| --no-daemonize | Keeps the ORTE daemons used by this application from being detached and used by other processes. |
| -nolocal, --nolocal | Specifies that MPI applications should not be run on the local node (the same node on which mpirun is running). |
| -nooversubscribe, --nooversubscribe | Never oversubscribes the nodes, even if the system supports such operations. This option sets the effective value of max_slots equal to the value of slots and overrides the settings for that node in the hostfile. |
| --noprefix | Disables automatic --prefix behavior; any directory previously specified with the --prefix option is ignored. |
| -npernode, --npernode number | Launches number processes per node on all allocated nodes. |
| -ompi-server uri | Specifies the URI of the Open MPI server, or the name of the file (specified as file:filename) that contains the information needed to run the job. |
| -path, --path pathname | Specifies to mpirun that the executables to be used for the current job are stored in pathname. |
| -pernode | Launches one process per available node on the number of nodes specified in the -np option. If no -np option is used, all allocated nodes are used. |
| --prefix pathname | Specifies the path to the directory where Open MPI is located on the remote node(s). This option is used to run Open MPI on remote nodes (as opposed to running on the local node). |
| --preload-files filename | Preloads the comma-separated list of files (specified by filename) to the remote machine's current working directory before starting the remote process. |
| --preload-files-dest-dir directory | Specifies the destination directory (specified by directory) to use in conjunction with the --preload-files option. By default, the absolute and relative paths provided by --preload-files are used. |
| -q, --quiet | Suppresses output messages from Open MPI. |
| -rf, --rankfile filename | Provides a rankfile file. |
| -s, --preload-binary | Preloads the binary on the remote machine before starting the remote process. |
| --server-wait-time seconds | Time in seconds to wait for ompi-server (default: 10 sec). |
| --slot-list id-list | List of processor IDs to which you want to bind MPI processes (for example, a list of processors used in conjunction with rankfiles). |
| --tmpdir pathname | Specifies the root for the session directory tree for mpirun only. This applies only to the current job. |
| -tv, --tv | Synonymous with --debug. This option is deprecated; use --debug instead. |
| --universe username@hostname:universe_name | Sets the Open MPI universe for this application to username@hostname:universe_name. |
| -v, --verbose | Specifies verbose output. |
| -V, --version | Displays the mpirun version number. If no other options are specified on the same command line, this option also causes mpirun to exit. |
| --wait-for-server | If ompi-server is not already running, waits until it is detected (default: false). |
| -wd directory-name | Changes to the specified directory before executing the application. |
| -wdir, --wdir | Synonymous with -wd. |
| -x variable, -x variable=value | Exports the environment variable variable and its value in the current environment to the started processes. If value is specified, the option sets the variable's value to value in the started processes. |
| -xml, --xml | Provides all output in XML format. |
For more information about the mpirun command and its options, see the following:
Note: mpirun, mpiexec, and orterun are all synonyms for each other. Using any of the names will produce the same behavior.
mpirun [ options ] <program> [ <args> ]
Multiple Instruction Multiple Data (MIMD) Model:
mpirun [ global_options ] [ local_options1 ]
<program1> [ <args1> ] : [ local_options2 ]
<program2> [ <args2> ] : ... :
[ local_optionsN ]
<programN> [ <argsN> ]
Note that in both models, invoking mpirun via an absolute path name is equivalent to specifying the --prefix option with a <dir> value equivalent to the directory where mpirun resides, minus its last subdirectory. For example:
% /usr/local/bin/mpirun ...
is equivalent to
% mpirun --prefix /usr/local
% mpirun [ -np X ] [ --hostfile <filename> ] <program>
This will run X copies of <program> in your current run-time environment (if running under a supported resource manager, Open MPI’s mpirun will usually automatically use the corresponding resource manager process starter, as opposed to, for example, rsh or ssh, which require the use of a hostfile, or will default to running all X copies on the localhost), scheduling (by default) in a round-robin fashion by CPU slot. See the rest of this page for more details.
To specify which hosts (nodes) of the cluster to run on:
To specify the number of processes to launch:
To map processes:
To order processes’ ranks in MPI_COMM_WORLD:
For process binding:
For rankfiles:
To manage standard I/O:
To manage files and runtime environment:
The parser for the -x option is not very sophisticated; it does not even understand quoted values. Users are advised to set variables in the environment, and then use -x to export (not define) them.
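Following that advice, you would first set the variable in your shell and then export (not define) it with -x; for example, in the C shell:
% setenv LD_LIBRARY_PATH /opt/SUNWhpc/HPC8.1/lib
% mpirun -x LD_LIBRARY_PATH -np 4 a.out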
Setting MCA parameters:
For debugging:
There are also other options:
The following options are useful for developers; they are not generally useful to most ORTE and/or MPI users:
There may be other options listed with mpirun --help.
If the application is multiple instruction multiple data (MIMD), comprising multiple programs, the set of programs and arguments can be specified in one of two ways: extended command line arguments, and an application context.
An application context describes the MIMD program set including all arguments in a separate file. This file essentially contains multiple mpirun command lines, less the command name itself. The ability to specify different options for different instantiations of a program is another reason to use an application context.
Extended command line arguments allow for the description of the application layout on the command line using colons (:) to separate the specification of programs and arguments. Some options are globally set across all specified programs (e.g. --hostfile), while others are specific to a single program (e.g. -np).
For example, the following launches two copies each of two different programs (the program names here are illustrative):
% mpirun -np 2 prog1 : -np 2 prog2
Or, consider the hostfile
% cat myhostfile
aa slots=2
bb slots=2
cc slots=2
Here, we list the host names (aa, bb, and cc) as well as how many "slots" there are for each. Slots indicate how many processes can potentially execute on a node. For best performance, the number of slots may be chosen to be the number of cores on the node or the number of processor sockets. If the hostfile does not provide slots information, a default of 1 is assumed. When running under resource managers (e.g., SLURM, Torque, etc.), Open MPI will obtain both the hostnames and the number of slots directly from the resource manager.
The number of processes launched can be specified as a multiple of the number of nodes or processor sockets available.
Another alternative is to specify the number of processes with the -np option. Consider now the hostfile
% cat myhostfile
aa slots=4
bb slots=4
cc slots=4
Now, the command % mpirun -hostfile myhostfile -np 6 ./a.out launches processes 0 through 3 on node aa and processes 4 and 5 on node bb. The remaining slots in the hostfile are not used, since the -np option indicated that only 6 processes should be launched.
Consider the same hostfile as above, again with -np 6:
|  | node aa | node bb | node cc |
|---|---|---|---|
| mpirun | 0 1 2 3 | 4 5 |  |
| mpirun -bynode | 0 3 | 1 4 | 2 5 |
| mpirun -nolocal |  | 0 1 2 3 | 4 5 |
The -bynode option does likewise but numbers the processes "by node" in a round-robin fashion.
The -nolocal option prevents any processes from being mapped onto the local host (in this case node aa). While mpirun typically consumes few system resources, -nolocal can be helpful for launching very large jobs where mpirun may actually need to use noticeable amounts of memory and/or processing time.
Just as -np can specify fewer processes than there are slots, it can also oversubscribe the slots. For example, with the same hostfile, asking for more processes than the 12 available slots (for instance, -np 14) oversubscribes the nodes:
% mpirun -hostfile myhostfile -np 14 ./a.out
One can also specify limits to oversubscription. For example, with the same hostfile, adding -nooversubscribe to the command above causes mpirun to report an error instead of oversubscribing:
% mpirun -hostfile myhostfile -np 14 -nooversubscribe ./a.out
Limits to oversubscription can also be specified in the hostfile itself:
% cat myhostfile
aa slots=4 max_slots=4
bb max_slots=4
cc slots=4
The max_slots field specifies such a limit. When max_slots is given without slots (as for bb above), the slots value defaults to that limit.
Using the --nooversubscribe option can be helpful since Open MPI currently does not get "max_slots" values from the resource manager.
Of course, -np can also be used with the -H or -host option. For example, the following launches 8 processes across the two listed hosts (oversubscribing them, since each host listed with -host counts as a single slot):
% mpirun -H aa,bb -np 8 ./a.out
And here is a MIMD example:
% mpirun -H aa -np 1 hostname : -H bb,cc -np 2 uptime
Another way to specify arbitrary mappings is with a rankfile, which gives you detailed control over process binding as well. Rankfiles are discussed below.
To bind processes, one must first associate them with the resources on which they should run. For example, the -bycore option associates the processes on a node with successive cores. Or, -bysocket associates the processes with successive processor sockets, cycling through the sockets in a round-robin fashion if necessary. And -cpus-per-proc indicates how many cores to bind per process.
But, such association is meaningless unless the processes are actually bound to those resources. The binding option specifies the granularity of binding -- say, with -bind-to-core or -bind-to-socket. One can also turn binding off with -bind-to-none, which is typically the default.
Finally, -report-bindings can be used to report bindings.
As an example, consider a node with two processor sockets, each comprising four cores. We run mpirun with -np 4 -report-bindings and the following additional options:
% mpirun ... -bycore -bind-to-core
[...] ... binding child [...,0] to cpus 0001
[...] ... binding child [...,1] to cpus 0002
[...] ... binding child [...,2] to cpus 0004
[...] ... binding child [...,3] to cpus 0008
% mpirun ... -bysocket -bind-to-socket
[...] ... binding child [...,0] to socket 0 cpus 000f
[...] ... binding child [...,1] to socket 1 cpus 00f0
[...] ... binding child [...,2] to socket 0 cpus 000f
[...] ... binding child [...,3] to socket 1 cpus 00f0
% mpirun ... -cpus-per-proc 2 -bind-to-core
[...] ... binding child [...,0] to cpus 0003
[...] ... binding child [...,1] to cpus 000c
[...] ... binding child [...,2] to cpus 0030
[...] ... binding child [...,3] to cpus 00c0
% mpirun ... -bind-to-none
Here, -report-bindings shows the binding of each process as a mask. In the first case, the processes bind to successive cores as indicated by the masks 0001, 0002, 0004, and 0008. In the second case, processes bind to all cores on successive sockets as indicated by the masks 000f and 00f0. The processes cycle through the processor sockets in a round-robin fashion as many times as are needed. In the third case, the masks show us that 2 cores have been bound per process. In the fourth case, binding is turned off and no bindings are reported.
Open MPI’s support for process binding depends on the underlying operating system. Therefore, certain process binding options may not be available on every system.
Process binding can also be set with MCA parameters. Their usage is less convenient than that of mpirun options. On the other hand, MCA parameters can be set not only on the mpirun command line, but alternatively in a system or user mca-params.conf file or as environment variables, as described in the MCA section below. The correspondences are:
| mpirun option | MCA parameter key | value |
|---|---|---|
| -bycore | rmaps_base_schedule_policy | core |
| -bysocket | rmaps_base_schedule_policy | socket |
| -bind-to-core | orte_process_binding | core |
| -bind-to-socket | orte_process_binding | socket |
| -bind-to-none | orte_process_binding | none |
The orte_process_binding value can also take on the :if-avail attribute. This attribute means that processes will be bound only if this is supported on the underlying operating system. Without the attribute, if there is no such support, the binding request results in an error. For example, you could have
% cat $HOME/.openmpi/mca-params.conf
rmaps_base_schedule_policy = socket
orte_process_binding = socket:if-avail
Another way to control mapping and binding is with a rankfile. Each line of a rankfile specifies the node and slot list for one rank, in this form:
rank <N>=<hostname> slot=<slot list>
For example:
$ cat myrankfile
rank 0=aa slot=1:0-2
rank 1=bb slot=0:0,1
rank 2=cc slot=1-2
$ mpirun -H aa,bb,cc,dd -rf myrankfile ./a.out
This rankfile means that:
Rank 0 runs on node aa, bound to socket 1, cores 0-2.
Rank 1 runs on node bb, bound to socket 0, cores 0 and 1.
Rank 2 runs on node cc, bound to cores 1 and 2.
The hostnames listed above are "absolute," meaning that actual resolveable hostnames are specified. However, hostnames can also be specified as "relative," meaning that they are specified in relation to an externally-specified list of hostnames (e.g., by mpirun’s --host argument, a hostfile, or a job scheduler).
The "relative" specification is of the form "+n<X>", where X is an integer specifying the Xth hostname in the set of all available hostnames, indexed from 0. For example:
$ cat myrankfile
rank 0=+n0 slot=1:0-2
rank 1=+n1 slot=0:0,1
rank 2=+n2 slot=1-2
$ mpirun -H aa,bb,cc,dd -rf myrankfile ./a.out
Starting with Open MPI v1.7, all socket/core slot locations are specified as logical indexes (the Open MPI v1.6 series used physical indexes). You can use tools such as HWLOC’s "lstopo" to find the logical indexes of sockets and cores.
If a relative directory is specified, it must be relative to the initial working directory determined by the specific starter used. For example when using the rsh or ssh starters, the initial directory is $HOME by default. Other starters may set the initial directory to the current working directory from the invocation of mpirun.
If the -wdir option appears both in a context file and on the command line, the context file directory will override the command line value.
If the -wdir option is specified, Open MPI will attempt to change to the specified directory on all of the remote nodes. If this fails, mpirun will abort.
If the -wdir option is not specified, Open MPI will send the directory name where mpirun was invoked to each of the remote nodes. The remote nodes will try to change to that directory. If they are unable (e.g., if the directory does not exist on that node), then Open MPI will use the default directory determined by the starter.
All directory changing occurs before the user’s program is invoked; it does not wait until MPI_INIT is called.
Open MPI directs UNIX standard output and error from remote nodes to the node that invoked mpirun and prints it on the standard output/error of mpirun. Local processes inherit the standard output/error of mpirun and transfer to it directly.
Thus it is possible to redirect standard I/O for Open MPI applications by using the typical shell redirection procedure on mpirun.
% mpirun -np 2 my_app < my_input > my_output
Note that in this example only the MPI_COMM_WORLD rank 0 process will receive the stream from my_input on stdin. The stdin on all the other nodes will be tied to /dev/null. However, the stdout from all nodes will be collected into the my_output file.
SIGUSR1 and SIGUSR2 signals received by orterun are propagated to all processes in the job.
One can turn on forwarding of SIGSTOP and SIGCONT to the program executed by mpirun by setting the MCA parameter orte_forward_job_control to 1. A SIGTSTP signal sent to mpirun will then cause a SIGSTOP signal to be sent to all of the programs started by mpirun, and likewise a SIGCONT signal sent to mpirun will cause a SIGCONT to be sent to those programs.
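For example, this behavior can be enabled for a single run by setting the parameter on the command line, using the --mca syntax shown earlier (the application name is illustrative):
% mpirun --mca orte_forward_job_control 1 -np 2 my_app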
Other signals are not currently propagated by orterun.
User signal handlers should probably avoid trying to clean up MPI state (Open MPI is currently not async-signal-safe; see MPI_Init_thread(3) for details about MPI_THREAD_MULTIPLE and thread safety). For example, if a segmentation fault occurs in MPI_SEND (perhaps because a bad buffer was passed in) and a user signal handler is invoked, and if this user handler attempts to invoke MPI_FINALIZE, Bad Things could happen since Open MPI was already "in" MPI when the error occurred. Since mpirun notices that the process died due to a signal, such cleanup is probably not necessary; it is safest for the user handler to clean up only non-MPI state.
See the "Remote Execution" section for more details.
However, it is not always desirable or possible to edit shell startup files to set PATH and/or LD_LIBRARY_PATH. The --prefix option is provided for some simple configurations where this is not possible.
The --prefix option takes a single argument: the base directory on the remote node where Open MPI is installed. Open MPI will use this directory to set the remote PATH and LD_LIBRARY_PATH before executing any Open MPI or user applications. This allows running Open MPI jobs without having pre-configured the PATH and LD_LIBRARY_PATH on the remote nodes.
Open MPI adds the basename of the current node’s "bindir" (the directory where Open MPI’s executables are installed) to the prefix and uses that to set the PATH on the remote node. Similarly, Open MPI adds the basename of the current node’s "libdir" (the directory where Open MPI’s libraries are installed) to the prefix and uses that to set the LD_LIBRARY_PATH on the remote node. For example:
If the following command line is used:
% mpirun --prefix /remote/node/directory
Open MPI will add "/remote/node/directory/bin" to the PATH and "/remote/node/directory/lib64" to the LD_LIBRARY_PATH on the remote node before attempting to execute anything.
The --prefix option is not sufficient if the installation paths on the remote node are different than the local node (e.g., if "/lib" is used on the local node, but "/lib64" is used on the remote node), or if the installation paths are something other than a subdirectory under a common prefix.
Note that executing mpirun via an absolute pathname is equivalent to specifying --prefix without the last subdirectory in the absolute pathname to mpirun. For example:
% /usr/local/bin/mpirun ...
is equivalent to
% mpirun --prefix /usr/local
The -mca switch takes two arguments: <key> and <value>. The <key> argument generally specifies which MCA module will receive the value. For example, the <key> "btl" is used to select which BTL to be used for transporting MPI messages. The <value> argument is the value that is passed. For example, the following selects the tcp and self BTLs:
% mpirun -mca btl tcp,self -np 1 a.out
The -mca switch can be used multiple times to specify different <key> and/or <value> arguments. If the same <key> is specified more than once, the <value>s are concatenated with a comma (",") separating them.
Note that the -mca switch is simply a shortcut for setting environment variables. The same effect may be accomplished by setting corresponding environment variables before running mpirun. The form of the environment variables that Open MPI sets is:
OMPI_MCA_<key>=<value>
Thus, the -mca switch overrides any previously set environment variables. The -mca settings similarly override MCA parameters set in the $OPAL_PREFIX/etc/openmpi-mca-params.conf or $HOME/.openmpi/mca-params.conf file.
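Because the -mca switch maps onto these environment variables, the handle-leak example shown earlier in this page could equivalently be run as follows (C shell syntax):
% setenv OMPI_MCA_mpi_show_handle_leaks 1
% mpirun -np 4 a.out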
Unknown <key> arguments are still set as environment variables; they are not checked (by mpirun) for correctness. Illegal or incorrect <value> arguments may or may not be reported; it depends on the specific MCA module.
To find the available component types under the MCA architecture, or to find the available parameters for a specific component, use the ompi_info command. See the ompi_info(1) man page for detailed information on the command.
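For instance, a typical invocation (check ompi_info(1) for the exact syntax shipped with your version) lists the parameters of a single component:
% ompi_info --param btl tcp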
By default, OMPI records and notes that MPI processes exited with non-zero termination status. This is generally not considered an "abnormal termination" - i.e., OMPI will not abort an MPI job if one or more processes return a non-zero status. Instead, the default behavior simply reports the number of processes terminating with non-zero status upon completion of the job.
However, in some cases it can be desirable to have the job abort when any process terminates with non-zero status. For example, a non-MPI job might detect a bad result from a calculation and want to abort, but doesn’t want to generate a core file. Or an MPI job might continue past a call to MPI_Finalize, but indicate that all processes should abort due to some post-MPI result.
It is not anticipated that this situation will occur frequently. However, in the interest of serving the broader community, OMPI now has a means for allowing users to direct that jobs be aborted upon any process exiting with non-zero status. Setting the MCA parameter "orte_abort_on_non_zero_status" to 1 will cause OMPI to abort all processes once any process exits with non-zero status.
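For example, a command along the following lines (the application name is illustrative) enables this behavior for a single run:
% mpirun --mca orte_abort_on_non_zero_status 1 -np 4 a.out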
Terminations caused in this manner will be reported on the console as an "abnormal termination", with the first process to so exit identified along with its exit status.