In the modern Unix programming environment Perl can be used as an AWK replacement even in simple scripts. AWK no longer has an advantage due to its more compact size: on modern computers the load time of the Perl interpreter is negligible unless it is invoked in a deeply nested loop. Perl has become a standard, installed by default in all major Unixes including AIX, HP-UX, Linux and Solaris. It is usually available in /usr/bin.
Default availability dramatically changed the role of Perl in Unix system scripting and routine text processing. For most administrators it is much easier to use Perl than any of the older alternatives (AWK, sed) or newer scripting languages like Python and Ruby, because it is closer to shell, the language they already know (and many know really well).
Python is far less "aligned" with Unix than Perl. Ruby puts too much emphasis on OO and is not available by default on any of the major platforms. That does not mean that this situation can't change: as of 2017 Python (which for a while enjoyed the support of Google) is gradually replacing Perl outside system administration, becoming the primary scripting language on Linux. I think that for system administrators Perl is a better deal, popularity notwithstanding.
The main advantage of Perl over the alternatives is the power of the language and the availability of a very good built-in debugger. Neither bash nor AWK has a debugger installed by default (debuggers for both exist; they are just relatively unknown). For sysadmins, who often need to administer around a hundred servers, that tips the scales in Perl's favor: each administrator has too much to do to spend time installing and learning additional software, or debugging scripts using multiple echo/print statements.
That's why Perl is gradually displacing older Unix utilities such as cut, sed, wc, and, of course, AWK. On a modern system the penalty for using Perl instead of AWK is negligible.
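For instance, common cut and wc invocations have direct Perl counterparts (a sketch; the file names are placeholders):

# cut -d: -f1 /etc/passwd
perl -F: -lane 'print $F[0]' /etc/passwd
# wc -l file
perl -lne 'END { print $. }' file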
Moreover, you can often replace a quite complex set of pipe stages that use classic Unix utilities with one Perl loop. Don't interpret me wrong: pipes are a blessing and an extremely powerful and elegant tool that each Unix admin should use to the max, but if you can avoid unnecessary complexity, why stick to old practices? There is a project called Perl Power Tools that creates Perl equivalents of classic Unix utilities. Those re-implementations are useful as they alleviate inherent flaws and limitations of several classic utilities such as cut (which is way too primitive indeed) while providing a high level of compatibility. The last thing any sysadmin wants is to learn yet another utility instead of a classic one. Linux is already populated by dozens if not hundreds of such utilities and it is clearly outside human capability to learn them all. Even remembering that they exist is not easy. Generally the number of utilities on Linux clearly exceeds human capacity to remember them, and in many cases it is easier to write a small Perl script than to find a suitable utility for the task, learn the necessary switches, and after several experiments get the processing you want.
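For example, a typical grep | awk | sort -u pipeline can collapse into a single Perl loop (a sketch; the log file name and field position are assumptions, not from the original):

# grep error /var/log/messages | awk '{print $5}' | sort -u
perl -lane '$seen{$F[4]}++ if /error/; END { print for sort keys %seen }' /var/log/messages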
I would like to repeat it again: Perl is an amazingly Unix-friendly language that gives a programmer full access to the Unix API. For an introduction to Perl written for system administrators see Nikolai Bezroukov, Skeptical Introduction to Perl.
The simplest way to start is to remember the -e option (execute), which instructs the Perl interpreter that the next argument is a Perl statement to be compiled and run. If -e is given, Perl will not look for a script filename in the argument list and will take the argument that follows -e as the text of the script. Make sure to use semicolons where you would in a normal program. For example:
perl -e 'print "Hello world of Perl command line";'
Multiple -e options may be given to simplify building a multi-line script.
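For example, each -e supplies one more statement of the script (a trivial sketch):

perl -e 'print "line one\n";' -e 'print "line two\n";'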
There are several more useful "in-line" oriented options that the Perl interpreter accepts:
-n causes Perl to assume the following loop around your script, which makes it iterate over filename arguments somewhat like sed -n or awk:

while (<>) { ... # your script goes here }

You need to supply your own print statement(s) to have output printed; nothing is printed by default. See -p to have lines printed. If a file named by an argument cannot be opened for some reason, Perl warns you about it and moves on to the next file.
-p causes Perl to assume the same loop over the input, but with each line printed after your script processes it (like sed):

while (<>) { ... # your script goes here } continue { print or die "-p destination: $!\n"; }
-a turns on autosplit mode using the default pattern / /. -a implicitly sets -n. The split into the @F array is done as the first thing inside the while loop produced by the option -n or -p. For example:
perl -ane 'print pop(@F), "\n";'
is equivalent to
while (<>) { @F = split(' '); print pop(@F), "\n"; }

It is important to know that an alternate delimiter for split (which can be any regular expression, unlike with Unix cut) may be specified using the option -F.
-F specifies the pattern to split on for -a. -F implicitly sets both -a and -n. The pattern may be surrounded by //, "", or ''; otherwise it will be put in single quotes. You can't use literal whitespace in the pattern.
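For example, to print login names by splitting /etc/passwd on the colon (a minimal sketch):

perl -F: -ane 'print $F[0], "\n";' /etc/passwd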
-i[extension] specifies that files processed by the <> construct are to be edited in-place. Perl does this by renaming the input file, opening the output file by the original name, and selecting that output file as the default for print() statements. The extension, if supplied, is used to modify the name of the old file to make a backup copy, following these rules: if the extension doesn't contain a *, it is appended to the end of the current filename as a suffix; if the extension does contain one or more * characters, each * is replaced with the current filename. In Perl terms, you could think of this as:

($backup = $extension) =~ s/\*/$file_name/g;
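A * in the extension thus lets you put the backup marker in front of the name instead of after it (a sketch; file.txt is a placeholder):

perl -pi'old_*' -e 's/foo/bar/g' file.txt # the backup is saved as old_file.txt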
To suppress printing use the -n switch. The -p option overrides -n.
As a useful example, let's look at how we can combine the power of the Perl command line with the find utility to produce a very simple but still useful global string/pattern replacement utility for multiple files:
find . -type f -exec perl -i -pe 's/something/another/g' {} \;
To make a command named lower that converts all filenames in the current directory to lower case, you can add the following function to your ~/.bashrc or ~/.kshrc:
function lower { perl -e 'for (@ARGV) { rename $_, lc($_) unless -e lc($_); }' *; }
Tips:
You can use the Perl debugger to debug one-liners; see the example after these tips.
As with everything, excessive zeal hurts. You need to exercise judgment and not miss the moment when a one-liner becomes counterproductive because of excessive complexity.
In this case it should be converted into a regular script.
Many one-liners are interesting as an art of creating powerful and elegant Perl regular expressions that solve an important task while remaining very compact. Creating useful one-liners is an art in the same sense that Donald Knuth considers programming to be an art (see the introduction to TAOCP and his Turing lecture). Some of them demonstrate great inventiveness and take language constructs to their limits.
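For instance, the debugger tip above boils down to combining -d with -e (a minimal sketch):

perl -d -e 'for (1..3) { print "$_\n"; }'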
There are multiple Perl one-liners that are very useful:
perl -i.bak -pe 's/pattern1/pattern2/g' inputFile # global search and replace, keeping a backup copy
perl -i.bak -pe 'tr/\r//d' filename
This one-liner strips the carriage returns out of a file, turning a DOS file (which ends lines in both carriage returns and linefeeds) into a Unix file (which uses only linefeeds). It is basically the equivalent of the Unix command:
tr -d "\015"
This functionality is useful on SLES 10 and SLES 11, which do not include the dos2unix utility by default. See more information at Conversion of files from Windows to Unix format.
You can also convert a Unix file back to Windows format (the equivalent of unix2dos):

perl -i.bak -pe 's/\n/\r\n/' filename
perl -i.bak -ne 'next if ($_ =~/pattern_for_deletion/); print;' filename # delete lines matching a pattern
perl -i.bak -ne 'print unless 1 .. 10' foo.txt # delete the first 10 lines
perl -i.bak -e '@x=<>; pop(@x); print @x' filename # delete the last line
perl -i.bak -ne 'print unless /pattern1/ .. /pattern2/' filename # delete lines between two patterns, for example:
perl -i.bak -ne 'print unless /^START$/ .. /^END$/' filename
perl -ne 'print "$.\t$_";' filename # number lines
perl -ne '$q=($_=~tr/"//); print"$.\t$q\t$_";' filename # count double quotes on each line
perl -ne '$o+=($_=~tr/{//);$c+=($_=~tr/}//); $b=$o-$c; print"$.\t$b\t$_";' < myjoin3_lines.pl # running balance of curly braces
perl -nae 'next if /^#/; print "$F[0]\n"' # print the first field of each line, skipping comment lines
Here is an example of how to print the first and the last fields:
perl -lane 'print "$F[0]:$F[-1]"'
See below (More on emulation of Unix cut utility in Perl) for more information. See also the cut command.
To capitalize the first letter on the line and convert the other letters to lower case:

perl -pe 's/(\w)(.*)$/\U$1\L$2/'
perl -pe 's/\w.+/\u\L$&/'

The second one belongs to Matz Kindahl and simultaneously belongs to the class of Perl idioms. It is more difficult to understand, so the first version is preferable. Beware of Perl authors who prefer the second variant to the first ;-)
In-place editing one-liner:
# in-place edit of *.c files changing all [instances of the word] foo to bar
perl -p -i.bak -e 's/\bfoo\b/bar/g' *.c

# change all the isolated oldvar occurrences to newvar
perl -i.bak -pe 's{\boldvar\b}{newvar}g' *.[chy]
You can do much more than with the Unix cut command using Perl one-liners. Here's a simple one-liner that will print out the first word of every line, but skip any line beginning with a # because it's a comment line:

perl -nae 'next if /^#/; print "$F[0]\n"'

Here is an example of how to print the first and the last field in each line:

perl -nae 'print "$F[0]:$F[-1]\n"'

Additional capabilities are provided by the rarely used -l switch:
-l[octnum]
This switch enables automatic line-ending processing. In the simplest form you can understand it as removing the input record separator (\n) when reading records and automatically adding "\n" back for each record printed by the implicit print statement in the loop, so you do not need to append it explicitly:
perl -lpe 'substr($_, 80) = ""'
Formally it has two separate effects. First, it automatically chomps $/ (the input record separator) from each line read when used with -n or -p. Second, it adds $\ (the output record separator) to any print statement. It sets $\ to the current value of $/ (\n by default) if octnum is omitted. The one-liner above, for instance, trims lines to 80 columns.
Note that the assignment $\ = $/ is done when the switch is processed, so the input record separator can be different from the output record separator if the -l switch is followed by a -0 switch:

find / -print0 | perl -ln0e 'print "found $_" if -p'

This sets $\ to newline and then sets $/ to the null character.
Among modern additions I would like to note the Matz Kindahl collection:
- perl -pe '$_ = " $_ "; tr/ \t/ /s; $_ = substr($_,1,-1)'
  This piece will remove spaces at the beginning and end of a line and squeeze all other sequences of spaces into one single space. This was one of the "challenges" from comp.lang.perl.misc that occurs frequently; I am just unable to resist those. :)
- perl -ne '$n += $_; print $n if eof'
  perl5 -ne '$n += $_; END { print "$n\n" }'
  To sum numbers on a stream, where each number appears on a line by itself. That kind of output is what you get from cut(1), if you cut out a numerical field from an output. There is also a C program called sigma that does this faster.
- perl5 -pe 's/(\w)(.*)$/\U$1\L$2/'
  perl5 -pe 's/\w.+/\u\L$&/'
  To capitalize the first letter on the line and convert the other letters to lower case. The last one is much nicer, and also faster.
- perl -e 'dbmopen(%H,".vacation",0666);printf("%-50s: %s\n",$K,scalar(localtime(unpack("L",$V)))) while (($K,$V)=each(%H))'
  Well, it is a one-liner. :) You can use it to examine who wrote you a letter while you were on vacation. It examines the file that vacation(1) produces.
- perl5 -p000e 'tr/ \t\n\r/ /;s/(.{50,72})\s/$1\n/g;$_.="\n"x2'
  This piece will read paragraphs from the standard input and reformat them in such a manner that every line is between 50 and 72 characters wide. It will only break a line at whitespace and not in the middle of a word.
- perl5 -pe 's#\w+#ucfirst lc reverse $&#eg'
  This piece will read lines from the standard input and transform them into the Zafir language used by Zafir's troops, i.e. "Long Live Zafir!" becomes "Gnol Evil Rifaz!" (for some reason they always talk using capital letters). Andrew Johnson and I posted slightly different versions, and we both split the string unnecessarily. This one avoids splitting the string.
In 2003 Theodor Zlatanov, who authored a series of interesting articles about Perl on the IBM developerWorks site, published the article Cultured Perl: One-liners 102. Among other interesting one-liners he provided a collection of useful one-liners using ranges, as well as in-place editing:
Listing 3: Printing a range of lines

# 1. just lines 15 to 17
perl -ne 'print if 15 .. 17'
# 2. just lines NOT between line 10 and 20
perl -ne 'print unless 10 .. 20'
# 3. lines between START and END
perl -ne 'print if /^START$/ .. /^END$/'
# 4. lines NOT between START and END
perl -ne 'print unless /^START$/ .. /^END$/'

A problem with the first one-liner in Listing 3 is that it will go through the whole file, even if the necessary range has already been covered. The third one-liner does not have that problem, because it will print all the lines between the START and END markers. If there are eight sets of START/END markers, the third one-liner will print the lines inside all eight sets.
Preventing the inefficiency of the first one-liner is easy: just use the $. variable, which holds the current line number. Start printing if $. is over 15 and exit if $. is greater than 17.
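A minimal sketch of that improvement (the early exit saves scanning the rest of a large file):

perl -ne 'print if $. >= 15; exit if $. >= 17;' file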
... ... ...
Listing 5: In-place editing

# 1. in-place edit of *.c files changing all foo to bar
perl -p -i.bak -e 's/\bfoo\b/bar/g' *.c
# 2. delete first 10 lines
perl -i.old -ne 'print unless 1 .. 10' foo.txt
# 3. change all the isolated oldvar occurrences to newvar
perl -i.old -pe 's{\boldvar\b}{newvar}g' *.[chy]
# 4. increment all numbers found in these files
perl -i.tiny -pe 's/(\d+)/ 1 + $1 /ge' file1 file2 ....
# 5. delete all lines between START and END
perl -i.old -ne 'print unless /^START$/ .. /^END$/' foo.txt
# 6. binary edit (careful!)
perl -i.bak -pe 's/Mozilla/Slopoke/g' /usr/local/bin/netscape
See also Cultured Perl One-liners by Theodor Zlatanov
Q: Hi, I'm looking forward to learning Perl. I'm a systems administrator (Unix). I'm interested in an online course; any recommendations would be highly appreciated. Syed
A: I used to teach sysadmins Perl in a corporate environment, and I can tell you that the main danger in learning Perl for a system administrator is the overcomplexity that many Perl books blatantly sell. In this sense anything written by Randal L. Schwartz is suspect, and Learning Perl is a horrible book to start with. I wonder how many sysadmins dropped Perl after trying to learn from this book.
See http://www.softpanorama.org/Bookshelf/perl.shtml
It might be that the best way is first to try to replace AWK in your scripts with Perl, and only then gradually start writing full-blown Perl scripts. For inspiration you can look at a collection of Perl one-liners, but please beware that some (many) of them are way too clever to be useful. Useless overcomplexity rules here too.
I would also recommend avoiding the OO features of Perl that many books oversell. A lot can be done using regular Algol-style programming with subroutines and by translating AWK into Perl. OO has its uses, but like many other programming paradigms it is oversold.
Perl is very well integrated into Unix (better than any of the competitors), and due to this it opens for a sysadmin levels of productivity simply incomparable with what is achievable using shell. You can automate a lot of routine work and enhance existing monitoring systems with ease if you know Perl well.
March 22, 2006 | Linux.com
Notice that the whole script is enclosed in single quotes. The shell will expand *, $, and other shell variables that aren't in single quotes, so usually you will want to enclose a script in single quotes. Perl scripts use single and double quotes in the same way as the shell: substituting in variables when strings are enclosed in double quotes, and leaving strings in single quotes as literals. If you want to use single quotes within the script itself, Perl allows you to use the format q/Hello World/ instead of enclosing a string in single quotes, or qq/Hello World/ for double quotes.
Compare the results of these two commands:
perl -e "$a = qq/Hello World/; print $a" # error
perl -e '$a = qq/Hello World/; print $a' # prints "Hello World"

In the first case, the shell attempts to substitute the shell variable $a because the entire string is in double quotes. The example fails because $a is probably not set by your shell: the shell sends Perl the script " = qq/Hello World/; print ", causing an invalid syntax error if you use bash or ksh; if you use csh or tcsh, the shell will throw an error before sending it to Perl because you are using an undefined variable. In the second example, the shell does not expand $a but correctly sends the literal script '$a = qq/Hello World/; print $a' to Perl. This has the desired effect of setting the $a variable to "Hello World" and then printing it.
When you use perl -pe for a stream operation like perl -pe 's/here is a typi/here is a typo/g' < inputfile > outputfile, Perl allows you to edit the file in place and make a backup of the old version as well.
The -i option of Perl means that you want to edit the file in place and overwrite the original version. Use this option with caution. A safer solution is to use an argument to the -i option to store a backup copy of the original file. For example, if you used the option -i.bak on a file named foo, the new edited version of the file would be foo and the original would be saved as foo.bak.
In effect, you can shorten a command like mv file file.old; perl -pe 's/oldstring/newstring/g' file.old > file to perl -p -i.old -e 's/oldstring/newstring/g' file. Even better, you can do it as a bulk operation on a set of files, like so:
perl -p -i.old -e 's/oldstring/newstring/g' file*
Here's an easy way to change all the strings in all files recursively:
find . | xargs perl -p -i.old -e 's/oldstring/newstring/g'
Perl's -a and -F options help parse a file while you are reading it. -a turns on autosplit mode. When autosplit is enabled, after reading each line of input, Perl will automatically do a split on the line and assign the resulting array to the @F variable. Each line is split up by whitespace unless the -F parameter is used to specify a new field delimiter. These two features simplify parsing when the file is a simple record-oriented format.
Here's a simple one-line script that will print out the fourth word of every line, but also skip any line beginning with a # because it's a comment line.
perl -nae 'next if /^#/; print "$F[3]\n"'
This second one-line script will extract all usernames from /etc/passwd.
perl -na -F: -e 'print "$F[0]\n"' < /etc/passwd
You can use -F/:/ to split on a pattern instead of a string literal. Be careful, because the shell may escape characters preceded by a \ if they are not enclosed in single quotes. To split on whitespace use -F'/\s+/'.
Of course, command line Perl can be used for more general sysadmin tasks in addition to file editing. Let's say you have some HTTP log files named with a date timestamp like access_log.2005-1-1 .. access_log.2005-12-31 and you want to copy them to access_log.old.2005-1-1 .. access_log.old.2005-12-31. Copying the files by hand would be a slow and error-prone operation. You could create a script to do this, but it is a simple enough operation that you can do it with a quick line of Perl.
perl -e 'for(<access_log.*>){$a = $_; s/log/log.old/; `cp $a $_`}'
In this script the <access_log.*> uses Perl's file globbing to create a for loop over all matching files. Each name is assigned to the variable $_ in turn. The loop first stores the old file name in the $a variable, then applies a substitution to $_ that changes log to log.old. Finally it calls an external cp command to copy the file from the old filename to the new one. An external cp might be slower than calling Perl's File::Copy module, but brevity prevails in these short code bits when you have no concerns about the script's performance.
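If the cost of spawning cp ever matters, a roughly equivalent version using File::Copy might look like this (a sketch under the same filename assumptions):

perl -MFile::Copy -e 'for (<access_log.*>) { my $new = $_; $new =~ s/log/log.old/; copy($_, $new) or warn "copy $_ failed: $!\n"; }'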
In Perl, backticks call an external command and then return the output of that command. This can make your scripts much simpler. Sometimes it is the only reasonable way to accomplish a task. As an example, it is possible to write a Perl script to read the /proc file system to find out which processes are running and what they are doing, but it's easier to capture the output of a ps command and then parse that.
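A sketch of that approach (the column layout of ps aux is an assumption about your system):

perl -e 'my @ps = `ps aux`; shift @ps; for (@ps) { my ($user, $pid) = (split)[0, 1]; print "$pid\t$user\n"; }'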
If you know a faster way to get your task done with another program, don't feel obligated to write a pure Perl solution. In many cases, using an external program is the only way to accomplish a task, but wrapping command line Perl around it is good way to automate your task.
The following example copies a script named command.pl to machines B, C, and D, then executes it and prints out the results. Using scp and ssh you can do this with a script like this:

perl -e 'for("B","C","D"){`scp command.pl $_:`; `ssh $_ command.pl`; }'

This works even better if you have public key authentication set up properly.
Selected Comments
Anonymous Coward :
Two of my favorite command-line constructs are:

Pick a random file (e.g. MP3):
ls | perl -e '@a=<>;print $a[rand(@a)]'

and:

Shuffle the contents of a file:
cat foo | perl -e '@a=<>;while(@a){print splice(@a,rand(@a),1)}'

The latter is awash with Perl wizardry (using an array in scalar context, the splice() function, etc.) but it's simplistic in its brevity.
Jan 28, 2004
I'd take out the part from perldoc perl and replace it with the output from perl --help. Seems perl.pod has gotten out of date. In particular, I notice the -C switch (which did one thing <5.8.1 and something very different >=5.8.1) is missing.
Output from 5.8.3:
Usage: ./perl [switches] [--] [programfile] [arguments]
  -0[octal]         specify record separator (\0, if no argument)
  -a                autosplit mode with -n or -p (splits $_ into @F)
  -C[number/list]   enables the listed Unicode features
  -c                check syntax only (runs BEGIN and CHECK blocks)
  -d[:debugger]     run program under debugger
  -D[number/list]   set debugging flags (argument is a bit mask or alphabets)
  -e program        one line of program (several -e's allowed, omit programfile)
  -F/pattern/       split() pattern for -a switch (//'s are optional)
  -i[extension]     edit <> files in place (makes backup if extension supplied)
  -Idirectory       specify @INC/#include directory (several -I's allowed)
  -l[octal]         enable line ending processing, specifies line terminator
  -[mM][-]module    execute `use/no module...' before executing program
  -n                assume 'while (<>) { ... }' loop around program
  -p                assume loop like -n but print line also, like sed
  -P                run program through C preprocessor before compilation
  -s                enable rudimentary parsing for switches after programfile
  -S                look for programfile using PATH environment variable
  -t                enable tainting warnings
  -T                enable tainting checks
  -u                dump core after parsing program
  -U                allow unsafe operations
  -v                print version, subversion (includes VERY IMPORTANT perl info)
  -V[:variable]     print configuration summary (or a single Config.pm variable)
  -w                enable many useful warnings (RECOMMENDED)
  -W                enable all warnings
  -x[directory]     strip off text before #!perl line and perhaps cd to directory
  -X                disable all warnings

Also, I have trouble with your "Uncommon". Seems to me you cover only the most common switches (other than -w).
These are one-liners that might be of use. Some of them are from the net and some are ones that I have had to use for some simple task. If Perl 5 is required, perl5 is used.
- perl -ne '$n += $_; print $n if eof'
  perl5 -ne '$n += $_; END { print "$n\n" }'
  To sum numbers on a stream, where each number appears on a line by itself. That kind of output is what you get from cut(1), if you cut out a numerical field from an output. There is also a C program called sigma that does this faster.
- perl5 -pe 's/(\w)(.*)$/\U$1\L$2/'
  perl5 -pe 's/\w.+/\u\L$&/'
  To capitalize the first letter on the line and convert the other letters to lower case. The last one is much nicer, and also faster.
- perl -e 'dbmopen(%H,".vacation",0666);printf("%-50s: %s\n",$K,scalar(localtime(unpack("L",$V)))) while (($K,$V)=each(%H))'
  Well, it is a one-liner. :) You can use it to examine who wrote you a letter while you were on vacation. It examines the file that vacation(1) produces.
- perl5 -p000e 'tr/ \t\n\r/ /;s/(.{50,72})\s/$1\n/g;$_.="\n"x2'
  This piece will read paragraphs from the standard input and reformat them in such a manner that every line is between 50 and 72 characters wide. It will only break a line at whitespace and not in the middle of a word.
- perl5 -pe 's#\w+#ucfirst lc reverse $&#eg'
  This piece will read lines from the standard input and transform them into the Zafir language used by Zafir's troops, i.e. "Long Live Zafir!" becomes "Gnol Evil Rifaz!" (for some reason they always talk using capital letters). Andrew Johnson and I posted slightly different versions, and we both split the string unnecessarily. This one avoids splitting the string.
- perl -pe '$_ = " $_ "; tr/ \t/ /s; $_ = substr($_,1,-1)'
  This piece will remove spaces at the beginning and end of a line and squeeze all other sequences of spaces into one single space. This was one of the "challenges" from comp.lang.perl.misc that occurs frequently; I am just unable to resist those. :)
perl.com
Perl has a large number of command-line options that can help make your programs more concise and that open up many new possibilities for one-off command-line scripts. In this article we'll look at some of the most useful of these.
perl.com
If you only want to read in one or more files, apply a regex to the contents, and spit out the altered text as one big stream -- the best approach is probably a one-liner such as the following:
perl -p -e "s/Foo/Bar/g" <FileList>

This command calls perl with the options -p and -e "s/Foo/Bar/g" against the files listed in FileList. The first option, -p, tells Perl to print each line it reads after applying the alteration. The second option, -e, tells Perl to evaluate the provided substitution regex rather than reading a script from a file. The Perl interpreter then evaluates this regex against every line of all (space separated) files listed on the command line and spits out one huge stream of the concatenated fixed lines.
In standard fashion, Perl allows you to concatenate options without arguments with following options for brevity and convenience. Therefore, you'll more often see the previous example written as:
perl -pe "s/Foo/Bar/g" <FileList>In-place Editing
If you want to edit the files in place, editing each file before going on to the next, that's pretty easy, too:
perl -pi.bak -e "s/Foo/Bar/g" <FileList>The only change from the last command is the new option -i.bak , which tells Perl to operate on files in-place, rather than concatenating them together into one big output stream. Like the -e option, -i takes one argument, an extension to add to the original file names when making backup copies; for this example I chose .bak .
Warning: If you execute the command twice, you've most likely just overwritten your backups with the changed versions from the first run. You probably didn't want to do that.
Because -i takes an argument, I had to separate out the -e option, which Perl otherwise would interpret as the argument to -i, leaving us with a backup extension of .bake, unlikely to be correct unless you happen to be a pastry chef. In addition, Perl would have thought that "s/Foo/Bar/g" was the filename of the script to run, and would complain when it could not find a script by that name.
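To make the failure concrete, the collapsed form would look like this (a deliberately broken sketch; do not run it on real files):

perl -pi.bake "s/Foo/Bar/g" <FileList> # backup extension becomes ".bake" and the regex is taken as a script filename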
Of course, you may want to make more extensive changes than just one regex. To make several changes all at once, add more code to the evaluated script. Remember to separate each additional line of code with a semicolon (technically, you should place a semicolon at the end of each line of code, but the very last one in any code block is optional). For example, you could make a series of changes:
perl -pi.bak -e "s/Bill Gates/Microsoft CEO/g; s/CEO/Overlord/g" <FileList>"Bill Gates" would then become "Microsoft Overlord" throughout the files. (Here, as in all one-liners, we ignore such finicky things as making sure we don't change "HERBACEOUS" to "HERBAOverlordUS"; for that kind of information, refer to a good treatise on regular expressions, such as Jeffrey Friedl's impressive book Mastering Regular Expressions, 2nd Edition. Also, I've wrapped the command to fit, but you should type it in as just one line.)
You may wish to override the behavior created by -p, which prints every line read in, after any changes made by your script. In this case, change to the -n option. -p -e "s/Foo/Bar/" is roughly equivalent to -n -e "s/Foo/Bar/; print". This allows you to write interesting commands, such as removing lines beginning with hash marks (Perl comments, C-style preprocessor directives, etc.):
perl -ni.bak -e "print unless /^\s*#/;" <FileList>
One of the more unusual pastimes of Unix geeks is determining how much logic can be crammed into a single line of code. Perl hackers do this exceedingly well. For example, here's a one-liner test for prime numbers:
perl -le 'print "PRIME" if (1 x shift) !~ /^(11+)\1+$/' 19

That snippet comes from the Perl Journal's collection of one-liners available at http://www.itknowledge.com/tpj/one-liners01.html. You substitute whatever number you want to check for the number 19 at the end.
Perl is as terse as it is powerful. Even so, few Perl programmers code as tightly as this! Most are happy producing more relaxed and readable code. Even so, one-liners are useful for a lot more than proving your worth as a tight coder. A versatile one-liner can be used to do much quick and dirty processing on the command line. Here's a simple substitute command that might come in very handy:
perl -p -i -e 's/this/that/g' filename

This command replaces the string *this* with the string *that* in the specified file -- much the same as the scripts recently included in this column.
You can insert this command into a howto file so you can redo it later, turn it into a script that prompts for the *this* and *that* strings, or just memorize it. It's not that hard, even if you're not a regular Perl user. Just remember that the arguments spell "pie" and that the substitute command looks like a substitute command in sed. Since the command replaces the original file with the changed one, it's very quick. Arguments:

p    print
i    in-place edit
e    execute the following command

The Perl command:

perl -p -i -e 's/\015//g' filename

strips the carriage returns out of a file, turning a DOS file (which ends lines in both carriage returns and linefeeds) into a Unix file (which includes only linefeeds). It is basically the equivalent of:

tr -d "\015"

Another very good use of one-liners is to test your understanding of the language. The very terse nature of a one-liner might force you to look at some syntax you might not normally use.
If you maintain a number of HTML documents on a Unix WWW server, you may sometimes want to make the same change to a number of files. Doing so by hand in a text editor can be tedious, but one time-saving option is to edit your files in place with a perl "one-liner". Best of all, you don't have to be a perl expert to do it.
Warning: Be sure to try this on a dummy copy of your files before you use it to edit the real thing! Since the editing happens in place, a mistake can be tricky to undo, even if you use the backup -i.bak option.
- Change the hostname "xyz.rice.edu" to "abc.rice.edu":
  perl -i.bak -p -e 's/xyz\.rice\.edu/abc.rice.edu/ig' *.html
- Change localhost URLs to remote URLs:
  perl -i.bak -p \
    -e 's#file://localhost/localpath/#http://riceinfo.rice.edu/remotepath/#ig' \
    *.html
- Insert a department name at the beginning of every <TITLE>:
  perl -i.bak -p \
    -e 's#<title>#<title>Rice Fooology Dept.: #i' *.html
- Insert a maintainer signature at the end of every file (before the closing <BODY> tag):
  perl -i.bak -p \
    -e 's#</body>#<p>\n<address>-- Jane Doe (jdoe\@rice.edu) 1999.12.31</address>\n</body>#i' \
    *.html

Anatomy of a perl one-line substitution command
perl -i[.backup-extension] -p -e 's#pat1#pat2#ig' files
- -i[.backup-extension]
  Tells perl to run the command on the named files in-place, i.e., using the named files both as input and output. If a backup extension is provided, the unmodified version of each file will be saved with the extension appended. Example: -i.bak
- -p
  Tells perl to assume an input loop around your one-line program and echo the output.
- -e
  The one-line program follows.
- 's#pat1#pat2#ig'
  The perl "substitution" function. Matches every instance of the pattern pat1 and replaces it with pat2. The "#" used to delimit the patterns can be any character that isn't found in pat1 or pat2. The perl pattern matching used in pat1 is very powerful and somewhat complex; the main pitfall to remember is that you may need to escape special characters such as "." with a preceding backslash, e.g. "xyz\.rice\.edu". The trailing "i" flag means to ignore case when matching pat1. The trailing "g" flag means to apply the substitution multiple times on the same line (without the "g" it will only be applied to the leftmost pattern match on each line).
- files
  The file(s) on which the command should be run. In an HTML context, you probably want to specify a pattern in the shell to match your HTML files, taking into account any subdirectories you also want to include. Examples:

  *.html                  (HTML files in current dir)
  *.html blah/*.html      (HTML files in current dir and subdir "blah")
  *.html */*.html         (HTML files in current dir and all subdirs one level deep)
  {.,*,*/*,*/*/*}/*.html  (HTML files in current dir and all subdirs three levels deep)

For more information
- Pertinent sections of the perl man page (if unavailable on the Web, type man perl at the Unix prompt):
- Learning Perl (the llama book) by Randal Schwartz
- Programming Perl (the camel book) by Larry Wall and Randal Schwartz
The following is a collection of Perl one-liners for command-line use. First comes a quick recap of the important Perl command-line arguments, followed by the example one-liners. These are followed by a section that demonstrates how to convert a one-liner into a full Perl script.
This page is likely not suitable for those with no Perl experience; consulting sites such as learn.perl.org may be necessary to learn about $_ and other constructs that will not be explained here.
For more background on these command line arguments, peruse perlrun.
- -e is used to specify Perl expressions to be run. More than one can be used, if needed. Other options should not follow this option.
- -p is to loop over and print input by default.
- -n is to loop over input without printing anything.
- -l handles newlines for you, and generally can be used by default, unless you need to do something special with the line endings themselves.
The -ple or -nle sets are good groups to start with, depending on whether printing will be used by default or not.
- -i causes perl to operate on files in-place, and optionally also backs up the files via -i.bak or whatever.
Using -ie '…' is a mistake, as the -i option reads the e as the backup filename suffix, followed shortly thereafter by perl failing as there is no longer any -e option to denote the following expression. Hence, I tend to use the in-place option before the loop control set.
$ echo | perl -i -ple 42
In general, practice constructing command lines such that -e is the last thing before an expression to avoid this sort of argument processing problem.
- -a enables autosplit of input into the @F array.
Usually I use perl -lane … when processing input into columns, as it is easy to remember, and most often do not want to print by default. Alternatives such as cut or awk might be better to use when dealing with columns.
- -F allows one to alter the pattern input is split on with -a. Like -i it takes an argument, so should be used apart from other option sets.
$ perl -F: -lane 'print $F[0] unless /^#/' /etc/passwd
- -0 specifies the input record separator.
- -M lets you load nifty modules such as File::Slurp or IO::All.
There are more, so be sure to consult the depths of perlrun at some point.
In the Perl spirit of "Programming is fun", here are some one-liners that might be actually useful. Please mail me yours, the best one-liner writer wins a Perl magnetic poetry kit. Contest closes July 31st, 2000. Please note that it is me personally running this competition, not NRCC, CBR or IMB. "Best" is subjective, and will be determined by an open vote.
Take a multiple sequence FASTA sequence file and print the non-redundant subset with the description lines for identical sequence concatenated with ;'s. Not a small one-liner, but close enough.
perl -ne 'BEGIN{$/=">";$"=";"}($d,$_)=/(.*?)\n(.+?)>?$/s;push @{$h{lc()}},$d if$_;END{for(keys%h){print">@{$h{$_}}$_"}}' filename
Split a multi-sequence FastA file into individual files named after their description lines.
perl -ne 'BEGIN{$/=">"}if(/^\s*(\S+)/){open(F,">$1")||warn"$1 write failed:$!\n";chomp;print F ">", $_}'
Take a blast output and print all of the gi's matched, one per line.
perl -pe 'next unless ($_) = /^>gi\|(\d+)/;$_.="\n"' filename
Filter all repeats of length 4 or greater from a FASTA input file. This one is thanks to Lincoln Stein and Gustavo Glusman's discussions on the bio-perl mailing list.
perl -pe 'BEGIN{$_=<>;print;undef$/}s/((.+?)\2{3,})/"N"x length$1/eg' filename
By : anonymous ( Fri Oct 8 01:39:43 2004 )
Try using the -P option with grep. This enables perl regular expressions in grep e.g.
grep -P "\S+\s+\S+" file
By : anonymous ( Fri Sep 17 20:22:02 2004 )
perl -e 'chmod 0000 $_ while <*>'
By : anonymous ( Tue Mar 16 15:05:27 2004 )
perl -wne 'BEGIN{$" = ","} @fields = split/\s+/; print "@fields\n";'
Abstract This article introduces some of the more common perl options found in command line programs, also known as ...
Adding a long list of numbers on the command line:
perl -e 'print eval join("+", @ARGV)' 6 10 20 11 9 16 17 28 100 33333 14 -7

Preserving case in a substitution

To replace substring $x with an equal-length substring $y, but preserving the case of $x:

$string =~ s/($x)/"\L$y"^"\L$1"^$1/ie;

How To Use The Perl Debugger as a Command-Line Interpreter
perl -de 0
perl -pi -e 's/foo/bar/' file

Does an in-place SED on the file. GNU sed(1) v4 also supports this with -i and will probably be quicker if you only need a simple query and replacement. However, Perl's RegularExpressions are more powerful and easier on the hands than the POSIX variety offered by SED. With GNU sed(1), you can use the -r switch to get an extended RegularExpression syntax which also requires fewer backslashes than the POSIX flavour.
Removing empty lines from a file
perl -ni.bak -e '/\S/ && print' file1 file2

In Shell:

for FILE in file1 file2 ; do mv "$FILE" "$FILE.bak" ; grep '[^ ]' "$FILE.bak" > "$FILE" ; done

Collapse consecutive blank lines to a single one
perl -00 -pi.bak -e1 file1 file2

Note the use of 1 as a no-op piece of Perl code. In this case, the -00 and -p switches already do all the work, so only a dummy needs to be supplied.
Binary dump of a string
perl -e 'printf "%08b\n", $_ for unpack "C*", shift' 'My String'

Replace literal "\n" and "\t" in a file with newlines and tabs
perl -pe 's!\\n!\n!g; s!\\t!\t!g' $file

Note that you can use any punctuation as the separator in an s/// command, and if you have backslashes or even need literal slashes in your pattern then doing this can increase clarity.
List all currently running processes
This is useful if you suspect that ps(1) is not reliable, whether due to a RootKit or some other cause. It prints the process ID and command line of every running process on the system (except some "special" kernel processes that lie about/don't have command lines).
perl -0777 -pe 'BEGIN { chdir "/proc"; @ARGV = sort { $a <=> $b } glob("*/cmdline") } $ARGV =~ m!^(\d+)/!; print "$1\t"; s/\0/ /g; $_ .= "\n";'

It runs an implicit loop over the /proc/*/cmdline files, by priming @ARGV with a list of files sorted numerically (which needs to be done explicitly using <=> -- the default sort is ASCIIbetical) and then employing the -p switch. -0777 forces files to be slurped wholesale. Per file, the digits that lead the filename are printed, followed by a tab. Since a null separates the arguments in these files, all of them are replaced by spaces to make the output printable. Finally, a newline is appended. The print call implicit in the -p switch then takes care of outputting the massaged command line.
find . -name "*.mp3" | perl -pe 's/.\/\w+-(\w+)-.*/$1/' | sort | uniq
perl -pi -e'$_ = sprintf "%04d %s", $., $_' test # inserting numbers in the file
find . -name "*.jpg" | perl -ne'chomp; $name = $_; $quote = chr(39); s/[$quote\\!]/_/ ; print "mv \"$name\" \"$_\"\n"'
# grep abba foo
perl -ne 'print if /abba/' foo
# add first and penultimate columns
# NOTE the equivalent awk script:
# awk '{i = NF - 1; print $1 + $i}'
perl -lane 'print $F[0] + $F[-2]'

Practical uses include omitting lines matching a regular expression, printing a range of lines, in-place editing of multiple files, etc.
Printing a range of lines:

$ (echo a; echo b) | perl -nle 'print unless /b/'
a
$ (echo a; echo b) | perl -nle 'print unless $. == 1'
b

Any time the $. line number variable is being used with multiple files, the eof function may need to be used to reset the current line number counter. The following one-liners demonstrate this feature by reading the file input twice, and resetting the line number counter in the second case.
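A sketch of that demonstration, assuming input is a two-line file; without close ARGV the counter keeps running across files, with it the second file starts again at 1:

$ perl -nle 'print $.' input input
1
2
3
4
$ perl -nle 'print $.; close ARGV if eof' input input
1
2
1
2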
# print the range of lines 5 to 50
perl -ne 'print if $. >= 5; exit if $. >= 50;'
Add a line to a file
Appending data to existing files is easy. So is inserting data into arbitrary locations in a file, such as prepending a new first line to a set of files. In the following case, #!/usr/bin/perl will be added as the first line of all *.pl files in the current directory.
$ perl -i -ple 'print q{#!/usr/bin/perl} if $. == 1; close ARGV if eof' *.pl

If a recursive replace is needed, either investigate the use of the modules File::Find or IO::All, or simply have the unix shell pull in the required files as arguments to perl. While the second example is longer, it will work properly if filenames have spaces in their names, due to find -print0 and xargs -0 using the NUL character to delimit filenames instead of spaces.
$ perl -i -ple 'print q{#!/usr/bin/perl} if $. == 1; close ARGV if eof' \
    `find . -type f -name "*.pl"`

$ find . -type f -name "*.pl" -print0 | \
    xargs -0 perl -i -ple 'print q{#!/usr/bin/perl} if $. == 1; close ARGV if eof'

The following trick shows how to replace the second line of a file with some text, but only if that line is blank.

$ perl -ple 's/^$/some text/ if $. == 2; close ARGV if eof'
Home on the range
To skip ranges of text, use the .. operator. This operator is documented in perlop. The following one-liners illustrate different ways of collapsing runs of newlines. The first example eliminates all blank lines.
$ cat input
foo



bar
$ perl -ne 'print unless /^$/../^$/' input
foo
bar

The unless statement is equivalent to if not, but is different from if ! due to the associativity and precedence rules covered in perlop. A benefit of this behavior allows the reduction of runs of blank lines to a single blank line.

$ perl -ne 'print if ! /^$/../^$/' input
foo

bar
Line numbers can also be used with the range operator, for instance to remove the first four lines of a file.
$ perl -nle 'print unless 1 .. 4' input
bar
Altering record parsing
Perl uses the -0 option to allow changing the input record separator. The two main uses of this option are -00 to operate in paragraph mode, and -0777 to read all input into $_ at once. The paragraphs file contains the -0 documentation from perlrun, and is used in the following example to extract just the paragraph with the word special in it.
$ perl -00 -ne 'print if /special/' paragraphs
The special value 00 will cause Perl to slurp files in paragraph mode. The value 0777 will cause Perl to slurp files whole because there is no legal byte with that value.
Parsing the entire input file as a single line can be used to alter the newlines that otherwise require a range operator to deal with, as shown above. The following is a different way to remove runs of newlines from a file: by treating the entire file as a single line, a repeating s///g expression can be used to replace newlines as needed.
$ cat input
foo



bar
$ perl -0777 -pe 's/\n+/\n/g' input
foo
bar

Custom Quoting
Shell quoting may cause problems when writing expressions on the command line. Single quotes are usually used to delimit Perl expressions, to prevent shell interpolation of the code. To use a literal single quote inside such a single quoted string, the awkward '\'' syntax will need to be used, to end the single quoted string, include a literal quote, then restart the quoted string.
$ perl -le 'print "'\'' is a single quote"'
' is a single quote

Alternatives include using an octal escape code instead; see ascii(1) for a listing of codes.

$ perl -le 'print "\047 is a single quote"'
' is a single quote

Perl also allows different quoting operators; see the "Quote and Quote-like Operators" section under perlop for more information on these.
$ perl -le 'print q{single quoted: $$} . qq{ interpolated: $$}'
single quoted: $$ interpolated: 11506
Output to Multiple Files
To split output among multiple files, change where standard output points at based on some test. For example, the following will split a standard unix mailbox file inbox into multiple files named out.*, incrementing a number for each message in the mailbox.
$ perl -pe 'BEGIN { $n=1 } open STDOUT, ">out.$n" and $n++ if /^From /' inbox
Converting One Liners
One-liners may be used as quick example code, or could be found in someone's shell history. The following section demonstrates how to convert such one-liners to full Perl scripts.
- Newline handling
The -l command line option can easily be ported to a script by using it on the shebang line.
#!/usr/bin/perl -w -l
use strict;

- Loop over input (-pe or -ne)
Printing loops can be replaced with a while block that prints by default. For a non-printing loop, remove the print statement.
#!/usr/bin/perl -w -l
use strict;

while (<>) {
    # code from -e expressions here
    print;
} continue {
    close ARGV if eof;
}

Special BEGIN or END blocks will need to be located outside of the while loop.
- In Place Editing
To convert the -i in-place option, use the $^I variable, and ensure the files to be processed are in @ARGV before looping over <>.
# trick to expand globs in input for systems with poor shells (Win32)
local @ARGV = map glob, @ARGV;
local $^I = '.orig';

while (<>) {
    # code here
    print;
} continue { close ARGV if eof }