Bash is the standard shell on Linux, and with version 3.2 or later available on all enterprise platforms and installed on Solaris and AIX, it makes sense to make it the standard interactive shell. The current version as of June 2017 is 4.4. Only HP-UX does not include bash by default, but HP does provide a depot package for bash 4.0. Bash 3.2 and later has important enhancements that make it a better shell (see Annotated List of Bash Enhancements), although scripts should generally be limited to small or medium size, as Perl represents a much better scripting environment than bash and is installed by default on all platforms, including HP-UX and AIX.
Bash can be called an accidental scripting language. It started very modestly more than 40 years ago with the Bourne shell (1977) and the C shell (1978), and then gradually grew into a scripting language. The first major enhancements were done in ksh88 and then ksh93; bash 3.0 and 4.0 introduced additional ones (generally within the framework set by ksh93).
The key problem with bash is that it is a "rarely used language" for most users. While users might use bash as a command interpreter daily, they program more or less complex scripts only rarely (say, once a quarter on average), and from one time to the next they manage to forget most of the important stuff related to the language. That makes programming in bash a real challenge.
Your programming skills also evaporate in several months or so unless you carefully document your "achievements". You will forget them and drop back to the most basic level of usage of the language when you need to write another bash script.
And you might repeat the same mistakes, or even blunders, again and again. A logbook is better than nothing but generally is not enough; you need a personal wiki. FrontPage can serve as a surrogate wiki on Windows very well: one of its advantages is that you do not need to learn a special wiki markup, since you can use HTML. The author has used it in this capacity for 20 years or so.
Bash is a so-called "glue" language, and the set of classic Unix utilities represents an important part of bash functionality. So in addition to knowledge of bash you need to know at least several dozen Unix utilities (and you had better know them well ;-). That requirement alone often leads to "stack overflow", so mental crutches in the form of your own custom references are needed, and creating them is a necessary step given the complexity of the environment, which by now is beyond human understanding (the situation called Blind men and an elephant - Wikipedia).
At the current level of complexity, the fact that bash has its source code available changes absolutely nothing for 99% of Unix/Linux sysadmins. It is a classic example of self-closing open source ;-).
At the beginning a good reference book might suffice (see Best Shell Books for recommendations), but gradually you should create your own website or wiki and work on enhancing it. Even people with 20 years of Unix experience often can't remember vital options of the most common utilities, such as ls, grep and find.
Also, blunders in bash can be very costly and lead to the loss of data, a crash of the OS, or both. They are often connected either with misunderstanding of bash behaviour, misunderstanding of the behaviour of one of the Unix utilities you use, or of Unix itself, or with making some nasty error yourself, accidentally, out of exhaustion, or under time pressure. See Sysadmin Horror Stories.
While bash had a humble start and a much weaker designer than ksh93, and was at times extremely buggy, the gap narrowed in version 3.2 to the extent that the better command-line user friendliness of bash makes it more attractive, especially as an interactive shell, than the ksh and C-shell supplied with the OS. Most ksh93 innovations are now present in some form in bash 3.2. As bash is the standard shell on Linux and is installed by default on Solaris, its status as an enterprise shell is almost as strong as that of ksh93 (which is mostly present in old, weaker versions). Current bash has the best debugger and from this point of view represents the best shell. But portable scripts are still probably better written for ksh88 or the POSIX shell, which is the lowest common denominator available on all Unixes. To write scripts for the Bourne shell now is extremely stupid and wasteful.
Bash 3.2 and later is one of the most portable advanced shells around (ksh93 and zsh are still strong competition; ksh93 is definitely more reliable for scripts and does not contain such design blunders as the last stage of a pipe running in a subshell instead of the invoking shell, which was actually fixed in later versions of bash via the lastpipe option).
Bash-related books dominate shell-related publications, and as such the level of know-how for bash is higher than for other shells (see Best Shell Books). The advantage of bash for a large enterprise environment is that it comes by default with Linux, Solaris and AIX (unfortunately in rather different versions). Only HP-UX does not have bash installed by default. It is also the best portable interactive shell, much closer to tcsh than any competition.
Still, you need some effort to make it the default shell. Unfortunately, the default shell for Solaris is the Bourne shell, /usr/bin/sh.
The Bourne shell is a weak, outdated shell, and the attempt to base shell scripting portability on it is a serious strategic error (bash should probably be used instead). It is still used as the root shell in Solaris, but that's due to Solaris legacy, not because it gives anything but disadvantages; claims that it somehow increases the security of root because it is statically linked are weak, and the argumentation is open to discussion. All in all, usage of the Bourne shell as the default root shell in Solaris might be considered a blunder: system administrators definitely need a better shell.
Usage of the Bourne shell as the default shell might slightly increase the chances of recovery in case the /usr partition is damaged, but that is a pretty serious case and usually means serious trouble with other partitions on the disk anyway (unless this is the case where the Solaris link /bin -> usr/bin is destroyed, but such cases are simple to fight by referencing the shell as /usr/bin/ksh in /etc/passwd). If the trouble is serious, then booting from a CD and mounting the damaged volume is always a good idea, and in that case it does not matter what shell root is using; you can change it anyway.
Bash 3.2 is reasonably easy to build from source. Get the archive and unpack it in your home directory. This will create a directory called bash-3.2 there. If you do not have the gunzip utility, you can obtain it the same way you obtained bash, or simply use gzip -d instead. The archive contains all of the source code needed to compile bash plus a large amount of documentation and examples; the latter have value of their own. The bash archive contains a main directory and a set of files and subdirectories. Among the first files you should examine are:
CHANGES A comprehensive list of bug fixes and new features since the last version
COPYING The GNU Copyleft for bash
MANIFEST A list of all the files and directories in the archive
NEWS A list of new features since the last version
README A short introduction and instructions for compiling bash
doc Directory with information related to bash in various formats (note that many sysadmins never read those docs)
examples Directory with examples of startup files, scripts, and functions
The doc directory contains a few articles that are worth reading. You can print the man page for reference with the command nroff -man bash.1 | more. It is convenient to have a hardcopy so you can write notes all over it. Also valuable: FAQ is a Frequently Asked Questions document with answers, readline.3 is the manual entry for the readline facility, and article.ms is an article about the shell that appeared in Linux Journal, written by the current bash maintainer Chet Ramey.
An examples directory is especially important and is well worth exploring (after you've finished reading this book, of course). It includes sample code, scripts, functions, and startup files. See Examples shipped with bash 3.2 and newer
Some interesting features of bash include process substitution:

<(list)    substitutes a command pipeline for an input file
>(list)    substitutes a command pipeline for an output file

In both forms, the shell runs the process list asynchronously, connected to a named pipe (FIFO). The name of this pipe becomes the argument to the command.
If the <(list) form is used, the result of executing list serves as an input file. This allows you to use the output of one or several commands as arguments to utilities that expect a file name. For instance, you can compare the contents of two directories by typing:

diff <( ls dir1 ) <( ls dir2 )
Process substitution can also be used for concatenating input in pipes:

cat <(echo hello) <(echo world)

which prints hello and world on separate lines.
If the >(list) form is used, the file passed as an argument will be a named pipe connected to the input of the list process: whatever the command writes to that file becomes the input of list.
Another, more complex example:

paste <(cut -f1 file1) <(cut -f3 file2) | tee >(process1) >(process2) > /dev/null

cuts fields 1 and 3 from the files file1 and file2 respectively, pastes the results together, and sends them to the processes process1 and process2. Note that the file which is passed as an argument to the command is a pipe, so programs that expect to lseek(2) on the file will not work. Also note that in zsh (with the MULTIOS option set) the previous example can be written more compactly and efficiently as:

paste <(cut -f1 file1) <(cut -f3 file2) > >(process1) > >(process2)

In bash, where a later stdout redirection simply overrides an earlier one, stick with the tee form. The shell uses pipes instead of FIFOs to implement the latter two process substitutions in the above example.
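In bash the tee-based form works as-is. A minimal sketch you can paste into a terminal (output order may vary, since the substituted processes run asynchronously):

# send one stream to two consumers at once; discard the original stdout
seq 1 5 | tee >(awk '{s+=$1} END {print "sum:", s}') \
              >(wc -l | sed 's/^/lines: /') > /dev/null
# prints (in some order):
# sum: 15
# lines: 5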
See also the article in Linux Journal: Bash Process Substitution.
Due to this, bash is gradually gaining ground as the preferred interactive shell for Solaris and other enterprise-class Unixes.
Older versions of bash (the 2.x series) are obsolete and should not be used. The recommended version is 3.2 patch level 3 or above; 4.x is recommended on RHEL 6.x and up.
Some pages that form a kind of bash tutorial:
Dr. Nikolai Bezroukov
Jun 12, 2021 | anto.online
What if you needed to execute a specific command again, one which you used a while back? You can't remember the first character, but you can remember you used the word "serve".

You can press the up key repeatedly until you find your command (that could take some time).

Or you can press CTRL + R and type a few characters you used in your last command. Linux will help locate the command; press Enter once you've found it. The example below shows how you can press CTRL + R and then type "ser" to find the previously run "php artisan serve" command. This tip will definitely help speed up your command-line experience.
anto@odin:~$ (reverse-i-search)`ser': php artisan serve

You can also use the history command to output all previously stored commands. The history command gives a list in ascending order of execution.
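If you rely on CTRL + R a lot, it may be worth tuning how bash records history. A minimal sketch for ~/.bashrc (the values chosen here are arbitrary):

# history tuning -- values are just examples
export HISTSIZE=10000          # commands kept in memory
export HISTFILESIZE=20000      # commands kept in ~/.bash_history
export HISTCONTROL=ignoredups  # skip consecutive duplicate entries
shopt -s histappend            # append to the history file instead of overwriting it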
Jun 10, 2021 | www.redhat.com
Exit status
In Bash scripting, $? prints the exit status of the previous command. If it returns zero, there was no error; if it is non-zero, you can conclude that the earlier task had some issue. A basic example is as follows:
$ cat myscript.sh
#!/bin/bash
mkdir learning
echo $?

If you run the above script once, it will print 0, because the directory does not exist yet and the script creates it. Naturally, you will get a non-zero value if you run the script a second time, as seen below:

$ sh myscript.sh
mkdir: cannot create directory 'learning': File exists
1
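In a script you would normally act on the exit status rather than just print it. A small sketch of the usual patterns (the directory names are hypothetical):

#!/bin/bash
mkdir /tmp/learning
if [ $? -ne 0 ]; then
    echo "mkdir failed" >&2
    exit 1
fi

# equivalent and more idiomatic: test the command directly
if ! mkdir /tmp/learning2; then
    echo "second mkdir failed" >&2
    exit 1
fi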
It is always recommended to enable trace (debug) mode by adding the -x option to your shell script, as below (note the deliberately misspelled mkdiir, used to trigger an error):

$ cat test3.sh
#!/bin/bash
set -x
echo "hello World"
mkdiir testing

$ ./test3.sh
+ echo 'hello World'
hello World
+ mkdiir testing
./test3.sh: line 4: mkdiir: command not found

You can also write a debug function, as below, which lets you turn tracing on and off anywhere you call it:
$ cat debug.sh
#!/bin/bash
_DEBUG="on"
function DEBUG() {
    [ "$_DEBUG" == "on" ] && "$@"
}

DEBUG echo 'Testing Debugging'
DEBUG set -x
a=2
b=3
c=$(( $a + $b ))
DEBUG set +x
echo "$a + $b = $c"

Which prints:

$ ./debug.sh
Testing Debugging
+ a=2
+ b=3
+ c=5
+ DEBUG set +x
+ '[' on == on ']'
+ set +x
2 + 3 = 5

Standard error redirection

You can redirect all errors to a custom file using standard error, which is denoted by the file descriptor 2. Execute it with normal Bash commands, as demonstrated below:
$ mkdir users 2> errors.txt
$ cat errors.txt
mkdir: cannot create directory 'users': File exists

Most of the time, it is difficult to find the exact line number in scripts. To print the line number along with the error, use the PS4 variable (supported with Bash 4.1 or later). Example below:
$ cat test3.sh
#!/bin/bash
PS4='$LINENO: '
set -x
echo "hello World"
mkdiir testing

You can easily see the line number while reading the errors:

$ ./test3.sh
5: echo 'hello World'
hello World
6: mkdiir testing
./test3.sh: line 6: mkdiir: command not found
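PS4 can carry more context than the line number. A sketch of a more informative trace prompt (this exact format string is just one possibility):

#!/bin/bash
# show file, line, and function in every trace line
PS4='+ ${BASH_SOURCE}:${LINENO}:${FUNCNAME[0]:-main}() '
set -x

greet() {
    echo "hello from a function"
}
greet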
Mar 10, 2021 | www.networkworld.com
... ... ...
Different ways to loop

There are a number of ways to loop within a script. Use for when you want to loop a preset number of times. For example:

#!/bin/bash
for day in Sun Mon Tue Wed Thu Fri Sat
do
    echo $day
done

or

#!/bin/bash
for letter in {a..z}
do
    echo $letter
done

Use while when you want to loop as long as some condition exists or doesn't exist:
#!/bin/bash
n=1
while [ $n -le 4 ]
do
    echo $n
    ((n++))
done
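Bash also has a C-style for and an until loop; a quick sketch of both (the loop bounds are arbitrary):

#!/bin/bash
# C-style for: explicit initialization, test, and increment
for (( i = 1; i <= 4; i++ ))
do
    echo "$i"
done

# until: loops as long as the condition is false
n=1
until [ $n -gt 4 ]
do
    echo "$n"
    ((n++))
done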
Using case statements

Case statements allow your scripts to react differently depending on what values are being examined. In the script below, we use different commands to extract the contents of the file provided as an argument by identifying the file type.

#!/bin/bash
if [ $# -eq 0 ]; then
    echo -n "filename> "
    read filename
else
    filename=$1
fi
if [ ! -f "$filename" ]; then
    echo "No such file: $filename"
    exit
fi
case $filename in
    *.tar)      tar xf $filename;;
    *.tar.bz2)  tar xjf $filename;;
    *.tbz)      tar xjf $filename;;
    *.tbz2)     tar xjf $filename;;
    *.tgz)      tar xzf $filename;;
    *.tar.gz)   tar xzf $filename;;
    *.gz)       gunzip $filename;;
    *.bz2)      bunzip2 $filename;;
    *.zip)      unzip $filename;;
    *.Z)        uncompress $filename;;
    *.rar)      rar x $filename;;
    *)          echo "No extract option for $filename"
esac

Note that this script also prompts for a file name if none was provided and then checks to make sure that the specified file actually exists. Only after that does it bother with the extraction.
Reacting to errors

You can detect and react to errors within scripts and, in doing so, avoid other errors. The trick is to check the exit codes after commands run. If an exit code has a value other than zero, an error occurred. In this script, we look to see whether Apache is running, but send the output from the check to /dev/null. We then check whether the exit code is non-zero, which would indicate that grep found no match. If the exit code is not zero, the script informs the user that Apache isn't running.

#!/bin/bash
ps -ef | grep apache2 > /dev/null
if [ $? != 0 ]; then
    echo Apache is not running
    exit
fi
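One classic pitfall with this check: the grep process itself can match the pattern, making the test succeed even when Apache is down. A common workaround is pgrep, or the bracket trick; a sketch:

#!/bin/bash
# pgrep exits non-zero when no process matches
if ! pgrep apache2 > /dev/null; then
    echo "Apache is not running"
    exit 1
fi

# alternative with plain grep: '[a]pache2' never matches the grep command line itself
if ! ps -ef | grep '[a]pache2' > /dev/null; then
    echo "Apache is not running"
    exit 1
fi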
softpanorama.org
Those shortcuts belong to the class of commands known as bang commands. An Internet search for this term provides a wealth of additional information (which you probably do not need ;-); here I will concentrate on just the most common bang commands, the ones potentially useful in the current command-line environment. Of them, !$ is probably the most useful and definitely the most widely used; for many sysadmins it is the only bang command that is regularly used.
!! is the bang command that re-executes the last command. It is used mainly in the shortcut sudo !! -- elevation of privileges after your command failed under your user account. For example:

fgrep 'kernel' /var/log/messages # fails due to insufficient privileges: /var/log/messages is not readable by an ordinary user
sudo !! # now we re-execute the command with elevated privileges

!$ puts the last argument of the previous command into the current command line. For example:
mkdir -p /tmp/Bezroun/Workdir
cd !$

In this example the last command is equivalent to cd /tmp/Bezroun/Workdir. Please try this example; it is a pretty neat trick.

NOTE: You can also pick individual arguments using word designators with numbers.
For example:
- !:0 is the name of the previous command
- !:1 is its first argument
- !:2 is the second
- And so on
cp !:2 !:3 # picks up the second and the third argument of the previous command

For this and other bang-command capabilities, copying fragments of the previous command line with the mouse is often more convenient, and you do not need to remember extra stuff. After all, bang commands were created before the mouse was available, and most of them reflect the realities and needs of that bygone era. Still, I have met sysadmins who even now use this and some additional capabilities, like !!:s^<old>^<new> (which replaces the string 'old' with the string 'new' and re-executes the previous command). The same is true for !* -- all arguments of the last command. I do not use them, and I had trouble writing this part of the post, correcting it several times to get it right.
Nowadays CTRL+R activates reverse history search, which provides an easier way to navigate your history than the capabilities the bang commands provided in the past.
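For completeness, here is what the substitute-and-re-execute idiom looks like in practice (the ^old^new shorthand is equivalent to !!:s^old^new for the first occurrence; the service name and output are hypothetical):

$ systemctl status ngnix     # typo in the service name
Unit ngnix.service could not be found.
$ ^ngnix^nginx               # re-run the last command with the typo fixed
systemctl status nginx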
linuxiac.com
The '!' symbol or operator in Linux can be used as the logical negation operator, to fetch commands from history (with tweaks), or to run a previously run command with modifications. All the commands below have been checked explicitly in the bash shell; though I have not checked, many of them may not work in other shells. Here we go into the amazing and mysterious uses of the '!' symbol or operator in Linux commands.

4. How to handle two or more arguments using (!)
Let's say I created a text file 1.txt on the Desktop:
$ touch /home/avi/Desktop/1.txt

and then copied it to /home/avi/Downloads, using complete paths on either side of the cp command:

$ cp /home/avi/Desktop/1.txt /home/avi/Downloads

Now we have passed two arguments to the cp command. The first is /home/avi/Desktop/1.txt and the second is /home/avi/Downloads. Let's handle them separately; just execute echo [arguments] to print the two arguments individually:

$ echo "1st Argument is : !^"
$ echo "2nd Argument is : !cp:2"

Note that the 1st argument can be printed as "!^", and the rest of the arguments can be printed by executing "![Name_of_Command]:[Number_of_argument]".
In the above example, the first command was cp and the 2nd argument was the one to print, hence "!cp:2". If some command, say xyz, was run with 5 arguments and you need the 4th argument, you may use "!xyz:4", and use it as you like. All the arguments can be accessed by "!*".

5. Execute the last command on the basis of keywords
We can execute the last executed command on the basis of keywords. It works as follows:

$ ls /home > /dev/null                     [Command 1]
$ ls -l /home/avi/Desktop > /dev/null      [Command 2]
$ ls -la /home/avi/Downloads > /dev/null   [Command 3]
$ ls -lA /usr/bin > /dev/null              [Command 4]

Here we have used the same command (ls) but with different switches and on different folders. Moreover, we sent the output of each command to /dev/null, as we are not going to deal with the output; this also keeps the console clean.
Now execute the last run command on the basis of keywords:
$ ! ls           [Command 1]
$ ! ls -l        [Command 2]
$ ! ls -la       [Command 3]
$ ! ls -lA       [Command 4]

Check the output, and you may be astonished that you are running already executed commands just by their ls keywords.

6. The power of the !! operator
You can run/alter your last run command using (!!). It will call the last run command, optionally altered/tweaked in the current command. Let me show you the scenario.

The other day I ran a one-liner script to get my private IP, so I ran:
$ ip addr show | grep inet | grep -v 'inet6' | grep -v '127.0.0.1' | awk '{print $2}' | cut -f1 -d/

Then I suddenly figured out that I needed to redirect the output of the above script to a file ip.txt. So what should I do? Should I retype the whole command and redirect the output to a file? Well, an easy solution is to press the UP navigation key and add '> ip.txt' to redirect the output:

$ ip addr show | grep inet | grep -v 'inet6' | grep -v '127.0.0.1' | awk '{print $2}' | cut -f1 -d/ > ip.txt

Thanks to the life-saving UP navigation key here. Now consider the following situation; the next time, I ran this one-liner script:

$ ifconfig | grep "inet addr:" | awk '{print $2}' | grep -v '127.0.0.1' | cut -f2 -d:

As soon as I ran the script, the bash prompt returned an error with the message "bash: ifconfig: command not found". It was not difficult to guess that I had run this command as a normal user, where it should be run as root.

So what's the solution? It is difficult to log in as root and then type the whole command again! Also, the UP navigation key from the last example doesn't come to the rescue here. So we need to call "!!" (without quotes), which will call the last command for that user:

$ su -c "!!" root

Here su switches to the user root, -c runs the specific command as that user, and the most important part, !!, will be replaced by the last run command, which is substituted here. Yes, you need to provide the root password.

I make use of !! mostly in the following scenarios:

1. When I run an apt-get command as a normal user, I usually get an error saying I don't have permission to execute it:
$ apt-get upgrade && apt-get dist-upgrade

Oops, error... don't worry, execute the command below to get it to succeed:
$ su -c "!!"

The same way I do for:
$ service apache2 start
or
$ /etc/init.d/apache2 start
or
$ systemctl start apache2

Oops! User not authorized to carry out such a task, so I run:
$ su -c 'service apache2 start'
or
$ su -c '/etc/init.d/apache2 start'
or
$ su -c 'systemctl start apache2'

7. Run a command that affects all files except ![FILE_NAME]

The ! (logical NOT) can be used to run a command on all files/extensions except the one behind '!'. Note that this relies on bash's extended globbing, which may need to be enabled with shopt -s extglob.

A. Remove all files from a directory except the one named 2.txt:

$ rm !(2.txt)

B. Remove all files from the folder except those with the extension .pdf:

$ rm !(*.pdf)

... ... ...
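A quick sketch of the extglob prerequisite just mentioned (the file names are hypothetical):

$ shopt -s extglob              # enable extended globbing for !() patterns
$ ls
1.txt  2.txt  notes.pdf
$ rm !(*.pdf)                   # removes 1.txt and 2.txt, keeps notes.pdf
$ ls
notes.pdf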
- Edgar Allen May 19, 2015 at 10:30 pm
You might also mention !?, which finds the last command containing its string argument. For example, if

1013 grep tornado /usr/share/dict/words
1014 grep hurricane /usr/share/dict/words
1015 wc -l /usr/share/dict/words

are all in the history, then !?torn will grep for tornado again, where !torn would search in vain for a command starting with torn.
And `wc !?torn?:2` works to select argument two from the command containing tornado and run `wc` on it.
- Stephen May 19, 2015 at 6:07 pm
I didn't see a mention of historical context in the article, so I'll give some here in the comments. This form of history command substitution originated with the C shell (csh), created by Bill Joy for the BSD flavor of UNIX back in the late 70's. It was later carried into tcsh and bash (the Bourne-Again SHell).

Personally, I've always preferred the C-shell history substitution mechanism, and never really took to the fc command (which I first encountered in the Korn shell).
- suzy May 16, 2015 at 11:45 am
Re the 4th command: you can access the arguments much more simply. These behave like regular-expression anchors:

- ^ -- the beginning (first parameter)
- $ -- the end (last parameter)
- :number -- parameter by number

Examples:

touch a.txt b.txt c.txt
echo !^     # displays the first parameter
echo !:1    # also displays the first parameter
echo !:2    # displays the second parameter
echo !:3    # displays the third parameter
echo !$     # displays the last (in our case 3rd) parameter
echo !*     # displays all parameters

- Tomasz Wiszkowski May 16, 2015 at 10:50 am
I think (5) works differently than you pointed out, and the redirection to /dev/null hides it, but zsh still prints the command.

When you invoke "! ls", it always picks the last ls command you executed and just appends your switches at the end (after /dev/null).

One extra cool thing is the !# operator, which picks arguments from the current line. Particularly good if you need to retype long path names you have already typed on the current line. Just type, for example:

cp /some/long/path/to/file.abc !#:1

and press Tab. It's going to replace the last argument with the entire path and file name.
- Avishek Kumar May 18, 2015 at 11:37 pm
Tomasz,
For the first part of your feedback: it doesn't pick the last command executed, and just to prove this we used 4 different switches for the same command ($ ! ls, $ ! ls -l, $ ! ls -la, $ ! ls -lA). You may check it by entering the keywords in any order; in each case it will output the same result.

As for it not working in zsh as expected: I already mentioned that I tested these on BASH, and most of them won't work in other shells.

For the second part, what you mentioned is a HASH TAG on the Linux command line, and we have covered it in one of our articles. You may like to read it here: https://www.tecmint.com/linux-commandline-chat-server-and-remove-unwanted-packages/
May 20, 2021 | www.redhat.com
You can achieve the same result by replacing the backticks with the $() syntax, as in the example below:

$ echo "There are $(ls | wc -l) files in this directory"
There are 3 files in this directory

Here's another example, still very simple but a little more realistic. I need to troubleshoot something in my network connections, so I decide to show my total and waiting connections minute by minute:

$ cat netinfo.sh
#!/bin/bash
while true
do
    ss -an > netinfo.txt
    connections_total=$(cat netinfo.txt | wc -l)
    connections_waiting=$(grep WAIT netinfo.txt | wc -l)
    printf "$(date +%R) - Total=%6d Waiting=%6d\n" $connections_total $connections_waiting
    sleep 60
done

$ ./netinfo.sh
22:59 - Total=  2930 Waiting=   977
23:00 - Total=  2923 Waiting=   963
23:01 - Total=  2346 Waiting=   397
23:02 - Total=  2497 Waiting=   541

It doesn't seem like a huge difference, right? I just had to adjust the syntax. Well, there are some implications involving the two approaches. If you are like me, and automatically use the backticks without even blinking, keep reading.
Deprecation and recommendations
Deprecation sounds like a bad word, and in many cases, it might really be bad.
When I was researching the explanations for the backtick operator, I found some discussions about "are the backtick operators deprecated?"
The short answer is: not in the sense of "on the verge of becoming unsupported and ceasing to work". However, backticks should be avoided and replaced by the $() syntax.

The main reasons for that are (in no particular order):
1. Backtick operators can become messy if the internal commands also use backticks.
- You will need to escape the internal backticks, and if you have single quotes as part of the commands or part of the results, reading and troubleshooting the script can become difficult.
- If you start thinking about nesting backtick operators inside other backtick operators, things will not work as expected or not work at all. Don't bother.
2. The $() operator is safer and more predictable.

- What you code inside $() is treated as a shell script. Syntactically it is the same thing as having that code in a text file, so you can expect that everything you would code in an isolated shell script would work here.

Here are some examples of the behavioral differences between backticks and $():

$ echo '\$x'
\$x
$ echo `echo '\$x'`
$x
$ echo $(echo '\$x')
\$x
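Nesting is where the difference bites hardest. A small sketch (the inner commands are arbitrary):

# with $() the nesting is straightforward:
$ echo $(basename $(dirname /tmp/dir1/file.txt))
dir1

# with backticks the inner pair must be escaped, which quickly becomes unreadable:
$ echo `basename \`dirname /tmp/dir1/file.txt\``
dir1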
You can find additional examples of the differences between backtick and $() behavior here.
Wrapping up

If you compare the two approaches, it seems logical to think that you should always/only use the $() approach. And you might think that the backtick operators are only used by sysadmins from an older era.

Well, that might be true, as sometimes I use things that I learned long ago, and in simple situations my "muscle memory" just codes them for me. For those ad-hoc commands that you know do not contain any nasty characters, you might be OK using backticks. But for anything that is more perennial or more complex/sophisticated, please go with the $() approach.
May 23, 2021 | www.redhat.com
Handling options

The ability for a Bash script to handle command-line options such as -h to display help gives you some powerful capabilities to direct the program and modify what it does. In the case of your -h option, you want the program to print the help text to the terminal session and then quit without running the rest of the program. The ability to process options entered at the command line can be added to the Bash script using the while command in conjunction with the getopts and case commands.

The getopts command reads any and all options specified at the command line and creates a list of those options. The while command loops through the list of options by setting the variable $option for each one in the code below. The case statement is used to evaluate each option in turn and execute the statements in the corresponding stanza. The while statement will continue to assess the list of options until they have all been processed or an exit statement is encountered, which terminates the program.

Be sure to delete the help function call just before the echo "Hello world!" statement so that the main body of the program now looks like this.
############################################################
############################################################
# Main program                                             #
############################################################
############################################################

############################################################
# Process the input options. Add options as needed.        #
############################################################
# Get the options
while getopts ":h" option; do
   case $option in
      h) # display Help
         Help
         exit;;
   esac
done

echo "Hello world!"

Notice the double semicolon at the end of the exit statement in the case option for -h. It is required for each option; add one to each case stanza to delineate the end of each option.

Testing is now a little more complex. You need to test your program with several different options -- and no options -- to see how it responds. First, check to ensure that with no options it prints "Hello world!" as it should.
[student@testvm1 ~]$ hello.sh
Hello world!

That works, so now test the logic that displays the help text.

[student@testvm1 ~]$ hello.sh -h
Add a description of the script functions here.

Syntax: scriptTemplate [-g|h|t|v|V]
options:
g     Print the GPL license notification.
h     Print this Help.
v     Verbose mode.
V     Print software version and exit.

That works as expected, so now try some testing to see what happens when you enter some unexpected options.

[student@testvm1 ~]$ hello.sh -x
Hello world!
[student@testvm1 ~]$ hello.sh -q
Hello world!
[student@testvm1 ~]$ hello.sh -lkjsahdf
Add a description of the script functions here.

Syntax: scriptTemplate [-g|h|t|v|V]
options:
g     Print the GPL license notification.
h     Print this Help.
v     Verbose mode.
V     Print software version and exit.

[student@testvm1 ~]$

Handling invalid options

The program simply ignores options for which you haven't created specific responses, without generating any errors. Although in the last entry with the -lkjsahdf options, because there is an "h" in the list, the program did recognize it and printed the help text. Testing has shown that one thing missing is the ability to handle incorrect input and terminate the program if any is detected.

You can add another case stanza to the case statement that will match any option for which there is no explicit match. This general case will match anything you haven't provided a specific match for. The case statement now looks like this:
while getopts ":h" option; do case $option in h) # display Help Help exit;; \?) # Invalid option echo "Error: Invalid option" exit;; esac doneKubernetes and OpenShift
This bit of code deserves an explanation of how it works. It seems complex but is fairly easy to understand. The while – done structure defines a loop that executes once for each option in the getopts – option structure. The ":h" string lists the possible input options that will be evaluated by the case – esac structure (the leading colon puts getopts into silent error-reporting mode, so the script can handle invalid options itself). Each option listed must have a corresponding stanza in the case statement. In this case, there are two. One is the h) stanza, which calls the Help procedure. After the Help procedure completes, execution returns to the next program statement, exit;;, which exits from the program without executing any more code, even if some exists. The option-processing loop is also terminated, so no additional options would be checked.
Notice the catch-all match of \? as the last stanza in the case statement. If any options are entered that are not recognized, this stanza prints a short error message and exits from the program.
Any additional specific cases must precede the final catch-all. I like to place the case stanzas in alphabetical order, but there will be circumstances where you want to ensure that a particular case is processed before certain other ones. The case statement is sequence sensitive, so be aware of that when you construct yours.
The last statement of each stanza in the case construct must end with the double semicolon (;;), which is used to mark the end of each stanza explicitly. This allows those programmers who like to use explicit semicolons at the end of each statement, instead of implicit ones, to continue to do so for each statement within each case stanza.

Test the program again using the same options as before and see how this works now.
The Bash script now looks like this.
#!/bin/bash
############################################################
# Help                                                     #
############################################################
Help()
{
   # Display Help
   echo "Add description of the script functions here."
   echo
   echo "Syntax: scriptTemplate [-g|h|v|V]"
   echo "options:"
   echo "g     Print the GPL license notification."
   echo "h     Print this Help."
   echo "v     Verbose mode."
   echo "V     Print software version and exit."
   echo
}

############################################################
############################################################
# Main program                                             #
############################################################
############################################################

############################################################
# Process the input options. Add options as needed.        #
############################################################
# Get the options
while getopts ":h" option; do
   case $option in
      h) # display Help
         Help
         exit;;
     \?) # Invalid option
         echo "Error: Invalid option"
         exit;;
   esac
done

echo "hello world!"

Be sure to test this version of your program very thoroughly. Use random input and see what happens. You should also try testing valid and invalid options without using the dash (-) in front.

Using options to enter data

First, add a variable and initialize it. Add the two lines shown in the program segment below; this initializes the $Name variable to "world" as the default.
<snip>
############################################################
############################################################
# Main program                                             #
############################################################
############################################################

# Set variables
Name="world"

############################################################
# Process the input options. Add options as needed.        #
<snip>
echo
command, to this.echo "hello $Name!"Add the logic to input a name in a moment but first test the program again. The result should be exactly the same as before.
[dboth@david ~]$ hello.sh hello world! [dboth@david ~]$# Get the options while getopts ":hn:" option; do case $option in h) # display Help Help exit;; n) # Enter a name Name=$OPTARG;; \?) # Invalid option echo "Error: Invalid option" exit;; esac done$OPTARG is always the variable name used for each new option argument, no matter how many there are. You must assign the value in $OPTARG to a variable name that will be used in the rest of the program. This new stanza does not have an exit statement. This changes the program flow so that after processing all valid options in the case statement, execution moves on to the next statement after the case construct.
Test the revised program.
[dboth@david ~]$ hello.sh
hello world!
[dboth@david ~]$ hello.sh -n LinuxGeek46
hello LinuxGeek46!
[dboth@david ~]$ hello.sh -n "David Both"
hello David Both!
[dboth@david ~]$
#!/bin/bash
############################################################
# Help                                                     #
############################################################
Help()
{
   # Display Help
   echo "Add description of the script functions here."
   echo
   echo "Syntax: scriptTemplate [-g|h|v|V]"
   echo "options:"
   echo "g     Print the GPL license notification."
   echo "h     Print this Help."
   echo "v     Verbose mode."
   echo "V     Print software version and exit."
   echo
}

############################################################
############################################################
# Main program                                             #
############################################################
############################################################

# Set variables
Name="world"

############################################################
# Process the input options. Add options as needed.        #
############################################################
# Get the options
while getopts ":hn:" option; do
   case $option in
      h) # display Help
         Help
         exit;;
      n) # Enter a name
         Name=$OPTARG;;
     \?) # Invalid option
         echo "Error: Invalid option"
         exit;;
   esac
done

echo "hello $Name!"

Be sure to test the help facility and how the program reacts to invalid input to verify that its ability to process those has not been compromised. If that all works as it should, then you have successfully learned how to use options and option arguments.
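One case the template above does not cover: an option that requires an argument being given without one (e.g. hello.sh -n). In getopts' silent mode, that sets the option variable to ":" with the offending option letter in $OPTARG. A sketch of the extra stanza (an addition of mine, not part of the article's template):

while getopts ":hn:" option; do
   case $option in
      h) Help
         exit;;
      n) Name=$OPTARG;;
      :) # Missing option argument
         echo "Error: -$OPTARG requires an argument"
         exit;;
     \?) # Invalid option
         echo "Error: Invalid option"
         exit;;
   esac
done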
May 23, 2021 | sookocheff.com
The Bash String Operators Posted on December 11, 2014 | 3 minutes | Kevin Sookocheff
A common task in bash programming is to manipulate portions of a string and return the result. bash provides rich support for these manipulations via string operators. The syntax is not always intuitive so I wanted to use this blog post to serve as a permanent reminder of the operators.
The string operators are signified with the ${} notation. The operations can be grouped into a few classes; each heading in this article describes a class of operation.

Substring Extraction

Extract from a position

${string:position}

Extraction returns a substring of string starting at position and ending at the end of string. string is treated as an array of characters starting at 0.

> string="hello world"
> echo ${string:1}
ello world
> echo ${string:6}
world
Extract from a position with a length

${string:position:length}

Adding a length returns a substring only as long as the length parameter.

> string="hello world"
> echo ${string:1:2}
el
> echo ${string:6:3}
wor

Substring Removal
Remove shortest starting match

${variable#pattern}

If variable starts with pattern, delete the shortest part that matches the pattern.

> string="hello world, hello jim"
> echo ${string#*hello}
world, hello jim
Remove longest starting match

${variable##pattern}

If variable starts with pattern, delete the longest match from variable and return the rest.

> string="hello world, hello jim"
> echo ${string##*hello}
jim
Remove shortest ending match

${variable%pattern}

If variable ends with pattern, delete the shortest match from the end of variable and return the rest.

> string="hello world, hello jim"
> echo ${string%hello*}
hello world,
Remove longest ending match

${variable%%pattern}

If variable ends with pattern, delete the longest match from the end of variable and return the rest.

> string="hello world, hello jim"
> echo ${string%%hello*}

(here the result is the empty string: the longest match of hello* from the end is the whole string)

Substring Replacement
Replace first occurrence of word

${variable/pattern/string}

Find the first occurrence of pattern in variable and replace it with string. If string is null, pattern is deleted from variable. If pattern starts with #, the match must occur at the beginning of variable. If pattern starts with %, the match must occur at the end of variable.

> string="hello world, hello jim"
> echo ${string/hello/goodbye}
goodbye world, hello jim
Replace all occurrences of word

${variable//pattern/string}

Same as above, but finds all occurrences of pattern in variable and replaces them with string. If string is null, pattern is deleted from variable.

> string="hello world, hello jim"
> echo ${string//hello/goodbye}
goodbye world, goodbye jim
See also: bash
May 10, 2021 | www.xmodulo.com
When you need to split a string in bash, you can use bash's built-in read command. This command reads a single line of string from stdin and splits the string on a delimiter. The split elements are then stored in either an array or separate variables supplied with the read command. The default delimiters are whitespace characters (' ', '\t', '\r', '\n'). If you want to split a string on a custom delimiter, you can specify the delimiter in the IFS variable before calling read.
# strings to split
var1="Harry Samantha Bart Amy"
var2="green:orange:black:purple"

# split a string by one or more whitespaces, and store the result in an array
read -a my_array <<< $var1

# iterate the array to access individual split words
for elem in "${my_array[@]}"; do
    echo $elem
done

echo "----------"

# split a string by a custom delimiter
IFS=':' read -a my_array2 <<< $var2
for elem in "${my_array2[@]}"; do
    echo $elem
done

Harry
Samantha
Bart
Amy
----------
green
orange
black
purple
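As mentioned above, read can also scatter the fields into separate named variables instead of an array; a minimal sketch:

# split into named variables; the last variable soaks up any remaining fields
IFS=':' read first second rest <<< "green:orange:black:purple"
echo "$first"    # green
echo "$second"   # orange
echo "$rest"     # black:purple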
May 10, 2021 | www.xmodulo.com
Remove a Trailing Newline Character from a String in Bash
If you want to remove a trailing newline or carriage return character from a string, you can use the bash's parameter expansion in the following form.
${string%$var}

This expression means that if "string" ends with the character stored in "var", the result will be "string" without that trailing character. For example:

# input string with a trailing newline character
input_line=$'This is my example line\n'

# define a trailing character. For carriage return, replace it with $'\r'
character=$'\n'

echo -e "($input_line)"

# remove a trailing newline character
input_line=${input_line%$character}
echo -e "($input_line)"

(This is my example line
)
(This is my example line)

Trim Leading/Trailing Whitespaces from a String in Bash

If you want to remove whitespaces at the beginning or at the end of a string (also known as leading/trailing whitespaces), you can use the sed command.

my_str="   This is my example string    "

# original string with leading/trailing whitespaces
echo -e "($my_str)"

# trim leading whitespaces in a string
my_str=$(echo "$my_str" | sed -e "s/^[[:space:]]*//")
echo -e "($my_str)"

# trim trailing whitespaces in a string
my_str=$(echo "$my_str" | sed -e "s/[[:space:]]*$//")
echo -e "($my_str)"

(   This is my example string    )
(This is my example string    )   <- leading whitespaces removed
(This is my example string)       <- trailing whitespaces removed

If you want to stick with bash's built-in mechanisms, the following bash function can get the job done.
trim() {
    local var="$*"
    # remove leading whitespace characters
    var="${var#"${var%%[![:space:]]*}"}"
    # remove trailing whitespace characters
    var="${var%"${var##*[![:space:]]}"}"
    echo "$var"
}

my_str="   This is my example string    "
echo "($my_str)"

my_str=$(trim "$my_str")
echo "($my_str)"
May 10, 2021 | www.oreilly.com
Table 4-1. Substitution operators

${varname:-word}
    If varname exists and isn't null, return its value; otherwise return word.
    Purpose: Returning a default value if the variable is undefined.
    Example: ${count:-0} evaluates to 0 if count is undefined.

${varname:=word}
    If varname exists and isn't null, return its value; otherwise set it to word and then return its value. Positional and special parameters cannot be assigned this way.
    Purpose: Setting a variable to a default value if it is undefined.
    Example: ${count:=0} sets count to 0 if it is undefined.

${varname:?message}
    If varname exists and isn't null, return its value; otherwise print varname: followed by message, and abort the current command or script (non-interactive shells only). Omitting message produces the default message "parameter null or not set".
    Purpose: Catching errors that result from variables being undefined.
    Example: ${count:?"undefined!"} prints "count: undefined!" and exits if count is undefined.

${varname:+word}
    If varname exists and isn't null, return word; otherwise return null.
    Purpose: Testing for the existence of a variable.
    Example: ${count:+1} returns 1 (which could mean "true") if count is defined.

${varname:offset} and ${varname:offset:length}
    Performs substring expansion. Returns the substring of $varname starting at offset and up to length characters. The first character in $varname is position 0. If length is omitted, the substring starts at offset and continues to the end of $varname. If offset is less than 0, the position is taken from the end of $varname. If varname is @, the length is the number of positional parameters starting at parameter offset.
    Purpose: Returning parts of a string (substrings or slices).
    Example: If count is set to frogfootman, ${count:4} returns footman and ${count:4:4} returns foot.
Table 4-2. Pattern-matching operators

${variable#pattern}
    If the pattern matches the beginning of the variable's value, delete the shortest part that matches and return the rest.

${variable##pattern}
    If the pattern matches the beginning of the variable's value, delete the longest part that matches and return the rest.

${variable%pattern}
    If the pattern matches the end of the variable's value, delete the shortest part that matches and return the rest.

${variable%%pattern}
    If the pattern matches the end of the variable's value, delete the longest part that matches and return the rest.

${variable/pattern/string} and ${variable//pattern/string}
    The longest match to pattern in variable is replaced by string. In the first form, only the first match is replaced. In the second form, all matches are replaced. If the pattern begins with a #, it must match at the start of the variable. If it begins with a %, it must match at the end of the variable. If string is null, the matches are deleted. If variable is @ or *, the operation is applied to each positional parameter in turn and the expansion is the resultant list.
May 10, 2021 | linuxize.com
Another way of concatenating strings in bash is by appending variables or literal strings to a variable using the += operator:

VAR1="Hello, "
VAR1+=" World"
echo "$VAR1"

Hello, World

The following example (languages.sh) uses the += operator to concatenate strings in a bash for loop:

VAR=""
for ELEMENT in 'Hydrogen' 'Helium' 'Lithium' 'Beryllium'; do
  VAR+="${ELEMENT} "
done
echo "$VAR"

Hydrogen Helium Lithium Beryllium
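One detail worth knowing: += appends as a string unless the variable was declared as an integer. A quick sketch of the difference:

VAR="1"
VAR+="2"
echo "$VAR"     # 12 (string append)

declare -i NUM=1
NUM+=2
echo "$NUM"     # 3 (arithmetic add, because of declare -i)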
May 10, 2021 | sites.google.com
The curly-bracket syntax allows for the shell's string operators . String operators allow you to manipulate values of variables in various useful ways without having to write full-blown programs or resort to external UNIX utilities. You can do a lot with string-handling operators even if you haven't yet mastered the programming features we'll see in later chapters.
In particular, string operators let you do the following:
- Ensure that variables exist (i.e., are defined and have non-null values)
- Set default values for variables
- Catch errors that result from variables not being set
- Remove portions of variables' values that match patterns

4.3.1 Syntax of String Operators
The basic idea behind the syntax of string operators is that special characters that denote operations are inserted between the variable's name and the right curly brackets. Any argument that the operator may need is inserted to the operator's right.
The first group of string-handling operators tests for the existence of variables and allows substitutions of default values under certain conditions. These are listed in Table 4.1 . [6]
[6] The colon (:) in each of these operators is actually optional. If the colon is omitted, then change "exists and isn't null" to "exists" in each definition, i.e., the operator tests for existence only.
Table 4.1: Substitution Operators

${varname:-word}
    If varname exists and isn't null, return its value; otherwise return word.
    Purpose: Returning a default value if the variable is undefined.
    Example: ${count:-0} evaluates to 0 if count is undefined.

${varname:=word}
    If varname exists and isn't null, return its value; otherwise set it to word and then return its value.[7]
    Purpose: Setting a variable to a default value if it is undefined.
    Example: ${count:=0} sets count to 0 if it is undefined.

${varname:?message}
    If varname exists and isn't null, return its value; otherwise print varname: followed by message, and abort the current command or script. Omitting message produces the default message "parameter null or not set".
    Purpose: Catching errors that result from variables being undefined.
    Example: ${count:?"undefined!"} prints "count: undefined!" and exits if count is undefined.

${varname:+word}
    If varname exists and isn't null, return word; otherwise return null.
    Purpose: Testing for the existence of a variable.
    Example: ${count:+1} returns 1 (which could mean "true") if count is defined.

[7] Pascal, Modula, and Ada programmers may find it helpful to recognize the similarity of this to the assignment operators in those languages.
The first two of these operators are ideal for setting defaults for command-line arguments in case the user omits them. We'll use the first one in our first programming task.
Task 4.1

You have a large album collection, and you want to write some software to keep track of it. Assume that you have a file of data on how many albums you have by each artist. Lines in the file look like this:

14 Bach, J.S.
1 Balachander, S.
21 Beatles
6 Blakey, Art

Write a program that prints the N highest lines, i.e., the N artists by whom you have the most albums. The default for N should be 10. The program should take one argument for the name of the input file and an optional second argument for how many lines to print.
By far the best approach to this type of script is to use built-in UNIX utilities, combining them with I/O redirectors and pipes. This is the classic "building-block" philosophy of UNIX that is another reason for its great popularity with programmers. The building-block technique lets us write a first version of the script that is only one line long:
sort -nr $1 | head -${2:-10}

Here is how this works: the sort(1) program sorts the data in the file whose name is given as the first argument ($1). The -n option tells sort to interpret the first word on each line as a number (instead of as a character string); the -r tells it to reverse the comparisons, so as to sort in descending order.
The output of sort is piped into the head (1) utility, which, when given the argument - N , prints the first N lines of its input on the standard output. The expression -${2:-10} evaluates to a dash ( - ) followed by the second argument if it is given, or to -10 if it's not; notice that the variable in this expression is 2 , which is the second positional parameter.
Assume the script we want to write is called highest . Then if the user types highest myfile , the line that actually runs is:
sort -nr myfile | head -10

Or if the user types highest myfile 22, the line that runs is:

sort -nr myfile | head -22

Make sure you understand how the :- string operator provides a default value.
This is a perfectly good, runnable script, but it has a few problems. First, its one line is a bit cryptic. While this isn't much of a problem for such a tiny script, it's not wise to write long, elaborate scripts in this manner. A few minor changes will make the code more readable.
First, we can add comments to the code; anything between # and the end of a line is a comment. At a minimum, the script should start with a few comment lines that indicate what the script does and what arguments it accepts. Second, we can improve the variable names by assigning the values of the positional parameters to regular variables with mnemonic names. Finally, we can add blank lines to space things out; blank lines, like comments, are ignored. Here is a more readable version:
#
# highest filename [howmany]
#
# Print howmany highest-numbered lines in file filename.
# The input file is assumed to have lines that start with
# numbers. Default for howmany is 10.
#

filename=$1
howmany=${2:-10}
sort -nr $filename | head -$howmany

The square brackets around howmany in the comments adhere to the convention in UNIX documentation that square brackets denote optional arguments.
The changes we just made improve the code's readability but not how it runs. What if the user were to invoke the script without any arguments? Remember that positional parameters default to null if they aren't defined. If there are no arguments, then $1 and $2 are both null. The variable howmany ( $2 ) is set up to default to 10, but there is no default for filename ( $1 ). The result would be that this command runs:
sort -nr | head -10

As it happens, if sort is called without a filename argument, it expects input to come from standard input, e.g., a pipe (|) or a user's terminal. Since it doesn't have the pipe, it will expect the terminal. This means that the script will appear to hang! Although you could always type [CTRL-D] or [CTRL-C] to get out of the script, a naive user might not know this.
Therefore we need to make sure that the user supplies at least one argument. There are a few ways of doing this; one of them involves another string operator. We'll replace the line:
filename=$1

with:

filename=${1:?"filename missing."}

This will cause two things to happen if a user invokes the script without any arguments: first, the shell will print the somewhat unfortunate message:
highest: 1: filename missing.

to the standard error output. Second, the script will exit without running the remaining code.
With a somewhat "kludgy" modification, we can get a slightly better error message. Consider this code:
filename=$1
filename=${filename:?"missing."}

This results in the message:

highest: filename: missing.

(Make sure you understand why.) Of course, there are ways of printing whatever message is desired; we'll find out how in Chapter 5.
Before we move on, we'll look more closely at the two remaining operators in Table 4.1 and see how we can incorporate them into our task solution. The := operator does roughly the same thing as :- , except that it has the "side effect" of setting the value of the variable to the given word if the variable doesn't exist.
Therefore we would like to use := in our script in place of :- , but we can't; we'd be trying to set the value of a positional parameter, which is not allowed. But if we replaced:
howmany=${2:-10}

with just:

howmany=$2

and moved the substitution down to the actual command line (as we did at the start), then we could use the := operator:

sort -nr $filename | head -${howmany:=10}

Using := has the added benefit of setting the value of howmany to 10 in case we need it afterwards in later versions of the script.
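A quick interactive sketch of the differences between these operators (variable names are just for illustration; the behavior is the same in bash and the Korn shell):

$ unset howmany
$ echo ${howmany:-10}      # substitutes 10, but howmany remains unset
10
$ echo ${howmany:=10}      # substitutes 10 and also assigns it to howmany
10
$ echo $howmany
10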
The final substitution operator is :+ . Here is how we can use it in our example: Let's say we want to give the user the option of adding a header line to the script's output. If he or she types the option -h , then the output will be preceded by the line:
ALBUMS  ARTIST

Assume further that this option ends up in the variable header, i.e., $header is -h if the option is set or null if not. (Later we will see how to do this without disturbing the other positional parameters.)
The expression:
${header:+"ALBUMS  ARTIST\n"}

yields null if the variable header is null, or ALBUMS  ARTIST\n if it is non-null. This means that we can put the line:

print -n ${header:+"ALBUMS  ARTIST\n"}

right before the command line that does the actual work. The -n option to print causes it not to print a LINEFEED after printing its arguments. Therefore this print statement will print nothing, not even a blank line, if header is null; otherwise it will print the header line and a LINEFEED (\n).
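Since this compilation centers on bash, note that print is a ksh builtin; in bash the same effect can be had with printf, whose %b format interprets the \n escape. A minimal sketch, assuming $header is set as described above:

header=-h                                     # pretend the user passed the -h option
printf '%b' "${header:+ALBUMS  ARTIST\n}"     # prints the header line; prints nothing if header is null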
4.3.2 Patterns and Regular Expressions

We'll continue refining our solution to Task 4-1 later in this chapter. The next type of string operator is used to match portions of a variable's string value against patterns. Patterns, as we saw in Chapter 1, are strings that can contain wildcard characters (*, ?, and [] for character sets and ranges).

Wildcards have been standard features of all UNIX shells going back (at least) to the Version 6 Bourne shell. But the Korn shell is the first shell to add to their capabilities. It adds a set of operators, called regular expression (or regexp for short) operators, that give it much of the string-matching power of advanced UNIX utilities like awk(1), egrep(1) (extended grep(1)) and the emacs editor, albeit with a different syntax. These capabilities go beyond those that you may be used to in other UNIX utilities like grep, sed(1) and vi(1).
Advanced UNIX users will find the Korn shell's regular expression capabilities occasionally useful for script writing, although they border on overkill. (Part of the problem is the inevitable syntactic clash with the shell's myriad other special characters.) Therefore we won't go into great detail about regular expressions here. For more comprehensive information, the "last word" on practical regular expressions in UNIX is sed & awk , an O'Reilly Nutshell Handbook by Dale Dougherty. If you are already comfortable with awk or egrep , you may want to skip the following introductory section and go to "Korn Shell Versus awk/egrep Regular Expressions" below, where we explain the shell's regular expression mechanism by comparing it with the syntax used in those two utilities. Otherwise, read on.
4.3.2.1 Regular expression basics

Think of regular expressions as strings that match patterns more powerfully than the standard shell wildcard schema. Regular expressions began as an idea in theoretical computer science, but they have found their way into many nooks and crannies of everyday, practical computing. The syntax used to represent them may vary, but the concepts are very much the same.
A shell regular expression can contain regular characters, standard wildcard characters, and additional operators that are more powerful than wildcards. Each such operator has the form x(exp), where x is the particular operator and exp is any regular expression (often simply a regular string). The operator determines how many occurrences of exp a string that matches the pattern can contain. See Table 4.2 and Table 4.3.
Table 4.2: Regular Expression Operators

Operator            Meaning
*(exp)              0 or more occurrences of exp
+(exp)              1 or more occurrences of exp
?(exp)              0 or 1 occurrences of exp
@(exp1|exp2|...)    exp1 or exp2 or ...
!(exp)              Anything that doesn't match exp [8]

[8] Actually, !(exp) is not a regular expression operator by the standard technical definition, though it is a handy extension.
Table 4.3: Regular Expression Operator Examples

Expression    Matches
x             x
*(x)          Null string, x, xx, xxx, ...
+(x)          x, xx, xxx, ...
?(x)          Null string, x
!(x)          Any string except x
@(x)          x (see below)

Regular expressions are extremely useful when dealing with arbitrary text, as you already know if you have used grep or the regular-expression capabilities of any UNIX editor. They aren't nearly as useful for matching filenames and other simple types of information with which shell users typically work. Furthermore, most things you can do with the shell's regular expression operators can also be done (though possibly with more keystrokes and less efficiency) by piping the output of a shell command through grep or egrep.
Nevertheless, here are a few examples of how shell regular expressions can solve filename-listing problems. Some of these will come in handy in later chapters as pieces of solutions to larger tasks.
- The emacs editor supports customization files whose names end in .el (for Emacs LISP) or .elc (for Emacs LISP Compiled). List all emacs customization files in the current directory.
- In a directory of C source code, list all files that are not necessary. Assume that "necessary" files end in .c or .h , or are named Makefile or README .
- Filenames in the VAX/VMS operating system end in a semicolon followed by a version number, e.g., fred.bob;23 . List all VAX/VMS-style filenames in the current directory.
Here are the solutions:
- In the first of these, we are looking for files that end in .el with an optional c. The expression that matches this is *.el?(c).

- The second example depends on the four standard subexpressions *.c, *.h, Makefile, and README. The entire expression is !(*.c|*.h|Makefile|README), which matches anything that does not match any of the four possibilities.

- The solution to the third example starts with *\;: the shell wildcard * followed by a backslash-escaped semicolon. Then, we could use the regular expression +([0-9]), which matches one or more characters in the range [0-9], i.e., one or more digits. This is almost correct (and probably close enough), but it doesn't take into account that the first digit cannot be 0. Therefore the correct expression is *\;[1-9]*([0-9]), which matches anything that ends with a semicolon, a digit from 1 to 9, and zero or more digits from 0 to 9.

Regular expression operators are an interesting addition to the Korn shell's features, but you can get along well without them, even if you intend to do a substantial amount of shell programming.
In our opinion, the shell's authors missed an opportunity to build into the wildcard mechanism the ability to match files by type (regular, directory, executable, etc., as in some of the conditional tests we will see in Chapter 5 ) as well as by name component. We feel that shell programmers would have found this more useful than arcane regular expression operators.
The following section compares Korn shell regular expressions to analogous features in awk and egrep . If you aren't familiar with these, skip to the section entitled "Pattern-matching Operators."
4.3.2.2 Korn shell versus awk/egrep regular expressions

Table 4.4 is an expansion of Table 4.2: the middle column shows the equivalents in awk/egrep of the shell's regular expression operators.
Table 4.4: Shell Versus egrep/awk Regular Expression Operators

Korn Shell          egrep/awk        Meaning
*(exp)              exp*             0 or more occurrences of exp
+(exp)              exp+             1 or more occurrences of exp
?(exp)              exp?             0 or 1 occurrences of exp
@(exp1|exp2|...)    exp1|exp2|...    exp1 or exp2 or ...
!(exp)              (none)           Anything that doesn't match exp

These equivalents are close but not quite exact. Actually, an exp within any of the Korn shell operators can be a series of exp1|exp2|... alternates. But because the shell would interpret an expression like dave|fred|bob as a pipeline of commands, you must use @(dave|fred|bob) for alternates by themselves.
For example:
- @(dave|fred|bob) matches dave, fred, or bob.
- *(dave|fred|bob) means "0 or more occurrences of dave, fred, or bob". This expression matches strings like the null string, dave, davedave, fred, bobfred, bobbobdavefredbobfred, etc.
- +(dave|fred|bob) matches any of the above except the null string.
- ?(dave|fred|bob) matches the null string, dave, fred, or bob.
- !(dave|fred|bob) matches anything except dave, fred, or bob.
It is worth re-emphasizing that shell regular expressions can still contain standard shell wildcards. Thus, the shell wildcard ? (match any single character) is the equivalent to . in egrep or awk, and the shell's character set operator [...] is the same as in those utilities. [9] For example, the expression +([0-9]) matches a number, i.e., one or more digits. The shell wildcard character * is equivalent to the shell regular expression *(?).

[9] And, for that matter, the same as in grep, sed, ed, vi, etc.
A few egrep and awk regexp operators do not have equivalents in the Korn shell. These include:
- The beginning- and end-of-line operators ^ and $ .
- The beginning- and end-of-word operators \< and \> .
- Repeat factors like \{N\} and \{M,N\}.
The first two pairs are hardly necessary, since the Korn shell doesn't normally operate on text files and does parse strings into words itself.
4.3.3 Pattern-matching Operators

Table 4.5 lists the Korn shell's pattern-matching operators.
Table 4.5: Pattern-matching Operators

Operator              Meaning
${variable#pattern}   If the pattern matches the beginning of the variable's value, delete the shortest part that matches and return the rest.
${variable##pattern}  If the pattern matches the beginning of the variable's value, delete the longest part that matches and return the rest.
${variable%pattern}   If the pattern matches the end of the variable's value, delete the shortest part that matches and return the rest.
${variable%%pattern}  If the pattern matches the end of the variable's value, delete the longest part that matches and return the rest.

These can be hard to remember, so here's a handy mnemonic device: # matches the front because number signs precede numbers; % matches the rear because percent signs follow numbers.
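A compact way to see all four operators at once (the variable value is hypothetical; this behaves identically in bash):

var=banana.tar.gz
echo ${var#*.}     # tar.gz      (shortest match of *. deleted from the front)
echo ${var##*.}    # gz          (longest match deleted from the front)
echo ${var%.*}     # banana.tar  (shortest match of .* deleted from the rear)
echo ${var%%.*}    # banana      (longest match deleted from the rear)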
The classic use for pattern-matching operators is in stripping off components of pathnames, such as directory prefixes and filename suffixes. With that in mind, here is an example that shows how all of the operators work. Assume that the variable path has the value /home/billr/mem/long.file.name; then:
Expression     Result
${path##/*/}   long.file.name
${path#/*/}    billr/mem/long.file.name
$path          /home/billr/mem/long.file.name
${path%.*}     /home/billr/mem/long.file
${path%%.*}    /home/billr/mem/long

The two patterns used here are /*/, which matches anything between two slashes, and .*, which matches a dot followed by anything.

We will incorporate one of these operators into our next programming task.
Task 4.2

You are writing a C compiler, and you want to use the Korn shell for your front-end. [10]

[10] Don't laugh: many UNIX compilers have shell scripts as front-ends.
Think of a C compiler as a pipeline of data processing components. C source code is input to the beginning of the pipeline, and object code comes out of the end; there are several steps in between. The shell script's task, among many other things, is to control the flow of data through the components and to designate output files.
You need to write the part of the script that takes the name of the input C source file and creates from it the name of the output object code file. That is, you must take a filename ending in .c and create a filename that is similar except that it ends in .o .
The task at hand is to strip the .c off the filename and append .o . A single shell statement will do it:
objname=${filename%.c}.o

This tells the shell to look at the end of filename for .c. If there is a match, return $filename with the match deleted. So if filename had the value fred.c, the expression ${filename%.c} would return fred. The .o is appended to make the desired fred.o, which is stored in the variable objname.
If filename had an inappropriate value (without .c) such as fred.a, the above expression would evaluate to fred.a.o: since there was no match, nothing is deleted from the value of filename, and .o is appended anyway. And, if filename contained more than one dot, e.g., if it were the y.tab.c that is so infamous among compiler writers, the expression would still produce the desired y.tab.o. Notice that this would not be true if we used %% in the expression instead of %. The former operator uses the longest match instead of the shortest, so it would match .tab.c and evaluate to y.o rather than y.tab.o. So the single % is correct in this case.
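The same expansion scales naturally to a loop. A small sketch that prints the object-file name for every C source file in the current directory (echo stands in for the real compile step):

for filename in *.c; do
    objname=${filename%.c}.o
    echo "$filename -> $objname"
done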
A longest-match deletion would be preferable, however, in the following task.
Task 4.3

You are implementing a filter that prepares a text file for printer output. You want to put the file's name, without any directory prefix, on the "banner" page. Assume that, in your script, you have the pathname of the file to be printed stored in the variable pathname.
Clearly the objective is to remove the directory prefix from the pathname. The following line will do it:
bannername=${pathname##*/}

This solution is similar to the first line in the examples shown before. If pathname were just a filename, the pattern */ (anything followed by a slash) would not match and the value of the expression would be pathname untouched. If pathname were something like fred/bob, the prefix fred/ would match the pattern and be deleted, leaving just bob as the expression's value. The same thing would happen if pathname were something like /dave/pete/fred/bob: since the ## deletes the longest match, it deletes the entire /dave/pete/fred/.

If we used #*/ instead of ##*/, the expression would have the incorrect value dave/pete/fred/bob, because the shortest instance of "anything followed by a slash" at the beginning of the string is just a slash (/).

The construct ${variable##*/} is actually equivalent to the UNIX utility basename(1). basename takes a pathname as argument and returns the filename only; it is meant to be used with the shell's command substitution mechanism (see below). basename is less efficient than ${variable##*/} because it runs in its own separate process rather than within the shell. Another utility, dirname(1), does essentially the opposite of basename: it returns the directory prefix only. It is equivalent to the Korn shell expression ${variable%/*} and is less efficient for the same reason.

4.3.4 Length Operator

There are two remaining operators on variables. One is ${#varname}, which returns the length of the value of the variable as a character string. (In Chapter 6 we will see how to treat this and similar values as actual numbers so they can be used in arithmetic expressions.) For example, if filename has the value fred.c, then ${#filename} would have the value 6. The other operator (${#array[*]}) has to do with array variables, which are also discussed in Chapter 6.
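To tie the last few operators together, here is a short sketch comparing the shell expansions with the external utilities they replace (the path value is hypothetical):

path=/home/billr/mem/long.file.name
echo "${path##*/}"    # long.file.name  -- same result as: basename "$path"
echo "${path%/*}"     # /home/billr/mem -- same result as: dirname "$path"
echo "${#path}"       # 30              -- the length of the string in $path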
Mar 29, 2021 | www.xmodulo.com
When you are writing a bash script, there are situations where you need to generate a sequence of numbers or strings. One common use of such sequence data is for loop iteration. When you iterate over a range of numbers, the range may be defined in many different ways (e.g., [0, 1, 2,..., 99, 100], [50, 55, 60,..., 75, 80], [10, 9, 8,..., 1, 0], etc). Loop iteration may not be just over a range of numbers. You may need to iterate over a sequence of strings with particular patterns (e.g., incrementing filenames: img001.jpg, img002.jpg, img003.jpg). For this type of loop control, you need to be able to generate a sequence of numbers and/or strings flexibly.
While you can use a dedicated tool like seq to generate a range of numbers, it is really not necessary to add such an external dependency to your bash script when bash itself provides a powerful built-in range function called brace expansion. In this tutorial, let's find out how to generate a sequence of data in bash using brace expansion, and look at some useful brace expansion examples.

Brace Expansion

Bash's built-in range function is realized by so-called brace expansion. In a nutshell, brace expansion allows you to generate a sequence of strings based on supplied string and numeric input data. The syntax of brace expansion is the following.

{<string1>,<string2>,...,<stringN>}
{<start-number>..<end-number>}
{<start-number>..<end-number>..<increment>}
<prefix-string>{......}
{......}<suffix-string>
<prefix-string>{......}<suffix-string>

All these sequence expressions are iterable, meaning you can use them in while/for loops. In the rest of the tutorial, let's go over each of these expressions to clarify their use cases.
Use Case #1: List a Sequence of Strings

The first use case of brace expansion is a simple string list, which is a comma-separated list of string literals within the braces. Here we are not generating a sequence of data, but simply listing a pre-defined sequence of string data.

{<string1>,<string2>,...,<stringN>}

You can use this brace expansion to iterate over the string list as follows.

for fruit in {apple,orange,lemon}; do
    echo $fruit
done

apple
orange
lemon

This expression is also useful to invoke a particular command multiple times with different parameters.
For example, you can create multiple subdirectories in one shot with:
$ mkdir -p /home/xmodulo/users/{dan,john,alex,michael,emma}

To create multiple empty files:
$ touch /tmp/{1,2,3,4}.log

Use Case #2: Define a Range of Numbers
The most common use case of brace expansion is to define a range of numbers for loop iteration. For that, you can use the following expressions, where you specify the start/end of the range, as well as an optional increment value.
{<start-number>..<end-number>}
{<start-number>..<end-number>..<increment>}

To define a sequence of integers between 10 and 20:

echo {10..20}
10 11 12 13 14 15 16 17 18 19 20

You can easily integrate this brace expansion in a loop:

for num in {10..20}; do
    echo $num
done

To generate a sequence of numbers with an increment of 2 between 0 and 20:

echo {0..20..2}
0 2 4 6 8 10 12 14 16 18 20

You can generate a sequence of decrementing numbers as well:

echo {20..10}
20 19 18 17 16 15 14 13 12 11 10

echo {20..10..-2}
20 18 16 14 12 10

You can also pad the numbers with leading zeros, in case you need to use the same number of digits. For example:

echo {00..20..2}
00 02 04 06 08 10 12 14 16 18 20

Use Case #3: Generate a Sequence of Characters
Brace expansion can be used to generate not just a sequence of numbers, but also a sequence of characters.
{<start-character>..<end-character>}

To generate a sequence of alphabet characters between 'd' and 'p':

echo {d..p}
d e f g h i j k l m n o p

You can generate a sequence of upper-case alphabets as well.

for char1 in {A..B}; do
    for char2 in {A..B}; do
        echo "${char1}${char2}"
    done
done

AA
AB
BA
BB

Use Case #4: Generate a Sequence of Strings with Prefix/Suffix

It's possible to add a prefix and/or a suffix to a given brace expression as follows.

<prefix-string>{......}
{......}<suffix-string>
<prefix-string>{......}<suffix-string>

Using this feature, you can easily generate a list of sequentially numbered filenames:

# create incrementing filenames
for filename in img_{00..5}.jpg; do
    echo $filename
done

img_00.jpg
img_01.jpg
img_02.jpg
img_03.jpg
img_04.jpg
img_05.jpg

Use Case #5: Combine Multiple Brace Expansions
Finally, it's possible to combine multiple brace expansions, in which case the combined expressions will generate all possible combinations of sequence data produced by each expression.
For example, we have the following script that prints all possible combinations of two-character alphabet strings using double-loop iteration.
for char1 in {A..Z}; do
    for char2 in {A..Z}; do
        echo "${char1}${char2}"
    done
done

By combining two brace expansions, the following single loop can produce the same output as above.

for str in {A..Z}{A..Z}; do
    echo $str
done

Conclusion

In this tutorial, I described bash's built-in mechanism called brace expansion, which allows you to easily generate a sequence of arbitrary strings in a single command line. Brace expansion is useful not just in bash scripts, but also in your command-line environment (e.g., when you need to run the same command multiple times with different arguments). If you know any useful brace expansion tips and use cases, feel free to share them in the comments.
If you find this tutorial helpful, I recommend you check out the series of bash shell scripting tutorials provided by Xmodulo.
Mar 30, 2021 | www.xmodulo.com
How to catch and handle errors in bash
Last updated on March 28, 2021 by Dan Nanni
In an ideal world, things always work as expected, but you know that's hardly the case. The same goes in the world of bash scripting. Writing a robust, bug-free bash script is always challenging even for a seasoned system administrator. Even if you write a perfect bash script, the script may still go awry due to external factors such as invalid input or network problems. While you cannot prevent all errors in your bash script, at least you should try to handle possible error conditions in a more predictable and controlled fashion.
That is easier said than done, especially since error handling in bash is notoriously difficult. The bash shell does not have any fancy exception handling mechanism like try/catch constructs. Some bash errors may be silently ignored but may have consequences down the line. The bash shell does not even have a proper debugger.
In this tutorial, I'll introduce basic tips to catch and handle errors in bash . Although the presented error handling techniques are not as fancy as those available in other programming languages, hopefully by adopting the practice, you may be able to handle potential bash errors more gracefully.
Bash Error Handling Tip #1: Check the Exit Status
As the first line of defense, it is always recommended to check the exit status of a command, as a non-zero exit status typically indicates some type of error. For example:
if ! some_command; then
    echo "some_command returned an error"
fi

Another (more compact) way to trigger error handling based on an exit status is to use an OR list:

<command1> || <command2>

With this OR statement, <command2> is executed if and only if <command1> returns a non-zero exit status. So you can replace <command2> with your own error handling routine. For example:

error_exit() {
    echo "Error: $1"
    exit 1
}

run-some-bad-command || error_exit "Some error occurred"

Bash provides a built-in variable called $?, which tells you the exit status of the last executed command. Note that when a bash function is called, $? reads the exit status of the last command called inside the function. Since some non-zero exit codes have special meanings, you can handle them selectively. For example:

# run some command
status=$?
if [ $status -eq 1 ]; then
    echo "General error"
elif [ $status -eq 2 ]; then
    echo "Misuse of shell builtins"
elif [ $status -eq 126 ]; then
    echo "Command invoked cannot execute"
elif [ $status -eq 128 ]; then
    echo "Invalid argument"
fi

Bash Error Handling Tip #2: Exit on Errors in Bash
When you encounter an error in a bash script, by default, it throws an error message to stderr, but continues its execution in the rest of the script. In fact you see the same behavior in a terminal window; even if you type a wrong command by accident, it will not kill your terminal. You will just see the "command not found" error, but your terminal/bash session will still remain.

This default shell behavior may not be desirable for some bash scripts. For example, if your script contains a critical code block where no error is allowed, you want your script to exit immediately upon encountering any error inside that code block. To activate this "exit-on-error" behavior in bash, you can use the set command as follows.

set -e
#
# some critical code block where no error is allowed
#
set +e

Once called with the -e option, the set command causes the bash shell to exit immediately if any subsequent command exits with a non-zero status (caused by an error condition). The +e option turns the shell back to the default mode. set -e is equivalent to set -o errexit. Likewise, set +e is a shorthand command for set +o errexit.

However, one special error condition not captured by set -e is when an error occurs somewhere inside a pipeline of commands. This is because a pipeline returns a non-zero status only if the last command in the pipeline fails. Any error produced by previous command(s) in the pipeline is not visible outside the pipeline, and so does not kill a bash script. For example:

set -e
true | false | true
echo "This will be printed"    # "false" inside the pipeline not detected

If you want any failure in pipelines to also exit a bash script, you need to add the -o pipefail option. For example:

set -o pipefail -e
true | false | true    # "false" inside the pipeline detected correctly
echo "This will not be printed"

Therefore, to protect a critical code block against any type of command errors or pipeline errors, use the following pair of set commands.

set -o pipefail -e
#
# some critical code block where no error or pipeline error is allowed
#
set +o pipefail +e

Bash Error Handling Tip #3: Try and Catch Statements in Bash
Although the set command allows you to terminate a bash script upon any error that you deem critical, this mechanism is often not sufficient in more complex bash scripts where different types of errors could happen.

To be able to detect and handle different types of errors/exceptions more flexibly, you will need try/catch statements, which however are missing in bash. At least we can mimic the behaviors of try/catch as shown in this trycatch.sh script:

function try()
{
    [[ $- = *e* ]]; SAVED_OPT_E=$?
    set +e
}

function throw()
{
    exit $1
}

function catch()
{
    export exception_code=$?
    (( $SAVED_OPT_E )) && set +e
    return $exception_code
}

Here we define several custom bash functions to mimic the semantics of try and catch statements. The throw() function is supposed to raise a custom (non-zero) exception. We need set +e, so that the non-zero returned by throw() will not terminate a bash script. Inside catch(), we store the value of the exception raised by throw() in a bash variable exception_code, so that we can handle the exception in a user-defined fashion.

Perhaps an example bash script will make it clear how trycatch.sh works. See the example below that utilizes trycatch.sh.

# Include trycatch.sh as a library
source ./trycatch.sh

# Define custom exception types
export ERR_BAD=100
export ERR_WORSE=101
export ERR_CRITICAL=102

try
(
    echo "Start of the try block"

    # When a command returns a non-zero, a custom exception is raised.
    run-command || throw $ERR_BAD
    run-command2 || throw $ERR_WORSE
    run-command3 || throw $ERR_CRITICAL

    # This statement is not reached if there is any exception raised
    # inside the try block.
    echo "End of the try block"
)
catch || {
    case $exception_code in
        $ERR_BAD)
            echo "This error is bad"
        ;;
        $ERR_WORSE)
            echo "This error is worse"
        ;;
        $ERR_CRITICAL)
            echo "This error is critical"
        ;;
        *)
            echo "Unknown error: $exception_code"
            throw $exception_code    # re-throw an unhandled exception
        ;;
    esac
}

In this example script, we define three types of custom exceptions. We can choose to raise any of these exceptions depending on a given error condition. The OR list <command> || throw <exception> allows us to invoke the throw() function with a chosen <exception> value as a parameter, if <command> returns a non-zero exit status. If <command> completes successfully, the throw() function will be ignored. Once an exception is raised, it can be handled accordingly inside the subsequent catch block. As you can see, this provides a more flexible way of handling different types of error conditions.
Granted, this is not a full-blown try/catch construct. One limitation of this approach is that the try block is executed in a sub-shell. As you may know, any variables defined in a sub-shell are not visible to its parent shell. Also, you cannot modify the variables that are defined in the parent shell inside the try block, as the parent shell and the sub-shell have separate scopes for variables.

Conclusion

In this bash tutorial, I presented basic error handling tips that may come in handy when you want to write a more robust bash script. As expected, these tips are not as sophisticated as the error handling constructs available in other programming languages. If the bash script you are writing requires more advanced error handling than this, perhaps bash is not the right language for your task. You probably want to turn to other languages such as Python.
Let me conclude the tutorial by mentioning one essential tool that every shell script writer should be familiar with. ShellCheck is a static analysis tool for shell scripts. It can detect and point out syntax errors, bad coding practice and possible semantic issues in a shell script with much clarity. Definitely check it out if you haven't tried it.
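For instance, with ShellCheck installed, checking a script is a single command (myscript.sh is a hypothetical name); it reports each issue with a numbered SC code and the offending line:

$ shellcheck myscript.sh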
If you find this tutorial helpful, I recommend you check out the series of bash shell scripting tutorials provided by Xmodulo.
Mar 24, 2021 | www.redhat.com
The idea was that sharing this would inspire others to improve their bashrc savviness. Take a look at what our Sudoers group shared and, please, borrow anything you like to make your sysadmin life easier.
Jonathan Roemer

# Require confirmation before overwriting target files. This setting keeps me from deleting things I didn't expect to, etc.
alias cp='cp -i'
alias mv='mv -i'
alias rm='rm -i'

# Add color, formatting, etc. to ls without re-typing a bunch of options every time
alias ll='ls -alhF'
alias ls="ls --color"

# So I don't need to remember the options to tar every time
alias untar='tar xzvf'
alias tarup='tar czvf'

# Changing the default editor. I'm sure a bunch of people have this so they don't get dropped into vi instead of vim, etc. A lot of distributions have system default overrides for these, but I don't like relying on that being around.
alias vim='nvim'
alias vi='nvim'

Valentin Bajrami

Here are a few functions from my ~/.bashrc file:

# Easily copy the content of a file without using cat / selecting it etc. It requires xclip to be installed.
# Example: _cp /etc/dnsmasq.conf
_cp()
{
  local file="$1"
  local st=1
  if [[ -f $file ]]; then
    cat "$file" | xclip -selection clipboard
    st=$?
  else
    printf '%s\n' "Make sure you are copying the content of a file" >&2
  fi
  return $st
}

# This is the function to paste the content. The content is now in your buffer.
# Example: _paste
_paste()
{
  xclip -selection clipboard -o
}

# Generate a random password without installing any external tooling
genpw()
{
  alphanum=( {a..z} {A..Z} {0..9} )
  for ((i=0; i<=${#alphanum[@]}; i++)); do
    printf '%s' "${alphanum[@]:$((RANDOM%255)):1}"
  done
  echo
}

# See what command you are using the most (this parses the history command)
cm()
{
  history | awk ' { a[$4]++ } END { for ( i in a ) print a[i], i | "sort -rn | head -n10"}' \
    | awk '$1 > max { max=$1 } { bar=""; i=s=10*$1/max; while (i-- > 0) bar=bar"#"; printf "%25s %15d %s %s", $2, $1, bar, "\n"; }'
}

Peter Gervase

For shutting down at night, I kill all SSH sessions and then kill any VPN connections:
#!/bin/bash
/usr/bin/killall ssh
/usr/bin/nmcli connection down "Raleigh (RDU2)"
/usr/bin/nmcli connection down "Phoenix (PHX2)"

Valentin Rothberg

alias vim='nvim'
alias l='ls -CF --color=always'
alias cd='cd -P'   # follow symlinks
alias gits='git status'
alias gitu='git remote update'
alias gitum='git reset --hard upstream/master'

Steve Ovens

alias nano='nano -wET 4'
alias ls='ls --color=auto'
PS1="\[\e[01;32m\]\u@\h \[\e[01;34m\]\w \[\e[01;34m\]$\[\e[00m\] "
export EDITOR=nano
export AURDEST=/var/cache/pacman/pkg
PATH=$PATH:/home/stratus/.gem/ruby/2.7.0/bin
alias mp3youtube='youtube-dl -x --audio-format mp3'
alias grep='grep --color'
alias best-youtube="youtube-dl -r 1M --yes-playlist -f 'bestvideo[ext=mp4]+bestaudio[ext=m4a]'"
alias mv='mv -vv'
shopt -s histappend
HISTCONTROL=ignoreboth

Jason Hibbets

While my bashrc aliases aren't as sophisticated as the previous technologists', you can probably tell I really like shortcuts:

# User specific aliases and functions
alias q='exit'
alias h='cd ~/'
alias c='clear'
alias m='man'
alias lsa='ls -al'
alias s='sudo su -'

Bonus: Organizing bashrc files and cleaning up files
We know many sysadmins like to script things to make their work more automated. Here are a few tips from our Sudoers that you might find useful.
Chris Collins

I don't know who I need to thank for this, some awesome woman on Twitter whose name I no longer remember, but it's changed the organization of my bash aliases and commands completely.
I have Ansible drop individual <something>.bashrc files into ~/.bashrc.d/ with any alias or command or shortcut I want, related to any particular technology or Ansible role, and can manage them all separately per host. It's been the best single trick I've learned for .bashrc files ever.

Git stuff gets a ~/.bashrc.d/git.bashrc, Kubernetes goes in ~/.bashrc.d/kube.bashrc.

if [ -d ${HOME}/.bashrc.d ]
then
  for file in ~/.bashrc.d/*.bashrc
  do
    source "${file}"
  done
fi

Peter Gervase

These aren't bashrc aliases, but I use them all the time. I wrote a little script named clean for getting rid of excess lines in files. For example, here's nsswitch.conf with lots of comments and blank lines:

[pgervase@pgervase etc]$ head authselect/nsswitch.conf
# Generated by authselect on Sun Dec 6 22:12:26 2020
# Do not modify this file manually.
# If you want to make changes to nsswitch.conf please modify
# /etc/authselect/user-nsswitch.conf and run 'authselect apply-changes'.
#
# Note that your changes may not be applied as they may be
# overwritten by selected profile. Maps set in the authselect
# profile always take precedence and overwrites the same maps
# set in the user file. Only maps that are not set by the profile

[pgervase@pgervase etc]$ wc -l authselect/nsswitch.conf
80 authselect/nsswitch.conf

[pgervase@pgervase etc]$ clean authselect/nsswitch.conf
passwd: sss files systemd
group: sss files systemd
netgroup: sss files
automount: sss files
services: sss files
shadow: files sss
hosts: files dns myhostname
bootparams: files
ethers: files
netmasks: files
networks: files
protocols: files
rpc: files
publickey: files
aliases: files

[pgervase@pgervase etc]$ cat `which clean`
#! /bin/bash
#
/bin/cat $1 | /bin/sed 's/^[ \t]*//' | /bin/grep -v -e "^#" -e "^;" -e "^[[:space:]]*$" -e "^[ \t]+"
Mar 24, 2021 | www.redhat.com
The following is the script I use to test the servers:
#!/bin/bash

input_file=hosts.csv
output_file=hosts_tested.csv

echo "ServerName,IP,PING,DNS,SSH" > "$output_file"

tail -n +2 "$input_file" | while IFS=, read -r host ip _
do
    if ping -c 3 "$ip" > /dev/null; then
        ping_status="OK"
    else
        ping_status="FAIL"
    fi

    if nslookup "$host" > /dev/null; then
        dns_status="OK"
    else
        dns_status="FAIL"
    fi

    if nc -z -w3 "$ip" 22 > /dev/null; then
        ssh_status="OK"
    else
        ssh_status="FAIL"
    fi

    echo "Host = $host IP = $ip PING_STATUS = $ping_status DNS_STATUS = $dns_status SSH_STATUS = $ssh_status"
    echo "$host,$ip,$ping_status,$dns_status,$ssh_status" >> "$output_file"
done
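For reference, a hypothetical hosts.csv in the shape the script expects: a header line (skipped by tail -n +2), then one host per line with the server name and IP address in the first two comma-separated fields:

ServerName,IP,Notes
web01,192.168.10.11,production
db01,192.168.10.12,staging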
Mar 14, 2021 | www.redhat.com
while true
do
  df -k | grep home
  sleep 1
done

In this case, you're running the loop with a true condition, which means it will run forever or until you hit CTRL-C. Therefore, you need to keep an eye on it (otherwise, it will keep using the system's resources).
Note: If you use a loop like this, you need to include a command like sleep to give the system some time to breathe between executions. Running anything non-stop could become a performance issue, especially if the commands inside the loop involve I/O operations.

2. Waiting for a condition to become true
You can have a
while
loop to keep checking for that directory's existence and only write a message while the directory does not exist.https://asciinema.org/a/BQN8CDagw6k8bSbGJPYi5kqpg/embed?
If you want to do something more elaborate, you could create a script and show a more clear indication that the loop condition became true:
#!/bin/bash while [ ! -d directory_expected ] do echo "`date` - Still waiting" sleep 1 done echo "DIRECTORY IS THERE!!!"More about automation3. Using a while loop to manipulate a file
Another useful application of a while loop is to combine it with the read command to have access to columns (or fields) quickly from a text file and perform some actions on them.

In the following example, you are simply picking the columns from a text file with a predictable format and printing the values that you want to use to populate an /etc/hosts file.

Here the assumption is that the file has columns delimited by spaces or tabs and that there are no spaces in the content of the columns. Spaces inside a field would shift the content of the fields and not give you what you needed.

Notice that you're just doing a simple operation to extract and manipulate information and not concerned about the command's reusability. I would classify this as one of those "quick and dirty tricks."

Of course, if this was something that you would repeatedly do, you should run it from a script, use proper names for the variables, and all those good practices (including transforming the filename into an argument and defining where to send the output), but today, the topic is while loops.

#!/bin/bash
cat servers.txt | grep -v CPU | while read servername cpu ram ip
do
    echo $ip $servername
done
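For reference, a hypothetical servers.txt in the layout the loop assumes: a header line containing the word CPU (which grep -v filters out), followed by whitespace-delimited columns that read consumes as servername, cpu, ram, and ip:

SERVERNAME  CPU  RAM   IP
web01       4    16GB  192.168.0.10
db01        8    32GB  192.168.0.11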
Jan 02, 2021 | www.redhat.com
In the Bash shell, file descriptors (FDs) are important in managing the input and output of commands. Many people have issues understanding file descriptors correctly. Each process has three default file descriptors, namely:
Code  Meaning           Location      Description
0     Standard input    /dev/stdin    Keyboard, file, or some stream
1     Standard output   /dev/stdout   Monitor, terminal, display
2     Standard error    /dev/stderr   Non-zero exit codes are usually >FD2, display

Now that you know what the default FDs do, let's see them in action. I start by creating a directory named foo, which contains file1.

$> ls foo/ bar/
ls: cannot access 'bar/': No such file or directory
foo/:
file1

The output No such file or directory goes to Standard Error (stderr) and is also displayed on the screen. I will run the same command, but this time use 2> to omit stderr:

$> ls foo/ bar/ 2>/dev/null
foo/:
file1

It is possible to send the output of foo to Standard Output (stdout) and to a file simultaneously, and ignore stderr. For example:

$> { ls foo bar | tee -a ls_out_file ;} 2>/dev/null
foo:
file1

Then:

$> cat ls_out_file
foo:
file1

The following command sends stdout to a file and stderr to /dev/null so that the error won't display on the screen:

$> ls foo/ bar/ >to_stdout 2>/dev/null
$> cat to_stdout
foo/:
file1

The following command sends stdout and stderr to the same file:

$> ls foo/ bar/ >mixed_output 2>&1
$> cat mixed_output
ls: cannot access 'bar/': No such file or directory
foo/:
file1

This is what happened in the last example, where stdout and stderr were redirected to the same file:

ls foo/ bar/ >mixed_output 2>&1
                  |         |
                  |         Redirect stderr to where stdout is sent
                  stdout is sent to mixed_output

Another short trick (> Bash 4.4) to send both stdout and stderr to the same file uses the ampersand sign. For example:

$> ls foo/ bar/ &>mixed_output

Here is a more complex redirection:

exec 3>&1 >write_to_file; echo "Hello World"; exec 1>&3 3>&-

This is what occurs:
- exec 3>&1 : Copy stdout to file descriptor 3
- >write_to_file : Make FD 1 write to the file
- echo "Hello World" : Goes to the file, because FD 1 now points to it
- exec 1>&3 : Copy FD 3 back to 1 (swap)
- 3>&- : Close file descriptor 3 (we don't need it anymore)
Often it is handy to group commands, and then send their Standard Error to a single file. For example:
$> { ls non_existing_dir; non_existing_command; echo "Hello world"; } 2> to_stderr
Hello world

As you can see, only "Hello world" is printed on the screen, but the output of the failed commands is written to the to_stderr file.
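The same exec-based redirection is handy at the top of a script to capture everything it prints; a minimal sketch, with script.log as a hypothetical log file name:

#!/bin/bash
exec >script.log 2>&1    # from here on, stdout and stderr of every command go to script.log
echo "this line ends up in the log, not on the terminal"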
Dec 10, 2020 | linuxconfig.org
Bash allows two different subshell syntaxes, namely $() and back-tick surrounded statements. Let's look at some easy examples to start:

$ echo '$(echo 'a')'
$(echo a)
$ echo "$(echo 'a')"
a
$ echo "a$(echo 'b')c"
abc
$ echo "a`echo 'b'`c"
abc
In the first command, as an example, we used ' single quotes. This resulted in our subshell command, inside the single quotes, being interpreted as literal text instead of a command. This is standard Bash: ' indicates literal, " indicates that the string will be parsed for subshells and variables.

In the second command we swap the ' to " and thus the string is parsed for actual commands and variables. The result is that a subshell is being started, thanks to our subshell syntax ($()), and the command inside the subshell (echo 'a') is being executed literally, and thus an a is produced, which is then inserted in the overarching / top level echo. The command at that stage can be read as echo "a" and thus the output is a.

In the third command, we further expand this to make it clearer how subshells work in-context. We echo the letter b inside the subshell, and this is joined on the left and the right by the letters a and c, yielding the overall output to be abc in a similar fashion to the second command.

In the fourth and last command, we exemplify the alternative Bash subshell syntax of using back-ticks instead of $(). It is important to know that $() is the preferred syntax, and that in some remote cases the back-tick based syntax may yield parsing errors where the $() does not. I would thus strongly encourage you to always use the $() syntax for subshells, and this is also what we will be using in the following examples.

Example 2: A little more complex

$ touch a
$ echo "-$(ls [a-z])"
-a
$ echo "-=-||$(ls [a-z] | xargs ls -l)||-=-"
-=-||-rw-rw-r-- 1 roel roel 0 Sep 5 09:26 a||-=-

Here, we first create an empty file by using the touch a command. Subsequently, we use echo to output something which our subshell $(ls [a-z]) will generate. Sure, we can execute the ls directly and yield more or less the same result, but note how we are adding - to the output as a prefix.

In the final command, we insert some characters at the front and end of the echo command which makes the output look a bit nicer. We use a subshell to first find the a file we created earlier (ls [a-z]) and then, still inside the subshell, pass the results of this command (which would be only a literally, i.e. the file we created in the first command) to the ls -l using the pipe (|) and the xargs command. For more information on xargs, please see our articles xargs for beginners with examples and multi threaded xargs with examples.

Example 3: Double quotes inside subshells and sub-subshells!

echo "$(echo "$(echo "it works")" | sed 's|it|it surely|')"
it surely works
Cool, no? Here we see that double quotes can be used inside the subshell without generating any parsing errors. We also see how a subshell can be nested inside another subshell. Are you able to parse the syntax? The easiest way is to start "in the middle or core of all subshells", which in this case is the simple echo "it works".

This command will output it works as a result of the subshell call $(echo "it works"). Picture it works in place of the subshell, i.e.

echo "$(echo "it works" | sed 's|it|it surely|')"
it surely works

This looks simpler already. Next, it is helpful to know that the sed command will do a substitution (thanks to the s command just before the | separator) of the text it to it surely. You can read the sed command as "replace it with it surely". The output of the subshell will thus be it surely works, i.e.

echo "it surely works"
it surely works

Conclusion

In this article, we have seen that subshells surely work (pun intended), and that they can be used in a wide variety of circumstances, due to their ability to be inserted inline and within the context of the overarching command. Subshells are very powerful, and once you start using them, well, there will likely be no stopping. Very soon you will be writing something like:
$ VAR="goodbye"; echo "thank $(echo "${VAR}" | sed 's|^| and |')" | sed 's|k |k you|'

This one is for you to try and play around with! Thank you and goodbye
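One closing aside: the reason $() was recommended over backticks back in Example 1 becomes obvious once you nest subshells, because every inner backtick must be backslash-escaped, while $() nests naturally. A quick sketch (both lines print the same thing):

$ echo "$(echo $(echo hi))"
hi
$ echo "`echo \`echo hi\``"
hi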
Jul 07, 2020 | www.redhat.com
Assume I have a file with a lot of IP addresses and want to operate on those IP addresses. For example, I want to run
dig
to retrieve reverse-DNS information for the IP addresses listed in the file. I also want to skip IP addresses that start with a comment (# or hashtag).

I'll use fileA as an example. Its contents are:

10.10.12.13 some ip in dc1
10.10.12.14 another ip in dc2
#10.10.12.15 not used IP
10.10.12.16 another IP

I could copy and paste each IP address, and then run dig manually:

$> dig +short -x 10.10.12.13

Or I could do this:

$> while read -r ip _; do [[ $ip == \#* ]] && continue; dig +short -x "$ip"; done < ipfile

What if I want to swap the columns in fileA? For example, I want to put IP addresses in the right-most column so that fileA looks like this:

some ip in dc1 10.10.12.13
another ip in dc2 10.10.12.14
not used IP #10.10.12.15
another IP 10.10.12.16

I run:

$> while read -r ip rest; do printf '%s %s\n' "$rest" "$ip"; done < fileA
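The two ideas combine naturally. A minimal sketch (same hypothetical fileA) that skips commented lines while swapping the columns:

$> while read -r ip rest; do [[ $ip == \#* ]] && continue; printf '%s %s\n' "$rest" "$ip"; done < fileA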
Sep 05, 2020 | unix.stackexchange.com
What exit code should I use?
- 1 - Catchall for general errors. The exit code is 1 as the operation was not successful.
- 2 - Misuse of shell builtins (according to Bash documentation)
- 126 - Command invoked cannot execute.
- 127 - "command not found".
- 128 - Invalid argument to exit.
- 128+n - Fatal error signal "n".
- 130 - Script terminated by Control-C.
- 255\* - Exit status out of range.
There is no "recipe" to get the meanings of an exit status of a given terminal command.
My first attempt would be the manpage:
user@host:~# man ls
       Exit status:
       0      if OK,
       1      if minor problems (e.g., cannot access subdirectory),
       2      if serious trouble (e.g., cannot access command-line argument).

Second: Google. See wget as an example.
Third: The exit statuses of the shell, for example Bash. Bash and its builtins may use values above 125 specially: 127 for command not found, 126 for command found but not executable. For more information, see the bash exit codes.
Some list of sysexits on both Linux and BSD/OS X, with preferable exit codes for programs (64-78), can be found in /usr/include/sysexits.h (or: man sysexits on BSD):

0   /* successful termination */
64  /* base value for error messages */
64  /* command line usage error */
65  /* data format error */
66  /* cannot open input */
67  /* addressee unknown */
68  /* host name unknown */
69  /* service unavailable */
70  /* internal software error */
71  /* system error (e.g., can't fork) */
72  /* critical OS file missing */
73  /* can't create (user) output file */
74  /* input/output error */
75  /* temp failure; user is invited to retry */
76  /* remote error in protocol */
77  /* permission denied */
78  /* configuration error */
    /* maximum listed value */

The above list allocates previously unused exit codes from 64-78. The range of unallotted exit codes will be further restricted in the future.
However, the above values are mainly used by sendmail and pretty much nobody else, so they aren't anything remotely close to a standard (as pointed out by @Gilles).
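If you do want to adopt the sysexits convention in your own scripts, it is just a matter of exiting with the matching number. A minimal sketch using the value 64 (command line usage error) from the list above:

if [ $# -lt 1 ]; then
    echo "usage: $0 <file>" >&2
    exit 64   # EX_USAGE: command line usage error
fi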
In the shell, the exit statuses are as follows (based on Bash):
1-125 - Command did not complete successfully. Check the command's man page for the meaning of the status; a few examples below:

1 - Catchall for general errors

Miscellaneous errors, such as "divide by zero" and other impermissible operations.

Example:

$ let "var1 = 1/0"; echo $?
-bash: let: var1 = 1/0: division by 0 (error token is "0")
1
2 - Misuse of shell builtins (according to Bash documentation)

Missing keyword or command, or permission problem (and diff return code on a failed binary file comparison).

Example:

empty_function() {}
6 - No such device or address

Example:

$ curl foo; echo $?
curl: (6) Could not resolve host: foo
6
124 - command times out

125 - if a command itself fails; see: coreutils

126 - if command is found but cannot be invoked (e.g. is not executable)

Permission problem or command is not an executable.

Example:

$ /dev/null
$ /etc/hosts; echo $?
-bash: /etc/hosts: Permission denied
126
127 - if a command cannot be found, the child process created to execute it returns that status

Possible problem with $PATH or a typo.

Example:

$ foo; echo $?
-bash: foo: command not found
127
128 - Invalid argument to exit

exit takes only integer args in the range 0 - 255.

Example:

$ exit 3.14159
-bash: exit: 3.14159: numeric argument required
128-254 - fatal error signal "n" - command died due to receiving a signal. The signal code is added to 128 (128 + SIGNAL) to get the status (Linux: man 7 signal, BSD: man signal); a few examples below:

130 - command terminated due to Ctrl-C being pressed, 130-128=2 (SIGINT)

Example:

$ cat
^C
$ echo $?
130
137 - if command is sent the KILL(9) signal (128+9); the exit status of command otherwise, e.g. kill -9 $PPID of script.

141 - SIGPIPE - write on a pipe with no reader

Example:

$ hexdump -n100000 /dev/urandom | tee &>/dev/null >(cat > file1.txt) >(cat > file2.txt) >(cat > file3.txt) >(cat > file4.txt) >(cat > file5.txt)
$ find . -name '*.txt' -print0 | xargs -r0 cat | tee &>/dev/null >(head /dev/stdin > head.out) >(tail /dev/stdin > tail.out)
xargs: cat: terminated by signal 13
$ echo ${PIPESTATUS[@]}
0 125 141
143 - command terminated by signal code 15 (128+15=143)

Example:

$ sleep 5 && killall sleep &
[1] 19891
$ sleep 100; echo $?
Terminated: 15
143

255* - exit status out of range

exit takes only integer args in the range 0 - 255.

Example:

$ sh -c 'exit 3.14159'; echo $?
sh: line 0: exit: 3.14159: numeric argument required
255

According to the above table, exit codes 1-2, 126-165, and 255 have special meanings, and should therefore be avoided for user-specified exit parameters.
Please note that out of range exit values can result in unexpected exit codes (e.g. exit 3809 gives an exit code of 225, 3809 % 256 = 225).
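You can verify this wrap-around behavior directly in a throwaway shell:

$ bash -c 'exit 3809'; echo $?
225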
See:
- Appendix E. Exit Codes With Special Meanings at Advanced Bash-Scripting Guide
- Writing Better Shell Scripts – Part 2 at Innovationsts
You will have to look into the code/documentation. However, the thing that comes closest to a "standardization" is errno.h (answered Jan 22 '14 by Thorsten Staerk).
PSkocik, thanks for pointing out the header file.. tried looking into the documentation of a few utils.. hard time finding the exit codes, seems most will be in the stderrs... – precise Jan 22 '14 at 9:13
Jul 10, 2020 | www.tecmint.com
direnv is a nifty open-source extension for your shell on a UNIX operating system such as Linux and macOS. It is compiled into a single static executable and supports shells such as bash , zsh , tcsh , and fish .
The main purpose of direnv is to allow for project-specific environment variables without cluttering
~/.profile
or related shell startup files. It implements a new way to load and unload environment variables depending on the current directory.It is used to load 12factor apps (a methodology for building software-as-a-service apps) environment variables, create per-project isolated development environments, and also load secrets for deployment. Additionally, it can be used to build multi-version installation and management solutions similar to rbenv , pyenv , and phpenv .
So How Does direnv Work?

Before the shell loads a command prompt, direnv checks for the existence of a .envrc file in the current directory (which you can display using the pwd command) and its parent directories. The checking process is swift and can't be noticed on each prompt.

Once it finds the .envrc file with the appropriate permissions, it loads it into a bash sub-shell, captures all exported variables, and makes them available to the current shell.

... ... ...
How to Use direnv in Linux Shell

To demonstrate how direnv works, we will create a new directory called tecmint_projects and move into it.

$ mkdir ~/tecmint_projects
$ cd tecmint_projects/

Next, let's create a new variable called TEST_VARIABLE on the command line; when it is echoed, the value should be empty:

$ echo $TEST_VARIABLE

Now we will create a new
.envrc
file that contains Bash code that will be loaded by direnv. We add the line "export TEST_VARIABLE=tecmint" to it using the echo command and the output redirection character (>):

$ echo export TEST_VARIABLE=tecmint > .envrc

By default, the security mechanism blocks the loading of the
.envrc
file. Since we know it is a secure file, we need to approve its content by running the following command:

$ direnv allow .

Now that the content of
.envrc
file has been allowed to load, let's check the value of TEST_VARIABLE
that we set before:

$ echo $TEST_VARIABLE

When we exit the
tecmint_project
directory, direnv will be unloaded, and if we check the value of TEST_VARIABLE
once more, it should be empty:

$ cd ..
$ echo $TEST_VARIABLE

Every time you move into the tecmint_projects directory, the
.envrc
file will be loaded as shown in the following screenshot:

$ cd tecmint_projects/

To revoke the authorization of a given
.envrc
, use the deny command.

$ direnv deny .            # in current directory
OR
$ direnv deny /path/to/.envrc

For more information and usage instructions, see the direnv man page:
$ man direnv

Additionally, direnv also ships a stdlib (direnv-stdlib) with several functions that allow you to easily add new directories to your PATH, and do much more.
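To give a feel for the stdlib, here is a minimal sketch of a project .envrc (the variable name is carried over from the example above; PATH_add is a documented direnv-stdlib function):

# .envrc - loaded automatically by direnv on entering the directory
export TEST_VARIABLE=tecmint   # project-specific variable
PATH_add bin                   # prepend ./bin to PATH via the direnv stdlib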
Dec 21, 2019 | opensource.com
In the first article in this series, you created your first, very small, one-line Bash script and explored the reasons for creating shell scripts. In the second article , you began creating a fairly simple template that can be a starting point for other Bash programs and began testing it. In the third article , you created and used a simple Help function and learned about using functions and how to handle command-line options such as -h .
This fourth and final article in the series gets into variables and initializing them as well as how to do a bit of sanity testing to help ensure the program runs under the proper conditions. Remember, the objective of this series is to build working code that will be used for a template for future Bash programming projects. The idea is to make getting started on new programming projects easy by having common elements already available in the template.
Variables

The Bash shell, like all programming languages, can deal with variables. A variable is a symbolic name that refers to a specific location in memory that contains a value of some sort. The value of a variable is changeable, i.e., it is variable. If you are not familiar with using variables, read my article How to program with Bash: Syntax and tools before you go further.
Done? Great! Let's now look at some good practices when using variables.
I always set initial values for every variable used in my scripts. You can find this in your template script immediately after the procedures, as the first part of the main program body, before it processes the options. Initializing each variable with an appropriate value can prevent errors that might occur with uninitialized variables in comparison or math operations. Placing this list of variables in one place allows you to see all of the variables that are supposed to be in the script and their initial values.
Your little script has only a single variable, $option, so far. Set it by inserting the following lines as shown:

# Main program
# Initialize variables
option=""

# Process the input options. Add options as needed. #
Test this to ensure that everything works as it should and that nothing has broken as the result of this change.
Constants

Constants are variables, too -- at least they should be. Use variables wherever possible in command-line interface (CLI) programs instead of hard-coded values. Even if you think you will use a particular value (such as a directory name, a file name, or a text string) just once, create a variable and use it where you would have placed the hard-coded name.
For example, the message printed as part of the main body of the program is a string literal, echo "Hello world!" . Change that to a variable. First, add the following statement to the variable initialization section:
Msg="Hello world!"

And now change the last line of the program from:

echo "Hello world!"

to:

echo "$Msg"

Test the results.
Sanity checks

Sanity checks are simply tests for conditions that need to be true in order for the program to work correctly, such as: the program must be run as the root user, or it must run on a particular distribution and release of that distro. Add a check for root as the running user in your simple program template.
Testing that the root user is running the program is easy because a program runs as the user that launches it.
The id command can be used to determine the numeric user ID (UID) the program is running under. It provides several bits of information when it is used without any options:
[student@testvm1 ~]$ id
uid=1001(student) gid=1001(student) groups=1001(student),5000(dev)

Using the -u option returns just the user's UID, which is easily usable in your Bash program:

[student@testvm1 ~]$ id -u
1001
[student@testvm1 ~]$

Add the following function to the program. I added it after the Help procedure, but you can place it anywhere in the procedures section. The logic is that if the UID is not zero, which is always the root user's UID, the program exits:
################################################################################
# Check for root.                                                             #
################################################################################
CheckRoot()
{
   if [ `id -u` != 0 ]
   then
      echo "ERROR: You must be root user to run this program"
      exit
   fi
}

Now, add a call to the CheckRoot procedure just before the variable's initialization. Test this, first running the program as the student user:
[student@testvm1 ~]$ ./hello
ERROR: You must be root user to run this program
[student@testvm1 ~]$

then as the root user:

[root@testvm1 student]# ./hello
Hello world!
[root@testvm1 student]#

You may not always need this particular sanity test, so comment out the call to CheckRoot but leave all the code in place in the template. This way, all you need to do to use that code in a future program is to uncomment the call.
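As an aside (not part of the article's template), Bash also exposes the effective UID in the built-in $EUID shell variable, so an equivalent check can avoid the command substitution entirely:

# Alternative root check using Bash's built-in $EUID variable
if [ "$EUID" != 0 ]
then
   echo "ERROR: You must be root user to run this program"
   exit
fi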
The code

After making the changes outlined above, your code should look like this:
#!/usr/bin/bash
################################################################################
# scriptTemplate #
# #
# Use this template as the beginning of a new program. Place a short #
# description of the script here. #
# #
# Change History #
# 11/11/2019 David Both Original code. This is a template for creating #
# new Bash shell scripts. #
# Add new history entries as needed. #
# #
# #
################################################################################
################################################################################
################################################################################
# #
# Copyright (C) 2007, 2019 David Both #
# [email protected] #
# #
# This program is free software; you can redistribute it and/or modify #
# it under the terms of the GNU General Public License as published by #
# the Free Software Foundation; either version 2 of the License, or #
# (at your option) any later version. #
# #
# This program is distributed in the hope that it will be useful, #
# but WITHOUT ANY WARRANTY; without even the implied warranty of #
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the #
# GNU General Public License for more details. #
# #
# You should have received a copy of the GNU General Public License #
# along with this program; if not, write to the Free Software #
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA #
# #
################################################################################
################################################################################
################################################################################
################################################################################
# Help #
################################################################################
Help ()
{
# Display Help
echo "Add description of the script functions here."
echo
echo "Syntax: scriptTemplate [-g|h|v|V]"
echo "options:"
echo "g Print the GPL license notification."
echo "h Print this Help."
echo "v Verbose mode."
echo "V Print software version and exit."
echo
}

################################################################################
# Check for root. #
################################################################################
CheckRoot ()
{
# If we are not running as root we exit the program
if [ `id -u` != 0 ]
then
echo "ERROR: You must be root user to run this program"
exit
fi
}

################################################################################
################################################################################
# Main program #
################################################################################
################################################################################
################################################################################
# Sanity checks #
################################################################################
# Are we running as root?
# CheckRoot

# Initialize variables
option=""
Msg="Hello world!"
################################################################################
# Process the input options. Add options as needed. #
################################################################################
# Get the options
while getopts ":h" option; do
case $option in
h ) # display Help
Help
exit ;;
\? ) # incorrect option
echo "Error: Invalid option"
exit ;;
esac
done

echo "$Msg"

A final exercise
You probably noticed that the Help function in your code refers to features that are not in the code. As a final exercise, figure out how to add those functions to the code template you created.
Summary

In this article, you created a couple of functions to perform a sanity test for whether your program is running as root. Your program is getting a little more complex, so testing is becoming more important and requires more test paths to be complete.
This series looked at a very minimal Bash program and how to build a script up a bit at a time. The result is a simple template that can be the starting point for other, more useful Bash scripts and that contains useful elements that make it easy to start new scripts.
By now, you get the idea: Compiled programs are necessary and fill a very important need. But for sysadmins, there is always a better way. Always use shell scripts to meet your job's automation needs. Shell scripts are open; their content and purpose are knowable. They can be readily modified to meet different requirements. I have never found anything that I need to do in my sysadmin role that cannot be accomplished with a shell script.
What you have created so far in this series is just the beginning. As you write more Bash programs, you will find more bits of code that you use frequently and should be included in your program template.
Resources
Dec 19, 2019 | opensource.com
In the first article in this series, you created a very small, one-line Bash script and explored the reasons for creating shell scripts and why they are the most efficient option for the system administrator, rather than compiled programs.
In this second article, you will begin creating a Bash script template that can be used as a starting point for other Bash scripts. The template will ultimately contain a Help facility, a licensing statement, a number of simple functions, and some logic to deal with those options and others that might be needed for the scripts that will be based on this template.
Why create a template?

Like automation in general, the idea behind creating a template is to be the "lazy sysadmin." A template contains the basic components that you want in all of your scripts. It saves time compared to adding those components to every new script and makes it easy to start a new script.
Although it can be tempting to just throw a few command-line Bash statements together into a file and make it executable, that can be counterproductive in the long run. A well-written and well-commented Bash program with a Help facility and the capability to accept command-line options provides a good starting point for sysadmins who maintain the program, which includes the programs that you write and maintain.
The requirements

You should always create a set of requirements for every project you do. This includes scripts, even if it is a simple list with only two or three items on it. I have been involved in many projects that either failed completely or failed to meet the customer's needs, usually due to the lack of a requirements statement or a poorly written one.
The requirements for this Bash template are pretty simple:
- Create a template that can be used as the starting point for future Bash programming projects.
- The template should follow standard Bash programming practices.
- It must include:
- A heading section that can be used to describe the function of the program and a changelog
- A licensing statement
- A section for functions
- A Help function
- A function to test whether the program user is root
- A method for evaluating command-line options
The basic structure

A basic Bash script has three sections. Bash has no way to delineate sections, but the boundaries between the sections are implicit.
- All scripts must begin with the shebang ( #! ), and this must be the first line in any Bash program.
- The functions section must begin after the shebang and before the body of the program. As part of my need to document everything, I place a comment before each function with a short description of what it is intended to do. I also include comments inside the functions to elaborate further. Short, simple programs may not need functions.
- The main part of the program comes after the function section. This can be a single Bash statement or thousands of lines of code. One of my programs has a little over 200 lines of code, not counting comments. That same program has more than 600 comment lines.
That is all there is -- just three sections in the structure of any Bash program.
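A bare-bones sketch of those three sections (the function name and comments here are illustrative, not part of the article's template):

#!/usr/bin/bash
# Section 1: the shebang, always the first line

# Section 2: functions
greet()
{
   echo "Hello from a function"
}

# Section 3: the main part of the program
greet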
Leading comments

I always add more than this for various reasons. First, I add a couple of sections of comments immediately after the shebang. These comment sections are optional, but I find them very helpful.
The first comment section is the program name and description and a change history. I learned this format while working at IBM, and it provides a method of documenting the long-term development of the program and any fixes applied to it. This is an important start in documenting your program.
The second comment section is a copyright and license statement. I use GPLv2, and this seems to be a standard statement for programs licensed under GPLv2. If you use a different open source license, that is fine, but I suggest adding an explicit statement to the code to eliminate any possible confusion about licensing. Scott Peterson's article The source code is the license helps explain the reasoning behind this.
So now the script looks like this:
#!/bin/bash
################################################################################
# scriptTemplate #
# #
# Use this template as the beginning of a new program. Place a short #
# description of the script here. #
# #
# Change History #
# 11/11/2019 David Both Original code. This is a template for creating #
# new Bash shell scripts. #
# Add new history entries as needed. #
# #
# #
################################################################################
################################################################################
################################################################################
# #
# Copyright (C) 2007, 2019 David Both #
# [email protected] #
# #
# This program is free software; you can redistribute it and/or modify #
# it under the terms of the GNU General Public License as published by #
# the Free Software Foundation; either version 2 of the License, or #
# (at your option) any later version. #
# #
# This program is distributed in the hope that it will be useful, #
# but WITHOUT ANY WARRANTY; without even the implied warranty of #
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the #
# GNU General Public License for more details. #
# #
# You should have received a copy of the GNU General Public License #
# along with this program; if not, write to the Free Software #
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA #
# #
################################################################################
################################################################################
################################################################################

echo "hello world!"
Run the revised program to verify that it still works as expected.
About testing

Now is a good time to talk about testing.
" There is always one more bug."
-- Lubarsky's Law of Cybernetic Entomology

Lubarsky -- whoever that might be -- is correct. You can never find all the bugs in your code. For every bug I find, there always seems to be another that crops up, usually at a very inopportune time.
Testing is not just about programs. It is also about verification that problems -- whether caused by hardware, software, or the seemingly endless ways users can find to break things -- that are supposed to be resolved actually are. Just as important, testing is also about ensuring that the code is easy to use and the interface makes sense to the user.
Following a well-defined process when writing and testing shell scripts can contribute to consistent and high-quality results. My process is simple:
- Create a simple test plan.
- Start testing right at the beginning of development.
- Perform a final test when the code is complete.
- Move to production and test more.
The test plan

There are lots of different formats for test plans. I have worked with the full range -- from having it all in my head; to a few notes jotted down on a sheet of paper; and all the way to a complex set of forms that require a full description of each test, which functional code it would test, what the test would accomplish, and what the inputs and results should be.
Speaking as a sysadmin who has been (but is not now) a tester, I try to take the middle ground. Having at least a short written test plan will ensure consistency from one test run to the next. How much detail you need depends upon how formal your development and test functions are.
The sample test plan documents I found using Google were complex and intended for large organizations with very formal development and test processes. Although those test plans would be good for people with "test" in their job title, they do not apply well to sysadmins' more chaotic and time-dependent working conditions. As in most other aspects of the job, sysadmins need to be creative. So here is a short list of things to consider including in your test plan. Modify it to suit your needs:
- The name and a short description of the software being tested
- A description of the software features to be tested
- The starting conditions for each test
- The functions to follow for each test
- A description of the desired outcome for each test
- Specific tests designed to test for negative outcomes
- Tests for how the program handles unexpected inputs
- A clear description of what constitutes pass or fail for each test
- Fuzzy testing, which is described below
This list should give you some ideas for creating your test plans. Most sysadmins should keep it simple and fairly informal.
Test early -- test often

I always start testing my shell scripts as soon as I complete the first portion that is executable. This is true whether I am writing a short command-line program or a script that is an executable file.
I usually start creating new programs with the shell script template. I write the code for the Help function and test it. This is usually a trivial part of the process, but it helps me get started and ensures that things in the template are working properly at the outset. At this point, it is easy to fix problems with the template portions of the script or to modify it to meet needs that the standard template does not.
Once the template and Help function are working, I move on to creating the body of the program by adding comments to document the programming steps required to meet the program specifications. Now I start adding code to meet the requirements stated in each comment. This code will probably require adding variables that are initialized in that section of the template -- which is now becoming a shell script.
This is where testing is more than just entering data and verifying the results. It takes a bit of extra work. Sometimes I add a command that simply prints the intermediate result of the code I just wrote and verify that. For more complex scripts, I add a -t option for "test mode." In this case, the internal test code executes only when the -t option is entered on the command line.
Final testing

After the code is complete, I go back to do a complete test of all the features and functions using known inputs to produce specific outputs. I also test some random inputs to see if the program can handle unexpected input.
Final testing is intended to verify that the program is functioning essentially as intended. A large part of the final test is to ensure that functions that worked earlier in the development cycle have not been broken by code that was added or changed later in the cycle.
If you have been testing the script as you add new code to it, you may think there should not be any surprises during the final test. Wrong! There are always surprises during final testing. Always. Expect those surprises, and be ready to spend time fixing them. If there were never any bugs discovered during final testing, there would be no point in doing a final test, would there?
Testing in production

Huh -- what?
"Not until a program has been in production for at least six months will the most harmful error be discovered."
-- Troutman's Programming Postulates

Yes, testing in production is now considered normal and desirable. Having been a tester myself, this seems reasonable. "But wait! That's dangerous," you say. My experience is that it is no more dangerous than extensive and rigorous testing in a dedicated test environment. In some cases, there is no choice because there is no test environment -- only production.
Sysadmins are no strangers to the need to test new or revised scripts in production. Anytime a script is moved into production, that becomes the ultimate test. The production environment constitutes the most critical part of that test. Nothing that testers can dream up in a test environment can fully replicate the true production environment.
The allegedly new practice of testing in production is just the recognition of what sysadmins have known all along. The best test is production -- so long as it is not the only test.
Fuzzy testing

This is another of those buzzwords that initially caused me to roll my eyes. Its essential meaning is simple: have someone bang on the keys until something happens, and see how well the program handles it. But there really is more to it than that.
Fuzzy testing is a bit like the time my son broke the code for a game in less than a minute with random input. That pretty much ended my attempts to write games for him.
Most test plans utilize very specific input that generates a specific result or output. Regardless of whether the test defines a positive or negative outcome as a success, it is still controlled, and the inputs and results are specified and expected, such as a specific error message for a specific failure mode.
Fuzzy testing is about dealing with randomness in all aspects of the test, such as starting conditions, very random and unexpected input, random combinations of options selected, low memory, high levels of CPU contending with other programs, multiple instances of the program under test, and any other random conditions that you can think of to apply to the tests.
I try to do some fuzzy testing from the beginning. If the Bash script cannot deal with significant randomness in its very early stages, then it is unlikely to get better as you add more code. This is a good time to catch these problems and fix them while the code is relatively simple. A bit of fuzzy testing at each stage is also useful in locating problems before they get masked by even more code.
After the code is completed, I like to do some more extensive fuzzy testing. Always do some fuzzy testing. I have certainly been surprised by some of the results. It is easy to test for the expected things, but users do not usually do the expected things with a script.
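A crude way to approximate this for a shell script is to throw random input at it in a loop and watch for unexpected failures. A minimal sketch (hello is the hypothetical script under test; the amount and shape of random data are arbitrary choices):

for i in $(seq 1 100)
do
   # Build 64 random printable characters to use as both an argument and stdin
   input=$(head -c 512 /dev/urandom | tr -dc '[:print:]' | head -c 64)
   ./hello "$input" <<< "$input" > /dev/null 2>&1 || echo "run $i failed with exit $?"
done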
Previews of coming attractions

This article accomplished a little in the way of creating a template, but it mostly talked about testing. This is because testing is a critical part of creating any kind of program. In the next article in this series, you will add a basic Help function along with some code to detect and act on options, such as -h, to your Bash script template.
Resources
- How to program with Bash: Syntax and tools
- How to program with Bash: Logical operators and shell expansions
- How to program with Bash: Loops
This series of articles is partially based on Volume 2, Chapter 10 of David Both's three-part Linux self-study course, Using and Administering Linux -- Zero to SysAdmin .
Feb 21, 2019 | opensource.com
Software developers writing applications in languages such as Java, Ruby, and Python have sophisticated libraries to help them maintain their software's integrity over time. They create tests that run applications through a series of executions in structured environments to ensure all of their software's aspects work as expected.
These tests are even more powerful when they're automated in a continuous integration (CI) system, where every push to the source repository causes the tests to run, and developers are immediately notified when tests fail. This fast feedback increases developers' confidence in the functional integrity of their applications.
The Bash Automated Testing System ( BATS ) enables developers writing Bash scripts and libraries to apply the same practices used by Java, Ruby, Python, and other developers to their Bash code.
Installing BATS

The BATS GitHub page includes installation instructions. There are two BATS helper libraries that provide more powerful assertions or allow overrides to the Test Anything Protocol (TAP) output format used by BATS. These can be installed in a standard location and sourced by all scripts. It may be more convenient to include a complete version of BATS and its helper libraries in the Git repository for each set of scripts or libraries being tested. This can be accomplished using the git submodule system.
The following commands will install BATS and its helper libraries into the test directory in a Git repository.
git submodule init
git submodule add https://github.com/sstephenson/bats test/libs/bats
git submodule add https://github.com/ztombol/bats-assert test/libs/bats-assert
git submodule add https://github.com/ztombol/bats-support test/libs/bats-support
git add .
git commit -m 'installed bats'

To clone a Git repository and install its submodules at the same time, use the
--recurse-submodules flag to git clone .Each BATS test script must be executed by the bats executable. If you installed BATS into your source code repo's test/libs directory, you can invoke the test with:
./test/libs/bats/bin/bats <path to test script>Alternatively, add the following to the beginning of each of your BATS test scripts:
#!/usr/bin/env ./test/libs/bats/bin/bats
load 'libs/bats-support/load'
load 'libs/bats-assert/load'and chmod +x <path to test script> . This will a) make them executable with the BATS installed in ./test/libs/bats and b) include these helper libraries. BATS test scripts are typically stored in the test directory and named for the script being tested, but with the .bats extension. For example, a BATS script that tests bin/build should be called test/build.bats .
You can also run an entire set of BATS test files by passing a wildcard pattern to BATS, e.g., ./test/lib/bats/bin/bats test/*.bats .
Organizing libraries and scripts for BATS coverage

Bash scripts and libraries must be organized in a way that efficiently exposes their inner workings to BATS. In general, library functions and shell scripts that run many commands when they are called or executed are not amenable to efficient BATS testing.
For example, build.sh is a typical script that many people write. It is essentially a big pile of code. Some might even put this pile of code in a function in a library. But it's impossible to run a big pile of code in a BATS test and cover all possible types of failures it can encounter in separate test cases. The only way to test this pile of code with sufficient coverage is to break it into many small, reusable, and, most importantly, independently testable functions.
It's straightforward to add more functions to a library. An added benefit is that some of these functions can become surprisingly useful in their own right. Once you have broken your library function into lots of smaller functions, you can source the library in your BATS test and run the functions as you would any other command to test them.
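For instance (a hypothetical library and test, not taken from the article), a small function in lib/math.bash can be sourced and exercised directly:

# lib/math.bash
add_numbers() {
   echo $(( $1 + $2 ))
}

# test/math.bats
#!/usr/bin/env ./test/libs/bats/bin/bats
load 'libs/bats-support/load'
load 'libs/bats-assert/load'

@test ".add_numbers adds two integers" {
   source lib/math.bash
   run add_numbers 2 3
   assert_success
   assert_output "5"
}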
Bash scripts must also be broken down into multiple functions, which the main part of the script should call when the script is executed. In addition, there is a very useful trick to make it much easier to test Bash scripts with BATS: Take all the code that is executed in the main part of the script and move it into a function, called something like run_main . Then, add the following to the end of the script:
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]
then
   run_main
fi

This bit of extra code does something special. It makes the script behave differently when it is executed as a script than when it is brought into the environment with source. This trick enables the script to be tested the same way a library is tested, by sourcing it and testing the individual functions. For example, here is build.sh refactored for better BATS testability.
Writing and running tests

As mentioned above, BATS is a TAP-compliant testing framework with a syntax and output that will be familiar to those who have used other TAP-compliant testing suites, such as JUnit, RSpec, or Jest. Its tests are organized into individual test scripts. Test scripts are organized into one or more descriptive @test blocks that describe the unit of the application being tested. Each @test block will run a series of commands that prepares the test environment, runs the command to be tested, and makes assertions about the exit and output of the tested command. Many assertion functions are imported with the bats, bats-assert, and bats-support libraries, which are loaded into the environment at the beginning of the BATS test script. Here is a typical BATS test block:
@test "requires CI_COMMIT_REF_SLUG environment variable" {
   unset CI_COMMIT_REF_SLUG
   assert_empty "${CI_COMMIT_REF_SLUG}"
   run some_command
   assert_failure
   assert_output --partial "CI_COMMIT_REF_SLUG"
}

If a BATS script includes setup and/or teardown functions, they are automatically executed by BATS before and after each test block runs. This makes it possible to create environment variables, test files, and do other things needed by one or all tests, then tear them down after each test runs. Build.bats is a full BATS test of our newly formatted build.sh script. (The mock_docker command in this test will be explained below, in the section on mocking/stubbing.)
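A sketch of what such setup and teardown functions might look like (the file name and contents are illustrative; BATS_TMPDIR is discussed further below):

setup() {
   # Runs before every @test block: create a fixture file in the BATS temp dir
   export TEST_FIXTURE="${BATS_TMPDIR}/fixture.txt"
   echo "fixture data" > "${TEST_FIXTURE}"
}

teardown() {
   # Runs after every @test block: clean up the fixture
   rm -f "${TEST_FIXTURE}"
}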
When the test script runs, BATS uses exec to run each @test block as a separate subprocess. This makes it possible to export environment variables and even functions in one @test without affecting other @test s or polluting your current shell session. The output of a test run is a standard format that can be understood by humans and parsed or manipulated programmatically by TAP consumers. Here is an example of the output for the CI_COMMIT_REF_SLUG test block when it fails:
✗ requires CI_COMMIT_REF_SLUG environment variable
   (from function `assert_output' in file test/libs/bats-assert/src/assert.bash, line 231,
    in test file test/ci_deploy.bats, line 26)
   `assert_output --partial "CI_COMMIT_REF_SLUG"' failed

   -- output does not contain substring --
   substring (1 lines):
     CI_COMMIT_REF_SLUG
   output (3 lines):
     ./bin/deploy.sh: join_string_by: command not found
     oc error
     Could not login
   --

   ** Did not delete , as test failed **

1 test, 1 failure

Here is the output of a successful test:

✓ requires CI_COMMIT_REF_SLUG environment variable

Helpers

Like any shell script or library, BATS test scripts can include helper libraries to share common code across tests or enhance their capabilities. These helper libraries, such as bats-assert and bats-support, can even be tested with BATS.
Libraries can be placed in the same test directory as the BATS scripts or in the test/libs directory if the number of files in the test directory gets unwieldy. BATS provides the load function that takes a path to a Bash file relative to the script being tested (e.g., test, in our case) and sources that file. Files must end with the extension .bash, but the path to the file passed to the load function can't include the extension. build.bats loads the bats-assert and bats-support libraries, a small helpers.bash library, and a docker_mock.bash library (described below) with the following code placed at the beginning of the test script below the interpreter magic line:
load 'libs/bats-support/load'
load 'libs/bats-assert/load'
load 'helpers'
load 'docker_mock'

Stubbing test input and mocking external calls

The majority of Bash scripts and libraries execute functions and/or executables when they run. Often they are programmed to behave in specific ways based on the exit status or output (stdout, stderr) of these functions or executables. To properly test these scripts, it is often necessary to make fake versions of these commands that are designed to behave in a specific way during a specific test, a process called "stubbing." It may also be necessary to spy on the program being tested to ensure it calls a specific command, or it calls a specific command with specific arguments, a process called "mocking." For more on this, check out this great discussion of mocking and stubbing in Ruby RSpec, which applies to any testing system.
The Bash shell provides tricks that can be used in your BATS test scripts to do mocking and stubbing. All require the use of the Bash export command with the -f flag to export a function that overrides the original function or executable. This must be done before the tested program is executed. Here is a simple example that overrides the cat executable:
function cat() { echo "THIS WOULD CAT ${*}"; }
export -f cat

This method overrides a function in the same manner. If a test needs to override a function within the script or library being tested, it is important to source the tested script or library before the function is stubbed or mocked. Otherwise, the stub/mock will be replaced with the actual function when the script is sourced. Also, make sure to stub/mock before you run the command you're testing. Here is an example from build.bats that mocks the raise function described in build.sh to ensure a specific error message is raised by the login function:
@test ".login raises on oc error" {
   source ${profile_script}
   function raise() { echo "${1} raised"; }
   export -f raise
   run login
   assert_failure
   assert_output -p "Could not login raised"
}

Normally, it is not necessary to unset a stub/mock function after the test, since export only affects the current subprocess during the exec of the current @test block. However, it is possible to mock/stub commands (e.g. cat, sed, etc.) that the BATS assert* functions use internally. These mock/stub functions must be unset before these assert commands are run, or they will not work properly. Here is an example from build.bats that mocks sed, runs the build_deployable function, and unsets sed before running any assertions:
@test ".build_deployable prints information, runs docker build on a modified Dockerfile.production and publish_image when its not a dry_run" {
   local expected_dockerfile='Dockerfile.production'
   local application='application'
   local environment='environment'
   local expected_original_base_image="${application}"
   local expected_candidate_image="${application}-candidate:${environment}"
   local expected_deployable_image="${application}:${environment}"
   source ${profile_script}
   mock_docker build --build-arg OAUTH_CLIENT_ID --build-arg OAUTH_REDIRECT --build-arg DDS_API_BASE_URL -t "${expected_deployable_image}" -
   function publish_image() { echo "publish_image ${*}"; }
   export -f publish_image
   function sed() {
      echo "sed ${*}" >&2;
      echo "FROM application-candidate:environment";
   }
   export -f sed
   run build_deployable "${application}" "${environment}"
   assert_success
   unset sed
   assert_output --regexp "sed.*${expected_dockerfile}"
   assert_output -p "Building ${expected_original_base_image} deployable ${expected_deployable_image} FROM ${expected_candidate_image}"
   assert_output -p "FROM ${expected_candidate_image} piped"
   assert_output -p "build --build-arg OAUTH_CLIENT_ID --build-arg OAUTH_REDIRECT --build-arg DDS_API_BASE_URL -t ${expected_deployable_image} -"
   assert_output -p "publish_image ${expected_deployable_image}"
}

Sometimes the same command, e.g. foo, will be invoked multiple times, with different arguments, in the same function being tested. These situations require the creation of a set of functions:
- mock_foo: takes expected arguments as input, and persists these to a TMP file
- foo: the mocked version of the command, which processes each call with the persisted list of expected arguments. This must be exported with export -f.
- cleanup_foo: removes the TMP file, for use in teardown functions. This can test to ensure that a @test block was successful before removing.
Since this functionality is often reused in different tests, it makes sense to create a helper library that can be loaded like other libraries.
A good example is docker_mock.bash . It is loaded into build.bats and used in any test block that tests a function that calls the Docker executable. A typical test block using docker_mock looks like:
@test ".publish_image fails if docker push fails" {
   setup_publish
   local expected_image="image"
   local expected_publishable_image="${CI_REGISTRY_IMAGE}/${expected_image}"
   source ${profile_script}
   mock_docker tag "${expected_image}" "${expected_publishable_image}"
   mock_docker push "${expected_publishable_image}" and_fail
   run publish_image "${expected_image}"
   assert_failure
   assert_output -p "tagging ${expected_image} as ${expected_publishable_image}"
   assert_output -p "tag ${expected_image} ${expected_publishable_image}"
   assert_output -p "pushing image to gitlab registry"
   assert_output -p "push ${expected_publishable_image}"
}

This test sets up an expectation that Docker will be called twice with different arguments. With the second call to Docker failing, it runs the tested command, then tests the exit status and expected calls to Docker.
One aspect of BATS introduced by mock_docker.bash is the ${BATS_TMPDIR} environment variable, which BATS sets at the beginning to allow tests and helpers to create and destroy TMP files in a standard location. The mock_docker.bash library will not delete its persisted mocks file if a test fails, but it will print where it is located so it can be viewed and deleted. You may need to periodically clean old mock files out of this directory.
One note of caution regarding mocking/stubbing: The build.bats test consciously violates a dictum of testing that states: Don't mock what you don't own! This dictum demands that calls to commands that the test's developer didn't write, like docker , cat , sed , etc., should be wrapped in their own libraries, which should be mocked in tests of scripts that use them. The wrapper libraries should then be tested without mocking the external commands.
This is good advice and ignoring it comes with a cost. If the Docker CLI API changes, the test scripts will not detect this change, resulting in a false positive that won't manifest until the tested build.sh script runs in a production setting with the new version of Docker. Test developers must decide how stringently they want to adhere to this standard, but they should understand the tradeoffs involved with their decision.
Conclusion

Introducing a testing regime to any software development project creates a tradeoff between a) the increase in time and organization required to develop and maintain code and tests and b) the increased confidence developers have in the integrity of the application over its lifetime. Testing regimes may not be appropriate for all scripts and libraries.
In general, scripts and libraries that meet one or more of the following should be tested with BATS:
- They are worthy of being stored in source control
- They are used in critical processes and relied upon to run consistently for a long period of time
- They need to be modified periodically to add/remove/modify their function
- They are used by others
Once the decision is made to apply a testing discipline to one or more Bash scripts or libraries, BATS provides the comprehensive testing features that are available in other software development environments.
Acknowledgment: I am indebted to Darrin Mann for introducing me to BATS testing.
Jul 12, 2020 | opensource.com
How to add a Help facility to your Bash program

In the third article in this series, learn about using functions as you create a simple Help facility for your Bash script. 20 Dec 2019, David Both (Correspondent)
In the first article in this series, you created a very small, one-line Bash script and explored the reasons for creating shell scripts and why they are the most efficient option for the system administrator, rather than compiled programs. In the second article , you began the task of creating a fairly simple template that you can use as a starting point for other Bash programs, then explored ways to test it.
This third of the four articles in this series explains how to create and use a simple Help function. While creating your Help facility, you will also learn about using functions and how to handle command-line options such as -h .
Why Help?

Even fairly simple Bash programs should have some sort of Help facility, even if it is fairly rudimentary. Many of the Bash shell programs I write are used so infrequently that I forget the exact syntax of the command I need. Others are so complex that I need to review the options and arguments even when I use them frequently.
Having a built-in Help function allows you to view those things without having to inspect the code itself. A good and complete Help facility is also a part of program documentation.
About functions

Shell functions are lists of Bash program statements that are stored in the shell's environment and can be executed, like any other command, by typing their name at the command line. Shell functions may also be known as procedures or subroutines, depending upon which other programming language you are using.
Functions are called in scripts or from the command-line interface (CLI) by using their names, just as you would for any other command. In a CLI program or a script, the commands in the function execute when they are called, then the program flow sequence returns to the calling entity, and the next series of program statements in that entity executes.
The syntax of a function is:
FunctionName()
{
   program statements
}

Explore this by creating a simple function at the CLI. (The function is stored in the shell environment for the shell instance in which it is created.) You are going to create a function called hw, which stands for "hello world." Enter the following code at the CLI and press Enter. Then enter hw as you would any other shell command:
[student@testvm1 ~]$ hw(){ echo "Hi there kiddo"; }
[student@testvm1 ~]$ hw
Hi there kiddo
[student@testvm1 ~]$

OK, so I am a little tired of the standard "Hello world" starter. Now, list all of the currently defined functions. There are a lot of them, so I am showing just the new hw function. When it is called from the command line or within a program, a function performs its programmed task and then exits and returns control to the calling entity, the command line, or the next Bash program statement in a script after the calling statement:
[student@testvm1 ~]$ declare -f | less
< snip >
hw ()
{
echo "Hi there kiddo"
}
< snip >

Remove that function because you do not need it anymore. You can do that with the unset command:
[student@testvm1 ~]$ unset -f hw ; hw
bash: hw: command not found
[student@testvm1 ~]$

Creating the Help function
Open the hello program in an editor and add the Help function below to the hello program code after the copyright statement but before the echo "Hello world!" statement. This Help function will display a short description of the program, a syntax diagram, and short descriptions of the available options. Add a call to the Help function to test it and some comment lines that provide a visual demarcation between the functions and the main portion of the program:
################################################################################
# Help #
################################################################################
Help ()
{
# Display Help
echo "Add description of the script functions here."
echo
echo "Syntax: scriptTemplate [-g|h|v|V]"
echo "options:"
echo "g Print the GPL license notification."
echo "h Print this Help."
echo "v Verbose mode."
echo "V Print software version and exit."
echo
}
################################################################################
################################################################################
# Main program #
################################################################################
################################################################################

Help

echo "Hello world!"

The options described in this Help function are typical for the programs I write, although none are in the code yet. Run the program to test it:
[student@testvm1 ~]$ ./hello
Add description of the script functions here.

Syntax: scriptTemplate [-g|h|v|V]
options:
g     Print the GPL license notification.
h     Print this Help.
v     Verbose mode.
V     Print software version and exit.

Hello world!
[student@testvm1 ~]$

Because you have not added any logic to display Help only when you need it, the program will always display the Help. Since the function is working correctly, read on to add some logic to display the Help only when the -h option is used when you invoke the program at the command line.
Handling options
A Bash script's ability to handle command-line options such as -h gives some powerful capabilities to direct the program and modify what it does. In the case of the -h option, you want the program to print the Help text to the terminal session and then quit without running the rest of the program. The ability to process options entered at the command line can be added to the Bash script using the while command (see How to program with Bash: Loops to learn more about while) in conjunction with the getopts and case commands.

The getopts command reads any and all options specified at the command line and creates a list of those options. In the code below, the while command loops through the list of options by setting the variable $option for each. The case statement is used to evaluate each option in turn and execute the statements in the corresponding stanza. The while statement will continue to evaluate the list of options until they have all been processed or it encounters an exit statement, which terminates the program.
Be sure to delete the Help function call just before the echo "Hello world!" statement so that the main body of the program now looks like this:
################################################################################
################################################################################
# Main program #
################################################################################
################################################################################
################################################################################
# Process the input options. Add options as needed. #
################################################################################
# Get the options
while getopts ":h" option; do
case $option in
h ) # display Help
Help
exit ;;
esac
done

echo "Hello world!"
Notice the double semicolon at the end of the exit statement in the case option for -h . This is required for each option added to this case statement to delineate the end of each option.
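The getopts option string can also declare options that take an argument: follow the option letter with a colon, and getopts places the argument in the OPTARG variable. Here is a minimal sketch of that pattern; the -f option and the filename variable are illustrative assumptions, not part of the article's template:

while getopts ":hf:" option; do
   case $option in
      h) # display Help
         Help
         exit;;
      f) # hypothetical option: -f takes a filename argument, delivered in $OPTARG
         filename=$OPTARG;;
   esac
done
echo "Filename is: ${filename}"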
Testing
Testing is now a little more complex. You need to test your program with a number of different options -- and no options -- to see how it responds. First, test with no options to ensure that it prints "Hello world!" as it should:
[student@testvm1 ~]$ ./hello
Hello world!

That works, so now test the logic that displays the Help text:
[student@testvm1 ~]$ ./hello -h
Add description of the script functions here.

Syntax: scriptTemplate [-g|h|t|v|V]
options:
g Print the GPL license notification.
h Print this Help.
v Verbose mode.
V Print software version and exit.

That works as expected, so try some testing to see what happens when you enter some unexpected options:
[student@testvm1 ~]$ ./hello -x
Hello world!
[student@testvm1 ~]$ ./hello -q
Hello world!
[student@testvm1 ~]$ ./hello -lkjsahdf
Add description of the script functions here.

Syntax: scriptTemplate [-g|h|t|v|V]
options:
g     Print the GPL license notification.
h     Print this Help.
v     Verbose mode.
V     Print software version and exit.

[student@testvm1 ~]$
The program simply ignores any options it has no specific response for, without generating any errors. But notice the last entry (with -lkjsahdf for options): because there is an h in that string of options, the program recognizes it and prints the Help text. This testing shows that the program has no way to detect incorrect input and terminate if any is found.
You can add another case stanza to the case statement to match any option that doesn't have an explicit match. This general case will match anything you have not provided a specific match for. The case statement now looks like this, with the catch-all match of \? as the last case. Any additional specific cases must precede this final one:
while getopts ":h" option; do
case $option in
h ) # display Help
Help
exit ;;
\? ) # incorrect option
echo "Error: Invalid option"
exit ;;
esac
done

Test the program again using the same options as before and see how it works now.
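Assuming the script is still saved as hello, the earlier tests with invalid options should now fail fast with output along these lines:

[student@testvm1 ~]$ ./hello -x
Error: Invalid option
[student@testvm1 ~]$ ./hello -q
Error: Invalid option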
Where you are
You have accomplished a good amount in this article by adding the capability to process command-line options and a Help procedure. Your Bash script now looks like this:
#!/usr/bin/bash
################################################################################
# scriptTemplate #
# #
# Use this template as the beginning of a new program. Place a short #
# description of the script here. #
# #
# Change History #
# 11/11/2019 David Both Original code. This is a template for creating #
# new Bash shell scripts. #
# Add new history entries as needed. #
# #
# #
################################################################################
################################################################################
################################################################################
# #
# Copyright (C) 2007, 2019 David Both #
# [email protected] #
# #
# This program is free software; you can redistribute it and/or modify #
# it under the terms of the GNU General Public License as published by #
# the Free Software Foundation; either version 2 of the License, or #
# (at your option) any later version. #
# #
# This program is distributed in the hope that it will be useful, #
# but WITHOUT ANY WARRANTY; without even the implied warranty of #
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the #
# GNU General Public License for more details. #
# #
# You should have received a copy of the GNU General Public License #
# along with this program; if not, write to the Free Software #
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA #
# #
################################################################################
################################################################################
################################################################################
################################################################################
# Help #
################################################################################
Help ()
{
# Display Help
echo "Add description of the script functions here."
echo
echo "Syntax: scriptTemplate [-g|h|t|v|V]"
echo "options:"
echo "g Print the GPL license notification."
echo "h Print this Help."
echo "v Verbose mode."
echo "V Print software version and exit."
echo
}
################################################################################
################################################################################
# Main program #
################################################################################
################################################################################
################################################################################
# Process the input options. Add options as needed. #
################################################################################
# Get the options
while getopts ":h" option; do
case $option in
h ) # display Help
Help
exit ;;
\? ) # incorrect option
echo "Error: Invalid option"
exit ;;
esac
done

echo "Hello world!"
Be sure to test this version of the program very thoroughly. Use random inputs and see what happens. You should also try testing valid and invalid options without using the dash ( - ) in front.
Next time
In this article, you added a Help function as well as the ability to process command-line options to display it selectively. The program is getting a little more complex, so testing is becoming more important and requires more test paths in order to be complete.
The next article will look at initializing variables and doing a bit of sanity checking to ensure that the program will run under the correct set of conditions.
Jul 12, 2020 | opensource.com
Navigating the Bash shell with pushd and popd
Pushd and popd are the fastest navigational commands you've never heard of. 07 Aug 2019 Seth Kenlon (Red Hat)
The pushd and popd commands are built-in features of the Bash shell to help you "bookmark" directories for quick navigation between locations on your hard drive. You might already feel that the terminal is an impossibly fast way to navigate your computer; in just a few key presses, you can go anywhere on your hard drive, attached storage, or network share. But that speed can break down when you find yourself going back and forth between directories, or when you get "lost" within your filesystem. Those are precisely the problems pushd and popd can help you solve.
pushd
At its most basic, pushd is a lot like cd. It takes you from one directory to another. Assume you have a directory called one, which contains a subdirectory called two, which contains a subdirectory called three, and so on. If your current working directory is one, then you can move to two or three or anywhere with the cd command:

$ pwd
one
$ cd two/three
$ pwd
three

You can do the same with pushd:
$ pwd
one
$ pushd two/three
~/one/two/three ~/one
$ pwd
three

The end result of pushd is the same as cd, but there's an additional intermediate result: pushd echos your destination directory and your point of origin. This is your directory stack, and it is what makes pushd unique.
Stacks
A stack, in computer terminology, refers to a collection of elements. In the context of this command, the elements are directories you have recently visited by using the pushd command. You can think of it as a history or a breadcrumb trail.
You can move all over your filesystem with pushd ; each time, your previous and new locations are added to the stack:
$ pushd four
~/one/two/three/four ~/one/two/three ~/one
$ pushd five
~/one/two/three/four/five ~/one/two/three/four ~/one/two/three ~/one

Navigating the stack
Once you've built up a stack, you can use it as a collection of bookmarks or fast-travel waypoints. For instance, assume that during a session you're doing a lot of work within the ~/one/two/three/four/five directory structure of this example. You know you've been to one recently, but you can't remember where it's located in your pushd stack. You can view your stack with the +0 (that's a plus sign followed by a zero) argument, which tells pushd not to change to any directory in your stack, but also prompts pushd to echo your current stack:
$ pushd +0
~/one/two/three/four ~/one/two/three ~/one ~/one/two/three/four/five

Alternatively, you can view the stack with the dirs command, and you can see the index number for each directory by using the -v option:
$ dirs -v
0  ~/one/two/three/four
1  ~/one/two/three
2  ~/one
3  ~/one/two/three/four/five

The first entry in your stack is your current location. You can confirm that with pwd as usual:
$ pwd
~/one/two/three/four

Starting at 0 (your current location and the first entry of your stack), the second element in your stack is ~/one, which is your desired destination. You can move forward in your stack using the +2 option:
$ pushd +2
~/one ~/one/two/three/four/five ~/one/two/three/four ~/one/two/three
$ pwd
~/one

This changes your working directory to ~/one and also has shifted the stack so that your new location is at the front.
You can also move backward in your stack. For instance, to quickly get to ~/one/two/three given the example output, you can move back by one, keeping in mind that pushd starts with 0:
$ pushd -0
~/one/two/three ~/one ~/one/two/three/four/five ~/one/two/three/four

Adding to the stack
You can continue to navigate your stack in this way, and it will remain a static listing of your recently visited directories. If you want to add a directory, just provide the directory's path. If a directory is new to the stack, it's added to the list just as you'd expect:
$ pushd /tmp
/tmp ~/one/two/three ~/one ~/one/two/three/four/five ~/one/two/three/four

But if it already exists in the stack, it's added a second time:

$ pushd ~/one
~/one /tmp ~/one/two/three ~/one ~/one/two/three/four/five ~/one/two/three/four

While the stack is often used as a list of directories you want quick access to, it is really a true history of where you've been. If you don't want a directory added redundantly to the stack, you must use the +N and -N notation.
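If you would rather avoid the duplicates automatically, one option is a small wrapper function. This is a minimal sketch, not from the article, assuming you pass absolute paths; pushd_uniq is a made-up name, not a built-in:

pushd_uniq() {
    local i=0 dir
    # dirs -l -p prints the stack one full path per line, without ~ abbreviation;
    # if the target is already on the stack, rotate to it instead of pushing it again
    while read -r dir; do
        if [ "$dir" = "$1" ]; then
            pushd +$i > /dev/null
            return
        fi
        i=$((i+1))
    done < <(dirs -l -p)
    pushd "$1"   # not on the stack yet: push it normally
}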
Removing directories from the stack
Your stack is, obviously, not immutable. You can add to it with pushd or remove items from it with popd.
For instance, assume you have just used pushd to add ~/one to your stack, making ~/one your current working directory. To remove the first (or "zeroeth," if you prefer) element:
$ pwd
~/one
$ popd +0
/tmp ~/one/two/three ~/one ~/one/two/three/four/five ~/one/two/three/four
$ pwd
~/one

Of course, you can remove any element, starting your count at 0:
$ pwd
~/one
$ popd +2
/tmp ~/one/two/three ~/one/two/three/four/five ~/one/two/three/four
$ pwd
~/one

You can also use popd from the back of your stack, again starting with 0. For example, to remove the final directory from your stack:
$ popd -0
/tmp ~/one/two/three ~/one/two/three/four/five

When used like this, popd does not change your working directory. It only manipulates your stack.
Navigating with popd
The default behavior of popd, given no arguments, is to remove the first (zeroeth) item from your stack and make the next item your current working directory.
This is most useful as a quick-change command, when you are, for instance, working in two different directories and just need to duck away for a moment to some other location. You don't have to think about your directory stack if you don't need an elaborate history:
$ pwd
~/one
$ pushd ~/one/two/three/four/five
$ popd
$ pwd
~/one

You're also not required to use pushd and popd in rapid succession. If you use pushd to visit a different location, then get distracted for three hours chasing down a bug or doing research, you'll find your directory stack patiently waiting (unless you've ended your terminal session):
$ pwd
~/one
$ pushd /tmp
$ cd {/etc,/var,/usr}; sleep 2001
[...]
$ popd
$ pwd
~/one

Pushd and popd in the real world
The pushd and popd commands are surprisingly useful. Once you learn them, you'll find excuses to put them to good use, and you'll get familiar with the concept of the directory stack. Getting comfortable with pushd was what helped me understand git stash, which is entirely unrelated to pushd but similar in conceptual intangibility.
Using pushd and popd in shell scripts can be tempting, but generally, it's probably best to avoid them. They aren't portable outside of Bash and Zsh, and they can be obtuse when you're re-reading a script ( pushd +3 is less clear than cd $HOME/$DIR/$TMP or similar).
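If a script does need to work in another directory temporarily, a subshell is a portable alternative that leaves both your working directory and the stack alone. A minimal sketch, with an invented build directory:

# parentheses run the commands in a subshell, so the cd
# inside does not affect the rest of the script
(
    cd /tmp/builddir || exit 1
    make
)
# back here, $PWD is unchanged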
Aside from these warnings, if you're a regular Bash or Zsh user, then you can and should try pushd and popd.

7 Comments
matt on 07 Aug 2019
Thank you for the write up for pushd and popd. I gotta remember to use these when I'm jumping around directories a lot. I got hung up on a pushd example because my development work using arrays differentiates between the index and the count. In my experience, in a zero-based array of A, B, C; C has an index of 2 and also is the third element. C would not be considered the second element, because that would be confusing its index and its count.

Seth Kenlon on 07 Aug 2019
Interesting point, Matt. The difference between count and index had not occurred to me, but I'll try to internalise it. It's a great distinction, so thanks for bringing it up!

Greg Pittman on 07 Aug 2019
This looks like a recipe for confusing myself.

Seth Kenlon on 07 Aug 2019
It can be, but start out simple: use pushd to change to one directory, and then use popd to go back to the original. Sort of a single-use bookmark system.
Then, once you're comfortable with pushd and popd, branch out and delve into the stack.
A tcsh shell I used at an old job didn't have pushd and popd, so I used to have functions in my .cshrc to mimic just the back-and-forth use.

Jake on 07 Aug 2019
"dirs" can be also used to view the stack. "dirs -v" helpfully numbers each directory with its index.

Seth Kenlon on 07 Aug 2019
Thanks for that tip, Jake. I arguably should have included that in the article, but I wanted to try to stay focused on just the two {push,pop}d commands. Didn't occur to me to casually mention one use of dirs as you have here, so I've added it for posterity.
There's so much in the Bash man and info pages to talk about!

other_Stu on 11 Aug 2019
I use "pushd ." (dot for current directory) quite often. Like a working directory bookmark when you are several subdirectories deep somewhere, and need to cd to a couple of other places to do some work or check something.
And you can use the cd command with your DIRSTACK as well, thanks to tilde expansion.
cd ~+3 will take you to the same directory as pushd +3 would.
Jul 12, 2020 | opensource.com
An introduction to parameter expansion in Bash
Get started with this quick how-to guide on expansion modifiers that transform Bash variables and other parameters into powerful tools beyond simple value stores. 13 Jun 2017 James Pannacciulli
In Bash, entities that store values are known as parameters. Their values can be strings or arrays with regular syntax, or they can be integers or associative arrays when special attributes are set with the declare built-in. There are three types of parameters: positional parameters, special parameters, and variables.
For the sake of brevity, this article will focus on a few classes of expansion methods available for string variables, though these methods apply equally to other types of parameters.
Variable assignment and unadulterated expansion
When assigning a variable, its name must be composed solely of alphanumeric and underscore characters, and it may not begin with a numeral. There may be no spaces around the equal sign; the name must immediately precede it and the value immediately follow:

$ variable_1="my content"

Storing a value in a variable is only useful if we recall that value later; in Bash, substituting a parameter reference with its value is called expansion. To expand a parameter, simply precede the name with the $ character, optionally enclosing the name in braces:

$ echo $variable_1 ${variable_1}
my content my content

Crucially, as shown in the above example, expansion occurs before the command is called, so the command never sees the variable name, only the text passed to it as an argument that resulted from the expansion. Furthermore, parameter expansion occurs before word splitting; if the result of expansion contains spaces, the expansion should be quoted to preserve parameter integrity, if desired:

$ printf "%s\n" ${variable_1}
my
content
$ printf "%s\n" "${variable_1}"
my content

Parameter expansion modifiers
Parameter expansion goes well beyond simple interpolation, however. Inside the braces of a parameter expansion, certain operators, along with their arguments, may be placed after the name, before the closing brace. These operators may invoke conditional, subset, substring, substitution, indirection, prefix listing, element counting, and case modification expansion methods, modifying the result of the expansion. With the exception of the reassignment operators (= and :=), these operators only affect the expansion of the parameter without modifying the parameter's value for subsequent expansions.
About conditional, substring, and substitution parameter expansion operators

Conditional parameter expansion
Conditional parameter expansion allows branching on whether the parameter is unset, empty, or has content. Based on these conditions, the parameter can be expanded to its value, a default value, or an alternate value; throw a customizable error; or reassign the parameter to a default value. The following table shows the conditional parameter expansions -- each row shows a parameter expansion using an operator to potentially modify the expansion, with the columns showing the result of that expansion given the parameter's status as indicated in the column headers. Operators with the ':' prefix treat parameters with empty values as if they were unset.

parameter expansion   unset var   var=""      var="gnu"
${var-default}        default     --          gnu
${var:-default}       default     default     gnu
${var+alternate}      --          alternate   alternate
${var:+alternate}     --          --          alternate
${var?error}          error       --          gnu
${var:?error}         error       error       gnu

The = and := operators in the table function identically to - and :-, respectively, except that the = variants rebind the variable to the result of the expansion.
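The rebinding is easy to see at the prompt; a quick illustration of ${var:=default}:

$ unset var
$ echo "${var:=default}"
default
$ echo "$var"          # the := operator also assigned the value
default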
As an example, let's try opening a user's editor on a file specified by the OUT_FILE variable. If either the EDITOR environment variable or our OUT_FILE variable is not specified, we will have a problem. Using a conditional expansion, we can ensure that when the EDITOR variable is expanded, we get the specified value or at least a sane default:
$ echo ${EDITOR}
/usr/bin/vi
$ echo ${EDITOR:-$(which nano)}
/usr/bin/vi
$ unset EDITOR
$ echo ${EDITOR:-$(which nano)}
/usr/bin/nano

Building on the above, we can run the editor command and abort with a helpful error at runtime if there's no filename specified:
$ ${EDITOR:-$(which nano)} ${OUT_FILE:?Missing filename}
bash: OUT_FILE: Missing filename

Substring parameter expansion
Parameters can be expanded to just part of their contents, either by offset or by removing content matching a pattern. When specifying a substring offset, a length may optionally be specified. If running Bash version 4.2 or greater, negative numbers may be used as offsets from the end of the string. Note the parentheses used around the negative offset, which ensure that Bash does not parse the expansion as having the conditional default expansion operator from above:
$ location="CA 90095"
$ echo "Zip Code: ${location:3}"
Zip Code: 90095
$ echo "Zip Code: ${location:(-5)}"
Zip Code: 90095
$ echo "State: ${location:0:2}"
State: CA

Another way to take a substring is to remove characters from the string matching a pattern, either from the left edge with the # and ## operators or from the right edge with the % and %% operators. A useful mnemonic is that # appears left of a comment and % appears right of a number. When the operator is doubled, it matches greedily, as opposed to the single version, which removes the most minimal set of characters matching the pattern.
var="open source" parameter expansion offset of 5
length of 4${var:offset} source ${var:offset:length} sour pattern of *o? ${var#pattern} en source ${var##pattern} rce pattern of ?e* ${var%pattern} open sour ${var%pattern} o The pattern-matching used is the same as with filename globbing: * matches zero or more of any character, ? matches exactly one of any character, [...] brackets introduce a character class match against a single character, supporting negation ( ^ ), as well as the posix character classes, e.g. [[:alnum:]] . By excising characters from our string in this manner, we can take a substring without first knowing the offset of the data we need:
$ echo $PATH
/usr/local/bin:/usr/bin:/bin
$ echo "Lowest priority in PATH: ${PATH##*:}"
Lowest priority in PATH: /bin
$ echo "Everything except lowest priority: ${PATH%:*}"
Everything except lowest priority: /usr/local/bin:/usr/bin
$ echo "Highest priority in PATH: ${PATH%%:*}"
Highest priority in PATH: /usr/local/bin

Substitution in parameter expansion
The same types of patterns are used for substitution in parameter expansion. Substitution is introduced with the / or // operators, followed by two arguments separated by another / representing the pattern and the string to substitute. The pattern matching is always greedy, so the doubled version of the operator, in this case, causes all matches of the pattern to be replaced in the variable's expansion, while the singleton version replaces only the leftmost.
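As a brief aside before the summary table below -- and not covered in the original article -- Bash also accepts anchored substitutions: /# matches the pattern only at the start of the value and /% only at the end. A quick illustration:

$ var="free and open"
$ echo "${var/#free/libre}"   # /# anchors the pattern to the start
libre and open
$ echo "${var/%open/libre}"   # /% anchors the pattern to the end
free and libre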
var="free and open" parameter expansion pattern of [[:space:]]
string of _${var/pattern/string} free_and open ${var//pattern/string} free_and_open The wealth of parameter expansion modifiers transforms Bash variables and other parameters into powerful tools beyond simple value stores. At the very least, it is important to understand how parameter expansion works when reading Bash scripts, but I suspect that not unlike myself, many of you will enjoy the conciseness and expressiveness that these expansion modifiers bring to your scripts as well as your interactive sessions. Topics Linux About the author James Pannacciulli - James Pannacciulli is an advocate for software freedom & user autonomy with an MA in Linguistics. Employed as a Systems Engineer in Los Angeles, in his free time he occasionally gives talks on bash usage at various conferences. James likes his beers sour and his nettles stinging. More from James may be found on his home page . He has presented at conferences including SCALE ,...
Jul 12, 2020 | opensource.com
Use aliases
... ... ...
Make your root prompt stand out
... ... ...
Control your history
You probably know that when you press the Up arrow key in Bash, you can see and reuse all (well, many) of your previous commands. That is because those commands have been saved to a file called .bash_history in your home directory. That history file comes with a bunch of settings and commands that can be very useful.

First, you can view your entire recent command history by typing history, or you can limit it to your last 30 commands by typing history 30. But that's pretty vanilla. You have more control over what Bash saves and how it saves it.

For example, if you add the following to your .bashrc, any commands that start with a space will not be saved to the history list:
HISTCONTROL=ignorespace

This can be useful if you need to pass a password to a command in plaintext. (Yes, that is horrible, but it still happens.)
If you don't want a frequently executed command to show up in your history, use:
HISTCONTROL=ignorespace:erasedups

With this, every time you use a command, all its previous occurrences are removed from the history file, and only the last invocation is saved to your history list.
A history setting I particularly like is the HISTTIMEFORMAT setting. This will prepend all entries in your history file with a timestamp. For example, I use:

HISTTIMEFORMAT="%F %T "

When I type history 5, I get nice, complete information, like this:

1009  2018-06-11 22:34:38 cat /etc/hosts
1010  2018-06-11 22:34:40 echo $foo
1011  2018-06-11 22:34:42 echo $bar
1012  2018-06-11 22:34:44 ssh myhost
1013  2018-06-11 22:34:55 vim .bashrc

That makes it a lot easier to browse my command history and find the one I used two days ago to set up an SSH tunnel to my home lab (which I forget again, and again, and again).
Best Bash practices
I'll wrap this up with my top 11 list of the best (or good, at least; I don't claim omniscience) practices when writing Bash scripts.

- Bash scripts can become complicated and comments are cheap. If you wonder whether to add a comment, add a comment. If you return after the weekend and have to spend time figuring out what you were trying to do last Friday, you forgot to add a comment.
- Wrap all your variable names in curly braces, like ${myvariable}. Making this a habit makes things like ${variable}_suffix possible and improves consistency throughout your scripts.
- Do not use backticks when evaluating an expression; use the $() syntax instead. So use for file in $(ls); do, not for file in `ls`; do. The former option is nestable, more easily readable, and keeps the general sysadmin population happy (see the nesting sketch after this list). Do not use backticks.
- Consistency is good. Pick one style of doing things and stick with it throughout your script. Obviously, I would prefer if people picked the $() syntax over backticks and wrapped their variables in curly braces. I would prefer it if people used two or four spaces -- not tabs -- to indent, but even if you choose to do it wrong, do it wrong consistently.
- Use the proper shebang for a Bash script. As I'm writing Bash scripts with the intention of only executing them with Bash, I most often use #!/usr/bin/bash as my shebang. Do not use #!/bin/sh or #!/usr/bin/sh. Your script will execute, but it'll run in compatibility mode -- potentially with lots of unintended side effects. (Unless, of course, compatibility mode is what you want.)
- When comparing strings, it's a good idea to quote your variables in if-statements, because if your variable is empty, Bash will throw an error for lines like these:

if [ ${myvar} == "foo" ]; then
  echo "bar"
fi

And will evaluate to false for a line like this:

if [ "${myvar}" == "foo" ]; then
  echo "bar"
fi

Also, if you are unsure about the contents of a variable (e.g., when you are parsing user input), quote your variables to prevent interpretation of some special characters and make sure the variable is considered a single word, even if it contains whitespace.
- This is a matter of taste, I guess, but I prefer using the double equals sign (==) even when comparing strings in Bash. It's a matter of consistency, and even though -- for string comparisons only -- a single equals sign will work, my mind immediately goes "single equals is an assignment operator!"
- Use proper exit codes. Make sure that if your script fails to do something, you present the user with a written failure message (preferably with a way to fix the problem) and send a non-zero exit code:

# we have failed
echo "Process has failed to complete, you need to manually restart the whatchamacallit"
exit 1

This makes it easier to programmatically call your script from yet another script and verify its successful completion.
- Use Bash's built-in mechanisms to provide sane defaults for your variables or throw errors if variables you expect to be defined are not defined:

# this sets the value of $myvar to redhat, and prints 'redhat'
echo ${myvar:=redhat}

# this throws an error reading 'The variable myvar is undefined, dear reader' if $myvar is undefined
${myvar:?The variable myvar is undefined, dear reader}
- Especially if you are writing a large script, and especially if you work on that large script with others, consider using the local keyword when defining variables inside functions. The local keyword creates a local variable, that is, one that's visible only within that function. This limits the possibility of clashing variables (see the sketch after this list).
- Every sysadmin must do it sometimes: debug something on a console, either a real one in a data center or a virtual one through a virtualization platform. If you have to debug a script that way, you will thank yourself for remembering this: Do not make the lines in your scripts too long!
On many systems, the default width of a console is still 80 characters. If you need to debug a script on a console and that script has very long lines, you'll be a sad panda. Besides, a script with shorter lines -- the default is still 80 characters -- is a lot easier to read and understand in a normal editor, too!
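Two quick sketches to back up the items in the list above. First, nesting with $(); this one-liner resolves a script's real directory, assuming GNU readlink (readlink -f is not available everywhere):

# the inner $() resolves the script's path, the outer takes its directory
script_dir=$(dirname "$(readlink -f "$0")")
echo "This script lives in: ${script_dir}"

Try writing that with backticks and the escaping gets unreadable fast. Second, a minimal illustration of what local buys you inside a function:

myfunc() {
    local greeting="hello"   # visible only inside myfunc
    echo "${greeting}"
}
myfunc
echo "${greeting:-greeting is unset out here}"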
I truly love Bash. I can spend hours writing about it or exchanging nice tricks with fellow enthusiasts. Make sure you drop your favorites in the comments!
Jan 09, 2020 | opensource.com
When you work with computers all day, it's fantastic to find repeatable commands and tag them for easy use later on. They all sit there, tucked away in ~/.bashrc (or ~/.zshrc for Zsh users ), waiting to help improve your day!
In this article, I share some of my favorite of these helper commands for things I forget a lot, in hopes that they will save you, too, some heartache over time.
Say when it's over
When I'm using longer-running commands, I often multitask and then have to go back and check if the action has completed. But not anymore, with this helpful invocation of say (this is on MacOS; change for your local equivalent):
function looooooooong {
START=$(date +%s.%N)
$*
EXIT_CODE=$?
END=$(date +%s.%N)
DIFF=$(echo "$END - $START" | bc)
RES=$(python -c "diff = $DIFF; min = int(diff / 60); print('%s min' % min)")
result="$1 completed in $RES, exit code $EXIT_CODE."
echo -e "\n⏰ $result"
( say -r 250 $result 2>&1 > /dev/null & )
}

This command marks the start and end time of a command, calculates the minutes it takes, and speaks the command invoked, the time taken, and the exit code. I find this super helpful when a simple console bell just won't do.
... ... ...
There are many Docker commands, but there are even more docker compose commands. I used to forget the --rm flags, but not anymore with these useful aliases:
alias dc="docker-compose"
alias dcr="docker-compose run --rm"
alias dcb="docker-compose run --rm --build"

gcurl helper for Google Cloud
This one is relatively new to me, but it's heavily documented. gcurl is an alias to ensure you get all the correct flags when using local curl commands with authentication headers when working with Google Cloud APIs.
Git and ~/.gitignore
I work a lot in Git, so I have a special section dedicated to Git helpers.
One of my most useful helpers is one I use to clone GitHub repos. Instead of having to run:

git clone git@github.com:org/repo /Users/glasnt/git/org/repo

I set up a clone function:
clone(){
  echo Cloning $1 to ~/git/$1
  cd ~/git
  git clone git@github.com:$1 $1
  cd $1
}

... ... ...
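With that function defined, cloning and entering a repo becomes one short command; for example, with a hypothetical repo name:

$ clone myorg/myrepo
Cloning myorg/myrepo to ~/git/myorg/myrepo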
Jul 09, 2020 | zwischenzugs.com
TL;DR
These commands can tell you what key bindings you have in your bash shell by default.
bind -P | grep 'can be'
stty -a | grep ' = ..;'

Background
I'd always wondered what key strokes did what in bash – I'd picked up some well-known ones (CTRL-r, CTRL-v, CTRL-d etc) from bugging people when I saw them being used, but always wondered whether there was a list of these I could easily get and comprehend. I found some, but always forgot where it was when I needed them, and couldn't remember many of them anyway.
Then, while debugging a problem with tab completion in 'here' documents, I stumbled across bind.
bind and stty
'bind' is a bash builtin, which means it's not a program like awk or grep, but is picked up and handled by the bash program itself.
It manages the various key bindings in the bash shell, covering everything from autocomplete to transposing two characters on the command line. You can read all about it in the bash man page (in the builtins section, near the end).
Bind is not responsible for all the key bindings in your shell – running stty will show the ones that apply to the terminal:

stty -a | grep ' = ..;'

These take precedence and can be confusing if you've tried to bind the same thing in your shell! Further confusion is caused by the fact that in stty output '^D' means 'CTRL and d pressed together', whereas in bind output it would be 'C-d'.
edit: am indebted to joepvd from hackernews for this beauty

$ stty -a | awk 'BEGIN{RS="[;\n]+ ?"}; /= ..$/'
intr = ^C
quit = ^\
erase = ^?
kill = ^U
eof = ^D
swtch = ^Z
susp = ^Z
rprnt = ^R
werase = ^W
lnext = ^V
flush = ^O

Breaking Down the Command

bind -P | grep can

Can be considered (almost) equivalent to a more instructive command:
bind -l | sed 's/.*/bind -q &/' | /bin/bash 2>&1 | grep -v warning: | grep 'can be'

'bind -l' lists all the available keystroke functions. For example, 'complete' is the auto-complete function normally triggered by hitting 'tab' twice. The output of this is passed to a sed command which turns each function name into a 'bind -q' query:

sed 's/.*/bind -q &/'

The output of this is passed for running into /bin/bash:

/bin/bash 2>&1 | grep -v warning: | grep 'can be'

Note that this invocation of bash means that locally-set bindings will revert to the default bash ones for the output.
The '2>&1' puts the error output (the warnings) to the same output channel, filtering out warnings with a 'grep -v' and then filtering on output that describes how to trigger the function.
In the output of bind -q, 'C-' means 'the ctrl key and'. So 'C-c' is the normal CTRL-c. Similarly, '\e' means 'escape', so '\e\e' means 'escape pressed twice', which triggers autocomplete:

$ bind -q complete
complete can be invoked via "C-i", "\e\e".

and it is also bound to 'C-i' (though on my machine I appear to need to press it twice – not sure why).
Add to bashrc
I added this alias as 'binds' in my bashrc so I could easily get hold of this list in the future.

alias binds="bind -P | grep 'can be'"

Now whenever I forget a binding, I type 'binds', and have a read :)
The Zinger
Browsing through the bash manual, I noticed that an option to bind enables binding a key sequence to a shell command:

-x keyseq:shell-command

So now all I need to remember is one shortcut to get my list (CTRL-x, then CTRL-o):

bind -x '"\C-x\C-o":bind -P | grep can'

Of course, you can bind to a single key if you want, and any command you want. You could also use this for practical jokes on your colleagues.
Now I'm going to sort through my history to see what I type most often :)
Jul 09, 2020 | zwischenzugs.com
Why strace?
I'm often asked in my technical troubleshooting job to solve problems that development teams can't solve. Usually these do not involve knowledge of API calls or syntax, rather some kind of insight into what the right tool to use is, and why and how to use it. Probably because they're not taught in college, developers are often unaware that these tools exist, which is a shame, as playing with them can give a much deeper understanding of what's going on and ultimately lead to better code.
My favourite secret weapon in this path to understanding is strace.
strace (or its equivalents elsewhere, such as truss on Solaris and dtruss on macOS) is a tool that tells you which operating system (OS) calls your program is making.
Usage Patterns
strace is useful in all sorts of contexts. Here's a couple of examples garnered from my experience.
My Netcat Server Won't Start!
Imagine you're trying to start an executable, but it's failing silently (no log file, no output at all). You don't have the source, and even if you did, the source code is neither readily available, nor ready to compile, nor readily comprehensible.
Simply running it through strace will likely give you clues as to what's gone on.

$ nc -l localhost 80
nc: Permission denied

Let's say someone's trying to run this and doesn't understand why it's not working (let's assume manuals are unavailable).
Simply put strace at the front of your command. Note that the following output has been heavily edited for space reasons (deep breath):

$ strace nc -l localhost 80
execve("/bin/nc", ["nc", "-l", "localhost", "80"], [/* 54 vars */]) = 0
brk(0) = 0x1e7a000
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f751c9c0000
access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
open("/usr/local/lib/tls/x86_64/libglib-2.0.so.0", O_RDONLY) = -1 ENOENT (No such file or directory)
stat("/usr/local/lib/tls/x86_64", 0x7fff5686c240) = -1 ENOENT (No such file or directory)
[...]
open("libglib-2.0.so.0", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/etc/ld.so.cache", O_RDONLY) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=179820, ...}) = 0
mmap(NULL, 179820, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f751c994000
close(3) = 0
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
open("/lib/x86_64-linux-gnu/libglib-2.0.so.0", O_RDONLY) = 3
read(3, "\177ELF\2\1\1\3>\1\320k\1"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0644, st_size=975080, ...}) = 0
mmap(NULL, 3072520, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f751c4b3000
mprotect(0x7f751c5a0000, 2093056, PROT_NONE) = 0
mmap(0x7f751c79f000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0xec000) = 0x7f751c79f000
mmap(0x7f751c7a1000, 520, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f751c7a1000
close(3) = 0
open("/usr/local/lib/libc.so.6", O_RDONLY) = -1 ENOENT (No such file or directory)
[...]
mmap(NULL, 179820, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f751c994000
close(3) = 0
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
open("/lib/x86_64-linux-gnu/libnss_files.so.2", O_RDONLY) = 3
read(3, "\177ELF\2\1\1\3>\1\20\""..., 832) = 832
fstat(3, {st_mode=S_IFREG|0644, st_size=51728, ...}) = 0
mmap(NULL, 2148104, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f751b8b0000
mprotect(0x7f751b8bc000, 2093056, PROT_NONE) = 0
mmap(0x7f751babb000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0xb000) = 0x7f751babb000
close(3) = 0
mprotect(0x7f751babb000, 4096, PROT_READ) = 0
munmap(0x7f751c994000, 179820) = 0
open("/etc/hosts", O_RDONLY|O_CLOEXEC) = 3
fcntl(3, F_GETFD) = 0x1 (flags FD_CLOEXEC)
fstat(3, {st_mode=S_IFREG|0644, st_size=315, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f751c9bf000
read(3, "127.0.0.1\tlocalhost\n127.0.1.1\tal"..., 4096) = 315
read(3, "", 4096) = 0
close(3) = 0
munmap(0x7f751c9bf000, 4096) = 0
open("/etc/gai.conf", O_RDONLY) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=3343, ...}) = 0
fstat(3, {st_mode=S_IFREG|0644, st_size=3343, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f751c9bf000
read(3, "# Configuration for getaddrinfo("..., 4096) = 3343
read(3, "", 4096) = 0
close(3) = 0
munmap(0x7f751c9bf000, 4096) = 0
futex(0x7f751c4af460, FUTEX_WAKE_PRIVATE, 2147483647) = 0
socket(PF_INET, SOCK_DGRAM, IPPROTO_IP) = 3
connect(3, {sa_family=AF_INET, sin_port=htons(80), sin_addr=inet_addr("127.0.0.1")}, 16) = 0
getsockname(3, {sa_family=AF_INET, sin_port=htons(58567), sin_addr=inet_addr("127.0.0.1")}, [16]) = 0
close(3) = 0
socket(PF_INET6, SOCK_DGRAM, IPPROTO_IP) = 3
connect(3, {sa_family=AF_INET6, sin6_port=htons(80), inet_pton(AF_INET6, "::1", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = 0
getsockname(3, {sa_family=AF_INET6, sin6_port=htons(42803), inet_pton(AF_INET6, "::1", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, [28]) = 0
close(3) = 0
socket(PF_INET6, SOCK_STREAM, IPPROTO_TCP) = 3
setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
bind(3, {sa_family=AF_INET6, sin6_port=htons(80), inet_pton(AF_INET6, "::1", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = -1 EACCES (Permission denied)
close(3) = 0
socket(PF_INET, SOCK_STREAM, IPPROTO_TCP) = 3
setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
bind(3, {sa_family=AF_INET, sin_port=htons(80), sin_addr=inet_addr("127.0.0.1")}, 16) = -1 EACCES (Permission denied)
close(3) = 0
write(2, "nc: ", 4nc: ) = 4
write(2, "Permission denied\n", 18Permission denied
) = 18
exit_group(1) = ?

To most people that see this flying up their terminal this initially looks like gobbledygook, but it's really quite easy to parse when a few things are explained.
For each line:
- the first entry on the left is the system call being performed
- the bit in the parentheses are the arguments to the system call
- the right side of the equals sign is the return value of the system call
open("/etc/gai.conf", O_RDONLY) = 3Therefore for this particular line, the system call is
open
, the arguments are the string/etc/gai.conf
and the constantO_RDONLY
, and the return value was3
.How to make sense of this?
Some of these system calls can be guessed or enough can be inferred from context. Most readers will figure out that the above line is the attempt to open a file with read-only permission.
In the case of the above failure, we can see that before the program calls exit_group, there is a couple of calls to bind that return "Permission denied":
bind(3, {sa_family=AF_INET6, sin6_port=htons(80), inet_pton(AF_INET6, "::1", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = -1 EACCES (Permission denied)
close(3) = 0
socket(PF_INET, SOCK_STREAM, IPPROTO_TCP) = 3
setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
bind(3, {sa_family=AF_INET, sin_port=htons(80), sin_addr=inet_addr("127.0.0.1")}, 16) = -1 EACCES (Permission denied)
close(3) = 0
write(2, "nc: ", 4nc: ) = 4
write(2, "Permission denied\n", 18Permission denied
) = 18
exit_group(1) = ?

We might therefore want to understand what "bind" is and why it might be failing.
You need to get a copy of the system call's documentation. On ubuntu and related distributions of linux, the documentation is in the manpages-dev package, and can be invoked by eg man 2 bind (I just used strace to determine which file man 2 bind opened and then did a dpkg -S to determine from which package it came!). You can also look up online if you have access, but if you can auto-install via a package manager you're more likely to get docs that match your installation.

Right there in my man 2 bind page it says:

ERRORS
  EACCES The address is protected, and the user is not the superuser.

So there is the answer – we're trying to bind to a port that can only be bound to if you are the super-user.
My Library Is Not Loading!
Imagine a situation where developer A's perl script is working fine, but developer B's identical one is not (again, the output has been edited). In this case, we strace the output on developer B's computer to see how it's working:

$ strace perl a.pl
execve("/usr/bin/perl", ["perl", "a.pl"], [/* 57 vars */]) = 0
brk(0) = 0xa8f000
[...]
fcntl(3, F_SETFD, FD_CLOEXEC) = 0
fstat(3, {st_mode=S_IFREG|0664, st_size=14, ...}) = 0
rt_sigaction(SIGCHLD, NULL, {SIG_DFL, [], 0}, 8) = 0
brk(0xad1000) = 0xad1000
read(3, "use blahlib;\n\n", 4096) = 14
stat("/space/myperllib/blahlib.pmc", 0x7fffbaf7f3d0) = -1 ENOENT (No such file or directory)
stat("/space/myperllib/blahlib.pm", {st_mode=S_IFREG|0644, st_size=7692, ...}) = 0
open("/space/myperllib/blahlib.pm", O_RDONLY) = 4
ioctl(4, SNDCTL_TMR_TIMEBASE or TCGETS, 0x7fffbaf7f090) = -1 ENOTTY (Inappropriate ioctl for device)
[...]
mmap(0x7f4c45ea8000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 5, 0x4000) = 0x7f4c45ea8000
close(5) = 0
mprotect(0x7f4c45ea8000, 4096, PROT_READ) = 0
brk(0xb55000) = 0xb55000
read(4, "swrite($_[0], $_[1], $_[2], $_[3"..., 4096) = 3596
brk(0xb77000) = 0xb77000
read(4, "", 4096) = 0
close(4) = 0
read(3, "", 4096) = 0
close(3) = 0
exit_group(0) = ?

We observe that the file is found in what looks like an unusual place:
open("/space/myperllib/blahlib.pm", O_RDONLY) = 4Inspecting the environment, we see that:
$ env | grep myperl PERL5LIB=/space/myperllibSo the solution is to set the same env variable before running:
export PERL5LIB=/space/myperllibGet to know the internals bit by bitIf you do this a lot, or idly run
strace
on various commands and peruse the output, you can learn all sorts of things about the internals of your OS. If you're like me, this is a great way to learn how things work. For example, just now I've had a look at the file/etc/gai.conf
, which I'd never come across before writing this.Once your interest has been piqued, I recommend getting a copy of "Advanced Programming in the Unix Environment" by Stevens & Rago, and reading it cover to cover. Not all of it will go in, but as you use
Gotchasstrace
more and more, and (hopefully) browse C code more and more understanding will grow.If you're running a program that calls other programs, it's important to run with the -f flag, which "follows" child processes and straces them. -ff creates a separate file with the pid suffixed to the name.
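For example (the script name and output prefix here are arbitrary):

# follow forks; each process's trace lands in its own file, /tmp/trace.<pid>
strace -ff -o /tmp/trace ./myscript.sh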
If you're on solaris, this program doesn't exist – you need to use truss instead.
Many production environments will not have this program installed for security reasons. strace doesn't have many library dependencies (on my machine it has the same dependencies as 'echo'), so if you have permission, (or are feeling sneaky) you can just copy the executable up.
Other useful tidbits
You can attach to running processes (can be handy if your program appears to hang or the issue is not readily reproducible) with -p.

If you're looking at performance issues, then the time flags (-t, -tt, -ttt, and -T) can help significantly.

vasudevram February 11, 2018 at 5:29 pm
Interesting post. One point: The errors start earlier than what you said. There is a call to access() near the top of the strace output, which fails:
access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
I guess that could trigger the other errors.
Benji Wiebe February 11, 2018 at 7:30 pm
A failed access or open system call is not usually an error in the context of launching a program. Generally it is merely checking if a config file exists.
vasudevram February 11, 2018 at 8:24 pm
>A failed access or open system call is not usually an error in the context of launching a program.
Yes, good point, that could be so, if the programmer meant to ignore the error, and if it was not an issue to do so.
>Generally it is merely checking if a config file exists.
The file name being access'ed is "/etc/ld.so.nohwcap" – not sure if it is a config file or not.
Jul 08, 2020 | www.tldp.org
Appendix E. Exit Codes With Special Meanings

Table E-1. Reserved Exit Codes

Exit Code  Meaning                                                      Example                  Comments
1          Catchall for general errors                                  let "var1 = 1/0"         Miscellaneous errors, such as "divide by zero" and other impermissible operations
2          Misuse of shell builtins (according to Bash documentation)   empty_function() {}      Missing keyword or command, or permission problem (and diff return code on a failed binary file comparison)
126        Command invoked cannot execute                               /dev/null                Permission problem or command is not an executable
127        "command not found"                                          illegal_command          Possible problem with $PATH or a typo
128        Invalid argument to exit                                     exit 3.14159             exit takes only integer args in the range 0 - 255 (see first footnote)
128+n      Fatal error signal "n"                                       kill -9 $PPID of script  $? returns 137 (128 + 9)
130        Script terminated by Control-C                               Ctl-C                    Control-C is fatal error signal 2 (130 = 128 + 2, see above)
255*       Exit status out of range                                     exit -1                  exit takes only integer args in the range 0 - 255

According to the above table, exit codes 1 - 2, 126 - 165, and 255 [1] have special meanings, and should therefore be avoided for user-specified exit parameters. Ending a script with exit 127 would certainly cause confusion when troubleshooting (is the error code a "command not found" or a user-defined one?). However, many scripts use an exit 1 as a general bailout-upon-error. Since exit code 1 signifies so many possible errors, it is not particularly useful in debugging.
There has been an attempt to systematize exit status numbers (see /usr/include/sysexits.h ), but this is intended for C and C++ programmers. A similar standard for scripting might be appropriate. The author of this document proposes restricting user-defined exit codes to the range 64 - 113 (in addition to 0 , for success), to conform with the C/C++ standard. This would allot 50 valid codes, and make troubleshooting scripts more straightforward. [2] All user-defined exit codes in the accompanying examples to this document conform to this standard, except where overriding circumstances exist, as in Example 9-2 .
* Issuing a $? from the command-line after a shell script exits gives results consistent with the table above only from the Bash or sh prompt. Running the C-shell or tcsh may give different values in some cases.

Notes

[1] Out of range exit values can result in unexpected exit codes. An exit value greater than 255 returns an exit code modulo 256. For example, exit 3809 gives an exit code of 225 (3809 % 256 = 225).

[2] An update of /usr/include/sysexits.h allocates previously unused exit codes from 64 - 78. It may be anticipated that the range of unallotted exit codes will be further restricted in the future. The author of this document will not do fixups on the scripting examples to conform to the changing standard. This should not cause any problems, since there is no overlap or conflict in usage of exit codes between compiled C/C++ binaries and shell scripts.
Jul 08, 2020 | zwischenzugs.com
Not everyone knows that every time you run a shell command in bash, an 'exit code' is returned to bash.
Generally, if a command 'succeeds' you get an exit code of 0. If it doesn't succeed, you get a non-zero code. 1 is a 'general error', and others can give you more information (e.g. which signal killed the process). 255 is the upper limit and usually signals an "internal error".

grep joeuser /etc/passwd  # in case of success returns 0, otherwise 1

or:

grep not_there /dev/null
echo $?

$? is a special bash variable that's set to the exit code of each command after it runs.

Grep uses exit codes to indicate whether it matched or not. I have to look up every time which way round it goes: does finding a match or not return 0?
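For the record: grep returns 0 when it finds a match, 1 when it finds none, and 2 on a real error, which is exactly what makes it usable directly in a condition:

if grep -q 'joeuser' /etc/passwd; then
    echo "match found (grep exited 0)"
else
    echo "no match (grep exited 1)"
fi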
Sep 11, 2009 | www.linuxjournal.com
Bash functions, unlike functions in most programming languages, do not allow you to return a value to the caller. When a bash function ends, its return value is its status: zero for success, non-zero for failure. To return values, you can set a global variable with the result, use command substitution, or pass in the name of a variable to use as the result variable. The examples below describe these different mechanisms.
Although bash has a return statement, the only thing you can specify with it is the function's status, which is a numeric value like the value specified in an exit statement. The status value is stored in the $? variable. If a function does not contain a return statement, its status is set based on the status of the last statement executed in the function. To actually return arbitrary values to the caller you must use other mechanisms.
The simplest way to return a value from a bash function is to just set a global variable to the result. Since all variables in bash are global by default this is easy:
function myfunc() {
    myresult='some value'
}

myfunc
echo $myresult

The code above sets the global variable myresult to the function result. Reasonably simple, but as we all know, using global variables, particularly in large programs, can lead to difficult-to-find bugs.
A better approach is to use local variables in your functions. The problem then becomes how do you get the result to the caller. One mechanism is to use command substitution:
function myfunc() {
    local myresult='some value'
    echo "$myresult"
}

result=$(myfunc)   # or result=`myfunc`
echo $result

Here the result is output to stdout and the caller uses command substitution to capture the value in a variable. The variable can then be used as needed.
The other way to return a value is to write your function so that it accepts a variable name as part of its command line and then set that variable to the result of the function:
function myfunc() {
    local __resultvar=$1
    local myresult='some value'
    eval $__resultvar="'$myresult'"
}

myfunc result
echo $result

Since we have the name of the variable to set stored in a variable, we can't set the variable directly, we have to use eval to actually do the setting. The eval statement basically tells bash to interpret the line twice: the first interpretation above results in the string result='some value', which is then interpreted once more and ends up setting the caller's variable.
When you store the name of the variable passed on the command line, make sure you store it in a local variable with a name that won't be (unlikely to be) used by the caller (which is why I used __resultvar rather than just resultvar ). If you don't, and the caller happens to choose the same name for their result variable as you use for storing the name, the result variable will not get set. For example, the following does not work:
function myfunc() {
    local result=$1
    local myresult='some value'
    eval $result="'$myresult'"
}

myfunc result
echo $result

The reason it doesn't work is because when eval does the second interpretation and evaluates result='some value', result is now a local variable in the function, and so it gets set rather than setting the caller's result variable.
For more flexibility, you may want to write your functions so that they combine both result variables and command substitution:
function myfunc() {
    local __resultvar=$1
    local myresult='some value'
    if [[ "$__resultvar" ]]; then
        eval $__resultvar="'$myresult'"
    else
        echo "$myresult"
    fi
}

myfunc result
echo $result

result2=$(myfunc)
echo $result2

Here, if no variable name is passed to the function, the value is output to the standard output.
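A newer alternative the article doesn't cover: since Bash 4.3 you can declare a nameref with local -n, which avoids eval entirely. A minimal sketch, assuming Bash 4.3+:

function myfunc() {
    # nameref: __resultref becomes an alias for the caller's variable;
    # as above, pick a name the caller is unlikely to use itself
    local -n __resultref=$1
    __resultref='some value'
}

myfunc result
echo $result   # prints: some value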
Mitch Frazier is an embedded systems programmer at Emerson Electric Co. Mitch has been a contributor to and a friend of Linux Journal since the early 2000s.
David Krmpotic • 6 years ago • edited
This is the best way: http://stackoverflow.com/a/... return by reference:

function pass_back_a_string() {
    eval "$1='foo bar rab oof'"
}

return_var=''
pass_back_a_string return_var
echo $return_var

phil • 6 years ago
I agree. After reading this passage, the same idea as yours occurred to me.
lxw • 6 years ago
Since this page is a top hit on google:

The only real issue I see with returning via echo is that forking the process means no longer allowing it access to set 'global' variables. They are still global in the sense that you can retrieve them and set them within the new forked process, but as soon as that process is done, you will not see any of those changes.

e.g.

#!/bin/bash

myGlobal="very global"

call1() {
    myGlobal="not so global"
    echo "${myGlobal}"
}

tmp=$(call1)        # keep in mind '$()' starts a new process
echo "${tmp}"       # prints "not so global"
echo "${myGlobal}"  # prints "very global"

code_monk • 6 years ago • edited
Hello everyone,
In the 3rd method, I don't think the local variable __resultvar is necessary to use. Any problems with the following code?

function myfunc() {
    local myresult='some value'
    eval "$1"="'$myresult'"
}

myfunc result
echo $result

Emil Vikström, in reply to code_monk • 5 years ago
I would caution against returning integers with "return $int". My code was working fine until it came across a -2 (negative two) and treated it as if it were 254, which tells me that bash functions return 8-bit unsigned ints that are not protected from overflow.
A function behaves as any other Bash command, and indeed POSIX processes. That is, they can write to stdout, read from stdin and have a return code. The return code is, as you have already noticed, a value between 0 and 255. By convention 0 means success while any other return code means failure.
This is also why Bash "if" statements treat 0 as success and non-zero as failure (most other programming languages do the opposite).
Jul 07, 2020 | zwischenzugs.com
The Missing Readline Primer (zwischenzugs, April 23, 2019)
Readline is one of those technologies that is so commonly used many users don't realise it's there.
I went looking for a good primer on it so I could understand it better, but failed to find one. This is an attempt to write a primer that may help users get to grips with it, based on what I've managed to glean as I've tried to research and experiment with it over the years.
Bash Without Readline

First you're going to see what bash looks like without readline.

In your 'normal' bash shell, hit the TAB key twice. You should see something like this:

Display all 2335 possibilities? (y or n)

That's because bash normally has an 'autocomplete' function that allows you to see what commands are available to you if you tap tab twice.

Hit n to get out of that autocomplete.

Another useful function that's commonly used is that if you hit the up arrow key a few times, then the previously-run commands should be brought back to the command line.

Now type:

$ bash --noediting

The --noediting flag starts up bash without the readline library enabled.

If you hit TAB twice now you will see something different: the shell no longer 'sees' your tab and just sends a tab direct to the screen, moving your cursor along. Autocomplete has gone.

Autocomplete is just one of the things that the readline library gives you in the terminal. You might want to try hitting the up or down arrows as you did above to see that that no longer works as well.

Hit return to get a fresh command line, and exit your non-readline-enabled bash shell:

$ exit

Other Shortcuts

There are a great many shortcuts like autocomplete available to you if readline is enabled. I'll quickly outline four of the most commonly-used of these before explaining how you can find out more. Type this:
$ echo 'some command'

There should not be many surprises there. Now if you hit the 'up' arrow, you will see you can get the last command back on your line. If you like, you can re-run the command, but there are other things you can do with readline before you hit return.

If you hold down the ctrl key and then hit a at the same time, your cursor will return to the start of the line. Another way of representing this 'multi-key' way of inputting is to write it like this: \C-a. This is one conventional way to represent this kind of input. The \C represents the control key, and the -a represents that the a key is depressed at the same time.

Now if you hit \C-e (ctrl and e) then your cursor has moved to the end of the line. I use these two dozens of times a day.

Another frequently useful one is \C-l, which clears the screen, but leaves your command line intact.

The last one I'll show you allows you to search your history to find matching commands while you type. Hit \C-r, and then type ec. You should see the echo command you just ran, like this:

(reverse-i-search)`ec': echo 'some command'

Then do it again, but keep hitting \C-r over and over. You should see all the commands that have `ec` in them that you've input before (if you've only got one echo command in your history then you will only see one). As you see them you are placed at that point in your history and you can move up and down from there or just hit return to re-run if you want.

There are many more shortcuts that you can use that readline gives you. Next I'll show you how to view these.

Using `bind` to Show Readline Shortcuts
If you type:

$ bind -p

You will see a list of bindings that readline is capable of. There's a lot of them!

Have a read through if you're interested, but don't worry about understanding them all yet.

If you type:

$ bind -p | grep C-a

you'll pick out the 'beginning-of-line' binding you used before, and see the \C-a notation I showed you before.

As an exercise at this point, you might want to look for the \C-e and \C-r bindings we used previously.

If you want to look through the entirety of the bind -p output, then you will want to know that \M refers to the Meta key (which you might also know as the Alt key), and \e refers to the Esc key on your keyboard. The 'escape' key bindings are different in that you don't hit it and another key at the same time, rather you hit it, and then hit another key afterwards. So, for example, typing the Esc key, and then the ? key also tries to auto-complete the command you are typing. This is documented as:

"\e?": possible-completions

in the bind -p output.

Readline and Terminal Options

If you've looked over the possibilities that readline offers you, you might have seen the \C-r binding we looked at earlier:

"\C-r": reverse-search-history

You might also have seen that there is another binding that allows you to search forward through your history too:
"\C-s": forward-search-historyWhat often happens to me is that I hit
\C-r
over and over again, and then go too fast through the history and fly past the command I was looking for. In these cases I might try to hit\C-s
to search forward and get to the one I missed.Watch out though! Hitting
\C-s
to search forward through the history might well not work for you.Why is this, if the binding is there and readline is switched on?
It's because something picked up the
\C-s
before it got to the readline library: the terminal settings.The terminal program you are running in may have standard settings that do other things on hitting some of these shortcuts before readline gets to see it.
If you type:
$ stty -e

you should get output similar to this:

speed 9600 baud; 47 rows; 202 columns;
lflags: icanon isig iexten echo echoe -echok echoke -echonl echoctl -echoprt -altwerase -noflsh -tostop -flusho pendin -nokerninfo -extproc
iflags: -istrip icrnl -inlcr -igncr ixon -ixoff ixany imaxbel -iutf8 -ignbrk brkint -inpck -ignpar -parmrk
oflags: opost onlcr -oxtabs -onocr -onlret
cflags: cread cs8 -parenb -parodd hupcl -clocal -cstopb -crtscts -dsrflow -dtrflow -mdmbuf
discard dsusp   eof     eol     eol2    erase   intr    kill    lnext
^O      ^Y      ^D      <undef> <undef> ^?      ^C      ^U      ^V
min     quit    reprint start   status  stop    susp    time    werase
1       ^\      ^R      ^Q      ^T      ^S      ^Z      0       ^W

You can see on the last four lines (discard dsusp [...]) there is a table of key bindings that your terminal will pick up before readline sees them. The ^ character (known as the 'caret') here represents the ctrl key that we previously represented with a \C.

If you think this is confusing I won't disagree. Unfortunately, in the history of Unix and Linux, documenters did not stick to one way of describing these key combinations.
If you encounter a problem where the terminal options seem to catch a shortcut key binding before it gets to readline, then you can use the stty program to unset that binding. In this case, we want to unset the 'stop' binding.

If you are in the same situation, type:

$ stty stop undef

Now, if you re-run stty -e, the last two lines might look like this:

[...]
min     quit    reprint start   status  stop    susp    time    werase
1       ^\      ^R      ^Q      ^T      <undef> ^Z      0       ^W

where the stop entry now has <undef> underneath it.

Strangely, for me C-r is also bound to 'reprint' above (^R). But (on my terminals at least) that gets to readline without issue as I search up the history. Why this is the case I haven't been able to figure out. I suspect that reprint is ignored by modern terminals that don't need to 'reprint' the current line.
While we are looking at this table:

discard dsusp   eof     eol     eol2    erase   intr    kill    lnext
^O      ^Y      ^D      <undef> <undef> ^?      ^C      ^U      ^V
min     quit    reprint start   status  stop    susp    time    werase
1       ^\      ^R      ^Q      ^T      <undef> ^Z      0       ^W

it's worth noting a few other key bindings that are used regularly.

First, one you may well already be familiar with is \C-c, which interrupts a program, terminating it:

$ sleep 99
[[Hit \C-c]]
^C
$

Similarly, \C-z suspends a program, allowing you to 'foreground' it again and continue with the fg builtin:

$ sleep 10
[[Hit \C-z]]
^Z
[1]+  Stopped                 sleep 10
$ fg
sleep 10

\C-d sends an 'end of file' character. It's often used to indicate to a program that input is over. If you type it on a bash shell, the bash shell you are in will close.

Finally, \C-w deletes the word before the cursor.

These are the most commonly-used shortcuts that are picked up by the terminal before they get to the readline library.
Daz April 29, 2019 at 11:15 pm
Hi Ian,
What OS are you running? Because stty -e gives the following on CentOS 6.x and Ubuntu 18.04.2:

stty -e
stty: invalid argument '-e'
Try 'stty --help' for more information.

yachris May 16, 2019 at 4:40 pm
`stty -a` works for me (Ubuntu 14)
AriSweedler May 17, 2019 at 4:50 am
You might want to check out the 'rlwrap' program. It allows you to have readline behavior on programs that don't natively support readline, but which have a 'type in a command' type interface. For instance, we use Oracle here (alas :-) ) and the 'sqlplus' program, which lets you type SQL commands to an Oracle instance, does not have anything like readline built into it, so you can't go back to edit previous commands. But running 'rlwrap sqlplus' gives me readline behavior in sqlplus! It's fantastic to have.
I was told to use this in a class, and I didn't understand what I did. One rabbit hole later, I was shocked and amazed at how advanced the readline library is. One thing I'd like to add is that you can write a '~/.inputrc' file and have those readline commands sourced at startup!
I do not know exactly when or how the inputrc is read.
Most of what I learned about inputrc stuff is from https://www.topbug.net/blog/2017/07/31/inputrc-for-humans/ .
Here is my inputrc, if anyone wants: https://github.com/AriSweedler/dotfiles/blob/master/.inputrc .
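Following up on the inputrc point: readline reads ~/.inputrc at startup (and you can re-read it in a running bash with \C-x \C-r). A few illustrative lines, chosen here as common examples rather than taken from the comment above:

# ~/.inputrc: make the arrow keys search history using the text already typed
"\e[A": history-search-backward
"\e[B": history-search-forward
# case-insensitive tab completion
set completion-ignore-case on
# show all matches at once instead of asking "Display all N possibilities?"
set show-all-if-ambiguous on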
Jul 02, 2020 | www.redhat.com
These tips and tricks will make your Linux command line experience easier and more efficient.
This blog post is the second of two covering some practical tips and tricks to get the most out of the Bash shell. In part one , I covered history, last argument, working with files and directories, reading files, and Bash functions. In this segment, I cover shell variables, find, file descriptors, and remote operations.
Use shell variables

The Bash variables are set by the shell when invoked. Why would I use hostname when I can use $HOSTNAME, or why would I use whoami when I can use $USER? Bash variables are very fast and do not require external applications.

These are a few frequently-used variables:

$PATH
$HOME
$USER
$HOSTNAME
$PS1 .. $PS4

Use the echo command to expand variables. For example, the $PATH shell variable can be expanded by running:

$> echo $PATH
Use the find command

The find command is probably one of the most used tools within the Linux operating system. It is extremely useful in interactive shells. It is also used in scripts. With find I can list files older or newer than a specific date, delete them based on that date, change permissions of files or directories, and so on.

Let's get more familiar with this command.

To list files older than 30 days, I simply run:

$> find /tmp -type f -mtime +30

To delete files older than 30 days, run:

$> find /tmp -type f -mtime +30 -exec rm -rf {} \;

or

$> find /tmp -type f -mtime +30 -exec rm -rf {} +

Both commands delete files older than 30 days, but the first (with \;) forks an rm process for every single file found, whereas the + form batches many filenames into each rm invocation. The same batching can be had by piping to xargs:

$> find /tmp -name '*.tmp' -print0 | xargs -0 rm

I can use find to list the sha256sum of files by running:

$> find . -type f -exec sha256sum {} +

And now to search for and get rid of duplicate .jpg files:

$> find . -type f -name '*.jpg' -exec sha256sum {} + | sort -uk1,1

Reference file descriptors

In the Bash shell, file descriptors (FDs) are important in managing the input and output of commands. Many people have issues understanding file descriptors correctly. Each process has three default file descriptors, namely:
Code / Meaning / Location / Description:

- 0 : Standard input : /dev/stdin : keyboard, file, or some stream
- 1 : Standard output : /dev/stdout : monitor, terminal, display
- 2 : Standard error : /dev/stderr : error messages usually go to FD 2, display

Now that you know what the default FDs do, let's see them in action. I start by creating a directory named foo, which contains file1.

$> ls foo/ bar/
ls: cannot access 'bar/': No such file or directory
foo/:
file1

The output "No such file or directory" goes to Standard Error (stderr) and is also displayed on the screen. I will run the same command, but this time use 2> to omit stderr:

$> ls foo/ bar/ 2>/dev/null
foo/:
file1

It is possible to send the output of foo to Standard Output (stdout) and to a file simultaneously, and ignore stderr. For example:

$> { ls foo bar | tee -a ls_out_file ;} 2>/dev/null
foo:
file1

Then:

$> cat ls_out_file
foo:
file1

The following command sends stdout to a file and stderr to /dev/null so that the error won't display on the screen:

$> ls foo/ bar/ >to_stdout 2>/dev/null
$> cat to_stdout
foo/:
file1

The following command sends stdout and stderr to the same file:

$> ls foo/ bar/ >mixed_output 2>&1
$> cat mixed_output
ls: cannot access 'bar/': No such file or directory
foo/:
file1

This is what happened in the last example, where stdout and stderr were redirected to the same file:

ls foo/ bar/ >mixed_output 2>&1
             |             |
             |             +-- redirect stderr to where stdout is sent
             +-- stdout is sent to mixed_output

Another short trick (Bash > 4.4) to send both stdout and stderr to the same file uses the ampersand sign. For example:

$> ls foo/ bar/ &>mixed_output

Here is a more complex redirection:
exec 3>&1 >write_to_file; echo "Hello World"; exec 1>&3 3>&-

This is what occurs (a runnable, annotated version follows the list):

- exec 3>&1 : Copy stdout to file descriptor 3
- >write_to_file : Make FD 1 write to the file
- echo "Hello World" : Goes into the file, because FD 1 now points to the file
- exec 1>&3 : Copy FD 3 back to FD 1 (swap)
- 3>&- : Close file descriptor 3 (we don't need it anymore)
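Put together as a snippet you can paste into a shell:

exec 3>&1 >write_to_file    # save the current stdout on FD 3, then point FD 1 at the file
echo "Hello World"          # goes into write_to_file, not to the screen
exec 1>&3 3>&-              # restore stdout from FD 3, then close FD 3
cat write_to_file           # prints: Hello World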
Often it is handy to group commands, and then send the Standard Output to a single file. For example:
$> { ls non_existing_dir; non_existing_command; echo "Hello world"; } 2> to_stderr
Hello world

As you can see, only "Hello world" is printed on the screen, but the output of the failed commands is written to the to_stderr file.
Execute remote operations

I use Telnet, netcat, Nmap, and other tools to test whether a remote service is up and whether I can connect to it. These tools are handy, but they aren't installed by default on all systems.

Fortunately, there is a simple way to test a connection without using external tools. To see if a remote server is running a web, database, SSH, or any other service, run:

$> timeout 3 bash -c '</dev/tcp/remote_server/remote_port' || echo "Failed to connect"

For example, to see if serverA is running the MariaDB service:

$> timeout 3 bash -c '</dev/tcp/serverA/3306' || echo "Failed to connect"

If the connection fails, the Failed to connect message is displayed on your screen.
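Since this check comes up often, it is convenient to wrap it in a small helper function (a sketch; the function name is mine, not part of the article):

# Succeeds (exit 0) if host:port accepts a TCP connection within 3 seconds
is_port_open() {
    local host=$1 port=$2
    timeout 3 bash -c "</dev/tcp/${host}/${port}" 2>/dev/null
}

is_port_open serverA 3306 && echo "MariaDB reachable" || echo "Failed to connect"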
Assume serverA is behind a firewall/NAT. I want to see if the firewall is configured to allow a database connection to serverA, but I haven't installed a database server yet. To emulate a database port (or any other port), I can use the following:

[serverA ~]# nc -l 3306

On clientA, run:

[clientA ~]# timeout 3 bash -c '</dev/tcp/serverA/3306' || echo "Failed"

While I am discussing remote connections, what about running commands on a remote server over SSH? I can use the following command:

$> ssh remotehost <<EOF  # Press the Enter key here
> ls /etc
EOF

This command runs ls /etc on the remote host.

I can also execute a local script on the remote host without having to copy the script over to the remote server. One way is to enter:

$> ssh remote_host 'bash -s' < local_script

Another example is to pass environment variables locally to the remote server and terminate the session after execution:

$> exec ssh remote_host ARG1=FOO ARG2=BAR 'bash -s' <<'EOF'
> printf %s\\n "$ARG1" "$ARG2"
> EOF
Password:
FOO
BAR
Connection to remote_host closed.

There are many other complex actions I can perform on the remote host.
Wrap up

There is certainly more to Bash than I was able to cover in this two-part blog post. I am sharing what I know and what I deal with daily. The idea is to familiarize you with a few techniques that could make your work less error-prone and more fun.

Valentin Bajrami
Valentin is a system engineer with more than six years of experience in networking, storage, high-performing clusters, and automation. He is involved in different open source projects like bash, Fedora, Ceph, FreeBSD and is a member of Red Hat Accelerators.
Jun 06, 2020 | www.cyberciti.biz
... ... ...
Redirecting the standard error stream to a file

The following will redirect program error messages to a file called error.log:

$ program-name 2> error.log
$ command1 2> error.log

For example, use the grep command for recursive search in the $HOME directory and redirect all errors (stderr) to a file named grep-errors.txt as follows:

$ grep -R 'MASTER' $HOME 2> /tmp/grep-errors.txt
$ cat /tmp/grep-errors.txt

Sample outputs:

grep: /home/vivek/.config/google-chrome/SingletonSocket: No such device or address
grep: /home/vivek/.config/google-chrome/SingletonCookie: No such file or directory
grep: /home/vivek/.config/google-chrome/SingletonLock: No such file or directory
grep: /home/vivek/.byobu/.ssh-agent: No such device or address

Redirecting the standard error (stderr) and stdout to a file

Use the following syntax:
$ command-name &>file

We can also use the following syntax:

$ command > file-name 2>&1

We can write both stderr and stdout to two different files too. Let us try out our previous grep command example:

$ grep -R 'MASTER' $HOME 2> /tmp/grep-errors.txt 1> /tmp/grep-outputs.txt
$ cat /tmp/grep-outputs.txt

Here is another useful example where both stderr and stdout are sent to the more command instead of a file:

# find /usr/home -name .profile 2>&1 | more

How to redirect stderr to stdout in a Bash script

Use the command as follows:

$ command-name 2>&1
$ command-name > file.txt 2>&1
## bash only ##
$ command2 &> filename
$ sudo find / -type f -iname ".env" &> /tmp/search.txt
Redirection is processed from left to right. Hence, order matters. For example:

command-name 2>&1 > file.txt  ## wrong ##
command-name > file.txt 2>&1  ## correct ##

A sample shell script used to update a VM when created in the AWS/Linode server:
#!/usr/bin/env bash
# Author - nixCraft under GPL v2.x+
# Debian/Ubuntu Linux script for EC2 automation on first boot
# ------------------------------------------------------------
# My log file - Save stdout to $LOGFILE
LOGFILE="/root/logs.txt"
# My error file - Save stderr to $ERRFILE
ERRFILE="/root/errors.txt"
# Start it
printf "Starting update process ... \n" 1>"${LOGFILE}"
# All errors should go to error file
apt-get -y update 2>"${ERRFILE}"
apt-get -y upgrade 2>>"${ERRFILE}"
printf "Rebooting cloudserver ... \n" 1>>"${LOGFILE}"
shutdown -r now 2>>"${ERRFILE}"

Our last example uses the exec command and FDs along with trap and custom bash functions:
#!/bin/bash
# Send both stdout/stderr to a /root/aws-ec2-debian.log file
# Works with Ubuntu Linux too.
# Use exec for FD and trap it using the trap
# See bash man page for more info
# Author: nixCraft under GPL v2.x+
# ---------------------------------------------
exec 3>&1 4>&2
trap 'exec 2>&4 1>&3' 0 1 2 3
exec 1>/root/aws-ec2-debian.log 2>&1

# log message
log(){
    local m="$@"
    echo ""
    echo "*** ${m} ***"
    echo ""
}

log "$(date) @ $(hostname)"
## Install stuff ##
log "Updating up all packages"
export DEBIAN_FRONTEND=noninteractive
apt-get -y clean
apt-get -y update
apt-get -y upgrade
apt-get -y --purge autoremove

## Update sshd config ##
log "Configuring sshd_config"
sed -i'.BAK' -e 's/PermitRootLogin yes/PermitRootLogin no/g' -e 's/#PasswordAuthentication yes/PasswordAuthentication no/g' /etc/ssh/sshd_config

## Hide processes from other users ##
log "Update /etc/fstab to hide processes from each other"
echo 'proc /proc proc defaults,nosuid,nodev,noexec,relatime,hidepid=2 0 0' >> /etc/fstab

## Install LXD and stuff ##
log "Installing LXD/wireguard/vnstat and other packages on this box"
apt-get -y install lxd wireguard vnstat expect mariadb-server

log "Configuring mysql with mysql_secure_installation"
SECURE_MYSQL_EXEC=$(expect -c "
set timeout 10
spawn mysql_secure_installation
expect \"Enter current password for root (enter for none):\"
send \"$MYSQL\r\"
expect \"Change the root password?\"
send \"n\r\"
expect \"Remove anonymous users?\"
send \"y\r\"
expect \"Disallow root login remotely?\"
send \"y\r\"
expect \"Remove test database and access to it?\"
send \"y\r\"
expect \"Reload privilege tables now?\"
send \"y\r\"
expect eof
")

# log to file #
echo " $SECURE_MYSQL_EXEC "
# We no longer need expect
apt-get -y remove expect

# Reboot the EC2 VM
log "END: Rebooting requested @ $(date) by $(hostname)"
reboot

WANT BOTH STDERR AND STDOUT TO THE TERMINAL AND A LOG FILE TOO?

Try the tee command as follows:
command1 2>&1 | tee filename
Here is how to use it inside a shell script too:

#!/usr/bin/env bash
{
    command1
    command2 | do_something
} 2>&1 | tee /tmp/outputs.log

Conclusion

In this quick tutorial, you learned about three file descriptors: stdin, stdout, and stderr. We can use these Bash descriptors to redirect stdout/stderr to a file or vice versa. See the bash man page for more:
Operator / Description / Example:

- command > filename : Redirect stdout to file "filename". Example: date > output.txt
- command >> filename : Redirect and append stdout to file "filename". Example: ls -l >> dirs.txt
- command 2> filename : Redirect stderr to file "filename". Example: du -ch /snaps/ 2> space.txt
- command 2>> filename : Redirect and append stderr to file "filename". Example: awk '{ print $4}' input.txt 2>> data.txt
- command &> filename (or command > filename 2>&1) : Redirect both stdout and stderr to file "filename". Example: grep -R foo /etc/ &>out.txt
- command &>> filename (or command >> filename 2>&1) : Redirect and append both stdout and stderr to file "filename". Example: whois domain &>>log.txt

Vivek Gite is the creator of nixCraft and a seasoned sysadmin, DevOps engineer, and a trainer for the Linux operating system/Unix shell scripting. Get the latest tutorials on SysAdmin, Linux/Unix and open source topics via RSS/XML feed or weekly email newsletter.
- Matt Kukowski says: January 29, 2014 at 6:33 pm
In pre-bash4 days you HAD to do it this way:
cat file > file.txt 2>&1
Now with bash 4 and greater versions you can still do it the old way, but:
cat file &> file.txt
The above is bash4+. Some OLD distros may use pre-bash4, but I think they are all long gone by now. Just something to keep in mind.
- iamfrankenstein says: June 12, 2014 at 8:35 pm
I really love: "command 2>&1 | tee logfile.txt"

because tee logs everything and prints to stdout. So you still get to see everything! You can even combine sudo to downgrade to a log user account and add date's subject and store it in a default log directory :)
Jul 05, 2020 | leanpub.com
skeptic, 5.0 out of 5 stars, reviewed in the United States on July 2, 2020

A short (160 pages) book that covers some difficult aspects of bash needed to customize your bash environment.
Whether we want it or not, bash is the shell you face in Linux, and unfortunately, it is often misunderstood and misused. Issues related to creating your bash environment are not well addressed in existing books. This book fills the gap.
Few authors understand that bash is a complex, non-orthogonal language operating in a complex Linux environment. To make things worse, bash is an evolution of the Unix shell and is a rather old language, warts and all. Using it properly as a programming language requires serious study, not just an introduction to the basic concepts. Even issues related to customization of dotfiles are far from trivial, and you need to know quite a bit to do it properly.
At the same time, proper customization of the bash environment does increase your productivity (or at least lessens the frustration of using Linux on the command line ;-)

The author covered the most important concepts related to this task, such as bash history, functions, variables, environment inheritance, etc. It is really sad to watch how the majority of Linux users do not use these opportunities and forever remain at "level zero", using default dotfiles with bare-minimum customization.

This book contains some valuable tips even for a seasoned sysadmin (for example, the use of !& in pipes), and as such is worth at least double the suggested price. It allows you to intelligently customize your bash environment after reading just 160 pages and doing the suggested exercises.
Contents:
- Foreword
- Learn Bash the Hard Way
- Introduction
- Structure
- Part I - Core Bash
- What is Bash?
- Unpicking the Shell: Globbing and Quoting
- Variables in Bash
- Functions in Bash
- Pipes and redirects
- Scripts and Startups
- Part II - Scripting Bash
- Command Substitution and Evaluation
- Exit Codes
- Tests
- Loops
- The set Builtin
- Process Substitution
- Subshells
- Internal Field Separator
- Part III - Bash Features
- Readline
- Terminal Codes and Non-Standard Characters
- The Prompt
- Here Documents
- History
- Bash in Practice
- Part IV - Advanced Bash
- Job Control
- Traps and Signals
- Advanced Variables
- String Manipulation
- Debugging Bash Scripts
- Autocomplete
- Example Bash Script
- Finished!
Jul 04, 2020 | zwischenzugs.com
Here are some tips that might help you be more productive with bash.
1) ^x^y^

A gem I use all the time.

Ever typed anything like this?

$ grp somestring somefile
-bash: grp: command not found

Sigh. Hit 'up', 'left' until at the 'p' and type 'e' and return.

Or do this:

$ ^rp^rep^
grep 'somestring' somefile
$

One subtlety you may want to note though is:

$ grp rp somefile
$ ^rp^rep^
$ grep rp somefile

If you wanted rep to be searched for, then you'll need to dig into the man page and use a more powerful history command:

$ grp rp somefile
$ !!:gs/rp/rep
grep rep somefile
$
Material here based on material from my book
Learn Bash the Hard Way .
Free preview available here .
3) shopt vs set

This one bothered me for a while. What's the difference between set and shopt? We saw set before, but shopt looks very similar. Just inputting shopt shows a bunch of options:

$ shopt
cdable_vars     off
cdspell         on
checkhash       off
checkwinsize    on
cmdhist         on
compat31        off
dotglob         off

I found a set of answers here. Essentially, it looks like it's a consequence of bash (and other shells) being built on sh, and adding shopt as another way to set extra shell options. But I'm still unsure; if you know the answer, let me know.

4) Here Docs and Here Strings

'Here docs' are files created inline in the shell.
The 'trick' is simple. Define a closing word, and the lines between that word and when it appears alone on a line become a file.
Type this:
$ cat > afile << SOMEENDSTRING
> here is a doc
> it has three lines
> SOMEENDSTRING alone on a line will save the doc
> SOMEENDSTRING
$ cat afile
here is a doc
it has three lines
SOMEENDSTRING alone on a line will save the doc

Notice that:
- the string could be included in the file if it was not 'alone' on the line
- the string SOMEENDSTRING is more normally END, but that is just convention
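A related detail not mentioned above: quoting the delimiter turns off variable expansion inside the doc, which matters as soon as the doc contains a $. A quick sketch (my own example):

$ NAME=world
$ cat << EOF
> hello $NAME
> EOF
hello world
$ cat << 'EOF'
> hello $NAME
> EOF
hello $NAME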
Lesser known is the 'here string':

$ cat > asd <<< 'This file has one line'

5) String Variable Manipulation

You may have written code like this before, where you use tools like sed to manipulate strings:
$ VAR='HEADERMy voice is my passwordFOOTER'
$ PASS="$(echo $VAR | sed 's/^HEADER\(.*\)FOOTER$/\1/')"
$ echo $PASS

But you may not be aware that this is possible natively in bash.
This means that you can dispense with lots of sed and awk shenanigans.
One way to rewrite the above is:

$ VAR='HEADERMy voice is my passwordFOOTER'
$ PASS="${VAR#HEADER}"
$ PASS="${PASS%FOOTER}"
$ echo $PASS

- The # means 'match and remove the following pattern from the start of the string'
- The % means 'match and remove the following pattern from the end of the string'

The second method is twice as fast as the first on my machine. And (to my surprise), it was roughly the same speed as a similar python script.
If you want to use glob patterns that are greedy (see globbing here) then you double up:

$ VAR='HEADERMy voice is my passwordFOOTER'
$ echo ${VAR##HEADER*}
$ echo ${VAR%%*FOOTER}
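In the same family, bash can also do substring replacement natively with ${VAR/pattern/replacement}, which covers many simple sed use cases. A quick sketch (my own example, not from the post):

$ VAR='my voice is my password'
$ echo "${VAR/my/our}"     # replace the first match
our voice is my password
$ echo "${VAR//my/our}"    # replace all matches
our voice is our password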
6) Variable Defaults

These are very handy when you're knocking up scripts quickly.

If you have a variable that's not set, you can 'default' it by using this. Create a file called default.sh with these contents:

#!/bin/bash
FIRST_ARG="${1:-no_first_arg}"
SECOND_ARG="${2:-no_second_arg}"
THIRD_ARG="${3:-no_third_arg}"
echo ${FIRST_ARG}
echo ${SECOND_ARG}
echo ${THIRD_ARG}
chmod +x default.sh
and run the script with./default.sh first second
.Observer how the third argument's default has been assigned, but not the first two.
You can also assign directly with ${VAR:=defaultval} (equals sign, not dash), but note that this won't work with positional variables in scripts or functions. Try changing the above script to see how it fails.
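A quick illustration of the := form at the prompt (my own example):

$ unset MYVAR
$ echo "${MYVAR:=fallback}"   # substitutes AND assigns in one step
fallback
$ echo "$MYVAR"
fallback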
7) Traps

The trap built-in can be used to 'catch' when a signal is sent to your script. Here's an example I use in my own cheapci script:

function cleanup() {
    rm -rf "${BUILD_DIR}"
    rm -f "${LOCK_FILE}"
    # get rid of /tmp detritus, leaving anything accessed 2 days ago+
    find "${BUILD_DIR_BASE}"/* -type d -atime +1 -exec rm -rf {} +
    echo "cleanup done"
}

trap cleanup TERM INT QUIT

Any attempt to CTRL-C, CTRL-\, or terminate the program using the TERM signal will result in cleanup being called first.

Be aware:

- Trap logic can get very tricky (eg handling signal race conditions)
- The KILL signal can't be trapped in this way

But mostly I've used this for 'cleanups' like the above, which serve their purpose.
8) Shell Variables

It's well worth getting to know the standard shell variables available to you. Here are some of my favourites:

RANDOM

Don't rely on this for your cryptography stack, but you can generate random numbers, e.g. to create temporary files in scripts:

$ echo ${RANDOM}
16313
$ # Not enough digits?
$ echo ${RANDOM}${RANDOM}
113610703
$ NEWFILE=/tmp/newfile_${RANDOM}
$ touch $NEWFILE

REPLY

No need to give a variable name for read:

$ read
my input
$ echo ${REPLY}

LINENO and SECONDS

Handy for debugging:
$ echo ${LINENO}
115
$ echo ${SECONDS}; sleep 1; echo ${SECONDS}; echo $LINENO
174380
174381
116

Note that there are two 'lines' above, even though you used ; to separate the commands.

TMOUT

You can timeout reads, which can be really handy in some scripts:

#!/bin/bash
TMOUT=5
echo You have 5 seconds to respond...
read
echo ${REPLY:-noreply}

... ... ...
10) Associative Arrays

Talking of moving to other languages, a rule of thumb I use is that if I need arrays then I drop bash to go to python (I even created a Docker container for a tool to help with this here).
What I didn't know until I read up on it was that you can have associative arrays in bash.
Type this out for a demo:
$ declare -A MYAA=([one]=1 [two]=2 [three]=3)
$ MYAA[one]="1"
$ MYAA[two]="2"
$ echo $MYAA
$ echo ${MYAA[one]}
$ MYAA[one]="1"
$ WANT=two
$ echo ${MYAA[$WANT]}

Note that this is only available in bash 4.x+.
... ... ...
Jul 04, 2020 | zwischenzugs.com
... ... ...

Managing Variables
Variables are a core part of most serious bash scripts (and even one-liners!), so managing them is another important way to reduce the possibility of your script breaking.
Change your script to add the 'set' line immediately after the first line and see what happens:
#!/bin/bash
set -o nounset
A="some value"
echo "${A}"
echo "${B}"

...I always set nounset on my scripts as a habit. It can catch many problems before they become serious.

Tracing Variables

If you are working with a particularly complex script, then you can get to the point where you are unsure what happened to a variable.
Try running this script and see what happens:
#!/bin/bash
set -o nounset
declare A="some value"
function a {
    echo "${BASH_SOURCE}>A A=${A} LINENO:${1}"
}
trap "a $LINENO" DEBUG
B=value
echo "${A}"
A="another value"
echo "${A}"
echo "${B}"

There's a problem with this code. The output is slightly wrong. Can you work out what is going on? If so, try and fix it.
You may need to refer to the bash man page, and make sure you understand quoting in bash properly.
It's quite a tricky one to fix 'properly', so if you can't fix it, or work out what's wrong with it, then ask me directly and I will help.
Profiling Bash Scripts

Returning to xtrace (the set -x flag), we can exploit its use of the PS4 variable to implement profiling of a script:

#!/bin/bash
set -o nounset
set -o xtrace
declare A="some value"
PS4='$(date "+%s%N => ")'
B=
echo "${A}"
A="another value"
echo "${A}"
echo "${B}"
ls
pwd
curl -q bbc.co.uk

From this you should be able to tell what PS4 does. Have a play with it, and read up and experiment with the other PS variables to get familiar with what they do.

NOTE: If you are on a Mac, then you might only get second-level granularity on the date!
Linting with Shellcheck

Finally, here is a very useful tip for understanding bash more deeply and improving any bash scripts you come across.

Shellcheck is a website and a package available on most platforms that gives you advice to help fix and improve your shell scripts. Very often, its advice has prompted me to research more deeply and understand bash better.

Here is some example output from a script I found on my laptop:

$ shellcheck shrinkpdf.sh

In shrinkpdf.sh line 44:
    -dColorImageResolution=$3 \
                           ^-- SC2086: Double quote to prevent globbing and word splitting.

In shrinkpdf.sh line 46:
    -dGrayImageResolution=$3 \
                          ^-- SC2086: Double quote to prevent globbing and word splitting.

In shrinkpdf.sh line 48:
    -dMonoImageResolution=$3 \
                          ^-- SC2086: Double quote to prevent globbing and word splitting.

In shrinkpdf.sh line 57:
    if [ ! -f "$1" -o ! -f "$2" ]; then
                   ^-- SC2166: Prefer [ p ] || [ q ] as [ p -o q ] is not well defined.

In shrinkpdf.sh line 60:
    ISIZE="$(echo $(wc -c "$1") | cut -f1 -d\ )"
                  ^-- SC2046: Quote this to prevent word splitting.
                  ^-- SC2005: Useless echo? Instead of 'echo $(cmd)', just use 'cmd'.

In shrinkpdf.sh line 61:
    OSIZE="$(echo $(wc -c "$2") | cut -f1 -d\ )"
                  ^-- SC2046: Quote this to prevent word splitting.
                  ^-- SC2005: Useless echo? Instead of 'echo $(cmd)', just use 'cmd'.

The most common reminders are regarding potential quoting issues, but you can see other useful tips in the above output, such as preferred arguments to the test construct, and advice on "useless" echos.

Exercise

1) Find a large bash script on a social coding site such as GitHub, and run shellcheck over it. Contribute back any improvements you find.
Oct 02, 2019 | opensource.com
7 Bash history shortcuts you will actually use

Save time on the command line with these essential Bash shortcuts. 02 Oct 2019, Ian.
Most guides to Bash history shortcuts exhaustively list every single one available. The problem with that is I would use a shortcut once, then glaze over as I tried out all the possibilities. Then I'd move onto my working day and completely forget them, retaining only the well-known !! trick I learned when I first started using Bash.
So most of them were never committed to memory.
This article outlines the shortcuts I actually use every day. It is based on some of the contents of my book, Learn Bash the hard way (you can read a preview of it to learn more).

When people see me use these shortcuts, they often ask me, "What did you do there!?" There's minimal effort or intelligence required, but to really learn them, I recommend using one each day for a week, then moving to the next one. It's worth taking your time to get them under your fingers, as the time you save will be significant in the long run.
1. The "last argument" one: !$2. The " n th argument" one: !:2If you only take one shortcut from this article, make it this one. It substitutes in the last argument of the last command into your line.
Consider this scenario:
$ mv / path / to / wrongfile / some / other / place
mv: cannot stat '/path/to/wrongfile' : No such file or directoryAch, I put the wrongfile filename in my command. I should have put rightfile instead.
You might decide to retype the last command and replace wrongfile with rightfile completely. Instead, you can type:
$ mv / path / to / rightfile ! $
mv / path / to / rightfile / some / other / placeand the command will work.
There are other ways to achieve the same thing in Bash with shortcuts, but this trick of reusing the last argument of the last command is one I use the most.
Ever done anything like this?
$ tar -cvf afolder afolder.tar
tar: failed to openLike many others, I get the arguments to tar (and ln ) wrong more often than I would like to admit.
tar_2x.pngWhen you mix up arguments like that, you can run:
$ ! : 0 ! : 1 ! : 3 ! : 2
tar -cvf afolder.tar afolderand your reputation will be saved.
The last command's items are zero-indexed and can be substituted in with the number after the !: .
Obviously, you can also use this to reuse specific arguments from the last command rather than all of them.
3. The "all the arguments": !*4. The "last but n " : !-2:$Imagine I run a command like:
$ grep '(ping|pong)' afileThe arguments are correct; however, I want to match ping or pong in a file, but I used grep rather than egrep .
I start typing egrep , but I don't want to retype the other arguments. So I can use the !:1$ shortcut to ask for all the arguments to the previous command from the second one (remember they're zero-indexed) to the last one (represented by the $ sign).
$ egrep ! : 1 -$
egrep '(ping|pong)' afile
pingYou don't need to pick 1-$ ; you can pick a subset like 1-2 or 3-9 (if you had that many arguments in the previous command).
4. The "last but n" one: !-2:$

The shortcuts above are great when I know immediately how to correct my last command, but often I run commands after the original one, which means that the last command is no longer the one I want to reference.

For example, using the mv example from before, if I follow up my mistake with an ls check of the folder's contents:

$ mv /path/to/wrongfile /some/other/place
mv: cannot stat '/path/to/wrongfile': No such file or directory
$ ls /path/to/
rightfile

I can no longer use the !$ shortcut.

In these cases, I can insert a -n: (where n is the number of commands to go back in the history) after the ! to grab the last argument from an older command:

$ mv /path/to/rightfile !-2:$
mv /path/to/rightfile /some/other/place

Again, once you learn it, you may be surprised at how often you need it.
5. The "get me the folder" one: !$:hThis one looks less promising on the face of it, but I use it dozens of times daily.
Imagine I run a command like this:
$ tar -cvf system.tar / etc / system
tar: / etc / system: Cannot stat: No such file or directory
tar: Error exit delayed from previous errors.The first thing I might want to do is go to the /etc folder to see what's in there and work out what I've done wrong.
I can do this at a stroke with:
$ cd ! $:h
cd / etcThis one says: "Get the last argument to the last command ( /etc/system ) and take off its last filename component, leaving only the /etc ."
6. The "the current line" one: !#:1For years, I occasionally wondered if I could reference an argument on the current line before finally looking it up and learning it. I wish I'd done so a long time ago. I most commonly use it to make backup files:
$ cp / path / to / some / file ! #:1.bak
cp / path / to / some / file / path / to / some / file.bakbut once under the fingers, it can be a very quick alternative to
7. The "search and replace" one: !!:gsThis one searches across the referenced command and replaces what's in the first two / characters with what's in the second two.
Say I want to tell the world that my s key does not work and outputs f instead:
$ echo my f key doef not work
my f key doef not workThen I realize that I was just hitting the f key by accident. To replace all the f s with s es, I can type:
$ !! :gs / f / s /
echo my s key does not work
my s key does not workIt doesn't work only on single characters; I can replace words or sentences, too:
$ !! :gs / does / did /
echo my s key did not work
my s key did not work Test them outJust to show you how these shortcuts can be combined, can you work out what these toenail clippings will output?
$ ping !#:0:gs/i/o
$ vi /tmp/!:0.txt
$ ls !$:h
$ cd !-2:h
$ touch !$ !-3:$ !! !$.txt
$ cat !:1-$

Conclusion

Bash can be an elegant source of shortcuts for the day-to-day command-line user. While there are thousands of tips and tricks to learn, these are my favorites that I frequently put to use.
If you want to dive even deeper into all that Bash can teach you, pick up my book, Learn Bash the hard way or check out my online course, Master the Bash shell .
This article was originally posted on Ian's blog, Zwischenzugs.com , and is reused with permission.
Orr, August 25, 2019 at 10:39 pm
BTW you inspired me to try and understand how to repeat the nth command entered on the command line. For example, I type 'ls' and then accidentally type 'clear'. !! will retype clear again, but I wanted to retype ls instead using a shortcut. Bash doesn't accept ':' so !:2 didn't work. !-2 did however, thank you!

Dima August 26, 2019 at 7:40 am
Nice article! Just one more cool and often-used command: !vi opens the last vi command with its arguments.
cbarrick on 03 Oct 2019
Your "current line" example is too contrived. Your example is copying to a backup like this:
$ cp /path/to/some/file !#:1.bak

But a better way to write that is with filename generation:

$ cp /path/to/some/file{,.bak}

That's not a history expansion though... I'm not sure I can come up with a good reason to use `!#:1`.
Darryl Martin August 26, 2019 at 4:41 pm
I seldom get anything out of these "bash commands you didn't know" articles, but you've got some great tips here. I'm writing several down and sticking them on my terminal for reference.
A couple of additions I'm sure you know:
- I use "!*" for "all arguments". It doesn't have the flexibility of your approach but it's faster for my most common need.
- I recently started using Alt-. as a substitute for "!$" to get the last argument. It expands the argument on the line, allowing me to modify it if necessary.
Ricardo J. Barberis on 06 Oct 2019
The problem with bash's history shortcuts for me is... that I never had the need to learn them.

Provided that your shell is readline-enabled, I find it much easier to use the arrow keys and modifiers to navigate through history than type !:1 (or having to remember what it means).
Examples:
Ctrl+R for a Reverse search
Ctrl+A to move to the beginning of the line (Home key also)
Ctrl+E to move to the End of the line (End key also)
Ctrl+K to Kill (delete) text from the cursor to the end of the line
Ctrl+U to kill text from the cursor to the beginning of the line
Alt+F to move Forward one word (Ctrl+Right arrow also)
Alt+B to move Backward one word (Ctrl+Left arrow also)
etc.YMMV of course.
Jul 02, 2020 | zwischenzugs.com
2) |&

You may already be familiar with 2>&1, which redirects standard error to standard output, but until I stumbled on it in the manual, I had no idea that you can pipe both standard output and standard error into the next stage of the pipeline like this:

if doesnotexist |& grep 'command not found' >/dev/null
then
    echo oops
fi
3) $''

This construct allows you to specify specific bytes in scripts without fear of triggering some kind of encoding problem. Here's a command that will grep through files looking for UK currency ('£') signs in hexadecimal recursively:

grep -r $'\xc2\xa3' *

You can also use octal:

grep -r $'\302\243' *

4) HISTIGNORE
If you are concerned about security, and ever type in commands that might have sensitive data in them, then this one may be of use.
This environment variable stops the commands you specify from being written to your history file when you type them in. The commands are separated by colons:

HISTIGNORE="ls *:man *:history:clear:AWS_KEY*"

You have to specify the whole line, so a glob character may be needed if you want to exclude commands and their arguments or flags.
5) fc
If readline key bindings aren't under your fingers, then this one may come in handy.
It calls up the last command you ran, and places it into your preferred editor (specified by the EDITOR variable). Once edited, it re-runs the command.
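Beyond the no-argument form, fc has a couple of other standard uses, documented in the bash man page:

fc -l -5          # list the last five commands without editing them
fc -s rp=rep      # re-run the last command, substituting 'rep' for 'rp'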
6) ((i++))

If you can't be bothered with faffing around with variables in bash with the $[] construct, you can use the C-style compound command.

So, instead of:

A=1
A=$[$A+1]
echo $A

you can do:

A=1
((A++))
echo $A

which, especially with more complex calculations, might be easier on the eye.
7) caller

Another builtin bash command, caller gives context about your shell's call stack. SHLVL is a related shell variable which gives the level of depth of the calling stack.

This can be used to create stack traces for more complex bash scripts.
Here's a die function, adapted from the bash hackers' wiki, that gives a stack trace up through the calling frames:

#!/bin/bash
die() {
    local frame=0
    ((FRAMELEVEL=SHLVL - frame))
    echo -n "${FRAMELEVEL}: "
    while caller $frame; do
        ((frame++))
        ((FRAMELEVEL=SHLVL - frame))
        if [[ ${FRAMELEVEL} -gt -1 ]]
        then
            echo -n "${FRAMELEVEL}: "
        fi
    done
    echo "$*"
    exit 1
}

which outputs:

3: 17 f1 ./caller.sh
2: 18 f2 ./caller.sh
1: 19 f3 ./caller.sh
0: 20 main ./caller.sh
*** an error occurred ***

8) /dev/tcp/host/port
This one can be particularly handy if you find yourself on a container running within a Kubernetes cluster service mesh without any network tools (a frustratingly common experience).

Bash provides you with some virtual files which, when referenced, can create socket connections to other servers.

This snippet, for example, makes a web request to a site and returns the output:

exec 9<>/dev/tcp/brvtsdflnxhkzcmw.neverssl.com/80
echo -e "GET /online HTTP/1.1\r\nHost: brvtsdflnxhkzcmw.neverssl.com\r\n\r\n" >&9
cat <&9

The first line opens up file descriptor 9 to the host brvtsdflnxhkzcmw.neverssl.com on port 80 for reading and writing. Line two sends the raw HTTP request to that socket connection's file descriptor. The final line retrieves the response.

Obviously, this doesn't handle SSL for you, so its use is limited now that pretty much everyone is running on https, but when running from application containers within a service mesh it can still prove invaluable, as requests there are initiated using HTTP.
9) Co-processesSince version 4 of
bash
it has offered the capability to run named coprocesses.It seems to be particularly well-suited to managing the inputs and outputs to other processes in a fine-grained way. Here's an annotated and trivial example:
coproc testproc ( i=1 while true do echo "iteration:${i}" ((i++)) read -r aline echo "${aline}" done )This sets up the coprocess as a subshell with the name
testproc
.Within the subshell, there's a never-ending while loop that counts its own iterations with the
i
variable. It outputs two lines: the iteration number, and a line read in from standard input.After creating the coprocess, bash sets up an array with that name with the file descriptor numbers for the standard input and standard output. So this:
echo "${testproc[@]}"in my terminal outputs:
63 60Bash also sets up a variable with the process identifier for the coprocess, which you can see by echoing it:
echo "${testproc_PID}"You can now input data to the standard input of this coprocess at will like this:
echo input1 >&"${testproc[1]}"In this case, the command resolves to:
echo input1 >&60
, and the>&[INTEGER]
construct ensures the redirection goes to the coprocess's standard input.Now you can read the output of the coprocess's two lines in a similar way, like this:
read -r output1a <&"${testproc[0]}" read -r output1b <&"${testproc[0]}"You might use this to create an expect -like script if you were so inclined, but it could be generally useful if you want to manage inputs and outputs. Named pipes are another way to achieve a similar result.
Here's a complete listing for those who want to cut and paste:
    #!/bin/bash
    coproc testproc (
        i=1
        while true
        do
            echo "iteration:${i}"
            ((i++))
            read -r aline
            echo "${aline}"
        done
    )
    echo "${testproc[@]}"
    echo "${testproc_PID}"
    echo input1 >&"${testproc[1]}"
    read -r output1a <&"${testproc[0]}"
    read -r output1b <&"${testproc[0]}"
    echo "${output1a}"
    echo "${output1b}"
    echo input2 >&"${testproc[1]}"
    read -r output2a <&"${testproc[0]}"
    read -r output2b <&"${testproc[0]}"
    echo "${output2a}"
    echo "${output2b}"
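A more practical (hypothetical) variation on the same mechanism: keep a single bc process alive and feed it calculations through the coprocess file descriptors, instead of forking a new bc per expression. A sketch:

    #!/bin/bash
    # Keep one persistent bc process; write expressions in, read answers out.
    coproc BCPROC { bc -l; }

    calc() {
        echo "$1" >&"${BCPROC[1]}"      # send the expression to bc's stdin
        read -r answer <&"${BCPROC[0]}" # read one line of output back
        echo "$answer"
    }

    calc "2^10"           # prints 1024
    calc "scale=4; 22/7"  # prints 3.1428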
Apr 02, 2020 | opensource.com
Originally from: Get started with Bash scripting for sysadmins - Opensource.com
Most shells offer the ability to create, manipulate, and query indexed arrays. In plain English, an indexed array is a list of things prefixed with a number. This list of things, along with their assigned number, is conveniently wrapped up in a single variable, which makes it easy to "carry" it around in your code.
Bash, however, includes the ability to create associative arrays and treats these arrays the same as any other array. An associative array lets you create lists of key and value pairs, instead of just numbered values.
The nice thing about associative arrays is that keys can be arbitrary:
    $ declare -A userdata
    $ userdata[name]=seth
    $ userdata[pass]=8eab07eb620533b083f241ec4e6b9724
    $ userdata[login]=`date --utc +%s`

Query any key:

    $ echo "${userdata[name]}"
    seth
    $ echo "${userdata[login]}"
    1583362192

Most of the usual array operations you'd expect from an array are available.
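One idiom worth adding here, since the syntax is easy to forget: "${!userdata[@]}" expands to an associative array's keys (as opposed to its values). A minimal sketch reusing the names above:

    #!/bin/bash
    declare -A userdata
    userdata[name]=seth
    userdata[pass]=8eab07eb620533b083f241ec4e6b9724
    # ${!userdata[@]} lists the keys; ${userdata[$key]} looks up each value.
    for key in "${!userdata[@]}"; do
        printf '%s=%s\n' "$key" "${userdata[$key]}"
    done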
Resources
- How to program with Bash: Syntax and tools
- How to program with Bash: Logical operators and shell expansions
- How to program with Bash: Loops
Jun 12, 2020 | opensource.com
Source is like a Python import or a Java include. Learn it to expand your Bash prowess. By Seth Kenlon (Red Hat)

When you log into a Linux shell, you inherit a specific working environment. An environment, in the context of a shell, means that there are certain variables already set for you, which ensures your commands work as intended. For instance, the PATH environment variable defines where your shell looks for commands. Without it, nearly everything you try to do in Bash would fail with a command not found error. Your environment, while mostly invisible to you as you go about your everyday tasks, is vitally important.
There are many ways to affect your shell environment. You can make modifications in configuration files, such as ~/.bashrc and ~/.profile, you can run services at startup, and you can create your own custom commands or script your own Bash functions.

Add to your environment with source

Bash (along with some other shells) has a built-in command called source. And here's where it can get confusing: source performs the same function as the command . (yes, that's but a single dot), and it's not the same source as the Tcl command (which may come up on your screen if you type man source). The built-in source command isn't in your PATH at all, in fact. It's a command that comes included as a part of Bash, and to get further information about it, you can type help source.
The . command is POSIX-compliant. The source command is not defined by POSIX but is interchangeable with the . command.

According to Bash help, the source command executes a file in your current shell. The clause "in your current shell" is significant, because it means it doesn't launch a sub-shell; therefore, whatever you execute with source happens within and affects your current environment.
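One related idiom worth knowing (a common convention, not from the article): because a sourced file runs in your current shell, calling exit from it will close your session, so files intended for sourcing often check how they are being run:

    #!/usr/bin/env bash
    # When executed, $0 and BASH_SOURCE[0] match; when sourced, they differ.
    if [[ "${BASH_SOURCE[0]}" == "$0" ]]; then
        echo "This file is meant to be sourced, not executed." >&2
        exit 1
    fi
    myvar="now set in the calling shell"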
Before exploring how source can affect your environment, try source on a test file to ensure that it executes code as expected. First, create a simple Bash script and save it as a file called hello.sh:

    #!/usr/bin/env bash
    echo "hello world"

Using source, you can run this script even without setting the executable bit:

    $ source hello.sh
    hello world

You can also use the built-in . command for the same results:

    $ . hello.sh
    hello world

The source and . commands successfully execute the contents of the test file.

Set variables and import functions

You can use source to "import" a file into your shell environment, just as you might use the include keyword in C or C++ to reference a library or the import keyword in Python to bring in a module. This is one of the most common uses for source, and it's a common default inclusion in .bashrc files to source a file called .bash_aliases so that any custom aliases you define get imported into your environment when you log in.

Here's an example of importing a Bash function. First, create a function in a file called myfunctions. This prints your public IP address and your local IP address:

    function myip() {
        curl http://icanhazip.com
        ip addr | grep inet $IP | \
        cut -d"/" -f 1 | \
        grep -v 127\.0 | \
        grep -v \:\:1 | \
        awk '{$1=$1};1'
    }

Import the function into your shell:
    $ source myfunctions

Test your new function:

    $ myip
    93.184.216.34
    inet 192.168.0.23
    inet6 fbd4:e85f:49c:2121:ce12:ef79:0e77:59d1
    inet 10.8.42.38

Search for source

When you use source in Bash, it searches your current directory for the file you reference. This doesn't happen in all shells, so check your documentation if you're not using Bash.

If Bash can't find the file to execute, it searches your PATH instead. Again, this isn't the default for all shells, so check your documentation if you're not using Bash.

These are both nice convenience features in Bash. This behavior is surprisingly powerful because it allows you to store common functions in a centralized location on your drive and then treat your environment like an integrated development environment (IDE). You don't have to worry about where your functions are stored, because you know they're in your local equivalent of /usr/include, so no matter where you are when you source them, Bash finds them.

For instance, you could create a directory called ~/.local/include as a storage area for common functions and then put this block of code into your .bashrc file:

    for i in $HOME/.local/include/*; do
        source $i
    done

This "imports" any file containing custom functions in ~/.local/include into your shell environment. Bash is the only shell that searches both the current directory and your PATH when you use either the source or the . command.

Using source for open source

Using source or . to execute files can be a convenient way to affect your environment while keeping your alterations modular. The next time you're thinking of copying and pasting big blocks of code into your .bashrc file, consider placing related functions or groups of aliases into dedicated files, and then use source to ingest them.
Jul 01, 2020 | www.redhat.com
See also Bash bang commands- A must-know trick for the Linux command line - Enable Sysadmin
Let's say I run the following command:
    $> sudo systemctl status sshd

Bash tells me the sshd service is not running, so the next thing I want to do is start the service. I had checked its status with my previous command. That command was saved in history, so I can reference it. I simply run:

    $> !!:s/status/start/
    sudo systemctl start sshd

The above expression has the following content:
- !! - repeat the last command from history
- :s/status/start/ - substitute status with start
The result is that the sshd service is started.
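Bash's history expansion also has a shorthand for the same substitution: the quick-substitution caret syntax, which replaces the first occurrence of a string in the previous command and runs the result:

    $> sudo systemctl status sshd
    $> ^status^start
    sudo systemctl start sshd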
Next, I increase the default HISTSIZE value from 500 to 5000 by using the following command:
$> echo "HISTSIZE=5000" >> ~/.bashrc && source ~/.bashrcWhat if I want to display the last three commands in my history? I enter:
$> history 3 1002 ls 1003 tail audit.log 1004 history 3I run
tail
onaudit.log
by referring to the history line number. In this case, I use line 1003:$> !1003 tail audit.logReference the last argument of the previous commandWhen I want to list directory contents for different directories, I may change between directories quite often. There is a nice trick you can use to refer to the last argument of the previous command. For example:
    $> pwd
    /home/username/
    $> ls some/very/long/path/to/some/directory
    foo-file bar-file baz-file

In the above example, some/very/long/path/to/some/directory is the last argument of the previous command.

If I want to cd (change directory) to that location, I enter something like this:

    $> cd $_
    $> pwd
    /home/username/some/very/long/path/to/some/directory

Now I simply use a dash character to go back to where I was:

    $> cd -
    $> pwd
    /home/username/
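A related history-expansion shortcut: !$ also expands to the last argument of the previous command (like $_, but expanded and echoed before the command runs):

    $> ls some/very/long/path/to/some/directory
    $> cd !$
    cd some/very/long/path/to/some/directory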
Jun 28, 2020 | itsfoss.com
Top Free Resources to Learn Shell Scripting
Don't have Linux installed on your system? No worries. There are various ways of using a Linux terminal on Windows. You may also use online Linux terminals in some cases to practice shell scripting.
1. Learn Shell [Interactive web portal]

If you're looking for an interactive web portal to learn shell scripting and also try it online, Learn Shell is a great place to start. It covers the basics and offers some advanced exercises as well. The content is usually brief and to the point; hence, I'd recommend you check it out.

2. Shell Scripting Tutorial [Web portal]

Shell Scripting Tutorial is a web resource that's completely dedicated to shell scripting. You can choose to read the resource for free or opt to purchase the PDF, book, or e-book to support it.
Of course, paying for the paperback edition or the e-book is optional. But, the resource should come in handy for free.
3. Shell Scripting Udemy (Free video course)

Udemy is unquestionably one of the most popular platforms for online courses. And, in addition to the paid certified courses, it also offers some free material that does not include certifications. Shell Scripting is one of the most recommended free courses available on Udemy. You can enroll in it without spending anything.

4. Bash Shell Scripting Udemy (Free video course)

Yet another interesting free course focused on bash shell scripting on Udemy. Compared to the previous one, this resource seems to be more popular. So, you can enroll in it and see what it has to offer. Keep in mind that the free Udemy courses do not offer any certifications; even so, this is an impressive free shell scripting learning resource.

5. Bash Academy [Online portal with interactive game]

As the name suggests, the Bash Academy is completely focused on educating users about the bash shell. It's suitable for both beginners and experienced users, even though it does not offer a lot of content. It is not limited to the guide -- it also used to offer an interactive game for practice, which no longer works.
Hence, if this is interesting enough, you can also check out its GitHub page and fork it to improve the existing resources if you want.
6. Bash Scripting LinkedIn Learning (Free video course)

LinkedIn offers a number of free courses to help you improve your skills and get ready for more job opportunities. You will also find a couple of courses focused on shell scripting to brush up on some basic skills or gain some advanced knowledge in the process.
Here, I've linked a course for bash scripting, you can find some other similar courses for free as well.
7. Advanced Bash Scripting Guide [Free PDF book]

An impressive advanced bash scripting guide, available as a free PDF. This resource does not enforce any copyrights and is completely in the public domain. Even though it is focused on providing advanced insights, it's also suitable for beginners to refer to when starting to learn shell scripting.

8. Bash Notes for Professionals [Free PDF book]

This is a good reference guide if you are already familiar with Bash shell scripting or if you just want a quick summary. This free downloadable book runs over 100 pages and covers a wide variety of scripting topics with the help of brief descriptions and quick examples.
9. Tutorialspoint [Web portal]

Tutorialspoint is a quite popular web portal for learning a variety of programming languages. I would say it's quite good for starters to learn the fundamentals and the basics. It may not be suitable as a detailed resource, but it should be a useful free one.

10. City College of San Francisco Online Notes [Web portal]

This may not be the best free resource there is, but if you're ready to explore every type of resource to learn shell scripting, why not refer to the online notes of City College of San Francisco?
I came across this with a random search on the Internet about shell scripting resources.
Again, it's important to note that the online notes could be a bit dated. But, it should be an interesting resource to explore.
Honorable mention: Linux man page

Not to forget, the man page for bash is also a fantastic free resource to explore more about the commands and how they work. Even if it's not tailored as something that lets you master shell scripting, it is still an important resource that you can use for free. You can either visit the man page online or just head to the terminal and type the following command to get help:

    man bash

Wrapping up

There are also a lot of popular paid resources, just like some of the best Linux books available out there. It's easy to start learning about shell scripting using some free resources available across the web.
In addition to the ones I've mentioned, I'm sure there must be numerous other resources available online to help you learn shell scripting.
Do you like the resources mentioned above? Also, if you're aware of a fantastic free resource that I possibly missed, feel free to tell me about it in the comments below.
Nov 24, 2008 | www.linux.com
Author: Ben Martin
The Bash Debugger Project (bashdb) lets you set breakpoints, inspect variables, perform a backtrace, and step through a bash script line by line. In other words, it provides the features you expect in a C/C++ debugger to anyone programming a bash script.

To see if your standard bash executable has bashdb support, execute the command shown below; if you are not taken to a bashdb prompt then you'll have to install bashdb yourself.

    $ bash --debugger -c "set|grep -i dbg"
    ...
    bashdb

The Ubuntu Intrepid repository contains a package for bashdb, but there is no special bashdb package in the openSUSE 11 or Fedora 9 repositories. I built from source using version 4.0-0.1 of bashdb on a 64-bit Fedora 9 machine, using the normal ./configure; make; sudo make install commands.
commands.You can start the Bash Debugger using the
bash --debugger foo.sh
syntax or thebashdb foo.sh
command. The former method is recommended except in cases where I/O redirection might cause issues, and it's what I used. You can also use bashdb through ddd or from an Emacs buffer.The syntax for many of the commands in bashdb mimics that of gdb, the GNU debugger. You can
step
into functions, usenext
to execute the next line without stepping into any functions, generate a backtrace withbt
, exit bashdb withquit
or Ctrl-D, and examine a variable withprint $foo
. Aside from the prefixing of the variable with$
at the end of the last sentence, there are some other minor differences that you'll notice. For instance, pressing Enter on a blank line in bashdb executes the previous step or next command instead of whatever the previous command was.The print command forces you to prefix shell variables with the dollar sign (
$foo
). A slightly shorter way of inspecting variables and functions is to use thex foo
command, which usesdeclare
to print variables and functions.Both bashdb and your script run inside the same bash shell. Because bash lacks some namespace properties, bashdb will include some functions and symbols into the global namespace which your script can get at. bashdb prefixes its symbols with
_Dbg_
, so you should avoid that prefix in your scripts to avoid potential clashes. bashdb also uses some environment variables; it uses theDBG_
prefix for its own, and relies on some standard bash ones that begin withBASH_
To illustrate the use of bashdb, I'll work on the small bash script below, which expects a numeric argument n and calculates the nth Fibonacci number.

    #!/bin/bash
    version="0.01";

    fibonacci() {
        n=${1:?If you want the nth fibonacci number, you must supply n as the first parameter.}
        if [ $n -le 1 ]; then
            echo $n
        else
            l=`fibonacci $((n-1))`
            r=`fibonacci $((n-2))`
            echo $((l + r))
        fi
    }

    for i in `seq 1 10`
    do
        result=$(fibonacci $i)
        echo "i=$i result=$result"
    done

The below session shows bashdb in action, stepping over and then into the fibonacci function and inspecting variables. An initial backtrace (bt) shows that the script begins at line 3, which is where the version variable is written. The next and list commands then progress to the next line of the script a few times and show the context of the current execution line. After one of the next commands I press Enter to execute next again. I invoke the examine command through the single-letter shortcut x; notice that the variables are printed out using declare, as opposed to their plain display on the next line using print. I then set a breakpoint at the fibonacci function and continue the execution of the shell script; the fibonacci function is called, and I move to the next line a few times and inspect a variable.

    $ bash --debugger ./fibonacci.sh
    ...
    (/home/ben/testing/bashdb/fibonacci.sh:3):
    3:      version="0.01";
    bashdb bt
    ->0 in file `./fibonacci.sh' at line 3
    ##1 main() called from file `./fibonacci.sh' at line 0
    bashdb next
    (/home/ben/testing/bashdb/fibonacci.sh:16):
    16:     for i in `seq 1 10`
    bashdb list
    16:==>for i in `seq 1 10`
    17:   do
    18:   result=$(fibonacci $i)
    19:   echo "i=$i result=$result"
    20:   done
    bashdb next
    (/home/ben/testing/bashdb/fibonacci.sh:18):
    18:     result=$(fibonacci $i)
    bashdb
    (/home/ben/testing/bashdb/fibonacci.sh:19):
    19:     echo "i=$i result=$result"
    bashdb x i result
    declare -- i="1"
    declare -- result=""
    bashdb print $i $result
    1
    bashdb break fibonacci
    Breakpoint 1 set in file /home/ben/testing/bashdb/fibonacci.sh, line 5.
    bashdb continue
    Breakpoint 1 hit (1 times).
    (/home/ben/testing/bashdb/fibonacci.sh:5):
    5:      fibonacci() {
    bashdb next
    (/home/ben/testing/bashdb/fibonacci.sh:6):
    6:      n=${1:?If you want the nth fibonacci number, you must supply n as the first parameter.}
    bashdb next
    (/home/ben/testing/bashdb/fibonacci.sh:7):
    7:      if [ $n -le 1 ]; then
    bashdb x n
    declare -- n="2"
    bashdb quit

Notice that the number in the bashdb prompt toward the end of the above example is enclosed in parentheses. Each set of parentheses indicates that you have entered a subshell. In this example this is due to being inside a shell function.

In the below example I use a watchpoint to see if and where the result variable changes. Notice the initial next command: I found that if I didn't issue that next then my watch would fail to work. As you can see, after I issue c to continue execution, execution is stopped whenever the result variable is about to change, and the new and old values are displayed.

    (/home/ben/testing/bashdb/fibonacci.sh:3):
    3:      version="0.01";
    bashdb<0> next
    (/home/ben/testing/bashdb/fibonacci.sh:16):
    16:     for i in `seq 1 10`
    bashdb<1> watch result
    0: ($result)==0 arith: 0
    bashdb<2> c
    Watchpoint 0: $result changed:
      old value: ''
      new value: '1'
    (/home/ben/testing/bashdb/fibonacci.sh:19):
    19:     echo "i=$i result=$result"
    bashdb<3> c
    i=1 result=1
    i=2 result=1
    Watchpoint 0: $result changed:
      old value: '1'
      new value: '2'
    (/home/ben/testing/bashdb/fibonacci.sh:19):
    19:     echo "i=$i result=$result"

To get around the strange initial next requirement I used the watche command in the below session, which lets you stop whenever an expression becomes true. In this case I'm not overly interested in the first few Fibonacci numbers, so I set a watch to have execution stop when the result is greater than 4. You can also use a watche command without a condition; for example, watche result would stop execution whenever the result variable changed.

    $ bash --debugger ./fibonacci.sh
    (/home/ben/testing/bashdb/fibonacci.sh:3):
    3:      version="0.01";
    bashdb<0> watche result > 4
    0: (result > 4)==0 arith: 1
    bashdb<1> continue
    i=1 result=1
    i=2 result=1
    i=3 result=2
    i=4 result=3
    Watchpoint 0: result > 4 changed:
      old value: '0'
      new value: '1'
    (/home/ben/testing/bashdb/fibonacci.sh:19):
    19:     echo "i=$i result=$result"

When a shell script goes wrong, many folks use the time-tested method of incrementally adding in echo or printf statements to look for invalid values or code paths that are never reached. With bashdb, you can save yourself time by just adding a few watches on variables or setting a few breakpoints.
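If bashdb isn't installed on a box, plain bash can still give a crude execution trace. A minimal sketch (built-in tracing, not bashdb):

    #!/bin/bash
    # Prefix each traced command with file, line, and function name.
    export PS4='+ ${BASH_SOURCE}:${LINENO}:${FUNCNAME[0]:-main}: '
    set -x   # start echoing each command (prefixed with PS4) before it runs
    version="0.01"
    set +x   # stop tracing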
Mar 05, 2020 | www.networkworld.com
One quick way to determine whether the command you are using is a bash built-in or not is to use the command "command". Yes, the command is called "command". Try it with a -V (capital V) option like this:
    $ command -V command
    command is a shell builtin
    $ command -V echo
    echo is a shell builtin
    $ command -V date
    date is hashed (/bin/date)

When you see a "command is hashed" message like the one above, that means that the command has been put into a hash table for quicker lookup.
... ... ...

How to tell what shell you're currently using

If you switch shells you can't depend on $SHELL to tell you what shell you're currently using, because $SHELL is just an environment variable that is set when you log in and doesn't necessarily reflect your current shell. Try ps -p $$ instead, as shown in these examples:
    $ ps -p $$
      PID TTY          TIME CMD
    18340 pts/0    00:00:00 bash    <==
    $ /bin/dash
    $ ps -p $$
      PID TTY          TIME CMD
    19517 pts/0    00:00:00 dash    <==

Built-ins are extremely useful and give each shell a lot of its character. If you use some particular shell all of the time, it's easy to lose track of which commands are part of your shell and which are not.
Differentiating a shell built-in from a Linux executable requires only a little extra effort.
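In scripts, the related type builtin is easier to branch on, since type -t answers with a single word. A small sketch:

    #!/bin/bash
    # "type -t" prints one of: alias, keyword, function, builtin, file -- or nothing.
    for cmd in echo date if ls; do
        kind=$(type -t "$cmd")
        printf '%-6s -> %s\n' "$cmd" "${kind:-not found}"
    done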
Mar 05, 2020 | marketplace.visualstudio.com
Bash IDE

Visual Studio Code extension utilizing the bash language server, which is based on Tree Sitter and its grammar for Bash, and supports explainshell integration.

Features
- [x] Jump to declaration
- [x] Find references
- [x] Code Outline & Show Symbols
- [x] Highlight occurrences
- [x] Code completion
- [x] Simple diagnostics reporting
- [x] Documentation for flags on hover
- [ ] Rename symbol
Configuration

To get documentation for flags on hover (thanks to explainshell), run the explainshell Docker container:
docker run --rm --name bash-explainshell -p 5000:5000 chrismwendt/codeintel-bash-with-explainshellAnd add this to your VS Code settings:
"bashIde.explainshellEndpoint": "http://localhost:5000",For security reasons, it defaults to
""
, which disables explainshell integration. When set, this extension will send requests to the endpoint and displays documentation for flags.Once https://github.com/idank/explainshell/pull/125 is merged, it would be possible to set this to
"https://explainshell.com"
, however doing this is not recommended as it will leak all your shell scripts to a third party -- do this at your own risk, or better always use a locally running Docker image.
Sep 30, 2010 | www.reddit.com
r/commandline -- posted by u/acksed
I had to use file recovery software when I accidentally formatted my backup. It worked, but I now have 37,000 text files with numbers where names used to be.
If I name each file with the first 20-30 characters, I can sort the text-wheat from the bit-chaff.
I have the vague idea of using whatever the equivalent of head is on Windows, but that's as far as I got. I'm not so hot on bash scripting either.
tatumc:

To rename each file with the first line of the file, you can do:

    for i in *; do mv $i "$(head -1 "$i")"; done

You can use cp instead of mv or make a backup of the dir first to be sure you don't accidentally nuke anything.

acksed:

This is almost exactly what I wanted. Thanks! A quick tweak:

    for i in *; do mv $i "$(head -c 30 "$i")"; done

Now, I know CygWin is a thing, wonder if it'll work for me.

tatumc:

Just keep in mind that 'head -c' will include newlines which will garble the new file names.

acksed:

Answer: not really. The environment and script are working, but whenever there's a forward slash or non-escaping character in the text, it chokes when it tries to set up a new directory, and it deletes the file suffix. :-/ Good thing I used a copy of the data.

Need something to strip out the characters and spaces, and add the file suffix, before it tries to rename. sed? Also needs file to identify it as true text. I can do the suffix at least:

    for i in *; do mv $i "$(head -c 30 "$i").txt"; done

tatumc:

I recommend you use 'head -1', which will make the first line of the file the filename and you won't have to worry about newlines. Then you can change the spaces to underscores with:

    for i in *; do mv -v "$i" `echo $i | tr ' ' '_'`; done

yeayoushookme:

There's the file program on *nix that'll tell you, in a verbose manner, the type of the file you give it as an argument, regardless of its file extension. Example:

    $ file test.mp3
    test.mp3: , 48 kHz, JntStereo
    $ file mbr.bin
    mbr.bin: data
    $ file CalendarExport.ics
    CalendarExport.ics: HTML document, UTF-8 Unicode text, with very long lines, with CRLF, LF line terminators
    $ file jmk.doc
    jmk.doc: Composite Document File V2 Document, Little Endian, Os: Windows, Version 6.0, Code page: 1250, Title: xx, Author: xx, Template: Normal, Last Saved By: xx, Revision Number: 4, Name of Creating Application: Microsoft Office Word, Total Editing Time: 2d+03:32:00, Last Printed: Fri Feb 22 11:29:00 2008, Create Time/Date: Fri Jan 4 12:57:00 2013, Last Saved Time/Date: Sun Jan 6 16:30:00 2013, Number of Pages: 6, Number of Words: 1711, Number of Characters: 11808, Security: 0

acksed:

Thank you, but the software I used to recover (R-Undelete) sorted them already. I found another program, RenameMaestro, that renames according to metadata in zip, rar, pdf, doc and other files, but text files are too basic.

Edit: You were right, I did need it.

RonaldoNazario:

Not command line, but you could probably do this pretty easily in python, using "glob" to get filenames, and os read and move/rename functions to get the text and change filenames.

pfp-disciple:

So far, you're not getting many windows command line ideas :(. I don't have any either, but here's an idea: use one of the live Linux distributions (Porteus is pretty cool, but there're a slew of others). In that Linux environment, you can mount your Windows hard drive and use Linux tools, maybe something like /u/tatumc suggested.
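A sketch pulling the thread's suggestions together (hypothetical, not from the thread): take the first line of each file, keep at most 30 filename-safe characters, replace spaces with underscores, and add the .txt suffix:

    #!/bin/bash
    # Rename each file to a sanitized version of its first line, keeping .txt.
    for i in *; do
        # Keep only alphanumerics, space, dot, underscore, hyphen; then space->underscore.
        name=$(head -1 "$i" | cut -c1-30 | tr -cd '[:alnum:] ._-' | tr ' ' '_')
        # Fall back to the original name if the first line yields nothing usable;
        # -n refuses to overwrite an existing file on a name collision.
        mv -n -- "$i" "${name:-$i}.txt"
    done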
Jul 31, 2019 | opensource.com
Tired of typing the same long commands over and over? Do you feel inefficient working on the command line? Bash aliases can make a world of difference.

A Bash alias is a method of supplementing or overriding Bash commands with new ones. Bash aliases make it easy for users to customize their experience in a POSIX terminal. They are often defined in $HOME/.bashrc or $HOME/.bash_aliases (which must be loaded by $HOME/.bashrc).
Most distributions add at least some popular aliases in the default .bashrc file of any new user account. These are simple ones to demonstrate the syntax of a Bash alias:
    alias ls='ls -F'
    alias ll='ls -lh'

Not all distributions ship with pre-populated aliases, though. If you add aliases manually, then you must load them into your current Bash session:

    $ source ~/.bashrc

Otherwise, you can close your terminal and re-open it so that it reloads its configuration file.

With those aliases defined in your Bash initialization script, you can then type ll and get the results of ls -lh, and when you type ls you get the output of ls -F instead of plain old ls.
Those aliases are great to have, but they just scratch the surface of what's possible. Here are the top 10 Bash aliases that, once you try them, you won't be able to live without.
Set up first

Before beginning, create a file called ~/.bash_aliases:

    $ touch ~/.bash_aliases

Then, make sure that this code appears in your ~/.bashrc file:

    if [ -e $HOME/.bash_aliases ]; then
        source $HOME/.bash_aliases
    fi

If you want to try any of the aliases in this article for yourself, enter them into your .bash_aliases file, and then load them into your Bash session with the source ~/.bashrc command.
Sort by file size

If you started your computing life with GUI file managers like Nautilus in GNOME, the Finder in MacOS, or Explorer in Windows, then you're probably used to sorting a list of files by their size. You can do that in a terminal as well, but it's not exactly succinct.
Add this alias to your configuration on a GNU system:
    alias lt='ls --human-readable --size -1 -S --classify'

This alias replaces lt with an ls command that displays the size of each item, and then sorts it by size, in a single column, with a notation to indicate the kind of file. Load your new alias, and then try it out:
    $ source ~/.bashrc
    $ lt
    total 344K
    140K configure*
     44K aclocal.m4
     36K LICENSE
     32K config.status*
     24K Makefile
     24K Makefile.in
     12K config.log
    8.0K README.md
    4.0K info.slackermedia.Git-portal.json
    4.0K git-portal.spec
    4.0K flatpak.path.patch
    4.0K Makefile.am*
    4.0K dot-gitlab.ci.yml
    4.0K configure.ac*
       0 autom4te.cache/
       0 share/
       0 bin/
       0 install-sh@
       0 compile@
       0 missing@
       0 COPYING@

On MacOS or BSD, the ls command doesn't have the same options, so this alias works instead:
    alias lt='du -sh * | sort -h'

The results of this version are a little different:
$ du -sh * | sort -h
0 compile
0 COPYING
0 install-sh
0 missing
4.0K configure.ac
4.0K dot-gitlab.ci.yml
4.0K flatpak.path.patch
4.0K git-portal.spec
4.0K info.slackermedia.Git-portal.json
4.0K Makefile.am
8.0K README.md
12K config.log
16K bin
24K Makefile
24K Makefile.in
32K config.status
36K LICENSE
44K aclocal.m4
60K share
140K configure
476K autom4te.cache

In fact, even on Linux, that command is useful, because ls lists directories and symlinks as being 0 in size, which may not be the information you actually want. It's your choice.
Thanks to Brad Alexander for this alias idea.
View only mounted drives

The mount command used to be so simple. With just one command, you could get a list of all the mounted filesystems on your computer, and it was frequently used for an overview of what drives were attached to a workstation. It used to be impressive to see more than three or four entries because most computers don't have many more USB ports than that, so the results were manageable.
Computers are a little more complicated now, and between LVM, physical drives, network storage, and virtual filesystems, the results of mount can be difficult to parse:
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
devtmpfs on /dev type devtmpfs (rw,nosuid,seclabel,size=8131024k,nr_inodes=2032756,mode=755)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
[...]
/dev/nvme0n1p2 on /boot type ext4 (rw,relatime,seclabel)
/dev/nvme0n1p1 on /boot/efi type vfat (rw,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=ascii,shortname=winnt,errors=remount-ro)
[...]
gvfsd-fuse on /run/user/100977/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,relatime,user_id=100977,group_id=100977)
/dev/sda1 on /run/media/seth/pocket type ext4 (rw,nosuid,nodev,relatime,seclabel,uhelper=udisks2)
/dev/sdc1 on /run/media/seth/trip type ext4 (rw,nosuid,nodev,relatime,seclabel,uhelper=udisks2)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)

To solve that problem, try an alias like this:
alias mnt = "mount | awk -F' ' '{ printf \" %s \t %s \n\" , \$ 1, \$ 3; }' | column -t | egrep ^/dev/ | sort"This alias uses awk to parse the output of mount by column, reducing the output to what you probably looking for (what hard drives, and not file systems, are mounted):
$ mnt
/dev/mapper/fedora-root /
/dev/nvme0n1p1 /boot/efi
/dev/nvme0n1p2 /boot
/dev/sda1 /run/media/seth/pocket
/dev/sdc1 /run/media/seth/trip

On MacOS, the mount command doesn't provide terribly verbose output, so an alias may be overkill. However, if you prefer a succinct report, try this:
    alias mnt='mount | grep -E ^/dev | column -t'

The results:
$ mnt
/dev/disk1s1 on / (apfs, local, journaled)
/dev/disk1s4 on /private/var/vm (apfs, local, noexec, journaled, noatime, nobrowse)

Find a command in your grep history

Sometimes you figure out how to do something in the terminal, and promise yourself that you'll never forget what you've just learned. Then an hour goes by, and you've completely forgotten what you did.
Searching through your Bash history is something everyone has to do from time to time. If you know exactly what you're searching for, you can use Ctrl+R to do a reverse search through your history, but sometimes you can't remember the exact command you want to find.
Here's an alias to make that task a little easier:
    alias gh='history|grep'

Here's an example of how to use it:
$ gh bash
482 cat ~/.bashrc | grep _alias
498 emacs ~/.bashrc
530 emacs ~/.bash_aliases
531 source ~/.bashrc

Sort by modification time

It happens every Monday: You get to work, you sit down at your computer, you open a terminal, and you find you've forgotten what you were doing last Friday. What you need is an alias to list the most recently modified files.
You can use the ls command to create an alias to help you find where you left off:
    alias left='ls -t -1'

The output is simple, although you can extend it with the --long option if you prefer. The alias, as listed, displays this:
$ left
demo.jpeg
demo.xcf
design-proposal.md
rejects.txt
brainstorm.txt
query-letter.xml

Count files

If you need to know how many files you have in a directory, the solution is one of the most classic examples of UNIX command construction: You list files with the ls command, control its output to be only one column with the -1 option, and then pipe that output to the wc (word count) command to count how many lines of single files there are.
It's a brilliant demonstration of how the UNIX philosophy allows users to build their own solutions using small system components. This command combination is also a lot to type if you happen to do it several times a day, and it doesn't exactly work for a directory of directories without using the -R option, which introduces new lines to the output and renders the exercise useless.
Instead, this alias makes the process easy:
    alias count='find . -type f | wc -l'

This one counts files, ignoring directories, but not the contents of directories. If you have a project folder containing two directories, each of which contains two files, the alias returns four, because there are four files in the entire project.
$ ls
foo bar
$ count
4

Create a Python virtual environment

Do you code in Python?
Do you code in Python a lot?
If you do, then you know that creating a Python virtual environment requires, at the very least, 53 keystrokes. That's 49 too many, but that's easily circumvented with two new aliases called ve and va:

    alias ve='python3 -m venv ./venv'
    alias va='source ./venv/bin/activate'

Running ve creates a new directory, called venv, containing the usual virtual environment filesystem for Python3. The va alias activates the environment in your current shell:
$ cd my-project
$ ve
$ va
(venv) $

Add a copy progress bar

Everybody pokes fun at progress bars because they're infamously inaccurate. And yet, deep down, we all seem to want them. The UNIX cp command has no progress bar, but it does have a -v option for verbosity, meaning that it echoes the name of each file being copied to your terminal. That's a pretty good hack, but it doesn't work so well when you're copying one big file and want some indication of how much of the file has yet to be transferred.
The pv command provides a progress bar during copy, but it's not common as a default application. On the other hand, the rsync command is included in the default installation of nearly every POSIX system available, and it's widely recognized as one of the smartest ways to copy files both remotely and locally.
Better yet, it has a built-in progress bar.
    alias cpv='rsync -ah --info=progress2'

Using this alias is the same as using the cp command:
$ cpv bigfile.flac /run/media/seth/audio/
3.83M 6% 213.15MB/s 0:00:00 (xfr#4, to-chk=0/4)An interesting side effect of using this command is that rsync copies both files and directories without the -r flag that cp would otherwise require.
Protect yourself from file removal accidents

You shouldn't use the rm command. The rm manual even says so:
Warning : If you use 'rm' to remove a file, it is usually possible to recover the contents of that file. If you want more assurance that the contents are truly unrecoverable, consider using 'shred'.
If you want to remove a file, you should move the file to your Trash, just as you do when using a desktop.
POSIX makes this easy, because the Trash is an accessible, actual location in your filesystem. That location may change, depending on your platform: On a FreeDesktop , the Trash is located at ~/.local/share/Trash , while on MacOS it's ~/.Trash , but either way, it's just a directory into which you place files that you want out of sight until you're ready to erase them forever.
This simple alias provides a way to toss files into the Trash bin from your terminal:
    alias tcn='mv --force -t ~/.local/share/Trash'

This alias uses mv's little-known -t flag, which takes the target directory up front so that the file you want to move can be given as the final argument. Now you can use your new command to move files and folders to your system Trash:
$ ls
foo bar
$ tcn foo
$ ls
barNow the file is "gone," but only until you realize in a cold sweat that you still need it. At that point, you can rescue the file from your system Trash; be sure to tip the Bash and mv developers on the way out.
Note: If you need a more robust Trash command with better FreeDesktop compliance, see Trashy .
Simplify your Git workflow

Everyone has a unique workflow, but there are usually repetitive tasks no matter what. If you work with Git on a regular basis, then there's probably some sequence you find yourself repeating pretty frequently. Maybe you find yourself going back to the master branch and pulling the latest changes over and over again during the day, or maybe you find yourself creating tags and then pushing them to the remote, or maybe it's something else entirely.
No matter what Git incantation you've grown tired of typing, you may be able to alleviate some pain with a Bash alias. Largely thanks to its ability to pass arguments to hooks, Git has a rich set of introspective commands that save you from having to perform uncanny feats in Bash.
For instance, while you might struggle to locate, in Bash, a project's top-level directory (which, as far as Bash is concerned, is an entirely arbitrary designation, since the absolute top level to a computer is the root directory), Git knows its top level with a simple query. If you study up on Git hooks, you'll find yourself able to find out all kinds of information that Bash knows nothing about, but you can leverage that information with a Bash alias.
Here's an alias to find the top level of a Git project, no matter where in that project you are currently working, and then to change directory to it, change to the master branch, and perform a Git pull:
    alias startgit='cd `git rev-parse --show-toplevel` && git checkout master && git pull'

This kind of alias is by no means a universally useful alias, but it demonstrates how a relatively simple alias can eliminate a lot of laborious navigation, commands, and waiting for prompts.
A simpler, and probably more universal, alias returns you to the Git project's top level. This alias is useful because when you're working on a project, that project more or less becomes your "temporary home" directory. It should be as simple to go "home" as it is to go to your actual home, and here's an alias to do it:
    alias cg='cd `git rev-parse --show-toplevel`'

Now the command cg takes you to the top of your Git project, no matter how deep into its directory structure you have descended.
Change directories and view the contents at the same time

It was once (allegedly) proposed by a leading scientist that we could solve many of the planet's energy problems by harnessing the energy expended by geeks typing cd followed by ls. It's a common pattern, because generally when you change directories, you have the impulse or the need to see what's around. But "walking" your computer's directory tree doesn't have to be a start-and-stop process.
This one's cheating, because it's not an alias at all, but it's a great excuse to explore Bash functions. While aliases are great for quick substitutions, Bash allows you to add local functions in your .bashrc file (or a separate functions file that you load into .bashrc , just as you do your aliases file).
To keep things modular, create a new file called ~/.bash_functions and then have your .bashrc load it:
    if [ -e $HOME/.bash_functions ]; then
        source $HOME/.bash_functions
    fi

In the functions file, add this code:
    function cl() {
        DIR="$*";
        # if no DIR given, go home
        if [ $# -lt 1 ]; then
            DIR=$HOME;
        fi;
        builtin cd "${DIR}" && \
        # use your preferred ls command
        ls -F --color=auto
    }

Load the function into your Bash session and then try it out:
    $ source ~/.bash_functions
    $ cl Documents
    foo bar baz
    $ pwd
    /home/seth/Documents
    $ cl ..
    Desktop Documents Downloads
    [...]
    $ pwd
    /home/seth

Functions are much more flexible than aliases, but with that flexibility comes the responsibility for you to ensure that your code makes sense and does what you expect. Aliases are meant to be simple, so keep them easy, but useful. For serious modifications to how Bash behaves, use functions or custom shell scripts saved to a location in your PATH.
For the record, there are some clever hacks to implement the cd and ls sequence as an alias, so if you're patient enough, then the sky is the limit even using humble aliases.
Start aliasing and functioning

Customizing your environment is what makes Linux fun, and increasing your efficiency is what makes Linux life-changing. Get started with simple aliases, graduate to functions, and post your must-have aliases in the comments!
ACG on 31 Jul 2019:

One function I like a lot is a function that diffs a file and its backup. It goes something like:

    #!/usr/bin/env bash
    file="${1:?File not given}"

    if [[ ! -e "$file" || ! -e "$file"~ ]]; then
        echo "File doesn't exist or has no backup" 1>&2
        exit 1
    fi

    diff --color=always "$file"{~,} | less -r

I may have gotten the if wrong, but you get the idea. I'm typing this on my phone, away from home. Cheers

Seth Kenlon on 31 Jul 2019:

That's pretty slick! I like it. My backup tool of choice (rdiff-backup) handles these sorts of comparisons pretty well, so I tend to be confident in my backup files. That said, there's always the edge case, and this kind of function is a great solution for those. Thanks!

Kevin Cole on 13 Aug 2019:

A few of my "cannot-live-withouts" are regex based:

Decomment removes full-line comments and blank lines. For example, when looking at a "stock" /etc/httpd/whatever.conf file that has a gazillion lines in it,

    alias decomment='egrep -v "^[[:space:]]*((#|;|//).*)?$" '

will show you that only four lines in the file actually DO anything, and the gazillion minus four are comments. I use this ALL the time with config files, Python (and other languages) code, and god knows where else.

Then there's unprintables and expletives, which are both very similar:

    alias unprintable='grep --color="auto" -P -n "[\x00-\x1E]"'
    alias expletives='grep --color="auto" -P -n "[^\x00-\x7E]" '

The first shows which lines (with line numbers) in a file contain control characters, and the second shows which lines in a file contain anything "above" a RUBOUT, er, excuse me, I mean above ASCII 127. (I feel old.) ;-) Handy when, for example, someone gives you a program that they edited or created with LibreOffice, and oops... half of the quoted strings have "real" curly opening and closing quote marks instead of ASCII 0x22 "straight" quote mark delimiters... But there's actually a few curlies you want to keep, so a "nuke 'em all in one swell foop" approach won't work.

Seth Kenlon on 14 Aug 2019:

These are great!

Dan Jones on 13 Aug 2019:

Your `cl` function could be simplified, since `cd` without arguments already goes to home.

    function cl() {
        cd "$@" && \
        ls -F --color=auto
    }

Seth Kenlon on 14 Aug 2019:

Nice!

jkeener on 20 Aug 2019:

The first alias in my .bash_aliases file is always:

    alias realias='vim ~/.bash_aliases; source ~/.bash_aliases'

Replace vim with your favorite editor or $VISUAL.

bhuvana on 04 Oct 2019:

Thanks for this post! I have created a Github repo - https://github.com/bhuvana-guna/awesome-bash-shortcuts - with a motive to create an extended list of aliases/functions for various programs. As I am a newbie to terminal and linux, please do contribute to it with these and other super awesome utilities and help others easily access them.
Nov 08, 2019 | opensource.com
Your Linux terminal probably supports Unicode, so why not take advantage of that and add a seasonal touch to your prompt? 11 Dec 2018, Jason Baker (Red Hat)
Hello once again for another installment of the Linux command-line toys advent calendar. If this is your first visit to the series, you might be asking yourself what a command-line toy even is? Really, we're keeping it pretty open-ended: It's anything that's a fun diversion at the terminal, and we're giving bonus points for anything holiday-themed.
Maybe you've seen some of these before, maybe you haven't. Either way, we hope you have fun.
Today's toy is super-simple: It's your Bash prompt. Your Bash prompt? Yep! We've got a few more weeks of the holiday season left to stare at it, and even more weeks of winter here in the northern hemisphere, so why not have some fun with it.
Your Bash prompt currently might be a simple dollar sign ( $ ), or more likely, it's something a little longer. If you're not sure what makes up your Bash prompt right now, you can find it in an environment variable called $PS1. To see it, type:
    echo $PS1

For me, this returns:

    [\u@\h \W]\$

The \u, \h, and \W are special characters for username, hostname, and working directory. There are others you can use as well; for help building out your Bash prompt, you can use EzPrompt, an online generator of PS1 configurations that includes lots of options including date and time, Git status, and more.
You may have other variables that make up your Bash prompt set as well; $PS2 for me contains the closing brace of my command prompt. See this article for more information.
To change your prompt, simply set the environment variable in your terminal like this:
    $ PS1='\u is cold: '
    jehb is cold:

To set it permanently, add the same code to your /etc/bashrc using your favorite text editor.
So what does this have to do with winterization? Well, chances are that on a modern machine your terminal supports Unicode, so you're not limited to the standard ASCII character set. You can use any emoji that's a part of the Unicode specification, including a snowflake ❄, a snowman ☃, or a pair of skis 🎿. You've got plenty of wintery options to choose from.
🎄 Christmas Tree
🧥 Coat
🦌 Deer
🧤 Gloves
🤶 Mrs. Claus
🎅 Santa Claus
🧣 Scarf
🎿 Skis
🏂 Snowboarder
❄ Snowflake
☃ Snowman
⛄ Snowman Without Snow
🎁 Wrapped Gift

Pick your favorite, and enjoy some winter cheer. Fun fact: modern filesystems also support Unicode characters in their filenames, meaning you can technically name your next program "❄❄❄❄❄.py". That said, please don't.
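For example, with the snowman (any of the glyphs above works the same way; the output line is hypothetical):

    $ PS1='\u@\h ☃ \$ '
    jehb@hostname ☃ $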
Do you have a favorite command-line toy that you think I ought to include? The calendar for this series is mostly filled out but I've got a few spots left. Let me know in the comments below, and I'll check it out. If there's space, I'll try to include it. If not, but I get some good submissions, I'll do a round-up of honorable mentions at the end.
Jun 29, 2014 | access.redhat.com
- The shell prompt is controlled via the PS environment variables.

    PS1 - The value of this parameter is expanded and used as the primary prompt string. The default value is \u@\h \W\\$ .
    PS2 - The value of this parameter is expanded as with PS1 and used as the secondary prompt string. The default is > .
    PS3 - The value of this parameter is used as the prompt for the select command.
    PS4 - The value of this parameter is expanded as with PS1 and the value is printed before each command bash displays during an execution trace. The first character of PS4 is replicated multiple times, as necessary, to indicate multiple levels of indirection. The default is + .
PS1 is the primary prompt variable, which holds the \u@\h \W\\$ special bash characters. This is the default structure of the bash prompt and is displayed every time a user logs in using a terminal. These default values are set in the /etc/bashrc file.

- The special characters in the default prompt are as follows:

    \u = username
    \h = hostname
    \W = current working directory
- This command will show the current value:

    # echo $PS1

- This can be modified by changing the PS1 variable:

    # PS1='[[prod]\u@\h \W]\$'

- The modified shell prompt will look like:

    [[prod]root@hostname ~]#
- In order to make these settings permanent, edit the /etc/bashrc file. Find this line:

    [ "$PS1" = "\\s-\\v\\\$ " ] && PS1="[\u@\h \W]\\$ "

And change it as needed:

    [ "$PS1" = "\\s-\\v\\\$ " ] && PS1="[[prod]\u@\h \W]\\$ "
This solution is part of Red Hat's fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.
Mike Willis (6 October 2016):

This solution has simply "Red Hat Enterprise Linux" in the Environment section, implying it applies to all versions of Red Hat Enterprise Linux.
Editing /etc/bashrc is against the advice of the comments in /etc/bashrc on Red Hat Enterprise Linux 7, which say:

    # It's NOT a good idea to change this file unless you know what you
    # are doing. It's much better to create a custom.sh shell script in
    # /etc/profile.d/ to make custom changes to your environment, as this
    # will prevent the need for merging in future updates.

On RHEL 7, instead of the solution suggested above, create a /etc/profile.d/custom.sh which contains:

    PS1="[[prod]\u@\h \W]\\$ "

Mike Chanslor (27 March 2019):

Hello Red Hat community! I also found this useful:
    Special prompt variable characters:
    \d    The date, in "Weekday Month Date" format (e.g., "Tue May 26").
    \h    The hostname, up to the first . (e.g. deckard)
    \H    The hostname. (e.g. deckard.SS64.com)
    \j    The number of jobs currently managed by the shell.
    \l    The basename of the shell's terminal device name.
    \s    The name of the shell, the basename of $0 (the portion following the final slash).
    \t    The time, in 24-hour HH:MM:SS format.
    \T    The time, in 12-hour HH:MM:SS format.
    \@    The time, in 12-hour am/pm format.
    \u    The username of the current user.
    \v    The version of Bash (e.g., 2.00)
    \V    The release of Bash, version + patchlevel (e.g., 2.00.0)
    \w    The current working directory.
    \W    The basename of $PWD.
    \!    The history number of this command.
    \#    The command number of this command.
    \$    If you are not root, inserts a "$"; if you are root, you get a "#" (root uid = 0)
    \nnn  The character whose ASCII code is the octal value nnn.
    \n    A newline.
    \r    A carriage return.
    \e    An escape character (typically a color code).
    \a    A bell character.
    \\    A backslash.
    \[    Begin a sequence of non-printing characters (like color escape sequences). This allows bash to calculate word wrapping correctly.
    \]    End a sequence of non-printing characters.

Using single quotes instead of double quotes when exporting your PS variables is recommended; it makes the prompt a tiny bit faster to evaluate, plus you can then do an echo $PS1 to see the current prompt settings.
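As a quick illustration combining the \e, \[, and \] entries from the table above (a sketch, not from the knowledge base; 32 is ANSI green, adjust to taste):

    # Green user@host, default-color basename of $PWD.
    # \[ and \] wrap the zero-width color codes so bash computes line length correctly.
    PS1='\[\e[32m\]\u@\h\[\e[0m\] \W \$ '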
Nov 08, 2019 | stackoverflow.com
How to escape unicode characters in bash prompt correctly
Andy Ray, Aug 18, 2011:

I have a specific method for my bash prompt, let's say it looks like this:

    CHAR="༇ "
    my_function="
        prompt=\" \[\$CHAR\]\"
        echo -e \$prompt"
    PS1="\$(${my_function}) \$ "

To explain the above, I'm building my bash prompt by executing a function stored in a string, which was a decision made as the result of this question. Let's pretend like it works fine, because it does, except when unicode characters get involved.
I am trying to find the proper way to escape a unicode character, because right now it messes with the bash line length. An easy way to test if it's broken is to type a long command, execute it, press CTRL-R and type to find it, and then pressing CTRL-A CTRL-E to jump to the beginning / end of the line. If the text gets garbled then it's not working.
I have tried several things to properly escape the unicode character in the function string, but nothing seems to be working.
Special characters like this work:

    COLOR_BLUE=$(tput sgr0 && tput setaf 6)
    my_function="
        prompt=\" \\[\$COLOR_BLUE\\] \"
        echo -e \$prompt"

Which is the main reason I made the prompt a function string. That escape sequence does NOT mess with the line length; it's just the unicode character.
tripleee, Aug 23, 2011:

The \[...\] sequence says to ignore this part of the string completely, which is useful when your prompt contains a zero-length sequence, such as a control sequence which changes the text color or the title bar, say. But in this case, you are printing a character, so the length of it is not zero. Perhaps you could work around this by, say, using a no-op escape sequence to fool Bash into calculating the correct line length, but it sounds like that way lies madness.

The correct solution would be for the line length calculations in Bash to correctly grok UTF-8 (or whichever Unicode encoding it is that you are using). Uhm, have you tried without the \[...\] sequence?

Edit: The following implements the solution I propose in the comments below. The cursor position is saved, then two spaces are printed, outside of \[...\], then the cursor position is restored, and the Unicode character is printed on top of the two spaces. This assumes a fixed font width, with double width for the Unicode character.

    PS1='\['"`tput sc`"'\]  \['"`tput rc`"'༇ \] \$ '

At least in the OSX Terminal, Bash 3.2.17(1)-release, this passes cursory [sic] testing.

In the interest of transparency and legibility, I have ignored the requirement to have the prompt's functionality inside a function, and the color coding; this just changes the prompt to the character, space, dollar prompt, space. Adapt to suit your somewhat more complex needs.
tripleee ,Aug 23, 2011 at 7:01
@tripleee wins it, posting the final solution here because it's a pain to post code in comments:CHAR="༇" my_function=" prompt=\" \\[`tput sc`\\] \\[`tput rc`\\]\\[\$CHAR\\] \" echo -e \$prompt" PS1="\$(${my_function}) \$ "The trick as pointed out in @tripleee's link is the use of the commands
tput sc
andtput rc
which save and then restore the cursor position. The code is effectively saving the cursor position, printing two spaces for width, restoring the cursor position to before the spaces, then printing the special character so that the width of the line is from the two spaces, not the character.> ,
(Not the answer to your problem, but some pointers and general experience related to your issue.)

I see the behaviour you describe about command-line editing (Ctrl-R, ... Ctrl-A Ctrl-E ...) all the time, even without unicode chars. At one work-site, I spent the time to figure out the difference between the terminal's interpretation of the TERM setting vs. the TERM definition used by the OS (well, stty I suppose).

Now, when I have this problem, I escape out of my current attempt to edit the line, bring the line up again, and then immediately go to 'vi' mode, which opens the vi editor (press just the 'v' char, right?). All the ease of use of a full-fledged session of vi; why go with less ;-)?

Looking again at your problem description, when you say

my_function=" prompt=\" \[\$CHAR\]\" echo -e \$prompt"

that is just a string definition, right? I'm assuming you're simplifying the problem definition by assuming this is the output of your my_function. It seems very likely that in the steps of creating the function definition, calling the function AND using the values returned, there are a lot of opportunities for shell-quoting to not work the way you want it to. If you edit your question to include the my_function definition and its complete use (reducing your function to just what is causing the problem), it may be easier for others to help with this too.

Finally, do you use set -vx regularly? It can help show the how/when/what of variable expansions; you may find something there. Failing all of those, look at O'Reilly's termcap & terminfo. You may need to look at the man page for your local system's stty and related commands, AND you may do well to look for user groups specific to your Linux system (I'm assuming you use a Linux variant). I hope this helps.
Oct 23, 2019 | www.ostechnix.com
Let us take the following one-liner Linux command as an example.
$ find . -size +10M -type f -print0 | xargs -0 ls -Ssh | sort -z

For those wondering, the above command will find and list files bigger than 10 MB in the current directory and sort them by size. I admit that I couldn't remember this command, and I guess some of you can't remember it either. This is why we are going to apply a tag to such commands.

To apply a tag, just type the command and add the comment (i.e. the tag) at the end of the command, as shown below.

$ find . -size +10M -type f -print0 | xargs -0 ls -Ssh | sort -z #ListFilesBiggerThanXSize

Here, #ListFilesBiggerThanXSize is the tag name for the above command. Make sure you have given a space between the command and the tag name. Also, keep the tag name as simple, short and clear as possible, so it is easy to remember later. Otherwise, you may need another tool to recall the tags.
To run it again, simply use the tag name like below.
$ !? #ListFilesBiggerThanXSize

Here, the ! (exclamation mark) and ? (question mark) operators are used to fetch and run the command which we tagged earlier from the BASH history.
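For reference, this builds on bash's ordinary history expansion: !?string? re-runs the most recent command containing string, so the tag can also be recalled directly (a minimal sketch, assuming the tagged command is still in the history):

$ !?ListFilesBiggerThanXSize?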
Sep 11, 2019 | stackoverflow.com
Jeff ,May 8 at 18:30
Given a filename in the form someletters_12345_moreleters.ext, I want to extract the 5 digits and put them into a variable.

So to emphasize the point: I have a filename with x number of characters, then a five-digit sequence surrounded by a single underscore on either side, then another set of x number of characters. I want to take the 5-digit number and put that into a variable.
I am very interested in the number of different ways that this can be accomplished.
Berek Bryan ,Jan 24, 2017 at 9:30
Use cut:

echo 'someletters_12345_moreleters.ext' | cut -d'_' -f 2

More generic:

INPUT='someletters_12345_moreleters.ext'
SUBSTRING=$(echo $INPUT | cut -d'_' -f 2)
echo $SUBSTRING

JB. ,Jan 6, 2015 at 10:13
If x is constant, the following parameter expansion performs substring extraction:

b=${a:12:5}

where 12 is the offset (zero-based) and 5 is the length.

If the underscores around the digits are the only ones in the input, you can strip off the prefix and suffix (respectively) in two steps:

tmp=${a#*_}   # remove prefix ending in "_"
b=${tmp%_*}   # remove suffix starting with "_"

If there are other underscores, it's probably feasible anyway, albeit more tricky. If anyone knows how to perform both expansions in a single expression, I'd like to know too.
Both solutions presented are pure bash, with no process spawning involved, hence very fast.
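One way to get it in a single expression is bash's built-in regex match, which a later answer in this thread also uses; a minimal sketch:

[[ $a =~ _([0-9]{5})_ ]] && b=${BASH_REMATCH[1]}

This matches the first five-digit run surrounded by underscores and captures it into BASH_REMATCH[1].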
A Sahra ,Mar 16, 2017 at 6:27
Generic solution where the number can be anywhere in the filename, using the first of such sequences:

number=$(echo $filename | egrep -o '[[:digit:]]{5}' | head -n1)

Another solution, to extract exactly a part of a variable:

number=${filename:offset:length}

If your filename always has the format stuff_digits_..., you can use awk:

number=$(echo $filename | awk -F _ '{ print $2 }')

Yet another solution, to remove everything except digits:

number=$(echo $filename | tr -cd '[[:digit:]]')

sshow ,Jul 27, 2017 at 17:22
In case someone wants more rigorous information, you can also search for it in man bash like this:

$ man bash [press return key]
/substring [press return key]
[press "n" key]
[press "n" key]
[press "n" key]
[press "n" key]

Result:

${parameter:offset}
${parameter:offset:length}

Substring Expansion. Expands to up to length characters of parameter starting at the character specified by offset. If length is omitted, expands to the substring of parameter starting at the character specified by offset. length and offset are arithmetic expressions (see ARITHMETIC EVALUATION below). If offset evaluates to a number less than zero, the value is used as an offset from the end of the value of parameter. Arithmetic expressions starting with a - must be separated by whitespace from the preceding : to be distinguished from the Use Default Values expansion. If length evaluates to a number less than zero, and parameter is not @ and not an indexed or associative array, it is interpreted as an offset from the end of the value of parameter rather than a number of characters, and the expansion is the characters between the two offsets. If parameter is @, the result is length positional parameters beginning at offset. If parameter is an indexed array name subscripted by @ or *, the result is the length members of the array beginning with ${parameter[offset]}. A negative offset is taken relative to one greater than the maximum index of the specified array. Substring expansion applied to an associative array produces undefined results. Note that a negative offset must be separated from the colon by at least one space to avoid being confused with the :- expansion. Substring indexing is zero-based unless the positional parameters are used, in which case the indexing starts at 1 by default. If offset is 0, and the positional parameters are used, $0 is prefixed to the list.
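To make those rules concrete, a couple of quick examples (a sketch using the thread's running filename):

s="someletters_12345_moreleters.ext"
echo "${s:12:5}"    # 12345  (offset 12, length 5)
echo "${s: -7:3}"   # ers    (note the space before the negative offset)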
Aleksandr Levchuk ,Aug 29, 2011 at 5:51
Building on jor's answer (which doesn't work for me):

substring=$(expr "$filename" : '.*_\([^_]*\)_.*')

kayn ,Oct 5, 2015 at 8:48
I'm surprised this pure bash solution didn't come up:

a="someletters_12345_moreleters.ext"
IFS="_"
set $a
echo $2   # prints 12345

You probably want to reset IFS to the value it had before, or unset IFS afterwards!
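Since changing IFS affects all later word splitting, a safer variant (a minimal sketch) saves and restores it:

a="someletters_12345_moreleters.ext"
oldIFS=$IFS
IFS="_"
set -- $a       # split into positional parameters on "_"
echo "$2"       # prints 12345
IFS=$oldIFS     # restore the original IFS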
zebediah49 ,Jun 4 at 17:31
Here's how I'd do it:

FN=someletters_12345_moreleters.ext
[[ ${FN} =~ _([[:digit:]]{5})_ ]] && NUM=${BASH_REMATCH[1]}

Note: the above is a regular expression and is restricted to your specific scenario of five digits surrounded by underscores. Change the regular expression if you need different matching.
TranslucentCloud ,Jun 16, 2014 at 13:27
Following the requirements

I have a filename with x number of characters then a five digit sequence surrounded by a single underscore on either side then another set of x number of characters. I want to take the 5 digit number and put that into a variable.

I found some grep ways that may be useful:

$ echo "someletters_12345_moreleters.ext" | grep -Eo "[[:digit:]]+"
12345

or better:

$ echo "someletters_12345_moreleters.ext" | grep -Eo "[[:digit:]]{5}"
12345

And then with -Po syntax:

$ echo "someletters_12345_moreleters.ext" | grep -Po '(?<=_)\d+'
12345

Or if you want to make it fit exactly 5 characters:
$ echo "someletters_12345_moreleters.ext" | grep -Po '(?<=_)\d{5}' 12345Finally, to make it be stored in a variable it is just need to use the
var=$(command)
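For instance, combining that with the grep shown above (a minimal sketch):

number=$(echo "someletters_12345_moreleters.ext" | grep -Eo '[[:digit:]]{5}' | head -n1)
echo "$number"   # 12345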
Darron ,Jan 9, 2009 at 16:13
Without any sub-processes you can:

shopt -s extglob
front=${input%%_+([a-zA-Z]).*}
digits=${front##+([a-zA-Z])_}

A very small variant of this will also work in ksh93.
user2350426 ,Aug 5, 2014 at 8:11
If we focus on the concept of "a run of (one or several) digits", we could use several external tools to extract the numbers.

We could quite easily erase all other characters, with either sed or tr:

name='someletters_12345_moreleters.ext'
echo $name | sed 's/[^0-9]*//g'    # 12345
echo $name | tr -c -d 0-9          # 12345

But if $name contains several runs of numbers, the above will fail. If "name=someletters_12345_moreleters_323_end.ext", then:

echo $name | sed 's/[^0-9]*//g'    # 12345323
echo $name | tr -c -d 0-9          # 12345323

We need to use regular expressions (regex). To select only the first run (12345, not 323) in sed and perl:

echo $name | sed 's/[^0-9]*\([0-9]\{1,\}\).*$/\1/'
perl -e 'my $name='$name';my ($num)=$name=~/(\d+)/;print "$num\n";'

But we could as well do it directly in bash (1):

regex=[^0-9]*([0-9]{1,}).*$; \
[[ $name =~ $regex ]] && echo ${BASH_REMATCH[1]}

This allows us to extract the FIRST run of digits of any length, surrounded by any other text/characters. Note: regex=[^0-9]*([0-9]{5,5}).*$; will match only exactly 5-digit runs. :-)

(1): faster than calling an external tool for each short text. Not faster than doing all processing inside sed or awk for large files.
codist ,May 6, 2011 at 12:50
Here's a prefix-suffix solution (similar to the solutions given by JB and Darron) that matches the first block of digits and does not depend on the surrounding underscores:

str='someletters_12345_morele34ters.ext'
s1="${str#"${str%%[[:digit:]]*}"}"   # strip off non-digit prefix from str
s2="${s1%%[^[:digit:]]*}"            # strip off non-digit suffix from s1
echo "$s2"                           # 12345

Campa ,Oct 21, 2016 at 8:12
I love sed's capability to deal with regex groups:

> var="someletters_12345_moreletters.ext"
> digits=$( echo $var | sed "s/.*_\([0-9]\+\).*/\1/p" -n )
> echo $digits
12345

A slightly more general option would be not to assume that you have an underscore _ marking the start of your digit sequence, and hence for instance to strip off all non-numbers you get before your sequence: s/[^0-9]\+\([0-9]\+\).*/\1/p .

> man sed | grep s/regexp/replacement -A 2
s/regexp/replacement/
    Attempt to match regexp against the pattern space. If successful, replace that portion matched with replacement. The replacement may contain the special character & to refer to that portion of the pattern space which matched, and the special escapes \1 through \9 to refer to the corresponding matching sub-expressions in the regexp.
More on this, in case you're not too confident with regexps:

- s is for _s_ubstitute
- [0-9]+ matches 1+ digits
- \1 links to group n.1 of the regex output (group 0 is the whole match, group 1 is the match within parentheses in this case)
- the p flag is for _p_rinting
- all the escapes \ are there to make sed's regexp processing work

Dan Dascalescu ,May 8 at 18:28
Given test.txt is a file containing "ABCDEFGHIJKLMNOPQRSTUVWXYZ":

cut -b19-20 test.txt > test1.txt   # this will extract chars 19 & 20: "ST"
while read -r; do
    x=$REPLY
done < test1.txt
echo $x   # ST

Alex Raj Kaliamoorthy ,Jul 29, 2016 at 7:41
My answer gives you more control over what you want out of your string. Here is the code showing how you can extract 12345 out of your string:

str="someletters_12345_moreleters.ext"
str=${str#*_}
str=${str%_more*}
echo $str

This is more useful if you want to extract something delimited by arbitrary characters like abc, or special characters like _ or - . For example, if your string is like this and you want everything that is after someletters_ and before _moreleters.ext:

str="someletters_123-45-24a&13b-1_moreleters.ext"

With my code you can say exactly what you want. Explanation:

#*   removes the preceding string including the matching key. Here the key we mentioned is _ .
%    removes the following string including the matching key. Here the key we mentioned is '_more*' .

Do some experiments yourself and you will find this interesting.
Dan Dascalescu ,May 8 at 18:27
Similar to substr('abcdefg', 2-1, 3) in php:

echo 'abcdefg' | tail -c +2 | head -c 3
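The pure-bash equivalent (a small sketch) uses the same substring expansion covered earlier, avoiding the two extra processes:

s='abcdefg'
echo "${s:1:3}"   # bcd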
olibre ,Nov 25, 2015 at 14:50

Ok, here goes pure Parameter Substitution with an empty string. The caveat is that I have defined someletters and moreletters as letters only; if they are alphanumeric, this will not work as it is. (Note that the +() patterns require extended globbing.)

shopt -s extglob   # needed for the +(...) patterns below
filename=someletters_12345_moreletters.ext
substring=${filename//@(+([a-z])_|_+([a-z]).*)}
echo $substring    # 12345

gniourf_gniourf ,Jun 4 at 17:33
There's also the expr command (an external utility, not actually a bash builtin):

INPUT="someletters_12345_moreleters.ext"
SUBSTRING=`expr match "$INPUT" '.*_\([[:digit:]]*\)_.*' `
echo $SUBSTRING

russell ,Aug 1, 2013 at 8:12
A little late, but I just ran across this problem and found the following:

host:/tmp$ asd=someletters_12345_moreleters.ext
host:/tmp$ echo `expr $asd : '.*_\(.*\)_'`
12345
host:/tmp$

I used it to get millisecond resolution on an embedded system that does not have %N for date:

set `grep "now at" /proc/timer_list`
nano=$3
fraction=`expr $nano : '.*\(...\)......'`
$debug nano is $nano, fraction is $fraction

,Aug 5, 2018 at 17:13
A bash solution:

IFS="_" read -r x digs x <<<'someletters_12345_moreleters.ext'

This will clobber a variable called x. The var x could be changed to the var _ :

input='someletters_12345_moreleters.ext'
IFS="_" read -r _ digs _ <<<"$input"
Sep 08, 2019 | stackoverflow.com
Mark Byers ,Apr 25, 2010 at 19:20
Can anyone recommend a safe solution to recursively replace spaces with underscores in file and directory names, starting from a given root directory? For example:

$ tree
.
|-- a dir
|   `-- file with spaces.txt
`-- b dir
    |-- another file with spaces.txt
    `-- yet another file with spaces.pdf

becomes:

$ tree
.
|-- a_dir
|   `-- file_with_spaces.txt
`-- b_dir
    |-- another_file_with_spaces.txt
    `-- yet_another_file_with_spaces.pdf

Jürgen Hötzel ,Nov 4, 2015 at 3:03
Use rename (aka prename), which is a Perl script that may be on your system already. Do it in two steps:

find -name "* *" -type d | rename 's/ /_/g'    # do the directories first
find -name "* *" -type f | rename 's/ /_/g'

Based on Jürgen's answer, and able to handle multiple layers of files and directories in a single bound using the "Revision 1.5 1998/12/18 16:16:31 rmb1" version of /usr/bin/rename (a Perl script):

find /tmp/ -depth -name "* *" -execdir rename 's/ /_/g' "{}" \;

oevna ,Jan 1, 2016 at 8:25
I use:

for f in *\ *; do mv "$f" "${f// /_}"; done

Though it's not recursive, it's quite fast and simple. I'm sure someone here could update it to be recursive.

The ${f// /_} part utilizes bash's parameter expansion mechanism to replace a pattern within a parameter with a supplied string. The relevant syntax is ${parameter/pattern/string}. See: https://www.gnu.org/software/bash/manual/html_node/Shell-Parameter-Expansion.html or http://wiki.bash-hackers.org/syntax/pe .

armandino ,Dec 3, 2013 at 20:51
find . -depth -name '* *' \
    | while IFS= read -r f ; do mv -i "$f" "$(dirname "$f")/$(basename "$f" | tr ' ' _)" ; done

I failed to get it right at first, because I didn't think of directories.
Edmund Elmer ,Jul 3 at 7:12
You can use detox by Doug Harple:

detox -r <folder>

Dennis Williamson ,Mar 22, 2012 at 20:33
A find/rename solution. rename is part of util-linux.

You need to descend depth first, because a whitespace filename can be part of a whitespace directory:

find /tmp/ -depth -name "* *" -execdir rename " " "_" "{}" ";"

armandino ,Apr 26, 2010 at 11:49
bash 4.0:

#!/bin/bash
shopt -s globstar
for file in **/*\ *
do
    mv "$file" "${file// /_}"
done

Itamar ,Jan 31, 2013 at 21:27
You can use this:

find . -name '* *' | while read fname
do
    new_fname=`echo $fname | tr " " "_"`
    if [ -e $new_fname ]
    then
        echo "File $new_fname already exists. Not replacing $fname"
    else
        echo "Creating new file $new_fname to replace $fname"
        mv "$fname" $new_fname
    fi
done

yabt ,Apr 26, 2010 at 14:54
Here's a (quite verbose) find -exec solution which writes "file already exists" warnings to stderr:

function trspace() {
    declare dir name bname dname newname replace_char
    [ $# -lt 1 -o $# -gt 2 ] && { echo "usage: trspace dir char"; return 1; }
    dir="${1}"
    replace_char="${2:-_}"
    find "${dir}" -xdev -depth -name $'*[ \t\r\n\v\f]*' -exec bash -c '
        for ((i=1; i<=$#; i++)); do
            name="${@:i:1}"
            dname="${name%/*}"
            bname="${name##*/}"
            newname="${dname}/${bname//[[:space:]]/${0}}"
            if [[ -e "${newname}" ]]; then
                echo "Warning: file already exists: ${newname}" 1>&2
            else
                mv "${name}" "${newname}"
            fi
        done
    ' "${replace_char}" '{}' +
}

trspace rootdir _

degi ,Aug 8, 2011 at 9:10
This one does a little bit more. I use it to rename my downloaded torrents (no special characters (non-ASCII), spaces, multiple dots, etc.).

#!/usr/bin/perl
&rena(`find . -type d`);
&rena(`find . -type f`);
sub rena {
    ($elems)=@_;
    @t=split /\n/,$elems;
    for $e (@t) {
        $_=$e;
        # remove ./ of find
        s/^\.\///;
        # non ascii transliterate
        tr [\200-\377][_];
        tr [\000-\40][_];
        # special characters we do not want in paths
        s/[ \-\,\;\?\+\'\"\!\[\]\(\)\@\#]/_/g;
        # multiple dots except for extension
        while (/\..*\./) {
            s/\./_/;
        }
        # only one _ consecutive
        s/_+/_/g;
        next if ($_ eq $e ) or ("./$_" eq $e);
        print "$e -> $_\n";
        rename ($e,$_);
    }
}

Junyeop Lee ,Apr 10, 2018 at 9:44
Recursive version of Naidim's answer:

find . -name "* *" | awk '{ print length, $0 }' | sort -nr -s | cut -d" " -f2- | while read f; do base=$(basename "$f"); newbase="${base// /_}"; mv "$(dirname "$f")/$(basename "$f")" "$(dirname "$f")/$newbase"; done

ghoti ,Dec 5, 2016 at 21:16
I found this script around; it may be interesting :)

IFS=$'\n'
for f in `find .`; do
    file=$(echo $f | tr [:blank:] '_')
    [ -e $f ] && [ ! -e $file ] && mv "$f" $file
done
unset IFS

ghoti ,Dec 5, 2016 at 21:17
Here's a reasonably sized bash script solution:

#!/bin/bash
(
IFS=$'\n'
for y in $(ls $1)
do
    mv $1/`echo $y | sed 's/ /\\ /g'` $1/`echo "$y" | sed 's/ /_/g'`
done
)

user1060059 ,Nov 22, 2011 at 15:15
This only finds files inside the current directory and renames them. I have this aliased.

find ./ -name "* *" -type f -d 1 | perl -ple '$file = $_; $file =~ s/\s+/_/g; rename($_, $file);'
Hongtao ,Sep 26, 2014 at 19:30
I just made one for my own purposes. You can use it as a reference.

#!/bin/bash
cd /vzwhome/c0cheh1/dev_source/UB_14_8
for file in *
do
    echo $file
    cd "/vzwhome/c0cheh1/dev_source/UB_14_8/$file/Configuration/$file"
    echo "==> `pwd`"
    for subfile in *\ *; do
        [ -d "$subfile" ] && ( mv "$subfile" "$(echo $subfile | sed -e 's/ /_/g')" )
    done
    ls
    cd /vzwhome/c0cheh1/dev_source/UB_14_8
done

Marcos Jean Sampaio ,Dec 5, 2016 at 20:56
For files in a folder named /files:

for i in `IFS="";find /files -name *\ *`
do
    echo $i
done > /tmp/list

while read line
do
    mv "$line" `echo $line | sed 's/ /_/g'`
done < /tmp/list

rm /tmp/list

Muhammad Annaqeeb ,Sep 4, 2017 at 11:03
For those struggling through this using macOS, first install all the tools:

brew install tree findutils rename

Then, when you need to rename, make an alias for GNU find (gfind) as find, and run the code of @Michel Krelin:

alias find=gfind
find . -depth -name '* *' \
    | while IFS= read -r f ; do mv -i "$f" "$(dirname "$f")/$(basename "$f" | tr ' ' _)" ; done
Sep 05, 2019 | linuxconfig.org
05 September 2019
... ... ...

How to use other Bash options

The Bash options for debugging are turned off by default, but once they are turned on by using the set command, they stay on until explicitly turned off. If you are not sure which options are enabled, you can examine the $- variable to see the current state of all of them.

$ echo $-
himBHs
$ set -xv && echo $-
himvxBHs

There is another useful switch we can use to help us find variables referenced without having any value set. This is the -u switch, and just like -x and -v it can also be used on the command line, as we see in the following example:

[Figure: setting the -u option at the command line]

We mistakenly assigned a value of 7 to the variable called "level", then tried to echo a variable named "score", which simply resulted in printing nothing at all to the screen. Absolutely no debug information was given. Setting our -u switch allows us to see a specific error message, "score: unbound variable", that indicates exactly what went wrong.

We can use those options in short Bash scripts to give us debug information that identifies problems which do not otherwise trigger feedback from the Bash interpreter. Let's walk through a couple of examples.
#!/bin/bash read -p "Path to be added: " $path if [ "$path" = "/home/mike/bin" ]; then echo $path >> $PATH echo "new path: $PATH" else echo "did not modify PATH" fi<img src=https://linuxconfig.org/images/03-how-to-debug-bash-scripts.png alt="results from addpath script" width=1200 height=417 /> Usingx
option when running your Bash scriptIn the example above we run the addpath script normally and it simply does not modify our
PATH
. It does not give us any indication of why or clues to mistakes made. Running it again using the-x
option clearly shows us that the left side of our comparison is an empty string.$path
is an empty string because we accidentally put a dollar sign in front of "path" in our read statement. Sometimes we look right at a mistake like this and it doesn't look wrong until we get a clue and think, "Why is$path
evaluated to an empty string?"Looking this next example, we also get no indication of an error from the interpreter. We only get one value printed per line instead of two. This is not an error that will halt execution of the script, so we're left to simply wonder without being given any clues. Using the
-u
switch,we immediately get a notification that our variablej
is not bound to a value. So these are real time savers when we make mistakes that do not result in actual errors from the Bash interpreter's point of view.#!/bin/bash for i in 1 2 3 do echo $i $j done<img src=https://linuxconfig.org/images/04-how-to-debug-bash-scripts.png alt="results from count.sh script" width=1200 height=291 /> Usingu
option running your script from the command lineNow surely you are thinking that sounds fine, but we seldom need help debugging mistakes made in one-liners at the command line or in short scripts like these. We typically struggle with debugging when we deal with longer and more complicated scripts, and we rarely need to set these options and leave them set while we run multiple scripts. Setting
-xv
options and then running a more complex script will often add confusion by doubling or tripling the amount of output generated.Fortunately we can use these options in a more precise way by placing them inside our scripts. Instead of explicitly invoking a Bash shell with an option from the command line, we can set an option by adding it to the shebang line instead.
#!/bin/bash -xThis will set the
-x
option for the entire file or until it is unset during the script execution, allowing you to simply run the script by typing the filename instead of passing it to Bash as a parameter. A long script or one that has a lot of output will still become unwieldy using this technique however, so let's look at a more specific way to use options.
For a more targeted approach, surround only the suspicious blocks of code with the options you want. This approach is great for scripts that generate menus or detailed output, and it is accomplished by using the set keyword with plus or minus once again.
#!/bin/bash read -p "Path to be added: " $path set -xv if [ "$path" = "/home/mike/bin" ]; then echo $path >> $PATH echo "new path: $PATH" else echo "did not modify PATH" fi set +xv<img src=https://linuxconfig.org/images/05-how-to-debug-bash-scripts.png alt="results from addpath script" width=1200 height=469 /> Wrapping options around a block of code in your scriptWe surrounded only the blocks of code we suspect in order to reduce the output, making our task easier in the process. Notice we turn on our options only for the code block containing our if-then-else statement, then turn off the option(s) at the end of the suspect block. We can turn these options on and off multiple times in a single script if we can't narrow down the suspicious areas, or if we want to evaluate the state of variables at various points as we progress through the script. There is no need to turn off an option If we want it to continue for the remainder of the script execution.
For completeness' sake we should also mention that there are debuggers written by third parties that will allow us to step through the code execution line by line. You might want to investigate these tools, but most people find that they are not actually needed.
As seasoned programmers will suggest, if your code is too complex to isolate suspicious blocks with these options then the real problem is that the code should be refactored. Overly complex code means bugs can be difficult to detect and maintenance can be time consuming and costly.
One final thing to mention regarding Bash debugging options is that a file globbing option also exists, set with -f. Setting this option will turn off globbing (expansion of wildcards to generate file names) while it is enabled. This -f option can be a switch used at the command line with bash, after the shebang in a file, or, as in this example, to surround a block of code.

#!/bin/bash
echo "ignore file globbing option turned off"
ls *
echo "ignore file globbing option set"
set -f
ls *
set +f

[Figure: using the -f option to turn off file globbing]

How to use trap to help debug

There are more involved techniques worth considering if your scripts are complicated, including using an assert function as mentioned earlier. One such method to keep in mind is the use of trap. Shell scripts allow us to trap signals and do something at that point.
A simple but useful example you can use in your Bash scripts is to trap on EXIT.

#!/bin/bash
trap 'echo score is $score, status is $status' EXIT

if [ -z "$1" ]; then
    status="default"
else
    status="$1"
fi

score=0
if [ "${USER}" = 'superman' ]; then
    score=99
elif [ $# -gt 1 ]; then
    score=$2
fi

[Figure: using trap EXIT to help debug your script]
As you can see, just dumping the current values of variables to the screen can be useful to show where your logic is failing. The EXIT signal obviously does not need an explicit exit statement to be generated; in this case the echo statement is executed when the end of the script is reached.

Another useful trap to use with Bash scripts is DEBUG. This happens after every statement, so it can be used as a brute-force way to show the values of variables at each step in the script execution.

#!/bin/bash
trap 'echo "line ${LINENO}: score is $score"' DEBUG

score=0
if [ "${USER}" = "mike" ]; then
    let "score += 1"
fi
let "score += 1"
if [ "$1" = "7" ]; then
    score=7
fi
exit 0

[Figure: using trap DEBUG to help debug your script]

Conclusion

When you notice your Bash script not behaving as expected and the reason is not clear, consider what information would be useful to help you identify the cause, then use the most comfortable tools available to help you pinpoint the issue. The xtrace option -x is easy to use and probably the most useful of the options presented here, so consider trying it out next time you're faced with a script that's not doing what you thought it would.
Jun 30, 2019 | www.putorius.net
If you want to match the pattern regardless of its case (capital letters or lowercase letters), you can set the nocasematch shell option with the shopt builtin. You can do this as the first line of your script. Since the script will run in a child shell, it won't affect your normal interactive environment.
#!/bin/bash
shopt -s nocasematch
read -p "Name a Star Trek character: " CHAR
case $CHAR in
    "Seven of Nine" | Neelix | Chokotay | Tuvok | Janeway )
        echo "$CHAR was in Star Trek Voyager"
        ;;&
    Archer | Phlox | Tpol | Tucker )
        echo "$CHAR was in Star Trek Enterprise"
        ;;&
    Odo | Sisko | Dax | Worf | Quark )
        echo "$CHAR was in Star Trek Deep Space Nine"
        ;;&
    Worf | Data | Riker | Picard )
        echo "$CHAR was in Star Trek The Next Generation" && echo "/etc/redhat-release"
        ;;
    *)
        echo "$CHAR is not in this script."
        ;;
esac
Jul 29, 2017 | stackoverflow.com
getmizanur , asked Sep 10 '11 at 20:35
Is there any directory bookmarking utility for bash that allows moving around faster on the command line?

UPDATE
Thanks, guys, for the feedback; however, I created my own simple shell script (feel free to modify/expand it).
function cdb() {
    USAGE="Usage: cdb [-c|-g|-d|-l] [bookmark]"
    if [ ! -e ~/.cd_bookmarks ] ; then
        mkdir ~/.cd_bookmarks
    fi

    case $1 in
        # create bookmark
        -c) shift
            if [ ! -f ~/.cd_bookmarks/$1 ] ; then
                echo "cd `pwd`" > ~/.cd_bookmarks/"$1"
            else
                echo "Try again! Looks like there is already a bookmark '$1'"
            fi
            ;;
        # goto bookmark
        -g) shift
            if [ -f ~/.cd_bookmarks/$1 ] ; then
                source ~/.cd_bookmarks/"$1"
            else
                echo "Mmm...looks like your bookmark has spontaneously combusted. What I mean to say is that your bookmark does not exist."
            fi
            ;;
        # delete bookmark
        -d) shift
            if [ -f ~/.cd_bookmarks/$1 ] ; then
                rm ~/.cd_bookmarks/"$1"
            else
                echo "Oops, forgot to specify the bookmark"
            fi
            ;;
        # list bookmarks
        -l) shift
            ls -l ~/.cd_bookmarks/
            ;;
        *)
            echo "$USAGE"
            ;;
    esac
}

INSTALL
1. Create a file ~/.cdb and copy the above script into it.

2. In your ~/.bashrc add the following:

if [ -f ~/.cdb ]; then
    source ~/.cdb
fi

3. Restart your bash session.

USAGE

1. To create a bookmark:

$ cd my_project
$ cdb -c project1

2. To go to a bookmark:

$ cdb -g project1

3. To list bookmarks:

$ cdb -l

4. To delete a bookmark:

$ cdb -d project1

5. Where are all my bookmarks stored?

$ cd ~/.cd_bookmarks

Fredrik Pihl , answered Sep 10 '11 at 20:47
Also, have a look at CDPATH:

A colon-separated list of search paths available to the cd command, similar in function to the $PATH variable for binaries. The $CDPATH variable may be set in the local ~/.bashrc file.

bash$ cd bash-doc
bash: cd: bash-doc: No such file or directory
bash$ CDPATH=/usr/share/doc
bash$ cd bash-doc
/usr/share/doc/bash-doc
bash$ echo $PWD
/usr/share/doc/bash-doc

and cd - : it's the command-line equivalent of the back button (takes you to the previous directory you were in).
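A quick illustration of cd - (standard bash behaviour; it swaps $PWD and $OLDPWD):

$ cd /usr/share/doc
$ cd /tmp
$ cd -    # back to /usr/share/doc
$ cd -    # and back to /tmp again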
ajreal , answered Sep 10 '11 at 20:41
In a bash script or command, you can use pushd and popd.

pushd saves the current directory and then changes to a new one. With no arguments, pushd exchanges the top two directories on the stack.

Usage:

cd /abc
pushd /xxx    # save /abc on the directory stack and cd to /xxx
pushd /zzz
pushd +1      # cd /xxx

popd is the reverse operation: it removes the top entry from the stack and changes back to the directory beneath it.
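For example (a minimal sketch of the stack behaviour):

$ cd /tmp
$ pushd /etc    # stack: /etc /tmp ; now in /etc
$ popd          # back in /tmp, stack entry removed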
fgm , answered Sep 11 '11 at 8:28
bookmarks.sh provides a bookmark management system for Bash 4.0+. It can also use a Midnight Commander hotlist.

Dmitry Frank , answered Jun 16 '15 at 10:22
Thanks for sharing your solution, and I'd like to share mine as well, which I find more useful than anything else I've come across before.

The engine is a great, universal tool: the command-line fuzzy finder by Junegunn. It primarily allows you to "fuzzy-find" files in a number of ways, but it also allows you to feed arbitrary text data to it and filter this data. So, the shortcuts idea is simple: all we need is to maintain a file with paths (which are the shortcuts), and fuzzy-filter this file.

Here's how it looks: we type a cdg command (from "cd global", if you like), get a list of our bookmarks, pick the needed one in just a few keystrokes, and press Enter. The working directory is changed to the picked item. It is extremely fast and convenient: usually I just type 3-4 letters of the needed item, and all others are already filtered out. Additionally, of course, we can move through the list with arrow keys or with vim-like keybindings Ctrl+j / Ctrl+k.

Article with details: Fuzzy shortcuts for your shell.

It is possible to use it for GUI applications as well (via xterm): I use that for my GUI file manager, Double Commander. I have plans to write an article about this use case, too.
return42 , answered Feb 6 '15 at 11:56
Inspired by the question and answers here, I added the lines below to my ~/.bashrc file.

With this you have a favdirs command (function) to manage your favorites, and an autocompletion function to select an item from these favorites.

# ---------
# Favorites
# ---------

__favdirs_storage=~/.favdirs
__favdirs=( "$HOME" )

containsElement () {
    local e
    for e in "${@:2}"; do [[ "$e" == "$1" ]] && return 0; done
    return 1
}

function favdirs() {
    local cur
    local IFS
    local GLOBIGNORE

    case $1 in
        list)
            echo "favorite folders ..."
            printf -- ' - %s\n' "${__favdirs[@]}"
            ;;
        load)
            if [[ ! -e $__favdirs_storage ]] ; then
                favdirs save
            fi
            # mapfile requires bash 4 / my OS-X bash vers. is 3.2.53 (from 2007 !!?!).
            # mapfile -t __favdirs < $__favdirs_storage
            IFS=$'\r\n' GLOBIGNORE='*' __favdirs=($(< $__favdirs_storage))
            ;;
        save)
            printf -- '%s\n' "${__favdirs[@]}" > $__favdirs_storage
            ;;
        add)
            cur=${2-$(pwd)}
            favdirs load
            if containsElement "$cur" "${__favdirs[@]}" ; then
                echo "'$cur' already exists in favorites"
            else
                __favdirs+=( "$cur" )
                favdirs save
                echo "'$cur' added to favorites"
            fi
            ;;
        del)
            cur=${2-$(pwd)}
            favdirs load
            local i=0
            for fav in ${__favdirs[@]}; do
                if [ "$fav" = "$cur" ]; then
                    echo "delete '$cur' from favorites"
                    unset __favdirs[$i]
                    favdirs save
                    break
                fi
                let i++
            done
            ;;
        *)
            echo "Manage favorite folders."
            echo ""
            echo "usage: favdirs [ list | load | save | add | del ]"
            echo ""
            echo "  list : list favorite folders"
            echo "  load : load favorite folders from $__favdirs_storage"
            echo "  save : save favorite directories to $__favdirs_storage"
            echo "  add  : add directory to favorites [default pwd $(pwd)]."
            echo "  del  : delete directory from favorites [default pwd $(pwd)]."
    esac
} && favdirs load

function __favdirs_compl_command() {
    COMPREPLY=( $( compgen -W "list load save add del" -- ${COMP_WORDS[COMP_CWORD]}))
} && complete -o default -F __favdirs_compl_command favdirs

function __favdirs_compl() {
    local IFS=$'\n'
    COMPREPLY=( $( compgen -W "${__favdirs[*]}" -- ${COMP_WORDS[COMP_CWORD]}))
}

alias _cd='cd'
complete -F __favdirs_compl _cd

Within the last two lines, an alias to change the current directory (with autocompletion) is created. With this alias (_cd) you are able to change to one of your favorite directories. You may want to change this alias to something which fits your needs.

With the function favdirs you can manage your favorites (see usage).

$ favdirs
Manage favorite folders.

usage: favdirs [ list | load | save | add | del ]

  list : list favorite folders
  load : load favorite folders from ~/.favdirs
  save : save favorite directories to ~/.favdirs
  add  : add directory to favorites [default pwd /tmp ].
  del  : delete directory from favorites [default pwd /tmp ].

Zied , answered Mar 12 '14 at 9:53
Yes, there is DirB: Directory Bookmarks for Bash, well explained in this Linux Journal article.

An example from the article:

% cd ~/Desktop
% s d          # save(bookmark) ~/Desktop as d
% cd /tmp      # go somewhere
% pwd
/tmp
% g d          # go to the desktop
% pwd
/home/Desktop

Al Conrad , answered Sep 4 '15 at 16:10
@getmizanur I used your cdb script. I enhanced it slightly by adding bookmarks tab completion. Here's my version of your cdb script.

_cdb() {
    local _script_commands=$(ls -1 ~/.cd_bookmarks/)
    local cur=${COMP_WORDS[COMP_CWORD]}
    COMPREPLY=( $(compgen -W "${_script_commands}" -- $cur) )
}
complete -F _cdb cdb

function cdb() {
    local USAGE="Usage: cdb [-h|-c|-d|-g|-l|-s] [bookmark]\n
    \t[-h or no args] - prints usage help\n
    \t[-c bookmark] - create bookmark\n
    \t[-d bookmark] - delete bookmark\n
    \t[-g bookmark] - goto bookmark\n
    \t[-l] - list bookmarks\n
    \t[-s bookmark] - show bookmark location\n
    \t[bookmark] - same as [-g bookmark]\n
    Press tab for bookmark completion.\n"

    if [ ! -e ~/.cd_bookmarks ] ; then
        mkdir ~/.cd_bookmarks
    fi

    case $1 in
        # create bookmark
        -c) shift
            if [ ! -f ~/.cd_bookmarks/$1 ] ; then
                echo "cd `pwd`" > ~/.cd_bookmarks/"$1"
                complete -F _cdb cdb
            else
                echo "Try again! Looks like there is already a bookmark '$1'"
            fi
            ;;
        # goto bookmark
        -g) shift
            if [ -f ~/.cd_bookmarks/$1 ] ; then
                source ~/.cd_bookmarks/"$1"
            else
                echo "Mmm...looks like your bookmark has spontaneously combusted. What I mean to say is that your bookmark does not exist."
            fi
            ;;
        # show bookmark
        -s) shift
            if [ -f ~/.cd_bookmarks/$1 ] ; then
                cat ~/.cd_bookmarks/"$1"
            else
                echo "Mmm...looks like your bookmark has spontaneously combusted. What I mean to say is that your bookmark does not exist."
            fi
            ;;
        # delete bookmark
        -d) shift
            if [ -f ~/.cd_bookmarks/$1 ] ; then
                rm ~/.cd_bookmarks/"$1"
            else
                echo "Oops, forgot to specify the bookmark"
            fi
            ;;
        # list bookmarks
        -l) shift
            ls -1 ~/.cd_bookmarks/
            ;;
        -h) echo -e $USAGE
            ;;
        # goto bookmark by default
        *)
            if [ -z "$1" ] ; then
                echo -e $USAGE
            elif [ -f ~/.cd_bookmarks/$1 ] ; then
                source ~/.cd_bookmarks/"$1"
            else
                echo "Mmm...looks like your bookmark has spontaneously combusted. What I mean to say is that your bookmark does not exist."
            fi
            ;;
    esac
}

tobimensch , answered Jun 5 '16 at 21:31
Yes, one that I have written, called anc: https://github.com/tobimensch/anc
Anc stands for anchor, but anc's anchors are really just bookmarks.
It's designed for ease of use, and there are multiple ways of navigating: either by giving a text pattern, using numbers, interactively, by going back, or using [TAB] completion.
I'm actively working on it and open to input on how to make it better.
Allow me to paste the examples from anc's github page here:

# make the current directory the default anchor:
$ anc s

# go to /etc, then /, then /usr/local and then back to the default anchor:
$ cd /etc; cd ..; cd usr/local; anc

# go back to /usr/local :
$ anc b

# add another anchor:
$ anc a $HOME/test

# view the list of anchors (the default one has the asterisk):
$ anc l
(0) /path/to/first/anchor *
(1) /home/usr/test

# jump to the anchor we just added:
# by using its anchor number
$ anc 1
# or by jumping to the last anchor in the list
$ anc -1

# add multiple anchors:
$ anc a $HOME/projects/first $HOME/projects/second $HOME/documents/first

# use text matching to jump to $HOME/projects/first
$ anc pro fir

# use text matching to jump to $HOME/documents/first
$ anc doc fir

# add anchor and jump to it using an absolute path
$ anc /etc
# is the same as
$ anc a /etc; anc -1

# add anchor and jump to it using a relative path
$ anc ./X11   # note that "./" is required for relative paths
# is the same as
$ anc a X11; anc -1

# using wildcards you can add many anchors at once
$ anc a $HOME/projects/*

# use shell completion to see a list of matching anchors
# and select the one you want to jump to directly
$ anc pro[TAB]

Cảnh Ton Nguyễn , answered Feb 20 at 5:41
Bashmarks is an amazingly simple and intuitive utility. In short, after installation, the usage is:

s <bookmark_name> - Saves the current directory as "bookmark_name"
g <bookmark_name> - Goes (cd) to the directory associated with "bookmark_name"
p <bookmark_name> - Prints the directory associated with "bookmark_name"
d <bookmark_name> - Deletes the bookmark
l                 - Lists all available bookmarks

For short-term shortcuts, I have the following in my respective init script (sorry, I can't find the source right now and didn't bother then):

function b() {
    alias $1="cd `pwd -P`"
}

Usage:

In any directory that you want to bookmark, type

b THEDIR   # <THEDIR> being the name of your 'bookmark'

It will create an alias to cd (back) to here. To return to a 'bookmarked' dir, type

THEDIR

It will run the stored alias and cd back there.

Caution: Use only if you understand that this might override existing shell aliases and what that means.
Sep 02, 2019 | www.linuxquestions.org
Switch statement for bash script
Hello, I am currently trying out the switch statement using a bash script.

CODE:
showmenu () {
echo "1. Number1"
echo "2. Number2"
echo "3. Number3"
echo "4. All"
echo "5. Quit"
}

while true
do
showmenu
read choice
echo "Enter a choice:"
case "$choice" in
"1")
echo "Number One"
;;
"2")
echo "Number Two"
;;
"3")
echo "Number Three"
;;
"4")
echo "Number One, Two, Three"
;;
"5")
echo "Program Exited"
exit 0
;;
*)
echo "Please enter number ONLY ranging from 1-5!"
;;
esac
done

OUTPUT:
1. Number1
2. Number2
3. Number3
4. All
5. Quit
Enter a choice:

So, when the code is run, a menu with options 1-5 is shown, then the user is asked to enter a choice, and finally an output is shown. But is it possible for the user to enter multiple choices? For example, if the user enters choices "1" and "3", the output should be "Number One" and "Number Three". Any idea?
Just something to get you started. Code:
#! /bin/bash

showmenu ()
{
    typeset ii
    typeset -i jj=1
    typeset -i kk
    typeset -i valid=0  # valid=1 if input is good

    while (( ! valid ))
    do
        for ii in "${options[@]}"
        do
            echo "$jj) $ii"
            let jj++
        done
        read -e -p 'Select a list of actions : ' -a answer
        jj=0
        valid=1
        for kk in "${answer[@]}"
        do
            if (( kk < 1 || kk > "${#options[@]}" ))
            then
                echo "Error Item $jj is out of bounds" 1>&2
                valid=0
                break
            fi
            let jj++
        done
    done
}

typeset -r c1=Number1
typeset -r c2=Number2
typeset -r c3=Number3
typeset -r c4=All
typeset -r c5=Quit
typeset -ra options=($c1 $c2 $c3 $c4 $c5)
typeset -a answer
typeset -i kk

while true
do
    showmenu
    for kk in "${answer[@]}"
    do
        case $kk in
        1) echo 'Number One' ;;
        2) echo 'Number Two' ;;
        3) echo 'Number Three' ;;
        4) echo 'Number One, Two, Three' ;;
        5) echo 'Program Exit'
           exit 0
           ;;
        esac
    done
done
stevenworr

wjs1990 , Nov 16, 2009
Ok, will try it out first. Thanks.

evo2 , Nov 16, 2009
This can be done just by wrapping your case block in a for loop and changing one line. Code:
#!/bin/bash

showmenu () {
    echo "1. Number1"
    echo "2. Number2"
    echo "3. Number3"
    echo "4. All"
    echo "5. Quit"
}

while true ; do
    showmenu
    read choices
    for choice in $choices ; do
        case "$choice" in
            1) echo "Number One" ;;
            2) echo "Number Two" ;;
            3) echo "Number Three" ;;
            4) echo "Numbers One, two, three" ;;
            5) echo "Exit"
               exit 0 ;;
            *) echo "Please enter number ONLY ranging from 1-5!" ;;
        esac
    done
done

You can now enter any number of numbers separated by white space.

Cheers,
Evo2.
Mar 30, 2018 | sookocheff.com
Parsing bash script options with getopts

Posted on January 4, 2015 | 5 minutes | Kevin Sookocheff

A common task in shell scripting is to parse command line arguments to your script. Bash provides the getopts built-in function to do just that. This tutorial explains how to use the getopts built-in function to parse arguments and options to a bash script.

The getopts function takes three parameters. The first is a specification of which options are valid, listed as a sequence of letters. For example, the string 'ht' signifies that the options -h and -t are valid.

The second argument to getopts is a variable that will be populated with the option or argument to be processed next. In the following loop, opt will hold the value of the current option that has been parsed by getopts.

while getopts ":ht" opt; do
  case ${opt} in
    h ) # process option h
      ;;
    t ) # process option t
      ;;
    \? ) echo "Usage: cmd [-h] [-t]"
      ;;
  esac
done

This example shows a few additional features of getopts. First, if an invalid option is provided, the option variable is assigned the value ?. You can catch this case and provide an appropriate usage message to the user. Second, this behaviour is only true when you prepend the list of valid options with : to disable the default error handling of invalid options. It is recommended to always disable the default error handling in your scripts.

The third argument to getopts is the list of arguments and options to be processed. When not provided, this defaults to the arguments and options provided to the application ($@). You can provide this third argument to use getopts to parse any list of arguments and options you provide.

Shifting processed options

The variable OPTIND holds the number of options parsed by the last call to getopts. It is common practice to call the shift command at the end of your processing loop to remove options that have already been handled from $@.

shift $((OPTIND -1))

Parsing options with arguments

Options that themselves have arguments are signified with a :. The argument to an option is placed in the variable OPTARG. In the following example, the option t takes an argument. When the argument is provided, we copy its value to the variable target. If no argument is provided, getopts will set opt to :. We can recognize this error condition by catching the : case and printing an appropriate error message.

while getopts ":t:" opt; do
  case ${opt} in
    t )
      target=$OPTARG
      ;;
    \? )
      echo "Invalid option: $OPTARG" 1>&2
      ;;
    : )
      echo "Invalid option: $OPTARG requires an argument" 1>&2
      ;;
  esac
done
shift $((OPTIND -1))

An extended example – parsing nested arguments and options

Let's walk through an extended example of processing a command that takes options, has a sub-command, and whose sub-command takes an additional option that has an argument. This is a mouthful, so let's break it down using an example. Let's say we are writing our own version of the pip command. In this version you can call pip with the -h option to display a help message.

> pip -h
Usage:
    pip -h              Display this help message.
    pip install         Install a Python package.

We can use getopts to parse the -h option with the following while loop. In it we catch invalid options with \? and shift all arguments that have been processed with shift $((OPTIND -1)).

while getopts ":h" opt; do
  case ${opt} in
    h )
      echo "Usage:"
      echo "    pip -h              Display this help message."
      echo "    pip install         Install a Python package."
      exit 0
      ;;
    \? )
      echo "Invalid Option: -$OPTARG" 1>&2
      exit 1
      ;;
  esac
done
shift $((OPTIND -1))

Now let's add the sub-command install to our script. install takes as an argument the Python package to install.

> pip install urllib3

install also takes an option, -t. -t takes as an argument the location to install the package to, relative to the current directory.

> pip install urllib3 -t ./src/lib

To process this line we must find the sub-command to execute. This value is the first argument to our script.

subcommand=$1
shift # Remove `pip` from the argument list

Now we can process the sub-command install. In our example, the option -t is actually an option that follows the package argument, so we begin by removing install from the argument list and processing the remainder of the line.

case "$subcommand" in
  install)
    package=$1
    shift # Remove `install` from the argument list
    ;;
esac

After shifting the argument list, we can process the remaining arguments as if they are of the form package -t src/lib. The -t option takes an argument itself. This argument will be stored in the variable OPTARG, and we save it to the variable target for further work.

case "$subcommand" in
  install)
    package=$1
    shift # Remove `install` from the argument list
    while getopts ":t:" opt; do
      case ${opt} in
        t )
          target=$OPTARG
          ;;
        \? )
          echo "Invalid Option: -$OPTARG" 1>&2
          exit 1
          ;;
        : )
          echo "Invalid Option: -$OPTARG requires an argument" 1>&2
          exit 1
          ;;
      esac
    done
    shift $((OPTIND -1))
    ;;
esac

Putting this all together, we end up with the following script that parses arguments to our version of pip and its sub-command install.

package=""  # Default to empty package
target=""   # Default to empty target

# Parse options to the `pip` command
while getopts ":h" opt; do
  case ${opt} in
    h )
      echo "Usage:"
      echo "    pip -h                      Display this help message."
      echo "    pip install <package>       Install <package>."
      exit 0
      ;;
    \? )
      echo "Invalid Option: -$OPTARG" 1>&2
      exit 1
      ;;
  esac
done
shift $((OPTIND -1))

subcommand=$1; shift  # Remove 'pip' from the argument list
case "$subcommand" in
  # Parse options to the install sub command
  install)
    package=$1; shift  # Remove 'install' from the argument list

    # Process package options
    while getopts ":t:" opt; do
      case ${opt} in
        t )
          target=$OPTARG
          ;;
        \? )
          echo "Invalid Option: -$OPTARG" 1>&2
          exit 1
          ;;
        : )
          echo "Invalid Option: -$OPTARG requires an argument" 1>&2
          exit 1
          ;;
      esac
    done
    shift $((OPTIND -1))
    ;;
esac

After processing the above sequence of commands, the variable package will hold the package to install and the variable target will hold the target to install the package to. You can use this as a template for processing any set of arguments and options to your scripts.
Jul 10, 2017 | stackoverflow.com
Livven, Jul 10, 2017 at 8:11
Update: It's been more than 5 years since I started this answer. Thank you for LOTS of great edits/comments/suggestions. In order to save maintenance time, I've modified the code block to be 100% copy-paste ready. Please do not post comments like "What if you changed X to Y". Instead, copy-paste the code block, see the output, make the change, rerun the script, and comment "I changed X to Y and ...". I don't have time to test your ideas and tell you if they work.
Method #1: Using bash without getopt[s]

Two common ways to pass key-value-pair arguments are:

Bash Space-Separated (e.g., --option argument) (without getopt[s])

Usage: demo-space-separated.sh -e conf -s /etc -l /usr/lib /etc/hosts

cat >/tmp/demo-space-separated.sh <<'EOF'
#!/bin/bash

POSITIONAL=()
while [[ $# -gt 0 ]]
do
key="$1"

case $key in
    -e|--extension)
    EXTENSION="$2"
    shift # past argument
    shift # past value
    ;;
    -s|--searchpath)
    SEARCHPATH="$2"
    shift # past argument
    shift # past value
    ;;
    -l|--lib)
    LIBPATH="$2"
    shift # past argument
    shift # past value
    ;;
    --default)
    DEFAULT=YES
    shift # past argument
    ;;
    *)    # unknown option
    POSITIONAL+=("$1") # save it in an array for later
    shift # past argument
    ;;
esac
done
set -- "${POSITIONAL[@]}" # restore positional parameters

echo "FILE EXTENSION  = ${EXTENSION}"
echo "SEARCH PATH     = ${SEARCHPATH}"
echo "LIBRARY PATH    = ${LIBPATH}"
echo "DEFAULT         = ${DEFAULT}"
echo "Number files in SEARCH PATH with EXTENSION:" $(ls -1 "${SEARCHPATH}"/*."${EXTENSION}" | wc -l)
if [[ -n $1 ]]; then
    echo "Last line of file specified as non-opt/last argument:"
    tail -1 "$1"
fi
EOF

chmod +x /tmp/demo-space-separated.sh

/tmp/demo-space-separated.sh -e conf -s /etc -l /usr/lib /etc/hosts

Output from copy-pasting the block above:

FILE EXTENSION  = conf
SEARCH PATH     = /etc
LIBRARY PATH    = /usr/lib
DEFAULT         =
Number files in SEARCH PATH with EXTENSION: 14
Last line of file specified as non-opt/last argument:
#93.184.216.34    example.com

Bash Equals-Separated (e.g., --option=argument) (without getopt[s])

Usage: demo-equals-separated.sh -e=conf -s=/etc -l=/usr/lib /etc/hosts

cat >/tmp/demo-equals-separated.sh <<'EOF'
#!/bin/bash

for i in "$@"
do
case $i in
    -e=*|--extension=*)
    EXTENSION="${i#*=}"
    shift # past argument=value
    ;;
    -s=*|--searchpath=*)
    SEARCHPATH="${i#*=}"
    shift # past argument=value
    ;;
    -l=*|--lib=*)
    LIBPATH="${i#*=}"
    shift # past argument=value
    ;;
    --default)
    DEFAULT=YES
    shift # past argument with no value
    ;;
    *)
          # unknown option
    ;;
esac
done

echo "FILE EXTENSION  = ${EXTENSION}"
echo "SEARCH PATH     = ${SEARCHPATH}"
echo "LIBRARY PATH    = ${LIBPATH}"
echo "DEFAULT         = ${DEFAULT}"
echo "Number files in SEARCH PATH with EXTENSION:" $(ls -1 "${SEARCHPATH}"/*."${EXTENSION}" | wc -l)
if [[ -n $1 ]]; then
    echo "Last line of file specified as non-opt/last argument:"
    tail -1 $1
fi
EOF

chmod +x /tmp/demo-equals-separated.sh

/tmp/demo-equals-separated.sh -e=conf -s=/etc -l=/usr/lib /etc/hosts

Output from copy-pasting the block above:

FILE EXTENSION  = conf
SEARCH PATH     = /etc
LIBRARY PATH    = /usr/lib
DEFAULT         =
Number files in SEARCH PATH with EXTENSION: 14
Last line of file specified as non-opt/last argument:
#93.184.216.34    example.com

To better understand ${i#*=}, search for "Substring Removal" in this guide. It is functionally equivalent to `sed 's/[^=]*=//' <<< "$i"`, which calls a needless subprocess, or `echo "$i" | sed 's/[^=]*=//'`, which calls two needless subprocesses.

Method #2: Using bash with getopt[s]

from: http://mywiki.wooledge.org/BashFAQ/035#getopts
getopt(1) limitations (older, relatively recent getopt versions):

- can't handle arguments that are empty strings
- can't handle arguments with embedded whitespace

More recent getopt versions don't have these limitations.

Additionally, the POSIX shell (and others) offer getopts, which doesn't have these limitations. I've included a simplistic getopts example.

Usage: demo-getopts.sh -vf /etc/hosts foo bar

cat >/tmp/demo-getopts.sh <<'EOF'
#!/bin/sh

# A POSIX variable
OPTIND=1         # Reset in case getopts has been used previously in the shell.

# Initialize our own variables:
output_file=""
verbose=0

while getopts "h?vf:" opt; do
    case "$opt" in
    h|\?)
        show_help
        exit 0
        ;;
    v)  verbose=1
        ;;
    f)  output_file=$OPTARG
        ;;
    esac
done

shift $((OPTIND-1))

[ "${1:-}" = "--" ] && shift

echo "verbose=$verbose, output_file='$output_file', Leftovers: $@"
EOF

chmod +x /tmp/demo-getopts.sh

/tmp/demo-getopts.sh -vf /etc/hosts foo bar

Output from copy-pasting the block above:

verbose=1, output_file='/etc/hosts', Leftovers: foo bar

The advantages of getopts are:

- It's more portable, and will work in other shells like dash.
- It can handle multiple single options like -vf filename in the typical Unix way, automatically.

The disadvantage of getopts is that it can only handle short options (-h, not --help) without additional code.

There is a getopts tutorial which explains what all of the syntax and variables mean. In bash, there is also help getopts, which might be informative.

johncip ,Jul 23, 2018 at 15:15
No answer mentions enhanced getopt. And the top-voted answer is misleading: it either ignores -vfd style short options (requested by the OP) or options after positional arguments (also requested by the OP), and it ignores parsing errors. Instead:

- Use enhanced getopt from util-linux or formerly GNU glibc. (1)
- It works with getopt_long(), the C function of GNU glibc.
- It has all the useful distinguishing features (the others don't have them):
  - handles spaces, quoting characters and even binary in arguments (2) (non-enhanced getopt can't do this)
  - it can handle options at the end: script.sh -o outFile file1 file2 -v (getopts doesn't do this)
  - allows =-style long options: script.sh --outfile=fileOut --infile fileIn (allowing both is lengthy if self parsing)
  - allows combined short options, e.g. -vfd (real work if self parsing)
  - allows touching option-arguments, e.g. -oOutfile or -vfdoOutfile
- It is so old already (3) that no GNU system is missing this (e.g. any Linux has it).
- You can test for its existence with: getopt --test → return value 4.
- Other getopt implementations, and the shell-builtin getopts, are of limited use.

The following calls
myscript -vfd ./foo/bar/someFile -o /fizz/someOtherFile
myscript -v -f -d -o/fizz/someOtherFile -- ./foo/bar/someFile
myscript --verbose --force --debug ./foo/bar/someFile -o/fizz/someOtherFile
myscript --output=/fizz/someOtherFile ./foo/bar/someFile -vfd
myscript ./foo/bar/someFile -df -v --output /fizz/someOtherFile

all return

verbose: y, force: y, debug: y, in: ./foo/bar/someFile, out: /fizz/someOtherFile

with the following myscript:
#!/bin/bash
# saner programming env: these switches turn some bugs into errors
set -o errexit -o pipefail -o noclobber -o nounset

# -allow a command to fail with !'s side effect on errexit
# -use return value from ${PIPESTATUS[0]}, because ! hosed $?
! getopt --test > /dev/null
if [[ ${PIPESTATUS[0]} -ne 4 ]]; then
    echo "I'm sorry, 'getopt --test' failed in this environment."
    exit 1
fi

OPTIONS=dfo:v
LONGOPTS=debug,force,output:,verbose

# -regarding ! and PIPESTATUS see above
# -temporarily store output to be able to check for errors
# -activate quoting/enhanced mode (e.g. by writing out "--options")
# -pass arguments only via -- "$@" to separate them correctly
! PARSED=$(getopt --options=$OPTIONS --longoptions=$LONGOPTS --name "$0" -- "$@")
if [[ ${PIPESTATUS[0]} -ne 0 ]]; then
    # e.g. return value is 1
    # then getopt has complained about wrong arguments to stdout
    exit 2
fi

# read getopt's output this way to handle the quoting right:
eval set -- "$PARSED"

d=n f=n v=n outFile=-

# now enjoy the options in order and nicely split until we see --
while true; do
    case "$1" in
        -d|--debug)
            d=y
            shift
            ;;
        -f|--force)
            f=y
            shift
            ;;
        -v|--verbose)
            v=y
            shift
            ;;
        -o|--output)
            outFile="$2"
            shift 2
            ;;
        --)
            shift
            break
            ;;
        *)
            echo "Programming error"
            exit 3
            ;;
    esac
done

# handle non-option arguments
if [[ $# -ne 1 ]]; then
    echo "$0: A single input file is required."
    exit 4
fi

echo "verbose: $v, force: $f, debug: $d, in: $1, out: $outFile"
[1] enhanced getopt is available on most "bash-systems", including Cygwin; on OS X try brew install gnu-getopt or sudo port install getopt
[2] the POSIX exec() conventions have no reliable way to pass binary NULL in command line arguments; those bytes prematurely end the argument
[3] first version released in 1997 or before (I only tracked it back to 1997)

Tobias Kienzler, Mar 19, 2016 at 15:23
from: digitalpeer.com with minor modifications

Usage: myscript.sh -p=my_prefix -s=dirname -l=libname

#!/bin/bash
for i in "$@"
do
case $i in
    -p=*|--prefix=*)
    PREFIX="${i#*=}"
    ;;
    -s=*|--searchpath=*)
    SEARCHPATH="${i#*=}"
    ;;
    -l=*|--lib=*)
    DIR="${i#*=}"
    ;;
    --default)
    DEFAULT=YES
    ;;
    *)
    # unknown option
    ;;
esac
done
echo PREFIX = ${PREFIX}
echo SEARCH PATH = ${SEARCHPATH}
echo DIRS = ${DIR}
echo DEFAULT = ${DEFAULT}

To better understand ${i#*=} search for "Substring Removal" in this guide. It is functionally equivalent to `sed 's/[^=]*=//' <<< "$i"`, which calls a needless subprocess, or `echo "$i" | sed 's/[^=]*=//'`, which calls two needless subprocesses.

Robert Siemer, Jun 1, 2018 at 1:57
getopt() / getopts() is a good option. Stolen from here: The simple use of "getopt" is shown in this mini-script:

#!/bin/bash
echo "Before getopt"
for i
do
  echo $i
done
args=`getopt abc:d $*`
set -- $args
echo "After getopt"
for i
do
  echo "-->$i"
done

What we have said is that any of -a, -b, -c or -d will be allowed, but that -c is followed by an argument (the "c:" says that).
If we call this "g" and try it out:
bash-2.05a$ ./g -abc foo
Before getopt
-abc
foo
After getopt
-->-a
-->-b
-->-c
-->foo
-->--

We start with two arguments, and "getopt" breaks apart the options and puts each in its own argument. It also added "--".
hfossli ,Jan 31 at 20:05
More succinct way

script.sh:

#!/bin/bash
while [[ "$#" -gt 0 ]]; do
    case $1 in
        -d|--deploy) deploy="$2"; shift;;
        -u|--uglify) uglify=1;;
        *) echo "Unknown parameter passed: $1"; exit 1;;
    esac
    shift
done

echo "Should deploy? $deploy"
echo "Should uglify? $uglify"

Usage:

./script.sh -d dev -u
# OR: ./script.sh --deploy dev --uglify

bronson, Apr 27 at 23:22
At the risk of adding another example to ignore, here's my scheme.
- handles -n arg and --name=arg
- allows arguments at the end
- shows sane errors if anything is misspelled
- compatible, doesn't use bashisms
- readable, doesn't require maintaining state in a loop
Hope it's useful to someone.
while [ "$#" -gt 0 ]; do case "$1" in -n) name="$2"; shift 2;; -p) pidfile="$2"; shift 2;; -l) logfile="$2"; shift 2;; --name=*) name="${1#*=}"; shift 1;; --pidfile=*) pidfile="${1#*=}"; shift 1;; --logfile=*) logfile="${1#*=}"; shift 1;; --name|--pidfile|--logfile) echo "$1 requires an argument" >&2; exit 1;; -*) echo "unknown option: $1" >&2; exit 1;; *) handle_argument "$1"; shift 1;; esac doneRobert Siemer ,Jun 6, 2016 at 19:28
I'm about 4 years late to this question, but want to give back. I used the earlier answers as a starting point to tidy up my old ad-hoc param parsing. I then refactored out the following template code. It handles both long and short params, using = or space-separated arguments, as well as multiple short params grouped together. Finally it re-inserts any non-param arguments back into the $1, $2... variables. I hope it's useful.

#!/usr/bin/env bash

# NOTICE: Uncomment if your script depends on bashisms.
#if [ -z "$BASH_VERSION" ]; then bash $0 $@ ; exit $? ; fi

echo "Before"
for i ; do echo - $i ; done

# Code template for parsing command line parameters using only portable shell
# code, while handling both long and short params, handling '-f file' and
# '-f=file' style param data and also capturing non-parameters to be inserted
# back into the shell positional parameters.
while [ -n "$1" ]; do
  # Copy so we can modify it (can't modify $1)
  OPT="$1"
  # Detect argument termination
  if [ x"$OPT" = x"--" ]; then
    shift
    for OPT ; do
      REMAINS="$REMAINS \"$OPT\""
    done
    break
  fi
  # Parse current opt
  while [ x"$OPT" != x"-" ] ; do
    case "$OPT" in
      # Handle --flag=value opts like this
      -c=* | --config=* )
        CONFIGFILE="${OPT#*=}"
        shift
        ;;
      # and --flag value opts like this
      -c* | --config )
        CONFIGFILE="$2"
        shift
        ;;
      -f* | --force )
        FORCE=true
        ;;
      -r* | --retry )
        RETRY=true
        ;;
      # Anything unknown is recorded for later
      * )
        REMAINS="$REMAINS \"$OPT\""
        break
        ;;
    esac
    # Check for multiple short options
    # NOTICE: be sure to update this pattern to match valid options
    NEXTOPT="${OPT#-[cfr]}" # try removing single short opt
    if [ x"$OPT" != x"$NEXTOPT" ] ; then
      OPT="-$NEXTOPT"  # multiple short opts, keep going
    else
      break            # long form, exit inner loop
    fi
  done
  # Done with that param. move to next
  shift
done
# Set the non-parameters back into the positional parameters ($1 $2 ..)
eval set -- $REMAINS

echo -e "After: \n configfile='$CONFIGFILE' \n force='$FORCE' \n retry='$RETRY' \n remains='$REMAINS'"
for i ; do echo - $i ; done
I have found the matter of writing portable argument parsing in scripts so frustrating that I have written Argbash - a FOSS code generator that can generate the argument-parsing code for your script, plus it has some nice features:
May 10, 2013 | stackoverflow.com
An example of how to use getopts in bash
chepner ,May 10, 2013 at 13:42
I want to call myscript file in this way:

$ ./myscript -s 45 -p any_string

or

$ ./myscript -h   >>> should display help
$ ./myscript      >>> should display help

My requirements are:
- use getopt here to get the input arguments
- check that -s exists, if not return an error
- check that the value after the -s is 45 or 90
- check that the -p exists and there is an input string after it
- if the user enters ./myscript -h or just ./myscript then display help

I tried so far this code:
#!/bin/bash
while getopts "h:s:" arg; do
  case $arg in
    h)
      echo "usage"
      ;;
    s)
      strength=$OPTARG
      echo $strength
      ;;
  esac
done

But with that code I get errors. How to do it with Bash and getopt?
#!/bin/bash

usage() { echo "Usage: $0 [-s <45|90>] [-p <string>]" 1>&2; exit 1; }

while getopts ":s:p:" o; do
    case "${o}" in
        s)
            s=${OPTARG}
            ((s == 45 || s == 90)) || usage
            ;;
        p)
            p=${OPTARG}
            ;;
        *)
            usage
            ;;
    esac
done
shift $((OPTIND-1))

if [ -z "${s}" ] || [ -z "${p}" ]; then
    usage
fi

echo "s = ${s}"
echo "p = ${p}"

Example runs:
$ ./myscript.sh
Usage: ./myscript.sh [-s <45|90>] [-p <string>]
$ ./myscript.sh -h
Usage: ./myscript.sh [-s <45|90>] [-p <string>]
$ ./myscript.sh -s "" -p ""
Usage: ./myscript.sh [-s <45|90>] [-p <string>]
$ ./myscript.sh -s 10 -p foo
Usage: ./myscript.sh [-s <45|90>] [-p <string>]
$ ./myscript.sh -s 45 -p foo
s = 45
p = foo
$ ./myscript.sh -s 90 -p bar
s = 90
p = bar
Aug 28, 2019 | linoxide.com
The -e option enables the interpretation of backslash escapes.
... ... ...
To create a new line after each word in a string, use the -e option with the \n escape sequence as shown:

$ echo -e "Linux \nis \nan \nopensource \noperating \nsystem"

... ... ...
Omit echoing trailing newline

The -n option is used for omitting the trailing newline. This is shown in the example below:

$ echo -n "Linux is an opensource operating system"

Sample Output
Linux is an opensource operating systemjames@buster:/$
Aug 27, 2019 | bash.cyberciti.biz
BASH_LINENO

An array variable whose members are the line numbers in source files corresponding to each member of FUNCNAME. ${BASH_LINENO[$i]} is the line number in the source file where ${FUNCNAME[$i]} was called. The corresponding source file name is ${BASH_SOURCE[$i]}. Use LINENO to obtain the current line number.
Aug 27, 2019 | stackoverflow.com
How to show line number when executing bash script
dspjm ,Jul 23, 2013 at 7:31
I have a test script which has a lot of commands and will generate lots of output. I use set -x or set -v and set -e, so the script stops when an error occurs. However, it's still rather difficult for me to locate the line at which execution stopped in order to find the problem. Is there a method which can output the line number of the script before each line is executed? Or output the line number before the command echo generated by set -x? Any method which can deal with my script line location problem would be a great help. Thanks.

Suvarna Pattayil, Jul 28, 2017 at 17:25
You mention that you're already using -x. The variable PS4 is the prompt printed before the command line is echoed when the -x option is set; it defaults to : followed by a space.

You can change PS4 to emit the LINENO (the line number in the script or shell function currently executing).

For example, if your script reads:

$ cat script
foo=10
echo ${foo}
echo $((2 + 2))

Executing it thus would print line numbers:

$ PS4='Line ${LINENO}: ' bash -x script
Line 1: foo=10
Line 2: echo 10
10
Line 3: echo 4
4

http://wiki.bash-hackers.org/scripting/debuggingtips gives the ultimate PS4 that would output everything you will possibly need for tracing:

export PS4='+(${BASH_SOURCE}:${LINENO}): ${FUNCNAME[0]:+${FUNCNAME[0]}(): }'

Deqing, Jul 23, 2013 at 8:16
In Bash, $LINENO contains the line number where the script is currently executing.

If you need to know the line number where the function was called, try $BASH_LINENO. Note that this variable is an array.

For example:

#!/bin/bash

function log() {
    echo "LINENO: ${LINENO}"
    echo "BASH_LINENO: ${BASH_LINENO[*]}"
}

function foo() {
    log "$@"
}

foo "$@"

See here for details of Bash variables.
Eliran Malka ,Apr 25, 2017 at 10:14
Simple (but powerful) solution: place echo around the code you think causes the problem and move the echo line by line until the messages no longer appear on screen - because the script has stopped due to an earlier error.

Even more powerful solution: install bashdb, the bash debugger, and debug the script line by line.

kklepper, Apr 2, 2018 at 22:44
Workaround for shells without LINENO

In a fairly sophisticated script I wouldn't like to see all line numbers; rather I would like to be in control of the output.

Define a function:

echo_line_no () {
    grep -n "$1" $0 | sed "s/echo_line_no//"
    # grep the line(s) containing input $1 with line numbers
    # replace the function name with nothing
} # echo_line_no

Use it with quotes like:

echo_line_no "this is a simple comment with a line number"

Output is:

16   "this is a simple comment with a line number"

if the number of this line in the source file is 16.
This basically answers the question How to show line number when executing bash script for users of ash or other shells without LINENO.

Anything more to add?
Sure. Why do you need this? How do you work with this? What can you do with this? Is this simple approach really sufficient or useful? Why do you want to tinker with this at all?
Want to know more? Read reflections on debugging
Aug 27, 2019 | www.tecmint.com
The ~/.config/gogo/gogo.conf file (which should be auto-created if it doesn't exist) has the following syntax:

# Comments are lines that start from '#' character.
default = ~/something
alias = /desired/path
alias2 = /desired/path with space
alias3 = "/this/also/works"
zażółć = "unicode/is/also/supported/zażółć gęślą jaźń"

If you run gogo without any arguments, it will go to the directory specified in default; this alias is always available, even if it's not in the configuration file, and points to the $HOME directory.
To display the current aliases, use the -l switch. From the following screenshot, you can see that default points to /home/tecmint, which is user tecmint's home directory on the system.

$ gogo -l

[Screenshot: List Gogo Aliases]
Below is an example of running gogo without any arguments.

$ cd Documents/Phone-Backup/Linux-Docs/
$ gogo
$ pwd

[Screenshot: Running Gogo Without Options]
To create a shortcut to a long path, move into the directory you want and use the -a flag to add an alias for that directory in gogo, as shown.

$ cd Documents/Phone-Backup/Linux-Docs/Ubuntu/
$ gogo -a Ubuntu
$ gogo
$ gogo -l
$ gogo -a Ubuntu
$ pwd

[Screenshot: Create Long Directory Shortcut]
You can also create aliases for connecting directly into directories on remote Linux servers. To do this, simply add the following lines to the gogo configuration file, which can be accessed using the -e flag; this will use the editor specified in the $EDITOR env variable.

$ gogo -e

Once the configuration file opens, add these lines to it:

sshroot = ssh://[email protected]:/bin/bash /root/
sshtdocs = ssh://tecmint@server3 ~/tecmint/docs/
- sitaram says: August 25, 2019 at 7:46 am
The bulk of what this tool does can be replaced with a shell function that does `cd $(grep -w ^$1 ~/.config/gogo.conf | cut -f2 -d' ')`, where `$1` is the argument supplied to the function.

If you've already installed fzf (and you really should), then you can get a far better experience than even zsh's excellent "completion" facilities. I use something like `cd $(fzf -1 +m -q "$1" < ~/.cache/to)` (my equivalent of gogo.conf is `~/.cache/to`).
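A hedged sketch of the kind of function the commenter describes (the name "to" and the awk-based parsing are my own; quoted values or a leading ~ in a real gogo.conf would need extra handling):

# Hypothetical minimal replacement for gogo's core:
# look up an alias in a "name = /path" style file and cd to it.
to() {
    local dest
    dest=$(awk -F' *= *' -v a="$1" '$1 == a { print $2 }' ~/.config/gogo/gogo.conf)
    if [ -n "$dest" ]; then
        cd "$dest"
    else
        echo "to: no such alias: $1" >&2
        return 1
    fi
}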
Aug 07, 2016 | shapeshed.com
Tutorial on using exit codes from Linux or UNIX commands. Examples of how to get the exit code of a command, how to set the exit code and how to suppress exit codes.

What is an exit code in the UNIX or Linux shell?

An exit code, sometimes known as a return code, is the code returned to a parent process by an executable. On POSIX systems the standard exit code is 0 for success and any number from 1 to 255 for anything else.

Exit codes can be interpreted by scripts to adapt in the event of success or failure. If an exit code is not set explicitly, the exit code of a script is the exit code of the last run command.

How to get the exit code of a command

To get the exit code of a command, type echo $? at the command prompt. In the following example a file is printed to the terminal using the cat command.

cat file.txt
hello world
echo $?
0

The command was successful. The file exists and there are no errors in reading the file or writing it to the terminal. The exit code is therefore 0.

In the following example the file does not exist.

cat doesnotexist.txt
cat: doesnotexist.txt: No such file or directory
echo $?
1

The exit code is 1 as the operation was not successful.

How to use exit codes in scripts

To use exit codes in scripts an if statement can be used to see if an operation was successful.

#!/bin/bash
cat file.txt
if [ $? -eq 0 ]
then
    echo "The script ran ok"
    exit 0
else
    echo "The script failed" >&2
    exit 1
fi

If the command was successful the exit code will be 0 and 'The script ran ok' will be printed to the terminal.

How to set an exit code

To set an exit code in a script use exit 0 where 0 is the number you want to return. In the following example a shell script exits with a 1. This file is saved as exit.sh.

#!/bin/bash
exit 1

Executing this script shows that the exit code is correctly set.

bash exit.sh
echo $?
1

What exit code should I use?

The Linux Documentation Project has a list of reserved codes that also offers advice on what code to use for specific scenarios. These are the standard error codes in Linux or UNIX.

1 - Catchall for general errors
2 - Misuse of shell builtins (according to Bash documentation)
126 - Command invoked cannot execute
127 - "command not found"
128 - Invalid argument to exit
128+n - Fatal error signal "n"
130 - Script terminated by Control-C
255\* - Exit status out of range

How to suppress exit statuses

Sometimes there may be a requirement to suppress an exit status. It may be that a command is being run within another script and that anything other than a 0 status is undesirable.

In the following example a file is printed to the terminal using cat. This file does not exist, so it will cause an exit status of 1.

To suppress the error message, any output to standard error is sent to /dev/null using 2>/dev/null.

If the cat command fails, an OR operation can be used to provide a fallback: cat file.txt || exit 0. In this case an exit code of 0 is returned even if there is an error.

Combining both the suppression of error output and the OR operation, the following script returns a status code of 0 with no output even though the file does not exist.

#!/bin/bash
cat 'doesnotexist.txt' 2>/dev/null || exit 0

Further reading
Aug 26, 2019 | www.shellscript.sh
Exit codes are a number between 0 and 255, which is returned by any Unix command when it returns control to its parent process. Other numbers can be used, but these are treated modulo 256, so exit -10 is equivalent to exit 246, and exit 257 is equivalent to exit 1.

These can be used within a shell script to change the flow of execution depending on the success or failure of commands executed. This was briefly introduced in Variables - Part II. Here we shall look in more detail at the available interpretations of exit codes.
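You can check the modulo-256 behaviour for yourself (a throwaway demonstration added here; negative values are handled slightly differently by some shells):

$ bash -c 'exit 257'; echo $?
1
$ bash -c 'exit 300'; echo $?    # 300 mod 256
44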
Success is traditionally represented with exit 0; failure is normally indicated with a non-zero exit code. This value can indicate different reasons for failure. For example, GNU grep returns 0 on success, 1 if no matches were found, and 2 for other errors (syntax errors, non-existent input files, etc).
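As a quick illustration of acting on those three grep statuses (my own sketch, not from the article; the pattern and file are arbitrary):

grep -q "^root:" /etc/passwd
case $? in
    0) echo "match found" ;;
    1) echo "no match" ;;
    *) echo "grep itself failed (bad pattern, unreadable file, ...)" >&2 ;;
esac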
We shall look at three different methods for checking error status, and discuss the pros and cons of each approach.

Firstly, the simple approach:
#!/bin/sh
# First attempt at checking return codes
USERNAME=`grep "^${1}:" /etc/passwd|cut -d":" -f1`
if [ "$?" -ne "0" ]; then
  echo "Sorry, cannot find user ${1} in /etc/passwd"
  exit 1
fi
NAME=`grep "^${1}:" /etc/passwd|cut -d":" -f5`
HOMEDIR=`grep "^${1}:" /etc/passwd|cut -d":" -f6`
echo "USERNAME: $USERNAME"
echo "NAME: $NAME"
echo "HOMEDIR: $HOMEDIR"
This script works fine if you supply a valid username in /etc/passwd. However, if you enter an invalid username, it does not do what you might at first expect - it keeps running, and just shows:

USERNAME:
NAME:
HOMEDIR:

Why is this? As mentioned, the $? variable is set to the return code of the last executed command. In this case, that is cut. cut had no problems which it feels like reporting - as far as I can tell from testing it, and reading the documentation, cut returns zero whatever happens! It was fed an empty string, and did its job - returned the first field of its input, which just happened to be the empty string.

So what do we do? If we have an error here, grep will report it, not cut. Therefore, we have to test grep's return code, not cut's.
#!/bin/sh
# Second attempt at checking return codes
grep "^${1}:" /etc/passwd > /dev/null 2>&1
if [ "$?" -ne "0" ]; then
  echo "Sorry, cannot find user ${1} in /etc/passwd"
  exit 1
fi
USERNAME=`grep "^${1}:" /etc/passwd|cut -d":" -f1`
NAME=`grep "^${1}:" /etc/passwd|cut -d":" -f5`
HOMEDIR=`grep "^${1}:" /etc/passwd|cut -d":" -f6`
echo "USERNAME: $USERNAME"
echo "NAME: $NAME"
echo "HOMEDIR: $HOMEDIR"
This fixes the problem for us, though at the expense of slightly longer code.
That is the basic way which textbooks might show you, but it is far from being all there is to know about error-checking in shell scripts. This method may not be the most suitable for your particular command sequence, or may be unmaintainable. Below, we shall investigate two alternative approaches.

As a second approach, we can tidy this somewhat by putting the test into a separate function, instead of littering the code with lots of 4-line tests:
#!/bin/sh
# A Tidier approach

check_errs()
{
  # Function. Parameter 1 is the return code
  # Para. 2 is text to display on failure.
  if [ "${1}" -ne "0" ]; then
    echo "ERROR # ${1} : ${2}"
    # as a bonus, make our script exit with the right error code.
    exit ${1}
  fi
}

### main script starts here ###

grep "^${1}:" /etc/passwd > /dev/null 2>&1
check_errs $? "User ${1} not found in /etc/passwd"
USERNAME=`grep "^${1}:" /etc/passwd|cut -d":" -f1`
check_errs $? "Cut returned an error"
echo "USERNAME: $USERNAME"
check_errs $? "echo returned an error - very strange!"
This allows us to test for errors 3 times, with customised error messages, without having to write 3 individual tests. By writing the test routine once, we can call it as many times as we wish, creating a more intelligent script, at very little expense to the programmer. Perl programmers will recognise this as being similar to the die command in Perl.

As a third approach, we shall look at a simpler and cruder method. I tend to use this for building Linux kernels - simple automations which, if they go well, should just get on with it, but when things go wrong, tend to require the operator to do something intelligent (ie, that which a script cannot do!):
#!/bin/sh
cd /usr/src/linux && \
make dep && make bzImage && make modules && make modules_install && \
cp arch/i386/boot/bzImage /boot/my-new-kernel && cp System.map /boot && \
echo "Your new kernel awaits, m'lord."

This script runs through the various tasks involved in building a Linux kernel (which can take quite a while), and uses the && operator to check for success. To do this with if would involve:
#!/bin/sh
cd /usr/src/linux
if [ "$?" -eq "0" ]; then
  make dep
  if [ "$?" -eq "0" ]; then
    make bzImage
    if [ "$?" -eq "0" ]; then
      make modules
      if [ "$?" -eq "0" ]; then
        make modules_install
        if [ "$?" -eq "0" ]; then
          cp arch/i386/boot/bzImage /boot/my-new-kernel
          if [ "$?" -eq "0" ]; then
            cp System.map /boot/
            if [ "$?" -eq "0" ]; then
              echo "Your new kernel awaits, m'lord."
            fi
          fi
        fi
      fi
    fi
  fi
fi
... which I, personally, find pretty difficult to follow.
The && and || operators are the shell's equivalent of AND and OR tests. These can be thrown together as above, or:

#!/bin/sh
cp /foo /bar && echo Success || echo Failed

This code will either echo

Success

or

Failed

depending on whether or not the cp command was successful. Look carefully at this; the construct is

command && command-to-execute-on-success || command-to-execute-on-failure

Only one command can be in each part. This method is handy for simple success/fail scenarios, but if you want to check on the status of the echo commands themselves, it is easy to quickly become confused about which && and || applies to which command. It is also very difficult to maintain. Therefore this construct is only recommended for simple sequencing of commands.

In earlier versions, I had suggested that you can use a subshell to execute multiple commands depending on whether the cp command succeeded or failed:

cp /foo /bar && ( echo Success ; echo Success part II; ) || ( echo Failed ; echo Failed part II )

But in fact, Marcel found that this does not work properly. The syntax for a subshell is:
( command1 ; command2 ; command3 )

The return code of the subshell is the return code of the final command (command3 in this example). That return code will affect the overall command. So the output of this script:

cp /foo /bar && ( echo Success ; echo Success part II; /bin/false ) || ( echo Failed ; echo Failed part II )

is that it runs the Success part (because cp succeeded), and then - because /bin/false returns failure - it also executes the Failure part:

Success
Success part II
Failed
Failed part II

So if you need to execute multiple commands as a result of the status of some other condition, it is better (and much clearer) to use the standard if, then, else syntax.
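For completeness, the same cp example rewritten in that recommended if/then/else form (a trivial sketch added here, not from the original text):

#!/bin/sh
if cp /foo /bar; then
    echo Success
    echo Success part II
else
    echo Failed
    echo Failed part II
fi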
Aug 22, 2019 | www.ostechnix.com
Method 2: Using the history command

We can use the history command's write option to print the history without numbers like below.

$ history -w /dev/stdout

Method 3: Using history and cut commands

One such way is to use history and cut commands like below.
$ history | cut -c 8-
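Note that cut -c 8- assumes the number column is always the same width, which breaks once the history counter grows large. A width-independent variant (my own hedged addition, not from the original article) strips the leading number with sed:

$ history | sed 's/^ *[0-9]* *//'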
Aug 14, 2019 | unix.stackexchange.com
PID background process
Raul ,Nov 27, 2016 at 18:21
As I understand pipes and commands, bash takes each command, spawns a process for each one and connects stdout of the previous one with the stdin of the next one.

For example, in "ls -lsa | grep feb", bash will create two processes, and connect the output of "ls -lsa" to the input of "grep feb".

When you execute a background command like "sleep 30 &" in bash, you get the pid of the background process running your command. Surprisingly for me, when I wrote "ls -lsa | grep feb &" bash returned only one PID.

How should this be interpreted? Does one process run both "ls -lsa" and "grep feb"? Are several processes created, and I only get the pid of one of them?
Raul ,Nov 27, 2016 at 19:21
Spawns 2 processes. The & displays the PID of the second process. Example below.

$ echo $$
13358
$ sleep 100 | sleep 200 &
[1] 13405
$ ps -ef|grep 13358
ec2-user 13358 13357  0 19:02 pts/0    00:00:00 -bash
ec2-user 13404 13358  0 19:04 pts/0    00:00:00 sleep 100
ec2-user 13405 13358  0 19:04 pts/0    00:00:00 sleep 200
ec2-user 13406 13358  0 19:04 pts/0    00:00:00 ps -ef
ec2-user 13407 13358  0 19:04 pts/0    00:00:00 grep --color=auto 13358
$
When you run a job in the background, bash prints the process ID of its subprocess, the one that runs the command in that job. If that job happens to create more subprocesses, that's none of the parent shell's business.

When the background job is a pipeline (i.e. the command is of the form something1 | something2 &, and not e.g. { something1 | something2; } &), there's an optimization which is strongly suggested by POSIX and performed by most shells including bash: each of the elements of the pipeline is executed directly as a subprocess of the original shell. What POSIX mandates is that the variable $! is set to the last command in the pipeline in this case. In most shells, that last command is a subprocess of the original process, and so are the other commands in the pipeline.

When you run ls -lsa | grep feb, there are three processes involved: the one that runs the left-hand side of the pipe (a subshell that finishes setting up the pipe then executes ls), the one that runs the right-hand side of the pipe (a subshell that finishes setting up the pipe then executes grep), and the original process that waits for the pipe to finish.

You can watch what happens by tracing the processes:
$ strace -f -e clone,wait4,pipe,execve,setpgid bash --norc
execve("/usr/local/bin/bash", ["bash", "--norc"], [/* 82 vars */]) = 0
setpgid(0, 24084)                       = 0
bash-4.3$ sleep 10 | sleep 20 &

Note how the second sleep is reported and stored as $!, but the process group ID is the first sleep. Dash has the same oddity, ksh and mksh don't.
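A small sketch of the practical consequence (my own illustration; the sleep durations are arbitrary): $! holds only the last element of a background pipeline, so wait on it if that is what you need:

sleep 2 | sleep 5 &
pipeline_last=$!          # PID of the second sleep only
echo "last element: $pipeline_last"
wait "$pipeline_last"     # returns when the second sleep exits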
Aug 14, 2019 | stackoverflow.com
Nidhi ,Nov 28, 2014 at 0:54
pids=$(pgrep <name>)

will get you the pids of all processes with the given name. To kill them all, use

kill -9 $pids

To refrain from using a variable and directly kill all processes with a given name, issue

pkill -9 <name>

panticz.de, Nov 11, 2016 at 10:11

On a single line...

pgrep -f process_name | xargs kill -9

flazzarini, Jun 13, 2014 at 9:54
Another possibility would be to use pidof; it usually comes with most distributions. It will return you the PID of a given process by using its name.

pidof process_name

This way you could store that information in a variable and execute kill -9 on it.

#!/bin/bash
pid=`pidof process_name`
kill -9 $pid

Pawel K, Dec 20, 2017 at 10:27
Use grep [n]ame to avoid needing the extra grep -v grep step - that's the first point. Second, piping into xargs the way shown above is risky, since it runs on whatever is piped to it; you may want to use -i (replace mode) or otherwise you may have issues with the command.

ps axf | grep [n]ame | grep -v grep | awk '{print "kill -9 " $1}' ?
ps aux | grep [n]ame | awk '{print "kill -9 " $2}' ?

Isn't that better?
Aug 14, 2019 | stackoverflow.com
How to get PID of background process?
pixelbeat ,Mar 20, 2013 at 9:11
I start a background process from my shell script, and I would like to kill this process when my script finishes.

How do I get the PID of this process from my shell script? As far as I can see, the variable $! contains the PID of the current script, not the background process.

WiSaGaN, Jun 2, 2015 at 14:40
You need to save the PID of the background process at the time you start it:

foo &
FOO_PID=$!
# do other stuff
kill $FOO_PID

You cannot use job control, since that is an interactive feature and tied to a controlling terminal. A script will not necessarily have a terminal attached at all, so job control will not necessarily be available.
Phil ,Dec 2, 2017 at 8:01
You can use the jobs -l command to get to a particular job:

^Z
[1]+  Stopped                 guard

my_mac:workspace r$ jobs -l
[1]+ 46841 Suspended: 18  guard

In this case, 46841 is the PID.

From help jobs:

-l  Report the process group ID and working directory of the jobs.

jobs -p is another option which shows just the PIDs.

Timo, Dec 2, 2017 at 8:03
$$ is the current script's pid
$! is the pid of the last background process

Here's a sample transcript from a bash session (%1 refers to the ordinal number of the background process as seen from jobs):

$ echo $$
3748
$ sleep 100 &
[1] 192
$ echo $!
192
$ kill %1
[1]+  Terminated              sleep 100

lepe, Dec 2, 2017 at 8:29
An even simpler way to kill all child processes of a bash script:

pkill -P $$

The -P flag works the same way with pkill and pgrep - it gets child processes, only with pkill the child processes get killed and with pgrep child PIDs are printed to stdout.

Luis Ramirez, Feb 20, 2013 at 23:11
This is what I have done. Check it out, hope it can help.

#!/bin/bash
#
# Something to show.
echo "UNO" > UNO.txt
echo "DOS" > DOS.txt
#
# Initialize Pid List
dPidLst=""
#
# Generate background processes
tail -f UNO.txt&
dPidLst="$dPidLst $!"
tail -f DOS.txt&
dPidLst="$dPidLst $!"
#
# Report process IDs
echo PID=$$
echo dPidLst=$dPidLst
#
# Show process on current shell
ps -f
#
# Start killing background processes from list
for dPid in $dPidLst
do
    echo killing $dPid. Process is still there.
    ps | grep $dPid
    kill $dPid
    ps | grep $dPid
    echo Just ran "'"ps"'" command, $dPid must not show again.
done

Then just run it as:
./bgkill.sh
with proper permissions of course:

root@umsstd22 [P]:~# ./bgkill.sh
PID=23757
dPidLst= 23758 23759
UNO
DOS
UID        PID  PPID  C STIME TTY          TIME CMD
root      3937  3935  0 11:07 pts/5    00:00:00 -bash
root     23757  3937  0 11:55 pts/5    00:00:00 /bin/bash ./bgkill.sh
root     23758 23757  0 11:55 pts/5    00:00:00 tail -f UNO.txt
root     23759 23757  0 11:55 pts/5    00:00:00 tail -f DOS.txt
root     23760 23757  0 11:55 pts/5    00:00:00 ps -f
killing 23758. Process is still there.
23758 pts/5    00:00:00 tail
./bgkill.sh: line 24: 23758 Terminated              tail -f UNO.txt
Just ran 'ps' command, 23758 must not show again.
killing 23759. Process is still there.
23759 pts/5    00:00:00 tail
./bgkill.sh: line 24: 23759 Terminated              tail -f DOS.txt
Just ran 'ps' command, 23759 must not show again.
root@umsstd22 [P]:~# ps -f
UID        PID  PPID  C STIME TTY          TIME CMD
root      3937  3935  0 11:07 pts/5    00:00:00 -bash
root     24200  3937  0 11:56 pts/5    00:00:00 ps -f

Phil, Oct 15, 2013 at 18:22
You might also be able to use pstree:

pstree -p user

This typically gives a text representation of all the processes for the "user", and the -p option gives the process-id. It does not depend, as far as I understand, on having the processes be owned by the current shell. It also shows forks.
Phil ,Dec 4, 2018 at 9:46
pgrep can get you all of the child PIDs of a parent process. As mentioned earlier, $$ is the current script's PID. So, if you want a script that cleans up after itself, this should do the trick:

trap 'kill $( pgrep -P $$ | tr "\n" " " )' SIGINT SIGTERM EXIT
Aug 10, 2019 | www.cyberciti.biz
The stat command shows information about the file. The syntax to get the file size with GNU/Linux stat is as follows:
stat -c %s "/etc/passwd"
OR
stat --format=%s "/etc/passwd"
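GNU and BSD stat take different flags for the same job, so a portable wrapper can be handy (a hedged sketch of my own; the function name is arbitrary and assumes one of the two stat flavors is installed):

# filesize FILE - print size in bytes, trying GNU stat first, then BSD stat
filesize() {
    stat -c %s "$1" 2>/dev/null || stat -f %z "$1"
}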
Aug 10, 2019 | stackoverflow.com
[ -n file.txt ] doesn't check its size; it checks that the string file.txt is non-zero length, so it will always succeed.

If you want to say "size is non-zero", you need [ -s file.txt ].

To get a file's size, you can use wc -c to get the size (file length) in bytes:

file=file.txt
minimumsize=90000
actualsize=$(wc -c <"$file")
if [ $actualsize -ge $minimumsize ]; then
    echo size is over $minimumsize bytes
else
    echo size is under $minimumsize bytes
fi

In this case, it sounds like that's what you want.

But FYI, if you want to know how much disk space the file is using, you could use du -k to get the size (disk space used) in kilobytes:

file=file.txt
minimumsize=90
actualsize=$(du -k "$file" | cut -f 1)
if [ $actualsize -ge $minimumsize ]; then
    echo size is over $minimumsize kilobytes
else
    echo size is under $minimumsize kilobytes
fi

If you need more control over the output format, you can also look at stat. On Linux, you'd start with something like stat -c '%s' file.txt, and on BSD/Mac OS X, something like stat -f '%z' file.txt.

--Mikel
- 5: Why du -b "$file" | cut -f 1 instead of stat -c '%s' "$file"? Or stat --printf="%s" "$file"? - mivk, Dec 14 '13 at 11:00
- 1: Only because it's more portable. BSD and Linux stat have different flags. - Mikel, Dec 16 '13 at 16:40
- 2: Mac OS can't du -b - Nakilon, Apr 13 '14 at 5:28
It surprises me that no one mentioned stat to check file size. Some methods are definitely better: using -s to find out whether the file is empty or not is easier than anything else if that's all you want. And if you want to find files of a given size, then find is certainly the way to go.

I also like du a lot to get file size in kb, but, for bytes, I'd use stat:

size=$(stat -f%z $filename) # BSD stat
size=$(stat -c%s $filename) # GNU stat?

An alternative solution with awk and double parentheses:

FILENAME=file.txt
SIZE=$(du -sb $FILENAME | awk '{ print $1 }')
if ((SIZE<90000)) ; then
    echo "less"
else
    echo "not less"
fi
Jul 26, 2019 | www.linuxuprising.com
While it does have its own cheat sheet repository too, the project is actually concentrated around the creation of a unified mechanism to access well developed and maintained cheat sheet repositories.
The tool is developed by Igor Chubin, also known for its console-oriented weather forecast service wttr.in , which can be used to retrieve the weather from the console using only cURL or Wget.
It's worth noting that cheat.sh is not new. In fact it had its initial commit around May, 2017, and is a very popular repository on GitHub. But I personally only came across it recently, and I found it very useful, so I figured there must be some Linux Uprising readers who are not aware of this cool gem.
cheat.sh features & more
cheat.sh major features:

- Supports 58 programming languages, several DBMSes, and more than 1000 most important UNIX/Linux commands
- Very fast, returns answers within 100ms
- Simple curl / browser interface
- An optional command line client (cht.sh) is available, which allows you to quickly search cheat sheets and easily copy snippets without leaving the terminal
- Can be used from code editors, allowing inserting code snippets without having to open a web browser, search for the code, copy it, then return to your code editor and paste it. It supports Vim, Emacs, Visual Studio Code, Sublime Text and IntelliJ Idea
- Comes with a special stealth mode in which any text you select (adding it into the selection buffer of X Window System or into the clipboard) is used as a search query by cht.sh, so you can get answers without touching any other keys

The command line client features a special shell mode with a persistent queries context and readline support. It also has a query history, it integrates with the clipboard, supports tab completion for shells like Bash, Fish and Zsh, and it includes the stealth mode I mentioned in the cheat.sh features.
The web, curl and cht.sh (command line) interfaces all make use of https://cheat.sh/ but if you prefer, you can self-host it .
It should be noted that each editor plugin supports a different feature set (configurable server, multiple answers, toggle comments, and so on). You can view a feature comparison of each cheat.sh editor plugin on the Editors integration section of the project's GitHub page.
Want to contribute a cheat sheet? See the cheat.sh guide on editing or adding a new cheat sheet.
Interested in bookmarking commands instead? You may want to give Marker, a command bookmark manager for the console , a try.
cheat.sh curl / command line client usage examples
Examples of using cheat.sh via the curl interface (this requires having curl installed, as you'd expect) from the command line:

Show the tar command cheat sheet:

curl cheat.sh/tar

Example with output:

$ curl cheat.sh/tar
# To extract an uncompressed archive:
tar -xvf /path/to/foo.tar

# To create an uncompressed archive:
tar -cvf /path/to/foo.tar /path/to/foo/

# To extract a .gz archive:
tar -xzvf /path/to/foo.tgz

# To create a .gz archive:
tar -czvf /path/to/foo.tgz /path/to/foo/

# To list the content of an .gz archive:
tar -ztvf /path/to/foo.tgz

# To extract a .bz2 archive:
tar -xjvf /path/to/foo.tgz

# To create a .bz2 archive:
tar -cjvf /path/to/foo.tgz /path/to/foo/

# To extract a .tar in specified Directory:
tar -xvf /path/to/foo.tar -C /path/to/destination/

# To list the content of an .bz2 archive:
tar -jtvf /path/to/foo.tgz

# To create a .gz archive and exclude all jpg,gif,... from the tgz
tar czvf /path/to/foo.tgz --exclude=\*.{jpg,gif,png,wmv,flv,tar.gz,zip} /path/to/foo/

# To use parallel (multi-threaded) implementation of compression algorithms:
tar -z ... -> tar -Ipigz ...
tar -j ... -> tar -Ipbzip2 ...
tar -J ... -> tar -Ipixz ...
cht.sh also works instead of cheat.sh:

curl cht.sh/tar

Want to search for a keyword in all cheat sheets? Use:

curl cheat.sh/~keyword

List the Python programming language cheat sheet for random list:

curl cht.sh/python/random+list

Example with output:

$ curl cht.sh/python/random+list
#  python - How to randomly select an item from a list?
#
#  Use random.choice
#  (https://docs.python.org/2/library/random.htmlrandom.choice):

import random

foo = ['a', 'b', 'c', 'd', 'e']
print(random.choice(foo))

#  For cryptographically secure random choices (e.g. for generating a
#  passphrase from a wordlist), use random.SystemRandom
#  (https://docs.python.org/2/library/random.htmlrandom.SystemRandom)
#  class:

import random

foo = ['battery', 'correct', 'horse', 'staple']
secure_random = random.SystemRandom()
print(secure_random.choice(foo))

#  [Pēteris Caune] [so/q/306400] [cc by-sa 3.0]

Replace python with some other programming language supported by cheat.sh, and random+list with the cheat sheet you want to show.

Want to eliminate the comments from your answer? Add ?Q at the end of the query (below is an example using the same /python/random+list):

$ curl cht.sh/python/random+list?Q
import random

foo = ['a', 'b', 'c', 'd', 'e']
print(random.choice(foo))

import random

foo = ['battery', 'correct', 'horse', 'staple']
secure_random = random.SystemRandom()
print(secure_random.choice(foo))
For more flexibility and tab completion you can use cht.sh, the command line cheat.sh client; you'll find instructions for how to install it further down this article. Examples of using the cht.sh command line client:

Show the tar command cheat sheet:

cht.sh tar

List the Python programming language cheat sheet for random list:

cht.sh python random list

There is no need to use quotes with multiple keywords.

You can start the cht.sh client in a special shell mode using:

cht.sh --shell

And then you can start typing your queries. Example:

$ cht.sh --shell
cht.sh> bash loop

If all your queries are about the same programming language, you can start the client in the special shell mode directly in that context. As an example, start it with the Bash context using:

cht.sh --shell bash

Example with output:

$ cht.sh --shell bash
cht.sh/bash> loop
...........
cht.sh/bash> switch case

Want to copy the previously listed answer to the clipboard? Type c, then press Enter to copy the whole answer, or type C and press Enter to copy it without comments.

Type help in the cht.sh interactive shell mode to see all available commands. Also look under the Usage section of the cheat.sh GitHub project page for more options and advanced usage.

How to install cht.sh command line client
You can use cheat.sh in a web browser, from the command line with the help of curl and without having to install anything else (as explained above), as a code editor plugin, or using its command line client which has some extra features, which I already mentioned. The steps below are for installing this cht.sh command line client.

If you'd rather install a code editor plugin for cheat.sh, see the Editors integration page.
1. Install dependencies.
To install the cht.sh command line client, the curl command line tool will be used, so this needs to be installed on your system. Another dependency is rlwrap, which is required by the cht.sh special shell mode. Install these dependencies as follows.
- Debian, Ubuntu, Linux Mint, Pop!_OS, and any other Linux distribution based on Debian or Ubuntu:
sudo apt install curl rlwrap
- Fedora:
sudo dnf install curl rlwrap
- Arch Linux, Manjaro:
sudo pacman -S curl rlwrap
- openSUSE:
sudo zypper install curl rlwrap
The packages seem to be named the same on most (if not all) Linux distributions, so if your Linux distribution is not on this list, just install the curl and rlwrap packages using your distro's package manager.
You can install this either for your user only (so only you can run it), or for all users:
- Install it for your user only. The command below assumes you have a ~/.bin folder added to your PATH (and the folder exists). If you have some other local folder in your PATH where you want to install cht.sh, change the install path in the commands:

curl https://cht.sh/:cht.sh > ~/.bin/cht.sh
chmod +x ~/.bin/cht.sh

- Install it for all users (globally, in /usr/local/bin):

curl https://cht.sh/:cht.sh | sudo tee /usr/local/bin/cht.sh
sudo chmod +x /usr/local/bin/cht.sh

If the first command appears to have frozen displaying only the cURL output, press the Enter key and you'll be prompted to enter your password in order to save the file to /usr/local/bin.

You may also download and install the cheat.sh command completion for Bash or Zsh:
- Bash:
mkdir ~/.bash.d
curl https://cheat.sh/:bash_completion > ~/.bash.d/cht.sh
echo ". ~/.bash.d/cht.sh" >> ~/.bashrc
- Zsh:
mkdir ~/.zsh.d
curl https://cheat.sh/:zsh > ~/.zsh.d/_cht
echo 'fpath=(~/.zsh.d/ $fpath)' >> ~/.zshrc
Open a new shell / terminal and it will load the cheat.sh completion.
Jul 23, 2019 | www.maketecheasier.com
... ... ...

In technical terms, "/dev/null" is a virtual device file. As far as programs are concerned, these are treated just like real files. Utilities can request data from this kind of source, and the operating system feeds them data. But, instead of reading from disk, the operating system generates this data dynamically. An example of such a file is "/dev/zero."

In this case, however, you will write to a device file. Whatever you write to "/dev/null" is discarded, forgotten, thrown into the void. To understand why this is useful, you must first have a basic understanding of standard output and standard error in Linux or *nix type operating systems.

Related: How to Use the Tee Command in Linux

stdout and stderr

A command-line utility can generate two types of output. Standard output is sent to stdout. Errors are sent to stderr.

By default, stdout and stderr are associated with your terminal window (or console). This means that anything sent to stdout and stderr is normally displayed on your screen. But through shell redirections, you can change this behavior. For example, you can redirect stdout to a file. This way, instead of displaying output on the screen, it will be saved to a file for you to read later - or you can redirect stdout to a physical device, say, a digital LED or LCD display.

A full article about pipes and redirections is available if you want to learn more.

- With 2> you redirect standard error messages. Example: 2>/dev/null or 2>/home/user/error.log.
- With 1> you redirect standard output.
- With &> you redirect both standard error and standard output.

Related: 12 Useful Linux Commands for New Users

Use /dev/null to Get Rid of Output You Don't Need

Since there are two types of output, standard output and standard error, the first use case is to filter out one type or the other. It's easier to understand through a practical example. Let's say you're looking for a string in "/sys" to find files that refer to power settings.

grep -r power /sys/

There will be a lot of files that a regular, non-root user cannot read. This will result in many "Permission denied" errors. These clutter the output and make it harder to spot the results that you're looking for. Since "Permission denied" errors are part of stderr, you can redirect them to "/dev/null."

grep -r power /sys/ 2>/dev/null

As you can see, this is much easier to read.

In other cases, it might be useful to do the reverse: filter out standard output so you can only see errors.

ping google.com 1>/dev/null

The screenshot above shows that, without redirecting, ping displays its normal output when it can reach the destination machine. In the second command, nothing is displayed while the network is online, but as soon as it gets disconnected, only error messages are displayed.

You can redirect both stdout and stderr to two different locations.

ping google.com 1>/dev/null 2>error.log

In this case, stdout messages won't be displayed at all, and error messages will be saved to the "error.log" file.

Redirect All Output to /dev/null

Sometimes it's useful to get rid of all output. There are two ways to do this.

grep -r power /sys/ >/dev/null 2>&1

The string >/dev/null means "send stdout to /dev/null," and the second part, 2>&1, means send stderr to stdout. In this case you have to refer to stdout as "&1" instead of simply "1." Writing "2>1" would just redirect stdout to a file named "1."

What's important to note here is that the order is important. If you reverse the redirect parameters like this:

grep -r power /sys/ 2>&1 >/dev/null

it won't work as intended. That's because as soon as 2>&1 is interpreted, stderr is sent to stdout and displayed on screen. Next, stdout is suppressed when sent to "/dev/null." The final result is that you will see errors on the screen instead of suppressing all output. If you can't remember the correct order, there's a simpler redirect that is much easier to type:

grep -r power /sys/ &>/dev/null

In this case, &>/dev/null is equivalent to saying "redirect both stdout and stderr to this location."

Other Examples Where It Can Be Useful to Redirect to /dev/null

Say you want to see how fast your disk can read sequential data. The test is not extremely accurate but accurate enough. You can use dd for this, but dd either outputs to stdout or can be instructed to write to a file. With of=/dev/null you can tell dd to write to this virtual file. You don't even have to use shell redirections here. if= specifies the location of the input file to be read; of= specifies the name of the output file, where to write.

dd if=debian-disk.qcow2 of=/dev/null status=progress bs=1M iflag=direct

In some scenarios, you may want to see how fast you can download from a server. But you don't want to write to your disk unnecessarily. Simply enough, don't write to a regular file, write to "/dev/null."

wget -O /dev/null http://ftp.halifax.rwth-aachen.de/ubuntu-releases/18.04/ubuntu-18.04.2-desktop-amd64.iso

Conclusion

Hopefully, the examples in this article can inspire you to find your own creative ways to use "/dev/null."

Know an interesting use-case for this special device file? Leave a comment below and share the knowledge!
Jun 18, 2019 | linuxconfig.org
Before proceeding further, let me give you one tip. In the example above the shell tried to expand a non-existing variable, producing a blank result. This can be very dangerous, especially when working with path names, therefore, when writing scripts, it's always recommended to use the
nounset
option which causes the shell to exit with error whenever a non existing variable is referenced:$ set -o nounset $ echo "You are reading this article on $site_!" bash: site_: unbound variableWorking with indirectionThe use of the
${!parameter}
syntax, adds a level of indirection to our parameter expansion. What does it mean? The parameter which the shell will try to expand is notparameter
; instead it will try to use the the value ofparameter
as the name of the variable to be expanded. Let's explain this with an example. We all know theHOME
variable expands in the path of the user home directory in the system, right?$ echo "${HOME}" /home/egdocVery well, if now we assign the string "HOME", to another variable, and use this type of expansion, we obtain:
$ variable_to_inspect="HOME" $ echo "${!variable_to_inspect}" /home/egdocAs you can see in the example above, instead of obtaining "HOME" as a result, as it would have happened if we performed a simple expansion, the shell used the value of
Case modification expansionvariable_to_inspect
as the name of the variable to expand, that's why we talk about a level of indirection.This parameter expansion syntax let us change the case of the alphabetic characters inside the string resulting from the expansion of the parameter. Say we have a variable called
name
; to capitalize the text returned by the expansion of the variable we would use the${parameter^}
syntax:$ name="egidio" $ echo "${name^}" EgidioWhat if we want to uppercase the entire string, instead of capitalize it? Easy! we use the
${parameter^^}
syntax:$ echo "${name^^}" EGIDIOSimilarly, to lowercase the first character of a string, we use the
${parameter,}
expansion syntax:$ name="EGIDIO" $ echo "${name,}" eGIDIOTo lowercase the entire string, instead, we use the
${parameter,,}
syntax:$ name="EGIDIO" $ echo "${name,,}" egidioIn all cases a
pattern
to match a single character can also be provided. When the pattern is provided the operation is applied only to the parts of the original string that matches it:$ name="EGIDIO" $ echo "${name,,[DIO]}" EGidio
In the example above we enclose the characters in square brackets: this causes anyone of them to be matched as a pattern.
When using the expansions we explained in this paragraph and the
parameter
is an array subscripted by@
or*
, the operation is applied to all the elements contained in it:$ my_array=(one two three) $ echo "${my_array[@]^^}" ONE TWO THREEWhen the index of a specific element in the array is referenced, instead, the operation is applied only to it:
$ my_array=(one two three) $ echo "${my_array[2]^^}" THREESubstring removalThe next syntax we will examine allows us to remove a
pattern
from the beginning or from the end of string resulting from the expansion of a parameter.Remove matching pattern from the beginning of the stringThe next syntax we will examine,
${parameter#pattern}
, allows us to remove apattern
from the beginning of the string resulting from theparameter
expansion:$ name="Egidio" $ echo "${name#Egi}" dioA similar result can be obtained by using the
"${parameter##pattern}"
syntax, but with one important difference: contrary to the one we used in the example above, which removes the shortest matching pattern from the beginning of the string, it removes the longest one. The difference is clearly visible when using the*
character in thepattern
:$ name="Egidio Docile" $ echo "${name#*i}" dio DocileIn the example above we used
*
as part of the pattern that should be removed from the string resulting by the expansion of thename
variable. Thiswildcard
matches any character, so the pattern itself translates in "'i' character and everything before it". As we already said, when we use the${parameter#pattern}
syntax, the shortest matching pattern is removed, in this case it is "Egi". Let's see what happens when we use the"${parameter##pattern}"
syntax instead:$ name="Egidio Docile" $ echo "${name##*i}" leThis time the longest matching pattern is removed ("Egidio Doci"): the longest possible match includes the third 'i' and everything before it. The result of the expansion is just "le".
Remove matching pattern from the end of the stringThe syntax we saw above remove the shortest or longest matching pattern from the beginning of the string. If we want the pattern to be removed from the end of the string, instead, we must use the
${parameter%pattern}
or${parameter%%pattern}
expansions, to remove, respectively, the shortest and longest match from the end of the string:$ name="Egidio Docile" $ echo "${name%i*}" Egidio DocIn this example the pattern we provided roughly translates in "'i' character and everything after it starting from the end of the string". The shortest match is "ile", so what is returned is "Egidio Doc". If we try the same example but we use the syntax which removes the longest match we obtain:
$ name="Egidio Docile" $ echo "${name%%i*}" EgIn this case the once the longest match is removed, what is returned is "Eg".
In all the expansions we saw above, if
parameter
is an array and it is subscripted with*
or@
, the removal of the matching pattern is applied to all its elements:$ my_array=(one two three) $ echo "${my_array[@]#*o}" ne three
Search and replace patternWe used the previous syntax to remove a matching pattern from the beginning or from the end of the string resulting from the expansion of a parameter. What if we want to replace
pattern
with something else? We can use the${parameter/pattern/string}
or${parameter//pattern/string}
syntax. The former replaces only the first occurrence of the pattern, the latter all the occurrences:$ phrase="yellow is the sun and yellow is the lemon" $ echo "${phrase/yellow/red}" red is the sun and yellow is the lemonThe
parameter
(phrase) is expanded, and the longest match of thepattern
(yellow) is matched against it. The match is then replaced by the providedstring
(red). As you can observe only the first occurrence is replaced, so the lemon remains yellow! If we want to change all the occurrences of the pattern, we must prefix it with the/
character:$ phrase="yellow is the sun and yellow is the lemon" $ echo "${phrase//yellow/red}" red is the sun and red is the lemonThis time all the occurrences of "yellow" has been replaced by "red". As you can see the pattern is matched wherever it is found in the string resulting from the expansion of
parameter
. If we want to specify that it must be matched only at the beginning or at the end of the string, we must prefix it respectively with the#
or%
character.Just like in the previous cases, if
parameter
is an array subscripted by either*
or@
, the substitution happens in each one of its elements:$ my_array=(one two three) $ echo "${my_array[@]/o/u}" une twu threeSubstring expansionThe
${parameter:offset}
and${parameter:offset:length}
expansions let us expand only a part of the parameter, returning a substring starting at the specifiedoffset
andlength
characters long. If the length is not specified the expansion proceeds until the end of the original string. This type of expansion is calledsubstring expansion
$ name="Egidio Docile"
$ echo "${name:3}"
dio Docile

In the example above we provided just the offset, without specifying the length, therefore the result of the expansion was the substring obtained by starting at the character specified by the offset (3).

If we specify a length, the substring will start at offset and will be length characters long:

$ echo "${name:3:3}"
dio

If the offset is negative, it is calculated from the end of the string. In this case an additional space must be added after the : otherwise the shell will consider it as another type of expansion, identified by :- which is used to provide a default value if the parameter to be expanded doesn't exist (we talked about it in the article about managing the expansion of empty or unset bash variables):

$ echo "${name: -6}"
Docile

If the provided length is negative, instead of being interpreted as the total number of characters the resulting string should be long, it is considered as an offset to be calculated from the end of the string. The result of the expansion will therefore be a substring starting at offset and ending at length characters from the end of the original string:

$ echo "${name:7:-3}"
Doc

When using this expansion and parameter is an indexed array subscripted by * or @, the offset is relative to the indexes of the array elements. For example:

$ my_array=(one two three)
$ echo "${my_array[@]:0:2}"
one two
$ echo "${my_array[@]: -2}"
two three
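Substring expansion is handy for slicing fixed-format strings. A small sketch (the timestamp value here is invented for illustration):

$ timestamp="2019-03-25 14:30:59"
$ echo "${timestamp:0:4}"    # the year
2019
$ echo "${timestamp:5:2}"    # the month
03
$ echo "${timestamp: -8}"    # the time part; note the space before the minus
14:30:59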
Mar 25, 2019 | linuxize.com
Concatenating Strings with the += Operator

Another way of concatenating strings in bash is by appending variables or literal strings to a variable using the += operator:

VAR1="Hello,"
VAR1+=" World"
echo "$VAR1"

Hello, World

The following example (languages.sh) uses the += operator to concatenate strings in a bash for loop:

VAR=""
for ELEMENT in 'Hydrogen' 'Helium' 'Lithium' 'Beryllium'; do
  VAR+="${ELEMENT} "
done
echo "$VAR"
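One detail worth noting: the loop above leaves a trailing space after the last element. A small variant (a sketch, not from the original article) avoids it by using the ${VAR:+ } expansion, which yields a separator only when VAR is already non-empty:

VAR=""
for ELEMENT in 'Hydrogen' 'Helium' 'Lithium' 'Beryllium'; do
  VAR+="${VAR:+ }${ELEMENT}"   # add a space only from the second element on
done
echo "$VAR"
# Hydrogen Helium Lithium Beryllium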
Feb 21, 2019 | alvinalexander.com
By Alvin Alexander. Last updated: June 22, 2017

Unix/Linux bash shell script FAQ: How do I prompt a user for input from a shell script (Bash shell script), and then read the input the user provides?
Answer: I usually use the shell read builtin to read input from a shell script. Here are two slightly different versions of the same shell script. This first version prompts the user for input only once, and then dies if the user doesn't give a correct Y/N answer:

# (1) prompt user, and read command line argument
read -p "Run the cron script now? " answer

# (2) handle the command line argument we were given
while true
do
  case $answer in
    [yY]* ) /usr/bin/wget -O - -q -t 1 http://www.example.com/cron.php
            echo "Okay, just ran the cron script."
            break;;
    [nN]* ) exit;;
    * )     echo "Dude, just enter Y or N, please."; break;;
  esac
done

This second version stays in a loop until the user supplies a Y/N answer:

while true
do
  # (1) prompt user, and read command line argument
  read -p "Run the cron script now? " answer

  # (2) handle the input we were given
  case $answer in
    [yY]* ) /usr/bin/wget -O - -q -t 1 http://www.example.com/cron.php
            echo "Okay, just ran the cron script."
            break;;
    [nN]* ) exit;;
    * )     echo "Dude, just enter Y or N, please.";;
  esac
done

I prefer the second approach, but I thought I'd share both of them here. They are subtly different, so note the extra break in the first script.
This Linux Bash 'read' builtin is nice because it does both things: prompting the user for input, and then reading the input. The other nice thing it does is leave the cursor at the end of your prompt, as shown here:

Run the cron script now? _

(This is so much nicer than what I had to do years ago.)
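A possible refinement (a sketch, not from the original article): read's -n 1 option returns after a single keypress, so the user doesn't even have to press Enter:

# -n 1 makes read return after one character; echo moves past the prompt line
read -n 1 -p "Run the cron script now? [y/n] " answer
echo
case $answer in
  [yY]) echo "Okay, running it.";;
  *)    echo "Skipping.";;
esac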
Nov 17, 2018 | dvorka.github.io
Configuration
Get most of HSTR by configuring it with:
hstr --show-configuration >> ~/.bashrc

Run hstr --show-configuration to determine what will be appended to your Bash profile. Don't forget to source ~/.bashrc to apply changes.
For more details on configuration options, please refer to:
- bind HSTR to a keyboard shortcut
- Bash Emacs keymap (default)
- Bash Vim keymap
- zsh Emacs keymap (default)
- create hh alias for hstr
- get more colors
- choose default history view
- set filtering preferences
- configure commands blacklist
- disable confirm on delete
- tune verbosity
- history settings:
Check also the configuration examples.
Binding HSTR to Keyboard Shortcut

Bash uses Emacs style keyboard shortcuts by default. There is also a Vi mode. Find out below how to bind HSTR to a keyboard shortcut based on the style you prefer.
Check your active Bash keymap with:
bind -v | grep editing-mode
bind -v | grep keymap

To determine the character sequence emitted by a pressed key in the terminal, type Ctrl-v and then press the key. Check your current bindings using:

bind -S

Bash Emacs Keymap (default)

Bind HSTR to a Bash key, e.g. to Ctrl-r:

bind '"\C-r": "\C-ahstr -- \C-j"'

or to Ctrl-Alt-r:

bind '"\e\C-r":"\C-ahstr -- \C-j"'

or to Ctrl-F12:

bind '"\e[24;5~":"\C-ahstr -- \C-j"'

Bind HSTR to Ctrl-r only if it is an interactive shell:

if [[ $- =~ .*i.* ]]; then bind '"\C-r": "\C-a hstr -- \C-j"'; fi

You can also bind other HSTR commands like --kill-last-command:

if [[ $- =~ .*i.* ]]; then bind '"\C-xk": "\C-a hstr -k \C-j"'; fi

Bash Vim Keymap

Bind HSTR to a Bash key, e.g. to Ctrl-r:

bind '"\C-r": "\e0ihstr -- \C-j"'

Zsh Emacs Keymap

Bind HSTR to a zsh key, e.g. to Ctrl-r:

bindkey -s "\C-r" "\eqhstr --\n"

Alias

If you want to make running hstr from the command line even easier, then define an alias in your ~/.bashrc:

alias hh=hstr

Don't forget to source ~/.bashrc to be able to use the hh command.

Colors

Let HSTR use colors:
export HSTR_CONFIG=hicolor

or ensure black and white mode:

export HSTR_CONFIG=monochromatic

Default History View

To show normal history by default (instead of the metrics-based view, which is the default) use:

export HSTR_CONFIG=raw-history-view

To show favorite commands as the default view use:

export HSTR_CONFIG=favorites-view

Filtering

To use regular-expression based matching:

export HSTR_CONFIG=regexp-matching

To use substring based matching:

export HSTR_CONFIG=substring-matching

To use keywords (substrings whose order doesn't matter) search matching (default):

export HSTR_CONFIG=keywords-matching

Make search case sensitive (insensitive by default):

export HSTR_CONFIG=case-sensitive

Keep duplicates in raw-history-view (duplicate commands are discarded by default):

export HSTR_CONFIG=duplicates

Static favorites

The last selected favorite command is put at the head of the favorite commands list by default. If you want to disable this behavior and make the favorite commands list static, then use the following configuration:
export HSTR_CONFIG=static-favorites

Skip favorites comments

If you don't want to show lines starting with # (comments) among favorites, then use the following configuration:

export HSTR_CONFIG=skip-favorites-comments

Blacklist

Skip commands when processing history, i.e. make sure that these commands will not be shown in any view:

export HSTR_CONFIG=blacklist

Commands to be skipped are stored in the ~/.hstr_blacklist file, with a trailing empty line. For instance:

cd
my-private-command
ls
ll

Confirm on Delete

Do not prompt for confirmation when deleting history items:
export HSTR_CONFIG=no-confirm

Verbosity

Show a message when deleting the last command from history:

export HSTR_CONFIG=verbose-kill

Show warnings:

export HSTR_CONFIG=warning

Show debug messages:

export HSTR_CONFIG=debug

Bash History Settings

Use the following Bash settings to get the most out of HSTR.

Increase the size of the history maintained by Bash - the variables defined below increase the number of history items and the history file size (the default value is 500):

export HISTFILESIZE=10000
export HISTSIZE=${HISTFILESIZE}

Ensure syncing (flushing and reloading) of .bash_history with in-memory history:

export PROMPT_COMMAND="history -a; history -n; ${PROMPT_COMMAND}"

Force appending of in-memory history to .bash_history (instead of overwriting):

shopt -s histappend

Use a leading space to hide commands from history:

export HISTCONTROL=ignorespace

This is suitable for sensitive information like passwords.
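Putting the pieces together, a consolidated ~/.bashrc block might look like the following (a sketch combining the settings above; adjust the sizes and HSTR_CONFIG flags to taste):

# history sizing and behavior
export HISTFILESIZE=10000
export HISTSIZE=${HISTFILESIZE}
export HISTCONTROL=ignorespace
shopt -s histappend
export PROMPT_COMMAND="history -a; history -n; ${PROMPT_COMMAND}"

# HSTR configuration and binding
alias hh=hstr
export HSTR_CONFIG=hicolor,keywords-matching
if [[ $- =~ .*i.* ]]; then bind '"\C-r": "\C-a hstr -- \C-j"'; fi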
zsh History Settings

If you use zsh, set the HISTFILE environment variable in ~/.zshrc:

export HISTFILE=~/.zsh_history

Examples

More colors with case sensitive search of history:

export HSTR_CONFIG=hicolor,case-sensitive

Favorite commands view in black and white with the prompt at the bottom of the screen:

export HSTR_CONFIG=favorites-view,prompt-bottom

Keywords-based search in colors with debug mode verbosity:
export HSTR_CONFIG=keywords-matching,hicolor,debug
May 14, 2012 | stackoverflow.com
Lgn ,May 14, 2012 at 15:15
In a Bash script I would like to split a line into pieces and store them in an array.

The line:

Paris, France, Europe

I would like to have them in an array like this:

array[0] = Paris
array[1] = France
array[2] = Europe

I would like to use simple code; the command's speed doesn't matter. How can I do it?
antak ,Jun 18, 2018 at 9:22
This is the #1 Google hit but there's controversy in the answer because the question unfortunately asks about delimiting on ", " (comma-space) and not a single character such as comma. If you're only interested in the latter, answers here are easier to follow: stackoverflow.com/questions/918886/ antak Jun 18 '18 at 9:22

Dennis Williamson, May 14, 2012 at 15:16
IFS=', ' read -r -a array <<< "$string"

Note that the characters in $IFS are treated individually as separators so that in this case fields may be separated by either a comma or a space rather than the sequence of the two characters. Interestingly though, empty fields aren't created when comma-space appears in the input because the space is treated specially.

To access an individual element:

echo "${array[0]}"

To iterate over the elements:

for element in "${array[@]}"
do
    echo "$element"
done

To get both the index and the value:

for index in "${!array[@]}"
do
    echo "$index ${array[index]}"
done

The last example is useful because Bash arrays are sparse. In other words, you can delete an element or add an element and then the indices are not contiguous.

unset "array[1]"
array[42]=Earth

To get the number of elements in an array:

echo "${#array[@]}"

As mentioned above, arrays can be sparse so you shouldn't use the length to get the last element. Here's how you can in Bash 4.2 and later:

echo "${array[-1]}"

In any version of Bash (from somewhere after 2.05b):

echo "${array[@]: -1:1}"

Larger negative offsets select farther from the end of the array. Note the space before the minus sign in the older form. It is required.
l0b0 ,May 14, 2012 at 15:24
Just use IFS=', ', then you don't have to remove the spaces separately. Test: IFS=', ' read -a array <<< "Paris, France, Europe"; echo "${array[@]}" l0b0 May 14 '12 at 15:24

Dennis Williamson, May 14, 2012 at 16:33

@l0b0: Thanks. I don't know what I was thinking. I like to use declare -p array for test output, by the way. Dennis Williamson May 14 '12 at 16:33

Nathan Hyde, Mar 16, 2013 at 21:09

@Dennis Williamson - Awesome, thorough answer. Nathan Hyde Mar 16 '13 at 21:09

dsummersl, Aug 9, 2013 at 14:06

MUCH better than multiple cut -f calls! dsummersl Aug 9 '13 at 14:06

caesarsol, Oct 29, 2015 at 14:45

Warning: the IFS variable means split by one of these characters, so it's not a sequence of chars to split by. IFS=', ' read -a array <<< "a,d r s,w" => ${array[*]} == a d r s w caesarsol Oct 29 '15 at 14:45

Jim Ho, Mar 14, 2013 at 2:20
Here is a way without setting IFS:

string="1:2:3:4:5"
set -f                      # avoid globbing (expansion of *)
array=(${string//:/ })
for i in "${!array[@]}"
do
    echo "$i=>${array[i]}"
done

The idea is using string replacement:

${string//substring/replacement}

to replace all matches of $substring with white space and then using the substituted string to initialize an array:

(element1 element2 ... elementN)

Note: this answer makes use of the split+glob operator. Thus, to prevent expansion of some characters (such as *) it is a good idea to pause globbing for this script.

Werner Lehmann, May 4, 2013 at 22:32
Used this approach... until I came across a long string to split. 100% CPU for more than a minute (then I killed it). It's a pity because this method allows to split by a string, not some character in IFS. Werner Lehmann May 4 '13 at 22:32

Dieter Gribnitz, Sep 2, 2014 at 15:46

WARNING: Just ran into a problem with this approach. If you have an element named * you will get all the elements of your cwd as well. Thus string="1:2:3:4:*" will give some unexpected and possibly dangerous results depending on your implementation. Did not get the same error with (IFS=', ' read -a array <<< "$string") and this one seems safe to use. Dieter Gribnitz Sep 2 '14 at 15:46

akostadinov, Nov 6, 2014 at 14:31

not reliable for many kinds of values, use with care akostadinov Nov 6 '14 at 14:31

Andrew White, Jun 1, 2016 at 11:44

quoting ${string//:/ } prevents shell expansion Andrew White Jun 1 '16 at 11:44

Mark Thomson, Jun 5, 2016 at 20:44

I had to use the following on OSX: array=(${string//:/ }) Mark Thomson Jun 5 '16 at 20:44

bgoldst, Jul 19, 2017 at 21:20
All of the answers to this question are wrong in one way or another.
IFS=', ' read -r -a array <<< "$string"

1: This is a misuse of $IFS. The value of the $IFS variable is not taken as a single variable-length string separator, rather it is taken as a set of single-character string separators, where each field that read splits off from the input line can be terminated by any character in the set (comma or space, in this example).

Actually, for the real sticklers out there, the full meaning of $IFS is slightly more involved. From the bash manual:

The shell treats each character of IFS as a delimiter, and splits the results of the other expansions into words using these characters as field terminators. If IFS is unset, or its value is exactly <space><tab><newline>, the default, then sequences of <space>, <tab>, and <newline> at the beginning and end of the results of the previous expansions are ignored, and any sequence of IFS characters not at the beginning or end serves to delimit words. If IFS has a value other than the default, then sequences of the whitespace characters <space>, <tab>, and <newline> are ignored at the beginning and end of the word, as long as the whitespace character is in the value of IFS (an IFS whitespace character). Any character in IFS that is not IFS whitespace, along with any adjacent IFS whitespace characters, delimits a field. A sequence of IFS whitespace characters is also treated as a delimiter. If the value of IFS is null, no word splitting occurs.
Basically, for non-default non-null values of $IFS, fields can be separated with either (1) a sequence of one or more characters that are all from the set of "IFS whitespace characters" (that is, whichever of <space>, <tab>, and <newline> ("newline" meaning line feed (LF)) are present anywhere in $IFS), or (2) any non-"IFS whitespace character" that's present in $IFS along with whatever "IFS whitespace characters" surround it in the input line.

For the OP, it's possible that the second separation mode I described in the previous paragraph is exactly what he wants for his input string, but we can be pretty confident that the first separation mode I described is not correct at all. For example, what if his input string was 'Los Angeles, United States, North America'?

IFS=', ' read -ra a <<<'Los Angeles, United States, North America'; declare -p a;
## declare -a a=([0]="Los" [1]="Angeles" [2]="United" [3]="States" [4]="North" [5]="America")

2: Even if you were to use this solution with a single-character separator (such as a comma by itself, that is, with no following space or other baggage), if the value of the
$string variable happens to contain any LFs, then read will stop processing once it encounters the first LF. The read builtin only processes one line per invocation. This is true even if you are piping or redirecting input only to the read statement, as we are doing in this example with the here-string mechanism, and thus unprocessed input is guaranteed to be lost. The code that powers the read builtin has no knowledge of the data flow within its containing command structure.

You could argue that this is unlikely to cause a problem, but still, it's a subtle hazard that should be avoided if possible. It is caused by the fact that the read builtin actually does two levels of input splitting: first into lines, then into fields. Since the OP only wants one level of splitting, this usage of the read builtin is not appropriate, and we should avoid it.

3: A non-obvious potential issue with this solution is that read always drops the trailing field if it is empty, although it preserves empty fields otherwise. Here's a demo:

string=', , a, , b, c, , , '; IFS=', ' read -ra a <<<"$string"; declare -p a;
## declare -a a=([0]="" [1]="" [2]="a" [3]="" [4]="b" [5]="c" [6]="" [7]="")

Maybe the OP wouldn't care about this, but it's still a limitation worth knowing about. It reduces the robustness and generality of the solution.
This problem can be solved by appending a dummy trailing delimiter to the input string just prior to feeding it to read, as I will demonstrate later.

string="1:2:3:4:5"
set -f                      # avoid globbing (expansion of *)
array=(${string//:/ })

t="one,two,three"
a=($(echo $t | tr ',' "\n"))

(Note: I added the missing parentheses around the command substitution which the answerer seems to have omitted.)

string="1,2,3,4"
array=(`echo $string | sed 's/,/\n/g'`)

These solutions leverage word splitting in an array assignment to split the string into fields. Funnily enough, just like
read, general word splitting also uses the $IFS special variable, although in this case it is implied that it is set to its default value of <space><tab><newline>, and therefore any sequence of one or more IFS characters (which are all whitespace characters now) is considered to be a field delimiter.

This solves the problem of two levels of splitting committed by read, since word splitting by itself constitutes only one level of splitting. But just as before, the problem here is that the individual fields in the input string can already contain $IFS characters, and thus they would be improperly split during the word splitting operation. This happens to not be the case for any of the sample input strings provided by these answerers (how convenient...), but of course that doesn't change the fact that any code base that used this idiom would then run the risk of blowing up if this assumption were ever violated at some point down the line. Once again, consider my counterexample of 'Los Angeles, United States, North America' (or 'Los Angeles:United States:North America').
*
,?
, or[
followed by]
(and, ifextglob
is set, parenthesized fragments preceded by?
,*
,+
,@
, or!
) by matching them against file system objects and expanding the words ("globs") accordingly. The first of these three answerers has cleverly undercut this problem by runningset -f
beforehand to disable globbing. Technically this works (although you should probably addset +f
afterward to reenable globbing for subsequent code which may depend on it), but it's undesirable to have to mess with global shell settings in order to hack a basic string-to-array parsing operation in local code.Another issue with this answer is that all empty fields will be lost. This may or may not be a problem, depending on the application.
Note: If you're going to use this solution, it's better to use the
${string//:/ }
"pattern substitution" form of parameter expansion , rather than going to the trouble of invoking a command substitution (which forks the shell), starting up a pipeline, and running an external executable (tr
orsed
), since parameter expansion is purely a shell-internal operation. (Also, for thetr
andsed
solutions, the input variable should be double-quoted inside the command substitution; otherwise word splitting would take effect in theecho
command and potentially mess with the field values. Also, the$(...)
form of command substitution is preferable to the old`...`
form since it simplifies nesting of command substitutions and allows for better syntax highlighting by text editors.)
str="a, b, c, d" # assuming there is a space after ',' as in Q arr=(${str//,/}) # delete all occurrences of ','This answer is almost the same as #2 . The difference is that the answerer has made the assumption that the fields are delimited by two characters, one of which being represented in the default
$IFS
, and the other not. He has solved this rather specific case by removing the non-IFS-represented character using a pattern substitution expansion and then using word splitting to split the fields on the surviving IFS-represented delimiter character.This is not a very generic solution. Furthermore, it can be argued that the comma is really the "primary" delimiter character here, and that stripping it and then depending on the space character for field splitting is simply wrong. Once again, consider my counterexample:
'Los Angeles, United States, North America'
.Also, again, filename expansion could corrupt the expanded words, but this can be prevented by temporarily disabling globbing for the assignment with
set -f
and thenset +f
.Also, again, all empty fields will be lost, which may or may not be a problem depending on the application.
string='first line
second line
third line'
oldIFS="$IFS"
IFS='
'
IFS=${IFS:0:1} # this is useful to format your code with tabs
lines=( $string )
IFS="$oldIFS"

This is similar to #2 and #3 in that it uses word splitting to get the job done, only now the code explicitly sets $IFS to contain only the single-character field delimiter present in the input string. It should be repeated that this cannot work for multicharacter field delimiters such as the OP's comma-space delimiter. But for a single-character delimiter like the LF used in this example, it actually comes close to being perfect. The fields cannot be unintentionally split in the middle as we saw with previous wrong answers, and there is only one level of splitting, as required.

One problem is that filename expansion will corrupt affected words as described earlier, although once again this can be solved by wrapping the critical statement in set -f and set +f.

Another potential problem is that, since LF qualifies as an "IFS whitespace character" as defined earlier, all empty fields will be lost, just as in #2 and #3. This would of course not be a problem if the delimiter happens to be a non-"IFS whitespace character", and depending on the application it may not matter anyway, but it does vitiate the generality of the solution.

So, to sum up, assuming you have a one-character delimiter, and it is either a non-"IFS whitespace character" or you don't care about empty fields, and you wrap the critical statement in set -f and set +f, then this solution works, but otherwise not.

(Also, for information's sake, assigning a LF to a variable in bash can be done more easily with the $'...' syntax, e.g. IFS=$'\n';.)
countries='Paris, France, Europe'
OIFS="$IFS"
IFS=', '
array=($countries)
IFS="$OIFS"

IFS=', ' eval 'array=($string)'

This solution is effectively a cross between #1 (in that it sets $IFS to comma-space) and #2-4 (in that it uses word splitting to split the string into fields). Because of this, it suffers from most of the problems that afflict all of the above wrong answers, sort of like the worst of all worlds.

Also, regarding the second variant, it may seem like the eval call is completely unnecessary, since its argument is a single-quoted string literal, and therefore is statically known. But there's actually a very non-obvious benefit to using eval in this way. Normally, when you run a simple command which consists of a variable assignment only, meaning without an actual command word following it, the assignment takes effect in the shell environment:

IFS=', '; ## changes $IFS in the shell environment

This is true even if the simple command involves multiple variable assignments; again, as long as there's no command word, all variable assignments affect the shell environment:

IFS=', ' array=($countries); ## changes both $IFS and $array in the shell environment

But, if the variable assignment is attached to a command name (I like to call this a "prefix assignment") then it does not affect the shell environment, and instead only affects the environment of the executed command, regardless whether it is a builtin or external:

IFS=', ' :; ## : is a builtin command, the $IFS assignment does not outlive it
IFS=', ' env; ## env is an external command, the $IFS assignment does not outlive it

Relevant quote from the bash manual:

If no command name results, the variable assignments affect the current shell environment. Otherwise, the variables are added to the environment of the executed command and do not affect the current shell environment.

It is possible to exploit this feature of variable assignment to change $IFS only temporarily, which allows us to avoid the whole save-and-restore gambit like that which is being done with the $OIFS variable in the first variant. But the challenge we face here is that the command we need to run is itself a mere variable assignment, and hence it would not involve a command word to make the $IFS assignment temporary. You might think to yourself, well why not just add a no-op command word to the statement like the : builtin to make the $IFS assignment temporary? This does not work because it would then make the $array assignment temporary as well:

IFS=', ' array=($countries) :; ## fails; new $array value never escapes the : command

So, we're effectively at an impasse, a bit of a catch-22. But, when eval runs its code, it runs it in the shell environment, as if it was normal, static source code, and therefore we can run the $array assignment inside the eval argument to have it take effect in the shell environment, while the $IFS prefix assignment that is prefixed to the eval command will not outlive the eval command. This is exactly the trick that is being used in the second variant of this solution:

IFS=', ' eval 'array=($string)'; ## $IFS does not outlive the eval command, but $array does

So, as you can see, it's actually quite a clever trick, and accomplishes exactly what is required (at least with respect to assignment effectation) in a rather non-obvious way. I'm actually not against this trick in general, despite the involvement of eval; just be careful to single-quote the argument string to guard against security threats.

But again, because of the "worst of all worlds" agglomeration of problems, this is still a wrong answer to the OP's requirement.
IFS=', '; array=(Paris, France, Europe)

IFS=' '; declare -a array=(Paris France Europe)

Um... what? The OP has a string variable that needs to be parsed into an array. This "answer" starts with the verbatim contents of the input string pasted into an array literal. I guess that's one way to do it.

It looks like the answerer may have assumed that the $IFS variable affects all bash parsing in all contexts, which is not true. From the bash manual:

IFS   The Internal Field Separator that is used for word splitting after expansion and to split lines into words with the read builtin command. The default value is <space><tab><newline>.

So the $IFS special variable is actually only used in two contexts: (1) word splitting that is performed after expansion (meaning not when parsing bash source code) and (2) for splitting input lines into words by the read builtin.

Let me try to make this clearer. I think it might be good to draw a distinction between parsing and execution. Bash must first parse the source code, which obviously is a parsing event, and then later it executes the code, which is when expansion comes into the picture. Expansion is really an execution event. Furthermore, I take issue with the description of the $IFS variable that I just quoted above; rather than saying that word splitting is performed after expansion, I would say that word splitting is performed during expansion, or, perhaps even more precisely, word splitting is part of the expansion process. The phrase "word splitting" refers only to this step of expansion; it should never be used to refer to the parsing of bash source code, although unfortunately the docs do seem to throw around the words "split" and "words" a lot. Here's a relevant excerpt from the linux.die.net version of the bash manual:

Expansion is performed on the command line after it has been split into words. There are seven kinds of expansion performed: brace expansion, tilde expansion, parameter and variable expansion, command substitution, arithmetic expansion, word splitting, and pathname expansion.

The order of expansions is: brace expansion; tilde expansion, parameter and variable expansion, arithmetic expansion, and command substitution (done in a left-to-right fashion); word splitting; and pathname expansion.

You could argue the GNU version of the manual does slightly better, since it opts for the word "tokens" instead of "words" in the first sentence of the Expansion section:

Expansion is performed on the command line after it has been split into tokens.

The important point is, $IFS does not change the way bash parses source code. Parsing of bash source code is actually a very complex process that involves recognition of the various elements of shell grammar, such as command sequences, command lists, pipelines, parameter expansions, arithmetic substitutions, and command substitutions. For the most part, the bash parsing process cannot be altered by user-level actions like variable assignments (actually, there are some minor exceptions to this rule; for example, see the various compatxx shell settings, which can change certain aspects of parsing behavior on-the-fly). The upstream "words"/"tokens" that result from this complex parsing process are then expanded according to the general process of "expansion" as broken down in the above documentation excerpts, where word splitting of the expanded (expanding?) text into downstream words is simply one step of that process. Word splitting only touches text that has been spit out of a preceding expansion step; it does not affect literal text that was parsed right off the source bytestream.
string='first line
second line
third line'

while read -r line; do
    lines+=("$line")
done <<<"$string"

This is one of the best solutions. Notice that we're back to using read. Didn't I say earlier that read is inappropriate because it performs two levels of splitting, when we only need one? The trick here is that you can call read in such a way that it effectively only does one level of splitting, specifically by splitting off only one field per invocation, which necessitates the cost of having to call it repeatedly in a loop. It's a bit of a sleight of hand, but it works.

But there are problems. First: When you provide at least one NAME argument to read, it automatically ignores leading and trailing whitespace in each field that is split off from the input string. This occurs whether $IFS is set to its default value or not, as described earlier in this post. Now, the OP may not care about this for his specific use-case, and in fact, it may be a desirable feature of the parsing behavior. But not everyone who wants to parse a string into fields will want this. There is a solution, however: A somewhat non-obvious usage of read is to pass zero NAME arguments. In this case, read will store the entire input line that it gets from the input stream in a variable named $REPLY, and, as a bonus, it does not strip leading and trailing whitespace from the value. This is a very robust usage of read which I've exploited frequently in my shell programming career. Here's a demonstration of the difference in behavior:

string=$' a b \n c d \n e f '; ## input string

a=(); while read -r line; do a+=("$line"); done <<<"$string"; declare -p a;
## declare -a a=([0]="a b" [1]="c d" [2]="e f") ## read trimmed surrounding whitespace

a=(); while read -r; do a+=("$REPLY"); done <<<"$string"; declare -p a;
## declare -a a=([0]=" a b " [1]=" c d " [2]=" e f ") ## no trimming

The second issue with this solution is that it does not actually address the case of a custom field separator, such as the OP's comma-space. As before, multicharacter separators are not supported, which is an unfortunate limitation of this solution. We could try to at least split on comma by specifying the separator to the -d option, but look what happens:

string='Paris, France, Europe';
a=(); while read -rd,; do a+=("$REPLY"); done <<<"$string"; declare -p a;
## declare -a a=([0]="Paris" [1]=" France")

Predictably, the unaccounted surrounding whitespace got pulled into the field values, and hence this would have to be corrected subsequently through trimming operations (this could also be done directly in the while-loop). But there's another obvious error: Europe is missing! What happened to it? The answer is that read returns a failing return code if it hits end-of-file (in this case we can call it end-of-string) without encountering a final field terminator on the final field. This causes the while-loop to break prematurely and we lose the final field.

Technically this same error afflicted the previous examples as well; the difference there is that the field separator was taken to be LF, which is the default when you don't specify the -d option, and the <<< ("here-string") mechanism automatically appends a LF to the string just before it feeds it as input to the command. Hence, in those cases, we sort of accidentally solved the problem of a dropped final field by unwittingly appending an additional dummy terminator to the input. Let's call this solution the "dummy-terminator" solution. We can apply the dummy-terminator solution manually for any custom delimiter by concatenating it against the input string ourselves when instantiating it in the here-string:

a=(); while read -rd,; do a+=("$REPLY"); done <<<"$string,"; declare -p a;
## declare -a a=([0]="Paris" [1]=" France" [2]=" Europe")

There, problem solved. Another solution is to only break the while-loop if both (1) read returned failure and (2) $REPLY is empty, meaning read was not able to read any characters prior to hitting end-of-file. Demo:

a=(); while read -rd, || [[ -n "$REPLY" ]]; do a+=("$REPLY"); done <<<"$string"; declare -p a;
## declare -a a=([0]="Paris" [1]=" France" [2]=$' Europe\n')

This approach also reveals the secretive LF that automatically gets appended to the here-string by the <<< redirection operator. It could of course be stripped off separately through an explicit trimming operation as described a moment ago, but obviously the manual dummy-terminator approach solves it directly, so we could just go with that. The manual dummy-terminator solution is actually quite convenient in that it solves both of these two problems (the dropped-final-field problem and the appended-LF problem) in one go.

So, overall, this is quite a powerful solution. Its only remaining weakness is a lack of support for multicharacter delimiters, which I will address later.
string='first line
second line
third line'

readarray -t lines <<<"$string"

(This is actually from the same post as #7; the answerer provided two solutions in the same post.)

The readarray builtin, which is a synonym for mapfile, is ideal. It's a builtin command which parses a bytestream into an array variable in one shot; no messing with loops, conditionals, substitutions, or anything else. And it doesn't surreptitiously strip any whitespace from the input string. And (if -O is not given) it conveniently clears the target array before assigning to it. But it's still not perfect, hence my criticism of it as a "wrong answer".

First, just to get this out of the way, note that, just like the behavior of read when doing field-parsing, readarray drops the trailing field if it is empty. Again, this is probably not a concern for the OP, but it could be for some use-cases. I'll come back to this in a moment.

Second, as before, it does not support multicharacter delimiters. I'll give a fix for this in a moment as well.

Third, the solution as written does not parse the OP's input string, and in fact, it cannot be used as-is to parse it. I'll expand on this momentarily as well.

For the above reasons, I still consider this to be a "wrong answer" to the OP's question. Below I'll give what I consider to be the right answer.

Right answer
Here's a naïve attempt to make #8 work by just specifying the -d option:

string='Paris, France, Europe';
readarray -td, a <<<"$string"; declare -p a;
## declare -a a=([0]="Paris" [1]=" France" [2]=$' Europe\n')

We see the result is identical to the result we got from the double-conditional approach of the looping read solution discussed in #7. We can almost solve this with the manual dummy-terminator trick:

readarray -td, a <<<"$string,"; declare -p a;
## declare -a a=([0]="Paris" [1]=" France" [2]=" Europe" [3]=$'\n')

The problem here is that readarray preserved the trailing field, since the <<< redirection operator appended the LF to the input string, and therefore the trailing field was not empty (otherwise it would've been dropped). We can take care of this by explicitly unsetting the final array element after-the-fact:

readarray -td, a <<<"$string,"; unset 'a[-1]'; declare -p a;
## declare -a a=([0]="Paris" [1]=" France" [2]=" Europe")

The only two problems that remain, which are actually related, are (1) the extraneous whitespace that needs to be trimmed, and (2) the lack of support for multicharacter delimiters.
The whitespace could of course be trimmed afterward (for example, see How to trim whitespace from a Bash variable? ). But if we can hack a multicharacter delimiter, then that would solve both problems in one shot.
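For a single value, one possible trimming idiom is an extended-glob sketch like the following (requires extglob; not from the original answer):

shopt -s extglob            # enable the +(...) extended glob patterns
s="   France   "
s="${s##+([[:space:]])}"    # strip leading whitespace
s="${s%%+([[:space:]])}"    # strip trailing whitespace
echo "[$s]"
# [France]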
Unfortunately, there's no direct way to get a multicharacter delimiter to work. The best solution I've thought of is to preprocess the input string to replace the multicharacter delimiter with a single-character delimiter that will be guaranteed not to collide with the contents of the input string. The only character that has this guarantee is the NUL byte . This is because, in bash (though not in zsh, incidentally), variables cannot contain the NUL byte. This preprocessing step can be done inline in a process substitution. Here's how to do it using awk :
readarray -td '' a < <(awk '{ gsub(/, /,"\0"); print; }' <<<"$string, "); unset 'a[-1]'; declare -p a;
## declare -a a=([0]="Paris" [1]="France" [2]="Europe")

There, finally! This solution will not erroneously split fields in the middle, will not cut out prematurely, will not drop empty fields, will not corrupt itself on filename expansions, will not automatically strip leading and trailing whitespace, will not leave a stowaway LF on the end, does not require loops, and does not settle for a single-character delimiter.
Trimming solution
Lastly, I wanted to demonstrate my own fairly intricate trimming solution using the obscure -C callback option of readarray. Unfortunately, I've run out of room against Stack Overflow's draconian 30,000 character post limit, so I won't be able to explain it. I'll leave that as an exercise for the reader.

function mfcb { local val="$4"; "$1"; eval "$2[$3]=\$val;"; };
function val_ltrim { if [[ "$val" =~ ^[[:space:]]+ ]]; then val="${val:${#BASH_REMATCH[0]}}"; fi; };
function val_rtrim { if [[ "$val" =~ [[:space:]]+$ ]]; then val="${val:0:${#val}-${#BASH_REMATCH[0]}}"; fi; };
function val_trim { val_ltrim; val_rtrim; };
readarray -c1 -C 'mfcb val_trim a' -td, <<<"$string,"; unset 'a[-1]'; declare -p a;
## declare -a a=([0]="Paris" [1]="France" [2]="Europe")

fbicknel, Aug 18, 2017 at 15:57
It may also be helpful to note (though understandably you had no room to do so) that the -d option to readarray first appears in Bash 4.4. fbicknel Aug 18 '17 at 15:57

Cyril Duchon-Doris, Nov 3, 2017 at 9:16
You should add a "TL;DR : scroll 3 pages to see the right solution at the end of my answer" Cyril Duchon-Doris Nov 3 '17 at 9:16dawg ,Nov 26, 2017 at 22:28
Great answer (+1). If you change your awk to awk '{ gsub(/,[ ]+|$/,"\0"); print }' and eliminate that concatenation of the final ", " then you don't have to go through the gymnastics on eliminating the final record. So: readarray -td '' a < <(awk '{ gsub(/,[ ]+/,"\0"); print; }' <<<"$string") on Bash that supports readarray. Note your method is Bash 4.4+ I think because of the -d in readarray dawg Nov 26 '17 at 22:28

datUser, Feb 22, 2018 at 14:54
Looks like readarray is not an available builtin on OSX. datUser Feb 22 '18 at 14:54

bgoldst, Feb 23, 2018 at 3:37

@datUser That's unfortunate. Your version of bash must be too old for readarray. In this case, you can use the second-best solution built on read. I'm referring to this: a=(); while read -rd,; do a+=("$REPLY"); done <<<"$string,"; (with the awk substitution if you need multicharacter delimiter support). Let me know if you run into any problems; I'm pretty sure this solution should work on fairly old versions of bash, back to version 2-something, released like two decades ago. bgoldst Feb 23 '18 at 3:37

Jmoney38, Jul 14, 2015 at 11:54
t="one,two,three" a=($(echo "$t" | tr ',' '\n')) echo "${a[2]}"Prints three
shrimpwagon ,Oct 16, 2015 at 20:04
I actually prefer this approach. Simple. shrimpwagon Oct 16 '15 at 20:04

Ben, Oct 31, 2015 at 3:11

I copied and pasted this and it did not work with echo, but did work when I used it in a for loop. Ben Oct 31 '15 at 3:11

Pinaki Mukherjee, Nov 9, 2015 at 20:22

This is the simplest approach. thanks Pinaki Mukherjee Nov 9 '15 at 20:22

abalter, Aug 30, 2016 at 5:13

This does not work as stated. @Jmoney38 or shrimpwagon if you can paste this in a terminal and get the desired output, please paste the result here. abalter Aug 30 '16 at 5:13

leaf, Jul 17, 2017 at 16:28

@abalter Works for me with a=($(echo $t | tr ',' "\n")). Same result with a=($(echo $t | tr ',' ' ')). leaf Jul 17 '17 at 16:28

Luca Borrione, Nov 2, 2012 at 13:44
Sometimes it happened to me that the method described in the accepted answer didn't work, especially if the separator is a carriage return.
In those cases I solved it in this way:

string='first line
second line
third line'
oldIFS="$IFS"
IFS='
'
IFS=${IFS:0:1} # this is useful to format your code with tabs
lines=( $string )
IFS="$oldIFS"

for line in "${lines[@]}"
do
    echo "--> $line"
done

Stefan van den Akker, Feb 9, 2015 at 16:52
+1 This completely worked for me. I needed to put multiple strings, divided by a newline, into an array, and read -a arr <<< "$strings" did not work with IFS=$'\n'. Stefan van den Akker Feb 9 '15 at 16:52

Stefan van den Akker, Feb 10, 2015 at 13:49

Here is the answer to make the accepted answer work when the delimiter is a newline. Stefan van den Akker Feb 10 '15 at 13:49

(anonymous), Jul 24, 2015 at 21:24
The accepted answer works for values in one line.
If the variable has several lines:

string='first line
second line
third line'

We need a very different command to get all lines:
while read -r line; do lines+=("$line"); done <<<"$string"
Or the much simpler bash readarray:

readarray -t lines <<<"$string"

Printing all lines is very easy taking advantage of a printf feature:

printf ">[%s]\n" "${lines[@]}"
>[first line]
>[ second line]
>[ third line]

Mayhem, Dec 31, 2015 at 3:13
While not every solution works for every situation, your mention of readarray... replaced my last two hours with 5 minutes... you got my vote Mayhem Dec 31 '15 at 3:13

Derek 朕會功夫, Mar 23, 2018 at 19:14

readarray is the right answer. Derek 朕會功夫 Mar 23 '18 at 19:14

ssanch, Jun 3, 2016 at 15:24
This is similar to the approach by Jmoney38, but using sed:

string="1,2,3,4"
array=(`echo $string | sed 's/,/\n/g'`)
echo ${array[0]}

Prints 1
dawg ,Nov 26, 2017 at 19:59
The key to splitting your string into an array is the multi-character delimiter of ", ". Any solution using IFS for multi-character delimiters is inherently wrong since IFS is a set of those characters, not a string.

If you assign IFS=", " then the string will break on EITHER "," OR " " or any combination of them, which is not an accurate representation of the two-character delimiter of ", ".

You can use awk or sed to split the string, with process substitution:

#!/bin/bash
str="Paris, France, Europe"
array=()
while read -r -d $'\0' each; do   # use a NUL terminated field separator
    array+=("$each")
done < <(printf "%s" "$str" | awk '{ gsub(/,[ ]+|$/,"\0"); print }')
declare -p array
# declare -a array=([0]="Paris" [1]="France" [2]="Europe") output

It is more efficient to use a regex directly in Bash:

#!/bin/bash
str="Paris, France, Europe"
array=()
while [[ $str =~ ([^,]+)(,[ ]+|$) ]]; do
    array+=("${BASH_REMATCH[1]}")   # capture the field
    i=${#BASH_REMATCH}              # length of field + delimiter
    str=${str:i}                    # advance the string by that length
done                                # the loop deletes $str, so make a copy if needed
declare -p array
# declare -a array=([0]="Paris" [1]="France" [2]="Europe") output

With the second form, there is no subshell and it will be inherently faster.
Edit by bgoldst: Here are some benchmarks comparing my readarray solution to dawg's regex solution, and I also included the read solution for the heck of it (note: I slightly modified the regex solution for greater harmony with my solution) (also see my comments below the post):

## competitors
function c_readarray { readarray -td '' a < <(awk '{ gsub(/, /,"\0"); print; };' <<<"$1, "); unset 'a[-1]'; };
function c_read { a=(); local REPLY=''; while read -r -d ''; do a+=("$REPLY"); done < <(awk '{ gsub(/, /,"\0"); print; };' <<<"$1, "); };
function c_regex { a=(); local s="$1, "; while [[ $s =~ ([^,]+),\ ]]; do a+=("${BASH_REMATCH[1]}"); s=${s:${#BASH_REMATCH}}; done; };

## helper functions
function rep { local -i i=-1; for ((i = 0; i<$1; ++i)); do printf %s "$2"; done; }; ## end rep()

function testAll {
    local funcs=();
    local args=();
    local func='';
    local -i rc=-1;
    while [[ "$1" != ':' ]]; do
        func="$1";
        if [[ ! "$func" =~ ^[_a-zA-Z][_a-zA-Z0-9]*$ ]]; then echo "bad function name: $func" >&2; return 2; fi;
        funcs+=("$func");
        shift;
    done;
    shift;
    args=("$@");
    for func in "${funcs[@]}"; do
        echo -n "$func ";
        { time $func "${args[@]}" >/dev/null 2>&1; } 2>&1 | tr '\n' '/';
        rc=${PIPESTATUS[0]};
        if [[ $rc -ne 0 ]]; then echo "[$rc]"; else echo; fi;
    done | column -ts/;
}; ## end testAll()

function makeStringToSplit {
    local -i n=$1; ## number of fields
    if [[ $n -lt 0 ]]; then echo "bad field count: $n" >&2; return 2; fi;
    if [[ $n -eq 0 ]]; then
        echo;
    elif [[ $n -eq 1 ]]; then
        echo 'first field';
    elif [[ "$n" -eq 2 ]]; then
        echo 'first field, last field';
    else
        echo "first field, $(rep $[$1-2] 'mid field, ')last field";
    fi;
}; ## end makeStringToSplit()

function testAll_splitIntoArray {
    local -i n=$1; ## number of fields in input string
    local s='';
    echo "===== $n field$(if [[ $n -ne 1 ]]; then echo 's'; fi;) =====";
    s="$(makeStringToSplit "$n")";
    testAll c_readarray c_read c_regex : "$s";
}; ## end testAll_splitIntoArray()

## results
testAll_splitIntoArray 1;
## ===== 1 field =====
## c_readarray   real 0m0.067s   user 0m0.000s   sys 0m0.000s
## c_read        real 0m0.064s   user 0m0.000s   sys 0m0.000s
## c_regex       real 0m0.000s   user 0m0.000s   sys 0m0.000s
##
testAll_splitIntoArray 10;
## ===== 10 fields =====
## c_readarray   real 0m0.067s   user 0m0.000s   sys 0m0.000s
## c_read        real 0m0.064s   user 0m0.000s   sys 0m0.000s
## c_regex       real 0m0.001s   user 0m0.000s   sys 0m0.000s
##
testAll_splitIntoArray 100;
## ===== 100 fields =====
## c_readarray   real 0m0.069s   user 0m0.000s   sys 0m0.062s
## c_read        real 0m0.065s   user 0m0.000s   sys 0m0.046s
## c_regex       real 0m0.005s   user 0m0.000s   sys 0m0.000s
##
testAll_splitIntoArray 1000;
## ===== 1000 fields =====
## c_readarray   real 0m0.084s   user 0m0.031s   sys 0m0.077s
## c_read        real 0m0.092s   user 0m0.031s   sys 0m0.046s
## c_regex       real 0m0.125s   user 0m0.125s   sys 0m0.000s
##
testAll_splitIntoArray 10000;
## ===== 10000 fields =====
## c_readarray   real 0m0.209s   user 0m0.093s   sys 0m0.108s
## c_read        real 0m0.333s   user 0m0.234s   sys 0m0.109s
## c_regex       real 0m9.095s   user 0m9.078s   sys 0m0.000s
##
testAll_splitIntoArray 100000;
## ===== 100000 fields =====
## c_readarray   real 0m1.460s   user 0m0.326s    sys 0m1.124s
## c_read        real 0m2.780s   user 0m1.686s    sys 0m1.092s
## c_regex       real 17m38.208s user 15m16.359s  sys 2m19.375s
##
Very cool solution! I never thought of using a loop on a regex match, nifty use of $BASH_REMATCH. It works, and does indeed avoid spawning subshells. +1 from me. However, by way of criticism, the regex itself is a little non-ideal, in that it appears you were forced to duplicate part of the delimiter token (specifically the comma) so as to work around the lack of support for non-greedy multipliers (also lookarounds) in ERE ("extended" regex flavor built into bash). This makes it a little less generic and robust. bgoldst Nov 27 '17 at 4:28

bgoldst, Nov 27, 2017 at 4:28

Secondly, I did some benchmarking, and although the performance is better than the other solutions for smallish strings, it worsens exponentially due to the repeated string-rebuilding, becoming catastrophic for very large strings. See my edit to your answer. bgoldst Nov 27 '17 at 4:28

dawg, Nov 27, 2017 at 4:46

@bgoldst: What a cool benchmark! In defense of the regex, for 10's or 100's of thousands of fields (what the regex is splitting) there would probably be some form of record (like \n delimited text lines) comprising those fields so the catastrophic slow-down would likely not occur. If you have a string with 100,000 fields -- maybe Bash is not ideal ;-) Thanks for the benchmark. I learned a thing or two. dawg Nov 27 '17 at 4:46

Geoff Lee, Mar 4, 2016 at 6:02
Try this

IFS=', '; array=(Paris, France, Europe)
for item in ${array[@]}; do echo $item; done

It's simple. If you want, you can also add a declare (and also remove the commas):

IFS=' '; declare -a array=(Paris France Europe)

The IFS is added to undo the above, but it works without it in a fresh bash instance.
MrPotatoHead ,Nov 13, 2018 at 13:19
Pure bash multi-character delimiter solution.

As others have pointed out in this thread, the OP's question gave an example of a comma delimited string to be parsed into an array, but did not indicate if he/she was only interested in comma delimiters, single character delimiters, or multi-character delimiters.
Since Google tends to rank this answer at or near the top of search results, I wanted to provide readers with a strong answer to the question of multiple character delimiters, since that is also mentioned in at least one response.
If you're in search of a solution to a multi-character delimiter problem, I suggest reviewing Mallikarjun M 's post, in particular the response from gniourf_gniourf who provides this elegant pure BASH solution using parameter expansion:
#!/bin/bash str="LearnABCtoABCSplitABCaABCString" delimiter=ABC s=$str$delimiter array=(); while [[ $s ]]; do array+=( "${s%%"$delimiter"*}" ); s=${s#*"$delimiter"}; done; declare -p arrayLink to cited comment/referenced post
Link to cited question: Howto split a string on a multi-character delimiter in bash?
Eduardo Cuomo ,Dec 19, 2016 at 15:27
Use this:

countries='Paris, France, Europe'
OIFS="$IFS"
IFS=', '
array=($countries)
IFS="$OIFS"

#${array[0]} == Paris
#${array[1]} == France
#${array[2]} == Europe
Bad: subject to word splitting and pathname expansion. Please don't revive old questions with good answers to give bad answers. gniourf_gniourf Dec 19 '16 at 17:22

Scott Weldon, Dec 19, 2016 at 18:12

This may be a bad answer, but it is still a valid answer. Flaggers / reviewers: For incorrect answers such as this one, downvote, don't delete! Scott Weldon Dec 19 '16 at 18:12

George Sovetov, Dec 26, 2016 at 17:31

@gniourf_gniourf Could you please explain why it is a bad answer? I really don't understand when it fails. George Sovetov Dec 26 '16 at 17:31

gniourf_gniourf, Dec 26, 2016 at 18:07

@GeorgeSovetov: As I said, it's subject to word splitting and pathname expansion. More generally, splitting a string into an array as array=( $string ) is a (sadly very common) antipattern: word splitting occurs: string='Prague, Czech Republic, Europe'; pathname expansion occurs: string='foo[abcd],bar[efgh]' will fail if you have a file named, e.g., food or barf in your directory. The only valid usage of such a construct is when string is a glob. gniourf_gniourf Dec 26 '16 at 18:07

user1009908, Jun 9, 2015 at 23:28
UPDATE: Don't do this, due to problems with eval.

With slightly less ceremony:

IFS=', ' eval 'array=($string)'

e.g.

string="foo, bar,baz"
IFS=', ' eval 'array=($string)'
echo ${array[1]} # -> bar
eval is evil! don't do this. caesarsol Oct 29 '15 at 14:42

user1009908, Oct 30, 2015 at 4:05

Pfft. No. If you're writing scripts large enough for this to matter, you're doing it wrong. In application code, eval is evil. In shell scripting, it's common, necessary, and inconsequential. user1009908 Oct 30 '15 at 4:05

caesarsol, Nov 2, 2015 at 18:19

put a $ in your variable and you'll see... I write many scripts and I never ever had to use a single eval caesarsol Nov 2 '15 at 18:19

Dennis Williamson, Dec 2, 2015 at 17:00

Eval command and security issues Dennis Williamson Dec 2 '15 at 17:00

user1009908, Dec 22, 2015 at 23:04

You're right, this is only usable when the input is known to be clean. Not a robust solution. user1009908 Dec 22 '15 at 23:04

Eduardo Lucio, Jan 31, 2018 at 20:45
Here's my hack!

Splitting strings by strings is a pretty boring thing to do using bash. What happens is that we have limited approaches that only work in a few cases (split by ";", "/", "." and so on) or we have a variety of side effects in the outputs.
The approach below has required a number of maneuvers, but I believe it will work for most of our needs!
#!/bin/bash

# --------------------------------------
# SPLIT FUNCTION
# ----------------

F_SPLIT_R=()
f_split() {
    : 'It does a "split" into a given string and returns an array.

    Args:
        TARGET_P (str): Target string to "split".
        DELIMITER_P (Optional[str]): Delimiter used to "split". If not
    informed the split will be done by spaces.

    Returns:
        F_SPLIT_R (array): Array with the provided string separated by the
    informed delimiter.
    '

    F_SPLIT_R=()
    TARGET_P=$1
    DELIMITER_P=$2
    if [ -z "$DELIMITER_P" ] ; then
        DELIMITER_P=" "
    fi

    REMOVE_N=1
    if [ "$DELIMITER_P" == "\n" ] ; then
        REMOVE_N=0
    fi

    # NOTE: This was the only parameter that has been a problem so far!
    # By Questor
    # [Ref.: https://unix.stackexchange.com/a/390732/61742]
    if [ "$DELIMITER_P" == "./" ] ; then
        DELIMITER_P="[.]/"
    fi

    if [ ${REMOVE_N} -eq 1 ] ; then
        # NOTE: Due to bash limitations we have some problems getting the
        # output of a split by awk inside an array and so we need to use
        # "line break" (\n) to succeed. Seen this, we remove the line breaks
        # momentarily afterwards we reintegrate them. The problem is that if
        # there is a line break in the "string" informed, this line break will
        # be lost, that is, it is erroneously removed in the output!
        # By Questor
        TARGET_P=$(awk 'BEGIN {RS="dn"} {gsub("\n", "3F2C417D448C46918289218B7337FCAF"); printf $0}' <<< "${TARGET_P}")
    fi

    # NOTE: The replace of "\n" by "3F2C417D448C46918289218B7337FCAF" results
    # in more occurrences of "3F2C417D448C46918289218B7337FCAF" than the
    # amount of "\n" that there was originally in the string (one more
    # occurrence at the end of the string)! We can not explain the reason for
    # this side effect. The line below corrects this problem! By Questor
    TARGET_P=${TARGET_P%????????????????????????????????}

    SPLIT_NOW=$(awk -F"$DELIMITER_P" '{for(i=1; i<=NF; i++){printf "%s\n", $i}}' <<< "${TARGET_P}")

    while IFS= read -r LINE_NOW ; do
        if [ ${REMOVE_N} -eq 1 ] ; then
            # NOTE: We use "'" to prevent blank lines with no other characters
            # in the sequence being erroneously removed! We do not know the
            # reason for this side effect! By Questor
            LN_NOW_WITH_N=$(awk 'BEGIN {RS="dn"} {gsub("3F2C417D448C46918289218B7337FCAF", "\n"); printf $0}' <<< "'${LINE_NOW}'")

            # NOTE: We use the commands below to revert the intervention made
            # immediately above! By Questor
            LN_NOW_WITH_N=${LN_NOW_WITH_N%?}
            LN_NOW_WITH_N=${LN_NOW_WITH_N#?}

            F_SPLIT_R+=("$LN_NOW_WITH_N")
        else
            F_SPLIT_R+=("$LINE_NOW")
        fi
    done <<< "$SPLIT_NOW"
}

# --------------------------------------
# HOW TO USE
# ----------------

STRING_TO_SPLIT="
* How do I list all databases and tables using psql?

\"
sudo -u postgres /usr/pgsql-9.4/bin/psql -c \"\l\"
sudo -u postgres /usr/pgsql-9.4/bin/psql <DB_NAME> -c \"\dt\"
\"

\"
\list or \l: list all databases
\dt: list all tables in the current database
\"

[Ref.: https://dba.stackexchange.com/questions/1285/how-do-i-list-all-databases-and-tables-using-psql]
"

f_split "$STRING_TO_SPLIT" "bin/psql -c"

# --------------------------------------
# OUTPUT AND TEST
# ----------------

ARR_LENGTH=${#F_SPLIT_R[*]}
for (( i=0; i<=$(( $ARR_LENGTH -1 )); i++ )) ; do
    echo " > -----------------------------------------"
    echo "${F_SPLIT_R[$i]}"
    echo " < -----------------------------------------"
done

if [ "$STRING_TO_SPLIT" == "${F_SPLIT_R[0]}bin/psql -c${F_SPLIT_R[1]}" ] ; then
    echo " > -----------------------------------------"
    echo "The strings are the same!"
    echo " < -----------------------------------------"
fi

sel-en-ium, May 31, 2018 at 5:56
Another way to do it without modifying IFS:

read -r -a myarray <<< "${string//, /$IFS}"

Rather than changing IFS to match our desired delimiter, we can replace all occurrences of our desired delimiter ", " with the contents of $IFS via "${string//, /$IFS}". Maybe this will be slow for very large strings though?
This is based on Dennis Williamson's answer.
rsjethani ,Sep 13, 2016 at 16:21
Another approach can be:

str="a, b, c, d"  # assuming there is a space after ',' as in Q
arr=(${str//,/})  # delete all occurrences of ','

After this 'arr' is an array with four strings. This doesn't require dealing with IFS or read or any other special stuff, hence it is much simpler and more direct.
gniourf_gniourf ,Dec 26, 2016 at 18:12
Same (sadly common) antipattern as other answers: subject to word splitting and filename expansion. – gniourf_gniourf Dec 26 '16 at 18:12
Safter Arslan, Aug 9, 2017 at 3:21
Another way would be:

string="Paris, France, Europe"
IFS=', '
arr=(${string})

Now your elements are stored in the "arr" array. To iterate through the elements:

for i in ${arr[@]}; do echo $i; done

bgoldst, Aug 13, 2017 at 22:38
I cover this idea in my answer; see Wrong answer #5 (you might be especially interested in my discussion of the eval trick). Your solution leaves $IFS set to the comma-space value after-the-fact. – bgoldst Aug 13 '17 at 22:38
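Given that caveat, a minimal sketch (mine, not from the thread) of the save-and-restore idiom that other answers below also use, so the IFS change doesn't leak into the rest of the script:

string="Paris, France, Europe"
oIFS=$IFS       # save the current IFS
IFS=', '        # split on comma and space
arr=($string)   # note: unquoted, so glob expansion still applies (see the antipattern warning above)
IFS=$oIFS       # restore IFS so later code is unaffected
printf '%s\n' "${arr[@]}"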
flowblok's blog
that diagram shows what happens according to the man page, and not what happens when you actually try it out in real life. This second diagram more accurately captures the insanity of bash: see how remote interactive login shells read /etc/bash.bashrc, but normal interactive login shells don't? Sigh.
Finally, here's a repository containing my implementation and the graphviz files for the above diagram. If your POSIX-compliant shell isn't listed here, or if I've made a horrible mistake (or just a tiny one), please send me a pull request or make a comment below, and I'll update this post accordingly.
[1] and since I'm writing this, I can make you say whatever I want for the purposes of narrative.
Jan 26, 2019 | flowblok.id.au
6 years late, but...
In my experience, if your bash sources /etc/bash.bashrc, odds are good it also sources /etc/bash.bash_logout or something similar on logout (after ~/.bash_logout, of course).
From bash-4.4/config-top.h:
/* System-wide .bashrc file for interactive shells. */
/* #define SYS_BASHRC "/etc/bash.bashrc" */

/* System-wide .bash_logout for login shells. */
/* #define SYS_BASH_LOGOUT "/etc/bash.bash_logout" */

(Yes, they're disabled by default.)
Check the FILES section of your system's bash man page for details.
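One quick heuristic (my suggestion, not from the post, and assuming the strings(1) tool from binutils is installed): if your bash was built with SYS_BASHRC, the path is usually baked into the binary, so you can look for it directly:

strings "$(command -v bash)" | grep -E 'bash\.(bashrc|bash_logout)'
# prints /etc/bash.bashrc and/or /etc/bash.bash_logout on builds (e.g. Debian's)
# that define SYS_BASHRC / SYS_BASH_LOGOUT; prints nothing on stock builds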
Jan 06, 2018 | zwischenzugs.com
Intro
Recently I wanted to deepen my understanding of bash by researching as much of it as possible. Because I felt bash is an often-used (and under-understood) technology, I ended up writing a book on it.
A preview is available here .
You don't have to look hard on the internet to find plenty of useful one-liners in bash, or scripts. And there are guides to bash that seem somewhat intimidating through either their thoroughness or their focus on esoteric detail.
Here I've focussed on the things that either confused me or increased my power and productivity in bash significantly, and tried to communicate them (as in my book) in a way that emphasises getting the understanding right.
Enjoy!
1) `` vs $()
These two operators do the same thing. Compare these two lines:

$ echo `ls`
$ echo $(ls)

Why these two forms existed confused me for a long time.
If you don't know, both forms substitute the output of the command contained within it into the command.
The principal difference is that nesting is simpler.
Which of these is easier to read (and write)?
$ echo `echo \`echo \\\`echo inside\\\`\``

or:

$ echo $(echo $(echo $(echo inside)))

If you're interested in going deeper, see here or here.
2) globbing vs regexps
Another one that can confuse if never thought about or researched.
While globs and regexps can look similar, they are not the same.
Consider this command:
$ rename -n 's/(.*)/new$1/' *

The two asterisks are interpreted in different ways.
The first is ignored by the shell (because it is in quotes), and is interpreted by the rename application as '0 or more of the preceding item' (here, any character). So it's interpreted as a regular expression.
The second is interpreted by the shell (because it is not in quotes), and gets replaced by a list of all the files in the current working folder. It is interpreted as a glob.
So by looking at man bash, can you figure out why these two commands produce different output?

$ ls *
$ ls .*

The second looks even more like a regular expression. But it isn't!
3) Exit Codes
Not everyone knows that every time you run a shell command in bash, an 'exit code' is returned to bash.
Generally, if a command 'succeeds' you get an exit code of 0. If it doesn't succeed, you get a non-zero code. 1 is a 'general error', and others can give you more information (eg which signal killed it, for example).
But these rules don't always hold:
$ grep not_there /dev/null
$ echo $?
$? is a special bash variable that's set to the exit code of each command after it runs.
Grep uses exit codes to indicate whether it matched or not. I have to look up every time which way round it goes: does finding a match or not return 0?
Grok this and a lot will click into place in what follows.
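For the record, and to answer the author's question: GNU grep exits 0 when it finds a match, 1 when it finds none, and 2 on a real error. A quick check in the same style as the post:

$ grep not_there /dev/null; echo $?     # no match
1
$ echo here | grep -q here; echo $?     # match found
0
$ grep x /no/such/file; echo $?         # error
grep: /no/such/file: No such file or directory
2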
4) if statements, [ and [[
Here's another 'spot the difference' similar to the backticks one above.
What will this output?
if grep not_there /dev/null
then
    echo hi
else
    echo lo
fi

grep's return code makes code like this work more intuitively as a side effect of its use of exit codes.
Now what will this output?
a) hihi
b) lolo
c) something else

if [ $(grep not_there /dev/null) = '' ]
then
    echo -n hi
else
    echo -n lo
fi

if [[ $(grep not_there /dev/null) = '' ]]
then
    echo -n hi
else
    echo -n lo
fi

The difference between [ and [[ was another thing I never really understood. [ is the original form for tests, and then [[ was introduced, which is more flexible and intuitive. In the first if block above, the if statement barfs because the $(grep not_there /dev/null) is evaluated to nothing, resulting in this comparison:

[ = '' ]

which makes no sense. The double bracket form handles this for you.
This is why you occasionally see comparisons like this in bash scripts:
if [ x$(grep not_there /dev/null) = 'x' ]
so that if the command returns nothing it still runs. There's no need for it, but that's why it exists.
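A side note of mine, not from the post: with single brackets the usual modern fix is simply to quote the command substitution, so an empty result stays a single (empty) word:

if [ "$(grep not_there /dev/null)" = '' ]   # the quotes preserve the empty string
then
    echo -n hi
else
    echo -n lo
fi
# prints 'hi', just like the [[ version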
5) `set`s
Bash has configurable options which can be set on the fly. I use two of these all the time:

set -e

exits from a script if any command returned a non-zero exit code (see above).
This outputs the commands that get run as they run:
set -x

So a script might start like this:
#!/bin/bash
set -e
set -x
grep not_there /dev/null
echo $?

What would that script output?
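If you want to check your answer, here is my annotated walk-through (the '+' trace prefix comes from set -x; exact formatting may vary slightly by version):

#!/bin/bash
set -e
set -x
grep not_there /dev/null   # trace prints: + grep not_there /dev/null
                           # no match, so grep exits with status 1,
                           # and set -e aborts the script right here
echo $?                    # never reached; the script itself exits with 1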
6) <()
This is my favourite. It's so under-used, perhaps because it can be initially baffling, but I use it all the time.
It's similar to $() in that the output of the command inside is re-used.
In this case, though, the output is treated as a file. This file can be used as an argument to commands that take files as an argument.
Confused? Here's an example.
Have you ever done something like this?
$ grep somestring file1 > /tmp/a
$ grep somestring file2 > /tmp/b
$ diff /tmp/a /tmp/b

That works, but instead you can write:

diff <(grep somestring file1) <(grep somestring file2)

Isn't that neater?
7) Quoting
Quoting's a knotty subject in bash, as it is in many software contexts.
Firstly, variables in quotes:
A='123'
echo "$A"
echo '$A'

Pretty simple – double quotes dereference variables, while single quotes go literal.
So what will this output?
mkdir -p tmp
cd tmp
touch a
echo "*"
echo '*'

Surprised? I was.
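Spoiler, in case you want to verify the surprise yourself: pathname expansion simply does not happen inside quotes, so both quoted forms print a literal asterisk; only an unquoted * expands:

mkdir -p tmp && cd tmp && touch a
echo "*"   # prints: *  (no glob expansion inside double quotes)
echo '*'   # prints: *  (nor inside single quotes)
echo *     # prints: a  (unquoted, the glob matches the file we just created)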
8) Top three shortcuts
There are plenty of shortcuts listed in man bash, and it's not hard to find comprehensive lists. This list consists of the ones I use most often, in order of how often I use them.
Rather than trying to memorize them all, I recommend picking one, and trying to remember to use it until it becomes unconscious. Then take the next one. I'll skip over the most obvious ones (eg !! – repeat last command, and ~ – your home directory).
!$
I use this dozens of times a day. It repeats the last argument of the last command. If you're working on a file, and can't be bothered to re-type it command after command it can save a lot of work:
grep somestring /long/path/to/some/file/or/other.txt
vi !$
!:1-$
This bit of magic takes this further. It takes all the arguments to the previous command and drops them in. So:
grep isthere /long/path/to/some/file/or/other.txt
egrep !:1-$
fgrep !:1-$

The ! means 'look at the previous command', the : is a separator, and the 1 means 'take the first word', the - means 'until' and the $ means 'the last word'.
Note: you can achieve the same thing with !*. Knowing the above gives you the control to limit to a specific contiguous subset of arguments, eg with !:2-3.
:h
I use this one a lot too. If you put it after a filename, it will change that filename to remove everything up to the folder. Like this:
grep isthere /long/path/to/some/file/or/other.txt
cd !$:h

which can save a lot of work in the course of the day.
9) startup order
The order in which bash runs startup scripts can cause a lot of head-scratching. I keep this diagram handy (from this great page):
It shows which scripts bash decides to run from the top, based on decisions made about the context bash is running in (which decides the colour to follow).
So if you are in a local (non-remote), non-login, interactive shell (eg when you run bash itself from the command line), you are on the 'green' line, and these are the order of files read:
/etc/bash.bashrc
~/.bashrc
[bash runs, then terminates]
~/.bash_logout

This can save you a hell of a lot of time debugging.
10) getopts (cheapci)
If you go deep with bash, you might end up writing chunky utilities in it. If you do, then getting to grips with getopts can pay large dividends.
For fun, I once wrote a script called cheapci which I used to work like a Jenkins job.
The code here implements the reading of the two required, and 14 non-required arguments. Better to learn this than to build up a bunch of bespoke code that can get very messy pretty quickly as your utility grows.
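The linked cheapci code is the real-world version; as a hedged, minimal sketch of the getopts pattern it relies on (the option names below are invented for illustration, not taken from cheapci):

#!/bin/bash
# Parse -f <file> (required) plus optional -v and -h.
usage() { echo "Usage: $0 -f file [-v]" >&2; exit 1; }
verbose=0
file=''
while getopts 'f:vh' opt; do
    case $opt in
        f) file=$OPTARG ;;
        v) verbose=1 ;;
        h|*) usage ;;
    esac
done
shift $((OPTIND - 1))        # "$@" now holds any non-option arguments
[ -n "$file" ] || usage      # enforce the required option
if [ "$verbose" -eq 1 ]; then echo "processing $file" >&2; fi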
This is based on some of the contents of my book Learn Bash the Hard Way, available for $7.
Dec 20, 2018 | forums.debian.net
pawRoot " 2018-10-15 17:13
Just spent some time editing .bashrc to make my life easier, and wondering if anyone has some cool "tricks" for bash as well. Here is mine:
- Code: Select all
# changing shell appearance
PS1='\[\033[0;32m\]\[\033[0m\033[0;32m\]\u\[\033[0;36m\] @ \[\033[0;36m\]\h \w\[\033[0;32m\]$(__git_ps1)\n\[\033[0;32m\]└─\[\033[0m\033[0;32m\] \$\[\033[0m\033[0;32m\] ▶\[\033[0m\] '
# aliases
alias la="ls -la --group-directories-first --color"
# clear terminal
alias cls="clear"
# update and upgrade packages
alias sup="sudo apt update && sudo apt upgrade"
# search for package
alias apts='apt-cache search'
# start x session
alias x="startx"
# download mp3 in best quality from YouTube
# usage: ytmp3 https://www.youtube.com/watch?v=LINK
alias ytmp3="youtube-dl -f bestaudio --extract-audio --audio-format mp3 --audio-quality 0"
# perform 'la' after 'cd'
alias cd="listDir"
listDir() {
builtin cd "$*"
RESULT=$?
if [ "$RESULT" -eq 0 ]; then
la
fi
}
# type "extract filename" to extract the file
extract () {
    # note: "$1" is quoted so filenames containing spaces also work
    if [ -f "$1" ] ; then
        case "$1" in
            *.tar.bz2) tar xvjf "$1" ;;
            *.tar.gz) tar xvzf "$1" ;;
            *.bz2) bunzip2 "$1" ;;
            *.rar) unrar x "$1" ;;
            *.gz) gunzip "$1" ;;
            *.tar) tar xvf "$1" ;;
            *.tbz2) tar xvjf "$1" ;;
            *.tgz) tar xvzf "$1" ;;
            *.zip) unzip "$1" ;;
            *.Z) uncompress "$1" ;;
            *.7z) 7z x "$1" ;;
            *) echo "don't know how to extract '$1'..." ;;
        esac
    else
        echo "'$1' is not a valid file!"
    fi
}
# obvious one
alias ..="cd .."
alias ...="cd ../.."
alias ....="cd ../../.."
alias .....="cd ../../../.."
# tail all logs in /var/log
alias logs="find /var/log -type f -exec file {} \; | grep 'text' | cut -d' ' -f1 | sed -e's/:$//g' | grep -v '[0-9]$' | xargs tail -f"
Head_on_a_Stick " 2018-10-15 18:11
pawRoot wrote: [the extract function above]
Erm, did you know that `tar` autoextracts these days? This will work for pretty much anything:
- Code: Select all
tar xf whatever.tar.whatever
I have these functions in my .mkshrc (bash is bloat!):
The mnt function acts like a poor person's arch-chroot and will bind mount /proc /sys & /dev before chrooting then tear it down afterwards.
- Code: Select all
function mnt {
for i in proc sys dev dev/pts; do sudo mount --bind /$i "$1"$i; done
sudo chroot "$1" /bin/bash
sudo umount -R "$1"{proc,sys,dev}
}
function mkiso {
xorriso -as mkisofs \
-iso-level 3 \
-full-iso9660-filenames \
-volid SharpBang-stretch \
-eltorito-boot isolinux/isolinux.bin \
-eltorito-catalog isolinux/boot.cat \
-no-emul-boot -boot-load-size 4 -boot-info-table \
-isohybrid-mbr isolinux/isohdpfx.bin \
-eltorito-alt-boot \
-e boot/grub/efi.img \
-no-emul-boot -isohybrid-gpt-basdat \
-output ../"$1" ./
}The mkiso function builds a UEFI-capable Debian live system (with the name of the image given as the first argument).
The only other stuff I have are aliases, not really worth posting.
dbruce wrote: Ubuntu forums try to be like a coffee shop in Seattle. Debian forums strive for the charm and ambience of a skinhead bar in Bacau. We intend to keep it that way.
pawRoot " 2018-10-15 18:23
Head_on_a_Stick wrote: Erm, did you know that `tar` autoextracts these days? This will work for pretty much anything:
But it won't work for zip or rar, right?
None1975 " 2018-10-16 13:02
Here is a compilation of cool "tricks" for bash. This is similar to oh-my-zsh. OS: Debian Stretch / WM : Fluxbox
Debian Wiki | DontBreakDebian , My config files in github
debiman " 2018-10-21 14:38
i have a LOT of stuff in my /etc/bash.bashrc, because i want it to be available for the root user too.
i won't post everything, but here's a "best of" from both /etc/bash.bashrc and ~/.bashrc:
- Code: Select all
case ${TERM} in
xterm*|rxvt*|Eterm|aterm|kterm|gnome*)
PROMPT_COMMAND=${PROMPT_COMMAND:+$PROMPT_COMMAND; }'printf "\033]0;%s: %s\007" "${SHELL##*/}" "${PWD/#$HOME/\~}"'
;;
screen)
PROMPT_COMMAND=${PROMPT_COMMAND:+$PROMPT_COMMAND; }'printf "\033_%s@%s:%s\033\\" "${USER}" "${HOSTNAME%%.*}" "${PWD/#$HOME/\~}"'
;;
linux)
setterm --blength 0
setterm --blank 4
setterm --powerdown 8
;;
esac
PS2='cont> '
PS3='Choice: '
PS4='DEBUG: '
# Bash won't get SIGWINCH if another process is in the foreground.
# Enable checkwinsize so that bash will check the terminal size when
# it regains control.
# http://cnswww.cns.cwru.edu/~chet/bash/FAQ (E11)
shopt -s checkwinsize
# forums.bunsenlabs.org/viewtopic.php?pid=27494#p27494
# also see aliases '...' and '....'
shopt -s autocd
# opensource.com/article/18/5/bash-tricks
shopt -s cdspell
# as big as possible!!!
HISTSIZE=500000
HISTFILESIZE=2000000
# unix.stackexchange.com/a/18443
# history: erase duplicates...
HISTCONTROL=ignoredups:erasedups
shopt -s histappend
# next: enables usage of CTRL-S (backward search) with CTRL-R (forward search)
# digitalocean.com/community/tutorials/how-to-use-bash-history-commands-and-expansions-on-a-linux-vps#searching-through-bash-history
stty -ixon
if [[ ${EUID} == 0 ]] ; then
# root = color=1 # red
if [ "$TERM" != "linux" ]; then
PS1="\[$(tput setaf 1)\]\[$(tput rev)\] \[$(tput sgr0)\]\[$(tput setaf 5)\]\${?#0}\[$(tput setaf 1)\] \u@\h \w\[$(tput sgr0)\]\n\[$(tput rev)\] \[$(tput sgr0)\] "
else
# adding \t = time to tty prompt
PS1="\[$(tput setaf 1)\]\[$(tput rev)\] \[$(tput sgr0)\]\[$(tput setaf 5)\]\${?#0}\[$(tput setaf 1)\] \t \u@\h \w\[$(tput sgr0)\]\n\[$(tput rev)\] \[$(tput sgr0)\] "
fi
else
if [ "$TERM" != "linux" ]; then
PS1="\[$(tput setaf 2)\]\[$(tput rev)\] \[$(tput sgr0)\]\[$(tput setaf 5)\]\${?#0}\[$(tput setaf 2)\] \u@\h \w\[$(tput sgr0)\]\n\[$(tput rev)\] \[$(tput sgr0)\] "
else
# adding \t = time to tty prompt
PS1="\[$(tput setaf 2)\]\[$(tput rev)\] \[$(tput sgr0)\]\[$(tput setaf 5)\]\${?#0}\[$(tput setaf 2)\] \t \u@\h \w\[$(tput sgr0)\]\n\[$(tput rev)\] \[$(tput sgr0)\] "
fi
fi
[ -r /usr/share/bash-completion/bash_completion ] && . /usr/share/bash-completion/bash_completion || true
export EDITOR="nano"
man() {
env LESS_TERMCAP_mb=$(printf "\e[1;31m") \
LESS_TERMCAP_md=$(printf "\e[1;31m") \
LESS_TERMCAP_me=$(printf "\e[0m") \
LESS_TERMCAP_se=$(printf "\e[0m") \
LESS_TERMCAP_so=$(printf "\e[7m") \
LESS_TERMCAP_ue=$(printf "\e[0m") \
LESS_TERMCAP_us=$(printf "\e[1;32m") \
man "$@"
}
#LESS_TERMCAP_so=$(printf "\e[1;44;33m")
# that used to be in the man function for less's annoyingly over-colorful status line.
# changed it to simple reverse video (tput rev)
alias ls='ls --group-directories-first -hF --color=auto'
alias ll='ls --group-directories-first -hF --color=auto -la'
alias mpf='/usr/bin/ls -1 | mpv --playlist=-'
alias ruler='slop -o -c 1,0.3,0'
alias xmeasure='slop -o -c 1,0.3,0'
alias obxprop='obxprop | grep -v _NET_WM_ICON'
alias sx='exec startx > ~/.local/share/xorg/xlog 2>&1'
alias pngq='pngquant --nofs --speed 1 --skip-if-larger --strip '
alias screencap='ffmpeg -r 15 -s 1680x1050 -f x11grab -i :0.0 -vcodec msmpeg4v2 -qscale 2'
alias su='su -'
alias fblc='fluxbox -list-commands | column'
alias torrench='torrench -t -k -s -x -r -l -i -b --sorted'
alias F5='while sleep 60; do notify-send -u low "Pressed F5 on:" "$(xdotool getwindowname $(xdotool getwindowfocus))"; xdotool key F5; done'
alias aurs='aurman --sort_by_name -Ss'
alias cal3='cal -3 -m -w --color'
alias mkdir='mkdir -p -v'
alias ping='ping -c 5'
alias cd..='cd ..'
alias off='systemctl poweroff'
alias xg='xgamma -gamma'
alias find='find 2>/dev/null'
alias stressme='stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --timeout'
alias hf='history|grep'
alias du1='du -m --max-depth=1|sort -g|sed "s/\t./M\t/g ; s/\///g"'
alias zipcat='gunzip -c'
mkcd() {
mkdir -p "$1"
echo cd "$1"
cd "$1"
}
Nov 17, 2018 | www.mankier.com
hh -- easily view, navigate, sort and use your command history with shell history suggest box.
Synopsis
hh [option] [arg1] [arg2]...
hstr [option] [arg1] [arg2]...

Description

hh uses shell history to provide suggest-box-like functionality for commands used in the past. By default it parses the .bash_history file, which is filtered as you type a command substring. Commands are not just filtered, but also ordered by a ranking algorithm that considers the number of occurrences, length and timestamp. Favorite and frequently used commands can be bookmarked. In addition hh allows removal of commands from history - for instance those with a typo or with sensitive content.
Options
- -h --help
- Show help
- -n --non-interactive
- Print filtered history on standard output and exit
- -f --favorites
- Show favorites view immediately
- -s --show-configuration
- Show configuration that can be added to ~/.bashrc
- -b --show-blacklist
- Show blacklist of commands to be filtered out before history processing
- -V --version
- Show version information
Keys
- pattern
- Type to filter shell history.
- Ctrl-e
- Toggle regular expression and substring search.
- Ctrl-t
- Toggle case sensitive search.
- Ctrl-/ , Ctrl-7
- Rotate view of history as provided by Bash, ranked history ordered by the number of occurrences/length/timestamp and favorites.
- Ctrl-f
- Add currently selected command to favorites.
- Ctrl-l
- Make search pattern lowercase or uppercase.
- Ctrl-r , UP arrow, DOWN arrow, Ctrl-n , Ctrl-p
- Navigate in the history list.
- TAB , RIGHT arrow
- Choose currently selected item for completion and let the user edit it on the command prompt.
- LEFT arrow
- Choose currently selected item for completion and let the user edit it in an editor (fix command).
- ENTER
- Choose currently selected item for completion and execute it.
- DEL
- Remove currently selected item from the shell history.
- BACKSPACE , Ctrl-h
- Delete last pattern character.
- Ctrl-u , Ctrl-w
- Delete pattern and search again.
- Ctrl-x
- Write changes to shell history and exit.
- Ctrl-g
- Exit with empty prompt.
Environment Variables
hh defines the following environment variables:
- HH_CONFIG
- Configuration options:
hicolor: Get more colors with this option (default is monochromatic).
monochromatic: Ensure black and white view.
prompt-bottom: Show prompt at the bottom of the screen (default is prompt at the top).
regexp: Filter command history using regular expressions (substring match is default).
substring: Filter command history using substring.
keywords: Filter command history using keywords - item matches if it contains all keywords in pattern in any order.
casesensitive: Make history filtering case sensitive (it's case insensitive by default).
rawhistory: Show normal history as a default view (metric-based view is shown otherwise).
favorites: Show favorites as a default view (metric-based view is shown otherwise).
duplicates: Show duplicates in rawhistory (duplicates are discarded by default).
blacklist: Load list of commands to skip when processing history from ~/.hh_blacklist (built-in blacklist used otherwise).
big-keys-skip: Skip big history entries i.e. very long lines (default).
big-keys-floor: Use different sorting slot for big keys when building metrics-based view (big keys are skipped by default).
big-keys-exit: Exit (fail) on presence of a big key in history (big keys are skipped by default).
warning: Show warning.
debug: Show debug information.
Example:
export HH_CONFIG=hicolor,regexp,rawhistory
- HH_PROMPT
- Change prompt string which is user@host$ by default.
Example:
export HH_PROMPT="$ "

Files
- ~/.hh_favorites
- Bookmarked favorite commands.
- ~/.hh_blacklist
- Command blacklist.
Bash Configuration
Optionally add the following lines to ~/.bashrc:
export HH_CONFIG=hicolor         # get more colors
shopt -s histappend              # append new history items to .bash_history
export HISTCONTROL=ignorespace   # leading space hides commands from history
export HISTFILESIZE=10000        # increase history file size (default is 500)
export HISTSIZE=${HISTFILESIZE}  # increase history size (default is 500)
export PROMPT_COMMAND="history -a; history -n; ${PROMPT_COMMAND}"
# if this is interactive shell, then bind hh to Ctrl-r (for Vi mode check doc)
if [[ $- =~ .*i.* ]]; then bind '"\C-r": "\C-a hh -- \C-j"'; fi

The prompt command ensures synchronization of the history between BASH memory and the history file.
ZSH Configuration
Optionally add the following lines to ~/.zshrc:
export HISTFILE=~/.zsh_history  # ensure history file visibility
export HH_CONFIG=hicolor        # get more colors
bindkey -s "\C-r" "\eqhh\n"     # bind hh to Ctrl-r (for Vi mode check doc, experiment with --)

Examples
- hh git
- Start `hh` and show only history items containing 'git'.
- hh --non-interactive git
- Print history items containing 'git' to standard output and exit.
- hh --show-configuration >> ~/.bashrc
- Append default hh configuration to your Bash profile.
- hh --show-blacklist
- Show blacklist configured for history processing.
Author
Written by Martin Dvorak <[email protected]>
Bugs
Report bugs to https://github.com/dvorka/hstr/issues
See Also
Referenced By
The man page hstr(1) is an alias of hh(1).
Nov 08, 2018 | stackoverflow.com
Rob I, May 9, 2012 at 19:22
For your second question, see @mkb's comment to my answer below - that's definitely the way to go! – Rob I May 9 '12 at 19:22Dennis Williamson , Jul 4, 2012 at 16:14
See my edited answer for one way to read individual characters into an array. – Dennis Williamson Jul 4 '12 at 16:14Nick Weedon , Dec 31, 2015 at 11:04
Here is the same thing in a more concise form: var1=$(cut -f1 -d- <<<$STR) – Nick Weedon Dec 31 '15 at 11:04Rob I , May 9, 2012 at 17:00
If your solution doesn't have to be general, i.e. only needs to work for strings like your example, you could do:

var1=$(echo $STR | cut -f1 -d-)
var2=$(echo $STR | cut -f2 -d-)

I chose cut here because you could simply extend the code for a few more variables...
Can you look at my post again and see if you have a solution for the followup question? thanks! – crunchybutternut May 9 '12 at 17:40mkb , May 9, 2012 at 17:59
You can use cut to cut characters too! cut -c1 for example. – mkb May 9 '12 at 17:59
Although this is very simple to read and write, is a very slow solution because forces you to read twice the same data ($STR) ... if you care of your script performace, the @anubhava solution is much better – FSp Nov 27 '12 at 10:26tripleee , Jan 25, 2016 at 6:47
Apart from being an ugly last-resort solution, this has a bug: You should absolutely use double quotes in echo "$STR" unless you specifically want the shell to expand any wildcards in the string as a side effect. See also stackoverflow.com/questions/10067266/ – tripleee Jan 25 '16 at 6:47
You're right about double quotes of course, though I did point out this solution wasn't general. However I think your assessment is a bit unfair - for some people this solution may be more readable (and hence extensible etc) than some others, and doesn't completely rely on arcane bash feature that wouldn't translate to other shells. I suspect that's why my solution, though less elegant, continues to get votes periodically... – Rob I Feb 10 '16 at 13:57Dennis Williamson , May 10, 2012 at 3:14
read with IFS are perfect for this:

$ IFS=- read var1 var2 <<< ABCDE-123456
$ echo "$var1"
ABCDE
$ echo "$var2"
123456

Edit:
Here is how you can read each individual character into array elements:
$ read -a foo <<<"$(echo "ABCDE-123456" | sed 's/./& /g')"Dump the array:
$ declare -p foo
declare -a foo='([0]="A" [1]="B" [2]="C" [3]="D" [4]="E" [5]="-" [6]="1" [7]="2" [8]="3" [9]="4" [10]="5" [11]="6")'
$ IFS=$'\v' read -a foo <<<"$(echo "ABCDE 123456" | sed 's/./&\v/g')"
$ declare -p foo
declare -a foo='([0]="A" [1]="B" [2]="C" [3]="D" [4]="E" [5]=" " [6]="1" [7]="2" [8]="3" [9]="4" [10]="5" [11]="6")'
Great, the elegant bash-only way, without unnecessary forks. – insecure Apr 30 '14 at 7:51Martin Serrano , Jan 11 at 4:34
this solution also has the benefit that if the delimiter is not present, var2 will be empty – Martin Serrano Jan 11 at 4:34
If you know it's going to be just two fields, you can skip the extra subprocesses like this:

var1=${STR%-*}
var2=${STR#*-}

What does this do? ${STR%-*} deletes the shortest substring of $STR that matches the pattern -* starting from the end of the string. ${STR#*-} does the same, but with the *- pattern and starting from the beginning of the string. They each have counterparts %% and ## which find the longest anchored pattern match. If anyone has a helpful mnemonic to remember which does which, let me know! I always have to try both to remember.
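To make the shortest/longest difference concrete, a quick demo of mine with a three-field string:

STR="a-b-c"
echo "${STR%-*}"    # a-b  (shortest suffix match '-c' removed)
echo "${STR%%-*}"   # a    (longest suffix match '-b-c' removed)
echo "${STR#*-}"    # b-c  (shortest prefix match 'a-' removed)
echo "${STR##*-}"   # c    (longest prefix match 'a-b-' removed)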
Plus 1 For knowing your POSIX shell features, avoiding expensive forks and pipes, and the absence of bashisms. – Jens Jan 30 '15 at 15:17Steven Lu , May 1, 2015 at 20:19
Dunno about "absence of bashisms" considering that this is already moderately cryptic .... if your delimiter is a newline instead of a hyphen, then it becomes even more cryptic. On the other hand, it works with newlines , so there's that. – Steven Lu May 1 '15 at 20:19mkb , Mar 9, 2016 at 17:30
@KErlandsson: done – mkb Mar 9 '16 at 17:30mombip , Aug 9, 2016 at 15:58
I've finally found documentation for it: Shell-Parameter-Expansion – mombip Aug 9 '16 at 15:58DS. , Jan 13, 2017 at 19:56
Mnemonic: "#" is to the left of "%" on a standard keyboard, so "#" removes a prefix (on the left), and "%" removes a suffix (on the right). – DS. Jan 13 '17 at 19:56tripleee , May 9, 2012 at 17:57
Sounds like a job for set with a custom IFS.

IFS=- set $STR
var1=$1
var2=$2

(You will want to do this in a function with a local IFS so you don't mess up other parts of your script where you require IFS to be what you expect.)
Nice - I knew about $IFS but hadn't seen how it could be used. – Rob I May 9 '12 at 19:20
I used triplee's example and it worked exactly as advertised! Just change the last two lines to myvar1=`echo $1` && myvar2=`echo $2` if you need to store them throughout a script with several "thrown" variables. – Sigg3.net Jun 19 '13 at 8:08
No, don't use a useless echo in backticks. – tripleee Jun 19 '13 at 13:25
This is a really sweet solution if we need to write something that is not Bash specific. To handle IFS troubles, one can add OLDIFS=$IFS at the beginning before overwriting it, and then add IFS=$OLDIFS just after the set line. – Daniel Andersson Mar 27 '15 at 6:46
FWIW the link above is broken. I was lazy and careless. The canonical location still works; iki.fi/era/unix/award.html#echo – tripleee Mar 27 '15 at 6:58anubhava , May 9, 2012 at 17:09
Using bash regex capabilities:

re="^([^-]+)-(.*)$"
[[ "ABCDE-123456" =~ $re ]] && var1="${BASH_REMATCH[1]}" && var2="${BASH_REMATCH[2]}"
echo $var1
echo $var2

OUTPUT
ABCDE
123456
Love pre-defining the re for later use(s)! – Cometsong Oct 21 '16 at 13:29
string="ABCDE-123456"
IFS=-  # use "local IFS=-" inside the function
set $string
echo $1  # >>> ABCDE
echo $2  # >>> 123456
Hmmm, isn't this just a restatement of my answer ? – tripleee Mar 27 '15 at 7:02Archibald , Sep 18, 2015 at 12:36
Actually yes. I just clarified it a bit. – Archibald Sep 18 '15 at 12:36
Nov 08, 2018 | stackoverflow.com
cd1 , Jul 1, 2010 at 23:29
Suppose I have the string 1:2:3:4:5 and I want to get its last field (5 in this case). How do I do that using Bash? I tried cut, but I don't know how to specify the last field with -f.
You can use string operators:

$ foo=1:2:3:4:5
$ echo ${foo##*:}
5

This trims everything from the front until a ':', greedily.

${foo  <-- from variable foo
##     <-- greedy front trim
*      <-- matches anything
:      <-- until the last ':'
}
While this is working for the given problem, the answer of William below (stackoverflow.com/a/3163857/520162) also returns 5 if the string is 1:2:3:4:5: (while using the string operators yields an empty result). This is especially handy when parsing paths that could contain (or not) a finishing / character. – eckes Jan 23 '13 at 15:23
How would you then do the opposite of this? to echo out '1:2:3:4:'? – Dobz Jun 25 '14 at 11:44Mihai Danila , Jul 9, 2014 at 14:07
And how does one keep the part before the last separator? Apparently by using ${foo%:*}. # - from beginning; % - from end. #, % - shortest match; ##, %% - longest match. – Mihai Danila Jul 9 '14 at 14:07
If I want to get the last element from a path, how should I use it? echo ${pwd##*/} does not work. – Putnik Feb 11 '16 at 22:33
@Putnik that command sees pwd as a variable. Try dir=$(pwd); echo ${dir##*/}. Works for me! – Stan Strum Dec 17 '17 at 4:22
Another way is to reverse before and after cut:

$ echo ab:cd:ef | rev | cut -d: -f1 | rev
ef

This makes it very easy to get the last but one field, or any range of fields numbered from the end.
Dannid , Jan 14, 2013 at 20:50
This answer is nice because it uses 'cut', which the author is (presumably) already familiar. Plus, I like this answer because I am using 'cut' and had this exact question, hence finding this thread via search. – Dannid Jan 14 '13 at 20:50funroll , Aug 12, 2013 at 19:51
Some cut-and-paste fodder for people using spaces as delimiters: echo "1 2 3 4" | rev | cut -d " " -f1 | rev – funroll Aug 12 '13 at 19:51
the rev | cut -d -f1 | rev is so clever! Thanks! Helped me a bunch (my use case was rev | -d ' ' -f 2- | rev – EdgeCaseBerg Sep 8 '13 at 5:01Anarcho-Chossid , Sep 16, 2015 at 15:54
Wow. Beautiful and dark magic. – Anarcho-Chossid Sep 16 '15 at 15:54shearn89 , Aug 17, 2017 at 9:27
I always forget about rev, was just what I needed! cut -b20- | rev | cut -b10- | rev – shearn89 Aug 17 '17 at 9:27
It's difficult to get the last field using cut, but here's (one set of) solutions in awk and perl:

$ echo 1:2:3:4:5 | awk -F: '{print $NF}'
5
$ echo 1:2:3:4:5 | perl -F: -wane 'print $F[-1]'
5
great advantage of this solution over the accepted answer: it also matches paths that contain or do not contain a finishing / character: /a/b/c/d and /a/b/c/d/ yield the same result (d) when processing pwd | awk -F/ '{print $NF}'. The accepted answer results in an empty result in the case of /a/b/c/d/ – eckes Jan 23 '13 at 15:20
@eckes In case of the AWK solution, on GNU bash, version 4.3.48(1)-release that's not true, as it matters whether you have a trailing slash or not. Simply put, AWK will use / as delimiter, and if your path is /my/path/dir/ it will use the value after the last delimiter, which is simply an empty string. So it's best to avoid the trailing slash if you need to do such a thing like I do. – stamster May 21 at 11:52
Assuming fairly simple usage (no escaping of the delimiter, for example), you can use grep:

$ echo "1:2:3:4:5" | grep -oE "[^:]+$"
5

Breakdown - find all the characters not the delimiter ([^:]) at the end of the line ($). -o only prints the matching part.
Dennis Williamson , Jul 2, 2010 at 0:05
One way:

var1="1:2:3:4:5"
var2=${var1##*:}

Another, using an array:

var1="1:2:3:4:5"
saveIFS=$IFS
IFS=":"
var2=($var1)
IFS=$saveIFS
var2=${var2[@]: -1}

Yet another with an array:

var1="1:2:3:4:5"
saveIFS=$IFS
IFS=":"
var2=($var1)
IFS=$saveIFS
count=${#var2[@]}
var2=${var2[$count-1]}

Using Bash (version >= 3.2) regular expressions:

var1="1:2:3:4:5"
[[ $var1 =~ :([^:]*)$ ]]
var2=${BASH_REMATCH[1]}
Thanks so much for array style, as I need this feature, but not have cut, awk these utils. – liuyang1 Mar 24 '15 at 6:02user3133260 , Dec 24, 2013 at 19:04
$ echo "a b c d e" | tr ' ' '\n' | tail -1
e

Simply translate the delimiter into a newline and choose the last entry with tail -1.
It will fail if the last item contains a \n, but for most cases is the most readable solution. – Yajo Jul 30 '14 at 10:13
Using sed:

$ echo '1:2:3:4:5' | sed 's/.*://'  # => 5
$ echo '' | sed 's/.*://'           # => (empty)
$ echo ':' | sed 's/.*://'          # => (empty)
$ echo ':b' | sed 's/.*://'         # => b
$ echo '::c' | sed 's/.*://'        # => c
$ echo 'a' | sed 's/.*://'          # => a
$ echo 'a:' | sed 's/.*://'         # => (empty)
$ echo 'a:b' | sed 's/.*://'        # => b
$ echo 'a::c' | sed 's/.*://'       # => c
If your last field is a single character, you could do this:

a="1:2:3:4:5"
echo ${a: -1}
echo ${a:(-1)}

Check string manipulation in bash.
gniourf_gniourf , Nov 13, 2013 at 16:15
This doesn't work: it gives the last character of a, not the last field. – gniourf_gniourf Nov 13 '13 at 16:15
True, that's the idea, if you know the length of the last field it's good. If not you have to use something else... – Ab Irato Nov 25 '13 at 13:25sphakka , Jan 25, 2016 at 16:24
Interesting, I didn't know of these particular Bash string manipulations. It also resembles to Python's string/array slicing . – sphakka Jan 25 '16 at 16:24ghostdog74 , Jul 2, 2010 at 1:16
Using Bash.

$ var1="1:2:3:4:0"
$ IFS=":"
$ set -- $var1
$ eval echo \$${#}
0
I would buy some details about this method, please :-) . – Sopalajo de Arrierez Dec 24 '14 at 5:04Rafa , Apr 27, 2017 at 22:10
Could have used echo ${!#} instead of eval echo \$${#}. – Rafa Apr 27 '17 at 22:10
echo "a:b:c:d:e" | xargs -d : -n1 | tail -1

First use xargs to split it using ":". -n1 means every line only has one part. Then print the last part.
BDL , Dec 7, 2016 at 13:47
Although this might solve the problem, one should always add an explanation to it. – BDL Dec 7 '16 at 13:47Crytis , Jun 7, 2017 at 9:13
already added.. – Crytis Jun 7 '17 at 9:13021 , Apr 26, 2016 at 11:33
There are many good answers here, but still I want to share this one using basename:

basename $(echo "a:b:c:d:e" | tr ':' '/')

However it will fail if there are already some '/' in your string. If slash / is your delimiter then you just have to (and should) use basename.
It's not the best answer but it just shows how you can be creative using bash commands.
It's not the best answer but it just shows how you can be creative using bash commands.
Nahid Akbar , Jun 22, 2012 at 2:55
for x in `echo $str | tr ";" "\n"`; do echo $x; donechepner , Jun 22, 2012 at 12:58
This runs into problems if there is whitespace in any of the fields. Also, it does not directly address the question of retrieving the last field. – chepner Jun 22 '12 at 12:58Christoph Böddeker , Feb 19 at 15:50
For those that are comfortable with Python, https://github.com/Russell91/pythonpy is a nice choice to solve this problem.

$ echo "a:b:c:d:e" | py -x 'x.split(":")[-1]'

From the pythonpy help: -x treat each row of stdin as x.
With that tool, it is easy to write python code that gets applied to the input.
baz , Nov 24, 2017 at 19:27
a solution using the read builtin:

IFS=':' read -a field <<< "1:2:3:4:5"
echo ${field[4]}
Nov 08, 2018 | stackoverflow.com
stefanB , May 28, 2009 at 2:03
I have this string stored in a variable:

IN="[email protected];[email protected]"

Now I would like to split the strings by the ; delimiter so that I have:

ADDR1="[email protected]"
ADDR2="[email protected]"

I don't necessarily need the ADDR1 and ADDR2 variables. If they are elements of an array that's even better.

After suggestions from the answers below, I ended up with the following which is what I was after:

#!/usr/bin/env bash
IN="[email protected];[email protected]"
mails=$(echo $IN | tr ";" "\n")
for addr in $mails
do
    echo "> [$addr]"
done

Output:

> [[email protected]]
> [[email protected]]

There was a solution involving setting Internal_field_separator (IFS) to ;. I am not sure what happened with that answer, how do you reset IFS back to default?

RE: IFS solution, I tried this and it works, I keep the old IFS and then restore it:

IN="[email protected];[email protected]"
OIFS=$IFS
IFS=';'
mails2=$IN
for x in $mails2
do
    echo "> [$x]"
done
IFS=$OIFS

BTW, when I tried

mails2=($IN)

I only got the first string when printing it in loop, without brackets around $IN it works.
With regards to your "Edit2": You can simply "unset IFS" and it will return to the default state. There's no need to save and restore it explicitly unless you have some reason to expect that it's already been set to a non-default value. Moreover, if you're doing this inside a function (and, if you aren't, why not?), you can set IFS as a local variable and it will return to its previous value once you exit the function. – Brooks Moses May 1 '12 at 1:26dubiousjim , May 31, 2012 at 5:21
@BrooksMoses: (a) +1 for using local IFS=... where possible; (b) -1 for unset IFS, this doesn't exactly reset IFS to its default value, though I believe an unset IFS behaves the same as the default value of IFS ($' \t\n'), however it seems bad practice to be assuming blindly that your code will never be invoked with IFS set to a custom value; (c) another idea is to invoke a subshell: (IFS=$custom; ...) when the subshell exits IFS will return to whatever it was originally. – dubiousjim May 31 '12 at 5:21
I just want to have a quick look at the paths to decide where to throw an executable, so I resorted to running ruby -e "puts ENV.fetch('PATH').split(':')". If you want to stay pure, bash won't help, but using any scripting language that has a built-in split is easier. – nicooga Mar 7 '16 at 15:32
This is kind of a drive-by comment, but since the OP used email addresses as the example, has anyone bothered to answer it in a way that is fully RFC 5322 compliant, namely that any quoted string can appear before the @ which means you're going to need regular expressions or some other kind of parser instead of naive use of IFS or other simplistic splitter functions. – Jeff Apr 22 at 17:51user2037659 , Apr 26 at 20:15
for x in $(IFS=';';echo $IN); do echo "> [$x]"; done – user2037659 Apr 26 at 20:15
You can set the internal field separator (IFS) variable, and then let it parse into an array. When this happens in a command, then the assignment to IFS only takes place to that single command's environment (to read). It then parses the input according to the IFS variable value into an array, which we can then iterate over.

IFS=';' read -ra ADDR <<< "$IN"
for i in "${ADDR[@]}"; do
    # process "$i"
done

It will parse one line of items separated by ;, pushing it into an array. Stuff for processing whole of $IN, each time one line of input separated by ;:

while IFS=';' read -ra ADDR; do
    for i in "${ADDR[@]}"; do
        # process "$i"
    done
done <<< "$IN"
This is probably the best way. How long will IFS persist in it's current value, can it mess up my code by being set when it shouldn't be, and how can I reset it when I'm done with it? – Chris Lutz May 28 '09 at 2:25Johannes Schaub - litb , May 28, 2009 at 3:04
now after the fix applied, only within the duration of the read command :) – Johannes Schaub - litb May 28 '09 at 3:04lhunath , May 28, 2009 at 6:14
You can read everything at once without using a while loop: read -r -d '' -a addr <<< "$in" # The -d '' is key here, it tells read not to stop at the first newline (which is the default -d) but to continue until EOF or a NULL byte (which only occur in binary data). – lhunath May 28 '09 at 6:14Charles Duffy , Jul 6, 2013 at 14:39
@LucaBorrione Setting IFS on the same line as the read with no semicolon or other separator, as opposed to in a separate command, scopes it to that command -- so it's always "restored"; you don't need to do anything manually. – Charles Duffy Jul 6 '13 at 14:39
@imagineerThis There is a bug involving herestrings and local changes to IFS that requires $IN to be quoted. The bug is fixed in bash 4.3. – chepner Oct 2 '14 at 3:50
Taken from Bash shell script split array:

IN="[email protected];[email protected]"
arrIN=(${IN//;/ })

Explanation:

This construction replaces all occurrences of ';' (the initial // means global replace) in the string IN with ' ' (a single space), then interprets the space-delimited string as an array (that's what the surrounding parentheses do).

The syntax used inside of the curly braces to replace each ';' character with a ' ' character is called Parameter Expansion.

There are some common gotchas:
Oz123 , Mar 21, 2011 at 18:50
I just want to add: this is the simplest of all, you can access array elements with ${arrIN[1]} (starting from zeros of course) – Oz123 Mar 21 '11 at 18:50KomodoDave , Jan 5, 2012 at 15:13
Found it: the technique of modifying a variable within a ${} is known as 'parameter expansion'. – KomodoDave Jan 5 '12 at 15:13qbolec , Feb 25, 2013 at 9:12
Does it work when the original string contains spaces? – qbolec Feb 25 '13 at 9:12Ethan , Apr 12, 2013 at 22:47
No, I don't think this works when there are also spaces present... it's converting the ',' to ' ' and then building a space-separated array. – Ethan Apr 12 '13 at 22:47Charles Duffy , Jul 6, 2013 at 14:39
This is a bad approach for other reasons: For instance, if your string contains ;*;, then the * will be expanded to a list of filenames in the current directory. -1 – Charles Duffy Jul 6 '13 at 14:39
If you don't mind processing them immediately, I like to do this:

for i in $(echo $IN | tr ";" "\n")
do
    # process
done

You could use this kind of loop to initialize an array, but there's probably an easier way to do it. Hope this helps, though.
Chris Lutz , May 28, 2009 at 2:42
You should have kept the IFS answer. It taught me something I didn't know, and it definitely made an array, whereas this just makes a cheap substitute. – Chris Lutz May 28 '09 at 2:42Johannes Schaub - litb , May 28, 2009 at 2:59
I see. Yeah i find doing these silly experiments, i'm going to learn new things each time i'm trying to answer things. I've edited stuff based on #bash IRC feedback and undeleted :) – Johannes Schaub - litb May 28 '09 at 2:59lhunath , May 28, 2009 at 6:12
-1, you're obviously not aware of wordsplitting, because it's introducing two bugs in your code. one is when you don't quote $IN and the other is when you pretend a newline is the only delimiter used in wordsplitting. You are iterating over every WORD in IN, not every line, and DEFINATELY not every element delimited by a semicolon, though it may appear to have the side-effect of looking like it works. – lhunath May 28 '09 at 6:12Johannes Schaub - litb , May 28, 2009 at 17:00
You could change it to echo "$IN" | tr ';' '\n' | while read -r ADDY; do # process "$ADDY"; done to make him lucky, i think :) Note that this will fork, and you can't change outer variables from within the loop (that's why i used the <<< "$IN" syntax) then – Johannes Schaub - litb May 28 '09 at 17:00mklement0 , Apr 24, 2013 at 14:13
To summarize the debate in the comments: Caveats for general use: the shell applies word splitting and expansions to the string, which may be undesired; just try it with IN="[email protected];[email protected];*;broken apart". In short: this approach will break if your tokens contain embedded spaces and/or chars such as * that happen to make a token match filenames in the current folder. – mklement0 Apr 24 '13 at 14:13
Compatible answer
To this SO question, there are already a lot of different ways to do this in bash. But bash has many special features, so-called bashisms, that work well but won't work in any other shell.
In particular, arrays , associative array , and pattern substitution are pure bashisms and may not work under other shells .
On my Debian GNU/Linux , there is a standard shell called dash , but I know many people who like to use ksh .
Finally, in very small situation, there is a special tool called busybox with his own shell interpreter ( ash ).
Requested string
The string sample in the SO question is:
IN="[email protected];[email protected]"As this could be useful with whitespaces and as whitespaces could modify the result of the routine, I prefer to use this sample string:
IN="[email protected];[email protected];Full Name <[email protected]>"

Split string based on delimiter in bash (version >=4.2)

Under pure bash, we may use arrays and IFS:
var="[email protected];[email protected];Full Name <[email protected]>"

oIFS="$IFS"
IFS=";"
declare -a fields=($var)
IFS="$oIFS"
unset oIFS

IFS=\; read -a fields <<<"$var"

Using this syntax under recent bash doesn't change $IFS for the current session, but only for the current command:

set | grep ^IFS=
IFS=$' \t\n'
var
is split and stored into an array (namedfields
):set | grep ^fields=\\\|^var= fields=([0]="[email protected]" [1]="[email protected]" [2]="Full Name <[email protected]>") var='[email protected];[email protected];Full Name <[email protected]>'We could request for variable content with
declare -p
:declare -p var fields declare -- var="[email protected];[email protected];Full Name <[email protected]>" declare -a fields=([0]="[email protected]" [1]="[email protected]" [2]="Full Name <[email protected]>")
read is the quickest way to do the split, because there are no forks and no external resources called.
From there, you could use the syntax you already know for processing each field:
for x in "${fields[@]}" ;do
    echo "> [$x]"
done
> [[email protected]]
> [[email protected]]
> [Full Name <[email protected]>]

or drop each field after processing (I like this shifting approach):

while [ "$fields" ] ;do
    echo "> [$fields]"
    fields=("${fields[@]:1}")
done
> [[email protected]]
> [[email protected]]
> [Full Name <[email protected]>]

or even for simple printout (shorter syntax):

printf "> [%s]\n" "${fields[@]}"
> [[email protected]]
> [[email protected]]
> [Full Name <[email protected]>]

Split string based on delimiter in shell

But if you want to write something usable under many shells, you have to not use bashisms.
There is a syntax, used in many shells, for splitting a string across first or last occurrence of a substring:
${var#*SubStr}   # will drop begin of string up to first occur of `SubStr`
${var##*SubStr}  # will drop begin of string up to last occur of `SubStr`
${var%SubStr*}   # will drop part of string from last occur of `SubStr` to the end
${var%%SubStr*}  # will drop part of string from first occur of `SubStr` to the end

(The absence of this from other answers is the main reason for publishing mine ;)
As pointed out by Score_Under :
# and % delete the shortest possible matching string, and
## and %% delete the longest possible.

This little sample script works well under bash, dash, ksh, busybox and was tested under Mac-OS's bash too:
var="[email protected];[email protected];Full Name <[email protected]>"
while [ "$var" ] ;do
    iter=${var%%;*}
    echo "> [$iter]"
    [ "$var" = "$iter" ] && \
        var='' || \
        var="${var#*;}"
done
> [[email protected]]
> [[email protected]]
> [Full Name <[email protected]>]

Have fun!
Score_Under , Apr 28, 2015 at 16:58
The #, ##, %, and %% substitutions have what is IMO an easier explanation to remember (for how much they delete): # and % delete the shortest possible matching string, and ## and %% delete the longest possible. – Score_Under Apr 28 '15 at 16:58
The IFS=\; read -a fields <<<"$var" fails on newlines and adds a trailing newline. The other solution removes a trailing empty field. – sorontar Oct 26 '16 at 4:36
The shell delimiter is the most elegant answer, period. – Eric Chen Aug 30 '17 at 17:50sancho.s , Oct 4 at 3:42
Could the last alternative be used with a list of field separators set somewhere else? For instance, I mean to use this as a shell script, and pass a list of field separators as a positional parameter. – sancho.s Oct 4 at 3:42F. Hauri , Oct 4 at 7:47
Yes, in a loop: for sep in "#" "ł" "@" ; do ... var="${var#*$sep}" ... – F. Hauri Oct 4 at 7:47
I've seen a couple of answers referencing the cut command, but they've all been deleted. It's a little odd that nobody has elaborated on that, because I think it's one of the more useful commands for doing this type of thing, especially for parsing delimited log files.
In the case of splitting this specific example into a bash script array, tr is probably more efficient, but cut can be used, and is more effective if you want to pull specific fields from the middle.
Example:
$ echo "[email protected];[email protected]" | cut -d ";" -f 1
[email protected]
$ echo "[email protected];[email protected]" | cut -d ";" -f 2
[email protected]

You can obviously put that into a loop, and iterate the -f parameter to pull each field independently.
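A sketch of that loop (mine, not from the answer): count the delimiters first, then iterate the -f field number:

IN="[email protected];[email protected]"
n=$(echo "$IN" | tr -cd ';' | wc -c)   # number of ';' separators in the string
for i in $(seq 1 $((n + 1))); do
    echo "$IN" | cut -d ';' -f "$i"
done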
This gets more useful when you have a delimited log file with rows like this:
2015-04-27|12345|some action|an attribute|meta data
cut is very handy to be able to cat this file and select a particular field for further processing.
Kudos for using cut, it's the right tool for the job! Much clearer than any of those shell hacks. – MisterMiyagi Nov 2 '16 at 8:42
This approach will only work if you know the number of elements in advance; you'd need to program some more logic around it. It also runs an external tool for every element. – uli42 Sep 14 '17 at 8:30Louis Loudog Trottier , May 10 at 4:20
Exactly what I was looking for, trying to avoid empty strings in a csv. Now I can point at the exact 'column' value as well. Works with IFS already used in a loop. Better than expected for my situation. – Louis Loudog Trottier May 10 at 4:20
How about this approach:

IN="[email protected];[email protected]"
set -- "$IN"
IFS=";"; declare -a Array=($*)
echo "${Array[@]}"
echo "${Array[0]}"
echo "${Array[1]}"
+1 ... but I wouldn't name the variable "Array" ... pet peev I guess. Good solution. – Yzmir Ramirez Sep 5 '11 at 1:06ata , Nov 3, 2011 at 22:33
+1 ... but the "set" and declare -a are unnecessary. You could as well have used just IFS=";" && Array=($IN) – ata Nov 3 '11 at 22:33
+1 Only a side note: shouldn't it be recommendable to keep the old IFS and then restore it? (as shown by stefanB in his edit3) people landing here (sometimes just copying and pasting a solution) might not think about this – Luca Borrione Sep 3 '12 at 9:26Charles Duffy , Jul 6, 2013 at 14:44
-1: First, @ata is right that most of the commands in this do nothing. Second, it uses word-splitting to form the array, and doesn't do anything to inhibit glob-expansion when doing so (so if you have glob characters in any of the array elements, those elements are replaced with matching filenames). – Charles Duffy Jul 6 '13 at 14:44John_West , Jan 8, 2016 at 12:29
Suggest to use $'...': IN=$'[email protected];[email protected];bet <d@\ns* kl.com>'. Then echo "${Array[2]}" will print a string with newline. set -- "$IN" is also necessary in this case. Yes, to prevent glob expansion, the solution should include set -f. – John_West Jan 8 '16 at 12:29
This worked for me:

string="1;2"
echo $string | cut -d';' -f1 # output is 1
echo $string | cut -d';' -f2 # output is 2
this is short and sweet :) – Pardeep Sharma Oct 10 '17 at 7:29
Thanks...Helped a lot – space earth Oct 17 '17 at 7:23mojjj , Jan 8 at 8:57
cut works only with a single char as delimiter. – mojjj Jan 8 at 8:57lothar , May 28, 2009 at 2:12
echo "[email protected];[email protected]" | sed -e 's/;/\n/g'
[email protected]
[email protected]
-1 what if the string contains spaces? for example IN="this is first line; this is second line" arrIN=( $( echo "$IN" | sed -e 's/;/\n/g' ) ) will produce an array of 8 elements in this case (an element for each word space separated), rather than 2 (an element for each line semi colon separated) – Luca Borrione Sep 3 '12 at 10:08
@Luca No the sed script creates exactly two lines. What creates the multiple entries for you is when you put it into a bash array (which splits on white space by default) – lothar Sep 3 '12 at 17:33Luca Borrione , Sep 4, 2012 at 7:09
That's exactly the point: the OP needs to store entries into an array to loop over it, as you can see in his edits. I think your (good) answer missed to mention to use arrIN=( $( echo "$IN" | sed -e 's/;/\n/g' ) ) to achieve that, and to advise to change IFS to IFS=$'\n' for those who land here in the future and need to split a string containing spaces. (and to restore it back afterwards). :) – Luca Borrione Sep 4 '12 at 7:09
@Luca Good point. However the array assignment was not in the initial question when I wrote up that answer. – lothar Sep 4 '12 at 16:55Ashok , Sep 8, 2012 at 5:01
This also works:

IN="[email protected];[email protected]"
echo ADD1=`echo $IN | cut -d \; -f 1`
echo ADD2=`echo $IN | cut -d \; -f 2`

Be careful, this solution is not always correct. In case you pass "[email protected]" only, it will assign it to both ADD1 and ADD2.
fersarr , Mar 3, 2016 at 17:17
You can use -s to avoid the mentioned problem: superuser.com/questions/896800/ "-f, --fields=LIST select only these fields; also print any line that contains no delimiter character, unless the -s option is specified" – fersarr Mar 3 '16 at 17:17Tony , Jan 14, 2013 at 6:33
I think AWK is the best and most efficient command to resolve your problem. AWK is installed by default in almost every Linux distribution.

echo "[email protected];[email protected]" | awk -F';' '{print $1,$2}'

will give

[email protected] [email protected]

Of course you can store each email address by redefining the awk print field.
Or even simpler: echo "[email protected];[email protected]" | awk 'BEGIN{RS=";"} {print}' – Jaro Jan 7 '14 at 21:30Aquarelle , May 6, 2014 at 21:58
@Jaro This worked perfectly for me when I had a string with commas and needed to reformat it into lines. Thanks. – Aquarelle May 6 '14 at 21:58Eduardo Lucio , Aug 5, 2015 at 12:59
It worked in this scenario: echo "$SPLIT_0" | awk -F' inode=' '{print $1}'! I had problems when trying to use strings (" inode=") instead of characters (";"). $1, $2, $3, $4 are set as positions in an array! If there is a way of setting an array... better! Thanks! – Eduardo Lucio Aug 5 '15 at 12:59

@EduardoLucio, what I'm thinking about is maybe you can first replace your delimiter inode= into ;, for example by sed -i 's/inode\=/\;/g' your_file_to_process, then define -F';' when applying awk, hope that can help you. – Tony Aug 6 '15 at 2:42
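To illustrate Tony's suggestion without modifying a file, the same delimiter conversion can be done in a pipeline; a minimal sketch (the sample string is hypothetical):

line='/path/a inode=123 inode=456'
echo "$line" | sed 's/ inode=/;/g' | awk -F';' '{print $2}'   # prints "123"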
nickjb, Jul 5, 2011 at 13:41

A different take on Darron's answer, this is how I do it:

IN="[email protected];[email protected]"
read ADDR1 ADDR2 <<<$(IFS=";"; echo $IN)

This doesn't work. – ColinM Sep 10 '11 at 0:31

I think it does! Run the commands above and then "echo $ADDR1 ... $ADDR2" and I get "[email protected] ... [email protected]" output – nickjb Oct 6 '11 at 15:33

This worked REALLY well for me... I used it to iterate over an array of strings which contained comma separated DB,SERVER,PORT data to use mysqldump. – Nick Oct 28 '11 at 14:36

Diagnosis: the IFS=";" assignment exists only in the $(...; echo $IN) subshell; this is why some readers (including me) initially think it won't work. I assumed that all of $IN was getting slurped up by ADDR1. But nickjb is correct; it does work. The reason is that the echo $IN command parses its arguments using the current value of $IFS, but then echoes them to stdout using a space delimiter, regardless of the setting of $IFS. So the net effect is as though one had called read ADDR1 ADDR2 <<< "[email protected] [email protected]" (note the input is space-separated, not ;-separated). – dubiousjim May 31 '12 at 5:28

This fails on spaces and newlines, and also expands wildcards * in the echo $IN with an unquoted variable expansion. – sorontar Oct 26 '16 at 4:43
gniourf_gniourf, Jun 26, 2014 at 9:11

In Bash, a bullet-proof way, that will work even if your variable contains newlines:

IFS=';' read -d '' -ra array < <(printf '%s;\0' "$in")

Look:

$ in=$'one;two three;*;there is\na newline\nin this field'
$ IFS=';' read -d '' -ra array < <(printf '%s;\0' "$in")
$ declare -p array
declare -a array='([0]="one" [1]="two three" [2]="*" [3]="there is
a newline
in this field")'

The trick for this to work is to use the -d option of read (delimiter) with an empty delimiter, so that read is forced to read everything it's fed. And we feed read with exactly the content of the variable in, with no trailing newline, thanks to printf. Note that we're also putting the delimiter in printf to ensure that the string passed to read has a trailing delimiter. Without it, read would trim potential trailing empty fields:

$ in='one;two;three;'    # there's an empty field
$ IFS=';' read -d '' -ra array < <(printf '%s;\0' "$in")
$ declare -p array
declare -a array='([0]="one" [1]="two" [2]="three" [3]="")'

the trailing empty field is preserved.

Update for Bash ≥ 4.4

Since Bash 4.4, the builtin mapfile (aka readarray) supports the -d option to specify a delimiter. Hence another canonical way is:

mapfile -d ';' -t array < <(printf '%s;' "$in")

I found it as the rare solution on that list that works correctly with \n, spaces and * simultaneously. Also, no loops; the array variable is accessible in the shell after execution (contrary to the highest upvoted answer). Note the in=$'...'; it does not work with double quotes. I think it needs more upvotes. – John_West Jan 8 '16 at 12:10
Darron, Sep 13, 2010 at 20:10

How about this one-liner, if you're not using arrays:

IFS=';' read ADDR1 ADDR2 <<<$IN

Consider using read -r ... to ensure that, for example, the two characters "\t" in the input end up as the same two characters in your variables (instead of a single tab char). – dubiousjim May 31 '12 at 5:36

-1 This is not working here (ubuntu 12.04). Adding echo "ADDR1 $ADDR1"\n echo "ADDR2 $ADDR2" to your snippet will output ADDR1 [email protected] [email protected]\nADDR2 (\n is newline) – Luca Borrione Sep 3 '12 at 10:07

This is probably due to a bug involving IFS and here strings that was fixed in bash 4.3. Quoting $IN should fix it. (In theory, $IN is not subject to word splitting or globbing after it expands, meaning the quotes should be unnecessary. Even in 4.3, though, there's at least one bug remaining, reported and scheduled to be fixed, so quoting remains a good idea.) – chepner Sep 19 '15 at 13:59

This breaks if $IN contains newlines even if $IN is quoted. And adds a trailing newline. – sorontar Oct 26 '16 at 4:55
kenorb, Sep 11, 2015 at 20:54

Here is a clean 3-liner:

in="foo@bar;bizz@buzz;fizz@buzz;buzz@woof"
IFS=';' list=($in)
for item in "${list[@]}"; do echo $item; done

where IFS delimits words based on the separator and () is used to create an array. Then [@] is used to return each item as a separate word.

If you've any code after that, you also need to restore $IFS, e.g. unset IFS.

The use of $in unquoted allows wildcards to be expanded. – sorontar Oct 26 '16 at 5:03

+ for the unset command – user2720864 Sep 24 at 13:46
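As an alternative to unset IFS, and picking up sorontar's globbing warning, here is a minimal sketch that saves and restores a possibly customized IFS and disables glob expansion around the assignment:

in="foo@bar;bizz@buzz"
oldIFS=$IFS
set -f               # stop * ? [ from being expanded during the unquoted expansion
IFS=';'
list=($in)
IFS=$oldIFS
set +f
printf '%s\n' "${list[@]}"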
Emilien Brigand, Aug 1, 2016 at 13:15

Without setting the IFS

If you just have one colon you can do that:

a="foo:bar"
b=${a%:*}
c=${a##*:}

you will get:

b = foo
c = bar
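With more than one colon, the single and doubled forms of these expansions behave differently (shortest vs. longest match); a quick illustration:

a="one:two:three"
echo "${a%:*}"    # one:two     (remove shortest suffix matching ':*')
echo "${a%%:*}"   # one         (remove longest suffix)
echo "${a#*:}"    # two:three   (remove shortest prefix matching '*:')
echo "${a##*:}"   # three       (remove longest prefix)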
Victor Choy, Sep 16, 2015 at 3:34

There is a simple and smart way like this:

echo "add:sfff" | xargs -d: -i echo {}

But you must use GNU xargs; BSD xargs doesn't support -d delim. If you use an Apple Mac like me, you can install GNU xargs:

brew install findutils

then

echo "add:sfff" | gxargs -d: -i echo {}
Halle Knast, May 24, 2017 at 8:42

The following Bash/zsh function splits its first argument on the delimiter given by the second argument:

split() {
    local string="$1"
    local delimiter="$2"
    if [ -n "$string" ]; then
        local part
        while read -d "$delimiter" part; do
            echo $part
        done <<< "$string"
        echo $part
    fi
}

For instance, the command

$ split 'a;b;c' ';'

yields

a
b
c

This output may, for instance, be piped to other commands. Example:

$ split 'a;b;c' ';' | cat -n
1   a
2   b
3   c

Compared to the other solutions given, this one has the following advantages:

- IFS is not overridden: due to dynamic scoping of even local variables, overriding IFS over a loop causes the new value to leak into function calls performed from within the loop.

- Arrays are not used: reading a string into an array using read requires the flag -a in Bash and -A in zsh.

If desired, the function may be put into a script as follows:

#!/usr/bin/env bash

split() {
    # ...
}

split "$@"

works and neatly modularized. – sandeepkunkunuru Oct 23 '17 at 16:10
Prospero, Sep 25, 2011 at 1:09

This is the simplest way to do it.

spo='one;two;three'
OIFS=$IFS
IFS=';'
spo_array=($spo)
IFS=$OIFS
echo ${spo_array[*]}
IN="[email protected];[email protected]" IFS=';' read -a IN_arr <<< "${IN}" for entry in "${IN_arr[@]}" do echo $entry doneOutput
[email protected] [email protected]System : Ubuntu 12.04.1
codeforester , Jan 2, 2017 at 5:37
IFS is not getting set in the specific context ofread
here and hence it can upset rest of the code, if any. – codeforester Jan 2 '17 at 5:37shuaihanhungry , Jan 20 at 15:54
shuaihanhungry, Jan 20 at 15:54

you can apply awk to many situations

echo "[email protected];[email protected]" | awk -F';' '{printf "%s\n%s\n", $1, $2}'

also you can use this

echo "[email protected];[email protected]" | awk -F';' '{print $1,$2}' OFS="\n"
ghost, Apr 24, 2013 at 13:13

If no space, why not this?

IN="[email protected];[email protected]"
arr=(`echo $IN | tr ';' ' '`)
echo ${arr[0]}
echo ${arr[1]}
eukras, Oct 22, 2012 at 7:10

There are some cool answers here (errator esp.), but for something analogous to split in other languages -- which is what I took the original question to mean -- I settled on this:

IN="[email protected];[email protected]"
declare -a a="(${IN/;/ })";

Now ${a[0]}, ${a[1]}, etc, are as you would expect. Use ${#a[*]} for number of terms. Or to iterate, of course:

for i in ${a[*]}; do echo $i; done

IMPORTANT NOTE:

This works in cases where there are no spaces to worry about, which solved my problem, but may not solve yours. Go with the $IFS solution(s) in that case.

Does not work when IN contains more than two e-mail addresses. Please refer to the same idea (but fixed) at palindrom's answer – olibre Oct 7 '13 at 13:33

Better use ${IN//;/ } (double slash) to make it also work with more than two values. Beware that any wildcard (* ? [) will be expanded. And a trailing empty field will be discarded. – sorontar Oct 26 '16 at 5:14
jeberle, Apr 30, 2013 at 3:10

Use the set built-in to load up the $@ array:

IN="[email protected];[email protected]"
IFS=';'; set $IN; IFS=$' \t\n'

Then, let the party begin:

echo $#
for a; do echo $a; done
ADDR1=$1 ADDR2=$2

Better use set -- $IN to avoid some issues with "$IN" starting with dash. Still, the unquoted expansion of $IN will expand wildcards (* ? [). – sorontar Oct 26 '16 at 5:17
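Combining both of sorontar's points, here is a hedged variant of the set approach that survives leading dashes and glob characters:

IN="*;[email protected]"
set -f                 # disable globbing so the * field is kept literally
IFS=';'
set -- $IN             # -- protects against values starting with a dash
IFS=$' \t\n'
set +f
ADDR1=$1 ADDR2=$2
echo "$ADDR1" "$ADDR2"   # prints: * [email protected]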
NevilleDNZ, Sep 2, 2013 at 6:30

Two bourne-ish alternatives where neither require bash arrays:

Case 1: Keep it nice and simple: use a NewLine as the Record-Separator... eg.

IN="[email protected]
[email protected]"

while read i; do
    # process "$i" ... eg.
    echo "[email:$i]"
done <<< "$IN"

Note: in this first case no sub-process is forked to assist with list manipulation.

Idea: Maybe it is worth using NL extensively internally, and only converting to a different RS when generating the final result externally.

Case 2: Using a ";" as a record separator... eg.

NL="
" IRS=";" ORS=";"

conv_IRS() {
    exec tr "$1" "$NL"
}

conv_ORS() {
    exec tr "$NL" "$1"
}

IN="[email protected];[email protected]"
IN="$(conv_IRS ";" <<< "$IN")"

while read i; do
    # process "$i" ... eg.
    echo -n "[email:$i]$ORS"
done <<< "$IN"

In both cases a sub-list can be composed within the loop and will persist after the loop has completed. This is useful when manipulating lists in memory, instead of storing lists in files. {p.s. keep calm and carry on B-) }
fedorqui, Jan 8, 2015 at 10:21

Apart from the fantastic answers that were already provided, if it is just a matter of printing out the data you may consider using awk:

awk -F";" '{for (i=1;i<=NF;i++) printf("> [%s]\n", $i)}' <<< "$IN"

This sets the field separator to ;, so that it can loop through the fields with a for loop and print accordingly.

Test:

$ IN="[email protected];[email protected]"
$ awk -F";" '{for (i=1;i<=NF;i++) printf("> [%s]\n", $i)}' <<< "$IN"
> [[email protected]]
> [[email protected]]

With another input:

$ awk -F";" '{for (i=1;i<=NF;i++) printf("> [%s]\n", $i)}' <<< "a;b;c d;e_;f"
> [a]
> [b]
> [c d]
> [e_]
> [f]
18446744073709551615, Feb 20, 2015 at 10:49

In Android shell, most of the proposed methods just do not work:

$ IFS=':' read -ra ADDR <<<"$PATH"
/system/bin/sh: can't create temporary file /sqlite_stmt_journals/mksh.EbNoR10629: No such file or directory

What does work is:

$ for i in ${PATH//:/ }; do echo $i; done
/sbin
/vendor/bin
/system/sbin
/system/bin
/system/xbin

where // means global replacement.

Fails if any part of $PATH contains spaces (or newlines). Also expands wildcards (asterisk *, question mark ? and braces [ ]). – sorontar Oct 26 '16 at 5:08
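For completeness, here is a sketch that handles spaces and glob characters in $PATH entries, at the cost of running the loop body in a subshell; this is bash syntax and is untested on Android's shell:

# append a trailing ':' so the last entry is also terminated by the delimiter
printf '%s:' "$PATH" | while IFS= read -r -d ':' dir; do
    printf '%s\n' "$dir"
done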
Eduardo Lucio, Apr 4, 2016 at 19:54

Okay guys! Here's my answer!

DELIMITER_VAL='='

read -d '' F_ABOUT_DISTRO_R <<"EOF"
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=14.04
DISTRIB_CODENAME=trusty
DISTRIB_DESCRIPTION="Ubuntu 14.04.4 LTS"
NAME="Ubuntu"
VERSION="14.04.4 LTS, Trusty Tahr"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 14.04.4 LTS"
VERSION_ID="14.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
EOF

SPLIT_NOW=$(awk -F$DELIMITER_VAL '{for(i=1;i<=NF;i++){printf "%s\n", $i}}' <<<"${F_ABOUT_DISTRO_R}")
while read -r line; do
    SPLIT+=("$line")
done <<< "$SPLIT_NOW"
for i in "${SPLIT[@]}"; do
    echo "$i"
done

Why this approach is "the best" for me?

Because of two reasons:

- You do not need to escape the delimiter;
- You will not have problems with blank spaces. The value will be properly separated in the array!

[]'s

FYI, /etc/os-release and /etc/lsb-release are meant to be sourced, and not parsed. So your method is really wrong. Moreover, you're not quite answering the question about splitting a string on a delimiter. – gniourf_gniourf Jan 30 '17 at 8:26
Michael Hale, Jun 14, 2012 at 17:38

A one-liner to split a string separated by ';' into an array is:

IN="[email protected];[email protected]"
ADDRS=( $(IFS=";" echo "$IN") )
echo ${ADDRS[0]}
echo ${ADDRS[1]}

This only sets IFS in a subshell, so you don't have to worry about saving and restoring its value.

-1 this doesn't work here (ubuntu 12.04). it prints only the first echo with all $IN value in it, while the second is empty. you can see it if you put echo "0: "${ADDRS[0]}\n echo "1: "${ADDRS[1]}; the output is 0: [email protected];[email protected]\n 1: (\n is new line) – Luca Borrione Sep 3 '12 at 10:04

please refer to nickjb's answer for a working alternative to this idea stackoverflow.com/a/6583589/1032370 – Luca Borrione Sep 3 '12 at 10:05

-1, 1. IFS isn't being set in that subshell (it's being passed to the environment of "echo", which is a builtin, so nothing is happening anyway). 2. $IN is quoted so it isn't subject to IFS splitting. 3. The process substitution is split by whitespace, but this may corrupt the original data. – Score_Under Apr 28 '15 at 17:09
ajaaskel, Oct 10, 2014 at 11:33

IN='[email protected];[email protected];Charlie Brown <[email protected];!"#$%&/()[]{}*? are no problem;simple is beautiful :-)'
set -f
oldifs="$IFS"
IFS=';'; arrayIN=($IN)
IFS="$oldifs"
for i in "${arrayIN[@]}"; do
    echo "$i"
done
set +f

Output:

[email protected]
[email protected]
Charlie Brown <[email protected]
!"#$%&/()[]{}*? are no problem
simple is beautiful :-)

Explanation: Simple assignment using parentheses () converts a semicolon-separated list into an array, provided you have the correct IFS while doing that. A standard FOR loop handles the individual items in that array as usual. Notice that the list given for the IN variable must be "hard" quoted, that is, with single ticks.

IFS must be saved and restored since Bash does not treat an assignment the same way as a command. An alternate workaround is to wrap the assignment inside a function and call that function with a modified IFS. In that case separate saving/restoring of IFS is not needed. Thanks to "Bize" for pointing that out.

!"#$%&/()[]{}*? are no problem well... not quite: []*? are glob characters. So what about creating this directory and file: mkdir '!"#$%&'; touch '!"#$%&/()[]{} got you hahahaha - are no problem' and running your command? simple may be beautiful, but when it's broken, it's broken. – gniourf_gniourf Feb 20 '15 at 16:45

@gniourf_gniourf The string is stored in a variable. Please see the original question. – ajaaskel Feb 25 '15 at 7:20

@ajaaskel you didn't fully understand my comment. Go in a scratch directory and issue these commands: mkdir '!"#$%&'; touch '!"#$%&/()[]{} got you hahahaha - are no problem'. They will only create a directory and a file, with weird looking names, I must admit. Then run your commands with the exact IN you gave: IN='[email protected];[email protected];Charlie Brown <[email protected];!"#$%&/()[]{}*? are no problem;simple is beautiful :-)'. You'll see that you won't get the output you expect. Because you're using a method subject to pathname expansions to split your string. – gniourf_gniourf Feb 25 '15 at 7:26

This is to demonstrate that the characters *, ?, [...] and even, if extglob is set, !(...), @(...), ?(...), +(...) are problems with this method! – gniourf_gniourf Feb 25 '15 at 7:29

@gniourf_gniourf Thanks for detailed comments on globbing. I adjusted the code to have globbing off. My point was however just to show that a rather simple assignment can do the splitting job. – ajaaskel Feb 26 '15 at 15:26
Dec 19, 2013 at 21:39

Maybe not the most elegant solution, but works with * and spaces:

IN="bla@so me.com;*;[email protected]"
for i in `delims=${IN//[^;]}; seq 1 $((${#delims} + 1))`
do
    echo "> [`echo $IN | cut -d';' -f$i`]"
done

Outputs

> [bla@so me.com]
> [*]
> [[email protected]]

Other example (delimiters at beginning and end):

IN=";bla@so me.com;*;[email protected];"
> []
> [bla@so me.com]
> [*]
> [[email protected]]
> []

Basically it removes every character other than ;, making delims eg. ;;;. Then it does a for loop from 1 to number-of-delimiters, as counted by ${#delims}. The final step is to safely get the $i-th part using cut.
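The intermediate values may make the trick clearer; a short trace, assuming the same technique:

IN="a;b;c"
delims=${IN//[^;]}          # delete every character that is not ';' -> ";;"
echo "${#delims}"           # 2 delimiters, hence 3 fields
echo "$IN" | cut -d';' -f2  # prints "b"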
Oct 17, 2018 | linuxconfig.org
Create indexed arrays on the fly

We can create indexed arrays with a more concise syntax, by simply assigning them some values:

$ my_array=(foo bar)

In this case we assigned multiple items at once to the array, but we can also insert one value at a time, specifying its index:

$ my_array[0]=foo

Array operations

Once an array is created, we can perform some useful operations on it, like displaying its keys and values or modifying it by appending or removing elements:

Print the values of an array

To display all the values of an array we can use the following shell expansion syntax:

${my_array[@]}

Or even:

${my_array[*]}

Both syntaxes let us access all the values of the array and produce the same results, unless the expansion is quoted. In this case a difference arises: in the first case, when using @, the expansion will result in a word for each element of the array. This becomes immediately clear when performing a for loop. As an example, imagine we have an array with two elements, "foo" and "bar":

$ my_array=(foo bar)

Performing a for loop on it will produce the following result:

$ for i in "${my_array[@]}"; do echo "$i"; done
foo
bar

When using *, and the variable is quoted, instead, a single "result" will be produced, containing all the elements of the array:

$ for i in "${my_array[*]}"; do echo "$i"; done
foo bar
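The difference matters most when elements contain spaces; a brief sketch:

$ my_array=("first item" "second item")
$ for i in "${my_array[@]}"; do echo "$i"; done
first item
second item
$ for i in "${my_array[*]}"; do echo "$i"; done
first item second item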
Print the keys of an array

It's even possible to retrieve and print the keys used in an indexed or associative array, instead of their respective values. The syntax is almost identical, but relies on the use of the ! operator:

$ my_array=(foo bar baz)
$ for index in "${!my_array[@]}"; do echo "$index"; done
0
1
2

The same is valid for associative arrays:

$ declare -A my_array
$ my_array=([foo]=bar [baz]=foobar)
$ for key in "${!my_array[@]}"; do echo "$key"; done
baz
foo

As you can see, since the latter is an associative array, we can't count on the fact that retrieved values are returned in the same order in which they were declared.

Getting the size of an array

We can retrieve the size of an array (the number of elements contained in it) by using a specific shell expansion:

$ my_array=(foo bar baz)
$ echo "the array contains ${#my_array[@]} elements"
the array contains 3 elements

We have created an array which contains three elements, "foo", "bar" and "baz"; then by using the syntax above, which differs from the one we saw before to retrieve the array values only by the # character before the array name, we retrieved the number of the elements in the array instead of its content.

Adding elements to an array

As we saw, we can add elements to an indexed or associative array by specifying respectively their index or associative key. In the case of indexed arrays, we can also simply add an element by appending to the end of the array, using the += operator:

$ my_array=(foo bar)
$ my_array+=(baz)

If we now print the content of the array we see that the element has been added successfully:

$ echo "${my_array[@]}"
foo bar baz

Multiple elements can be added at a time:

$ my_array=(foo bar)
$ my_array+=(baz foobar)
$ echo "${my_array[@]}"
foo bar baz foobar

To add elements to an associative array, we are bound to specify also their associated keys:

$ declare -A my_array
# Add single element
$ my_array[foo]="bar"
# Add multiple elements at a time
$ my_array+=([baz]=foobar [foobarbaz]=baz)
Deleting an element from the array

To delete an element from the array we need to know its index, or its key in the case of an associative array, and use the unset command. Let's see an example:

$ my_array=(foo bar baz)
$ unset my_array[1]
$ echo ${my_array[@]}
foo baz

We have created a simple array containing three elements, "foo", "bar" and "baz", then we deleted "bar" from it by running unset and referencing the index of "bar" in the array: in this case we know it was 1, since bash arrays start at 0. If we check the indexes of the array, we can now see that 1 is missing:

$ echo ${!my_array[@]}
0 2

The same is valid for associative arrays:

$ declare -A my_array
$ my_array+=([foo]=bar [baz]=foobar)
$ unset my_array[foo]
$ echo ${my_array[@]}
foobar

In the example above, the value referenced by the "foo" key has been deleted, leaving only "foobar" in the array.
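Note that unset leaves a "hole" in an indexed array rather than renumbering it; if contiguous indexes are needed, the array can be re-packed with a quoted [@] expansion, as in this small sketch:

$ my_array=(foo bar baz)
$ unset my_array[1]
$ my_array=("${my_array[@]}")   # re-index the remaining elements
$ echo ${!my_array[@]}
0 1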
Deleting an entire array is even simpler: we just pass the array name as an argument to the unset command without specifying any index or key:

$ unset my_array
$ echo ${!my_array[@]}

After executing unset against the entire array, when trying to print its content an empty result is returned: the array doesn't exist anymore.

Conclusions

In this tutorial we saw the difference between indexed and associative arrays in bash, how to initialize them and how to perform fundamental operations, like displaying their keys and values and appending or removing items. Finally we saw how to unset them completely. Bash syntax can sometimes be pretty weird, but using arrays in scripts can be really useful. When a script starts to become more complex than expected, my advice is, however, to switch to a more capable scripting language such as python.
Oct 13, 2018 | gist.github.com
mbadran / gist:130469, created Jun 16, 2009
replace cd in bash to (silent) pushd (gistfile1.sh):

alias cd="pushd $@ > /dev/null"
bobbydavid commented Sep 19, 2012

One annoyance with this alias is that simply typing "cd" will twiddle the directory stack instead of bringing you to your home directory.
dideler commented Mar 9, 2013

@bobbydavid makes a good point. This would be better as a function.

function cd {
  if (("$#" > 0)); then
    pushd "$@" > /dev/null
  else
    cd $HOME
  fi
}

By the way, I found this gist by googling "silence pushd".
ghost commented May 30, 2013

Don't you miss something?

function cd {
  if (("$#" > 0)); then
    if [ "$1" == "-" ]; then
      popd > /dev/null
    else
      pushd "$@" > /dev/null
    fi
  else
    cd $HOME
  fi
}

You can always mimic the "cd -" functionality by using pushd alone.
Btw, I also found this gist by googling "silent pushd" ;)
cra commented Jul 1, 2014

And thanks to your last comment, I found this gist by googling "silent cd -" :)
keltroth commented Jun 25, 2015

With bash completion activated I can't get rid of this error:
"bash: pushd: cd: No such file or directory"...
Any clue?
keltroth commented Jun 25, 2015

Got it! One has to add:

complete -d cd

after making the alias! My complete code here:

function _cd {
  if (("$#" > 0)); then
    if [ "$1" == "-" ]; then
      popd > /dev/null
    else
      pushd "$@" > /dev/null
    fi
  else
    cd $HOME
  fi
}
alias cd=_cd
complete -d cd
jan-warchol commented Nov 29, 2015

I wanted to be able to go back by a given number of history items by typing cd -n, and I came up with this:

function _cd {
  # typing just `_cd` will take you $HOME ;)
  if [ "$1" == "" ]; then
    pushd "$HOME" > /dev/null

  # use `_cd -` to visit previous directory
  elif [ "$1" == "-" ]; then
    pushd $OLDPWD > /dev/null

  # use `_cd -n` to go n directories back in history
  elif [[ "$1" =~ ^-[0-9]+$ ]]; then
    for i in `seq 1 ${1/-/}`; do
      popd > /dev/null
    done

  # use `_cd -- <path>` if your path begins with a dash
  elif [ "$1" == "--" ]; then
    shift
    pushd -- "$@" > /dev/null

  # basic case: move to a dir and add it to history
  else
    pushd "$@" > /dev/null
  fi
}

# replace standard `cd` with enhanced version, ensure tab-completion works
alias cd=_cd
complete -d cd

I think you may find this interesting.
3v1n0 commented Oct 25, 2017 (edited)

Another improvement over @jan-warchol's version, to make cd - alternatively use pushd $OLDPWD and popd depending on what we called before.

This allows you to avoid filling your history with elements when you often do cd -; cd - # repeated as long as you want. This could be applied when using this alias also for $OLDPWD, but in that case it might be that you want it repeated there, so I didn't touch it.

Also added cd -l as alias for dirs -v, and use cd -g X to go to the Xth directory in your history (without popping; that's possible too of course, but it's something more of an addition in this case).

# Replace cd with pushd https://gist.github.com/mbadran/130469
function push_cd() {
  # typing just `push_cd` will take you $HOME ;)
  if [ -z "$1" ]; then
    push_cd "$HOME"

  # use `push_cd -` to visit previous directory
  elif [ "$1" == "-" ]; then
    if [ "$(dirs -p | wc -l)" -gt 1 ]; then
      current_dir="$PWD"
      popd > /dev/null
      pushd -n $current_dir > /dev/null
    elif [ -n "$OLDPWD" ]; then
      push_cd $OLDPWD
    fi

  # use `push_cd -l` or `push_cd -s` to print current stack of folders
  elif [ "$1" == "-l" ] || [ "$1" == "-s" ]; then
    dirs -v

  # use `push_cd -g N` to go to the Nth directory in history (pushing)
  elif [ "$1" == "-g" ] && [[ "$2" =~ ^[0-9]+$ ]]; then
    indexed_path=$(dirs -p | sed -n $(($2+1))p)
    push_cd $indexed_path

  # use `push_cd +N` to go to the Nth directory in history (pushing)
  elif [[ "$1" =~ ^+[0-9]+$ ]]; then
    push_cd -g ${1/+/}

  # use `push_cd -N` to go n directories back in history
  elif [[ "$1" =~ ^-[0-9]+$ ]]; then
    for i in `seq 1 ${1/-/}`; do
      popd > /dev/null
    done

  # use `push_cd -- <path>` if your path begins with a dash
  elif [ "$1" == "--" ]; then
    shift
    pushd -- "$@" > /dev/null

  # basic case: move to a dir and add it to history
  else
    pushd "$@" > /dev/null
    if [ "$1" == "." ] || [ "$1" == "$PWD" ]; then
      popd -n > /dev/null
    fi
  fi

  if [ -n "$CD_SHOW_STACK" ]; then
    dirs -v
  fi
}

# replace standard `cd` with enhanced version, ensure tab-completion works
alias cd=push_cd
complete -d cd
Oct 10, 2018 | www.cyberciti.biz
- Abhijeet Vaidya says: March 11, 2010 at 11:41 am
  End single quote is missing. Correct command is:
  echo 'export HISTTIMEFORMAT="%d/%m/%y %T "' >> ~/.bash_profile
- izaak says: March 12, 2010 at 11:06 am
  I would also add
  $ echo 'export HISTSIZE=10000' >> ~/.bash_profile
  It's really useful, I think.
- Dariusz says: March 12, 2010 at 2:31 pm
  you can add it to /etc/profile so it is available to all users. I also add:

  # Make sure all terminals save history
  shopt -s histappend histreedit histverify
  shopt -s no_empty_cmd_completion # bash>=2.04 only

  # Whenever displaying the prompt, write the previous line to disk:
  PROMPT_COMMAND='history -a'

  # Use GREP color features by default: this will highlight the matched words / regexes
  export GREP_OPTIONS='--color=auto'
  export GREP_COLOR='1;37;41'
- Babar Haq says: March 15, 2010 at 6:25 am
  Good tip. We have multiple users connecting as root using ssh and running different commands. Is there a way to log the IP that the command was run from?
  Thanks in advance.
- Anthony says: August 21, 2014 at 9:01 pm
  Just for anyone who might still find this thread (like I did today):

  export HISTTIMEFORMAT="%F %T : $(echo $SSH_CONNECTION | cut -d\ -f1) : "

  will give you the time format, plus the IP address culled from the SSH_CONNECTION environment variable (thanks for pointing that out, Cadrian, I never knew about that before), all right there in your history output.
  You could even add $(whoami)@ in there if you like (although if everyone's logging in with the root account that's not helpful).
- cadrian says: March 16, 2010 at 5:55 pm
  Yup, you can export one of these:

  env | grep SSH
  SSH_CLIENT=192.168.78.22 42387 22
  SSH_TTY=/dev/pts/0
  SSH_CONNECTION=192.168.78.22 42387 192.168.36.76 22

  As their bash history filename:

  set | grep -i hist
  HISTCONTROL=ignoreboth
  HISTFILE=/home/cadrian/.bash_history
  HISTFILESIZE=1000000000
  HISTSIZE=10000000

  So in profile you can do something like:
  HISTFILE=/root/.bash_history_$(echo $SSH_CONNECTION | cut -d\ -f1)
- TSI says: March 21, 2010 at 10:29 am
  bash 4 can syslog every command but afaik you have to recompile it (check file config-top.h). See the news file of bash: http://tiswww.case.edu/php/chet/bash/NEWS
  If you want to safely export history of your luser, you can ssl-syslog them to a central syslog server.
- Dinesh Jadhav says: November 12, 2010 at 11:00 am
  This is a good command, it helps me a lot.
- Indie says: September 19, 2011 at 11:41 am
  You only need to use
  export HISTTIMEFORMAT='%F %T '
  in your .bash_profile
- lalit jain says: October 3, 2011 at 9:58 am
  -- show history with date & time
  # HISTTIMEFORMAT='%c '
  # history
- Sohail says: January 13, 2012 at 7:05 am
  Hi
  Nice trick but unfortunately, the commands which were executed in the past few days are also carrying the current day's (today's) timestamp. Please advise.
  Regards
- Raymond says: March 15, 2012 at 9:08 am Hi Sohail,
Yes, that will be the behavior of the system since you have just enabled the HISTTIMEFORMAT feature on that day. In other words, the system can't recall or record the commands which were inputted prior to enabling this feature; thus it will just reflect in the printed output (upon execution of "history") the current day and time. Hope this answers your concern.
Thanks!
- Sohail says: February 24, 2012 at 6:45 am Hi
The command only lists the current date (Today) even for those commands which were executed on earlier days.
Any solutions ?
Regards
- nitiratna nikalje says: August 24, 2012 at 5:24 pm hi vivek.do u know any openings for freshers in linux field? I m doing rhce course from rajiv banergy. My samba,nfs-nis,dhcp,telnet,ftp,http,ssh,squid,cron,quota and system administration is over.iptables ,sendmail and dns is remaining.
-9029917299(Nitiratna)
- JMathew says: August 26, 2012 at 10:51 pm Hi,
Is there any way to also log the username along with the command which was typed?
Thanks in Advance
- suresh says: May 22, 2013 at 1:42 pm
  How can I get the full command along with date and path, as we get in the history command?
- rajesh says: December 6, 2013 at 5:56 am Thanks it worked..
- Krishan says: February 7, 2014 at 6:18 am
  The command is not working properly. It is displaying today's date and time for all the commands, whereas I ran some commands three days before.
  How come it is displaying today's date?
- PR says: April 29, 2014 at 5:18 pm
  Hi..
  I want to collect the history of a particular user every day and want to send an email. I wrote the below script.
  For collecting everyday history by time, shall I edit the .profile file of that user?
  echo 'export HISTTIMEFORMAT="%d/%m/%y %T "' >> ~/.bash_profile
  Script:

  #!/bin/bash
  # This script sends email of particular user
  history > /tmp/history
  if [ -s /tmp/history ]
  then
      mailx -s "history 29042014" < /tmp/history
  fi
  rm /tmp/history
  # END OF THE SCRIPT

  Can anyone suggest a better way to collect a particular user's history every day?
- lefty.crupps says: October 24, 2014 at 7:11 pm
  Love it, but using the ISO date format is always recommended (YYYY-MM-DD), just as every other sorted group goes from largest sorting (year) to smallest sorting (day)
  https://en.wikipedia.org/wiki/ISO_8601#Calendar_dates
  In that case, mine looks like this:
  echo 'export HISTTIMEFORMAT="%Y-%m-%d %T "' >> ~/.bashrc
  Thanks for the tip!
- Vanathu says: October 30, 2014 at 1:01 am its show only current date for all the command history
- lefty.crupps says: October 30, 2014 at 2:08 am it's marking all of your current history with today's date. Try checking again in a few days.
- tinu says: October 14, 2015 at 3:30 pm
  Hi All,
  I have enabled my history with the command given:
  echo 'export HISTTIMEFORMAT="%d/%m/%y %T "' >> ~/.bash_profile
  I need to know how I can also log the IPs from which the commands are fired to the system.
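Pulling together Anthony's and cadrian's tips above, here is a hedged sketch of a profile snippet that both timestamps history and keeps a per-client-IP history file (it relies on $SSH_CONNECTION being set by sshd):

# in ~/.bash_profile (or /etc/profile for all users)
export HISTTIMEFORMAT="%F %T "
if [ -n "$SSH_CONNECTION" ]; then
    # the first field of SSH_CONNECTION is the client IP
    export HISTFILE="$HOME/.bash_history_${SSH_CONNECTION%% *}"
fi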
Jul 04, 2018 | stackoverflow.com
Lawrence Johnston, Oct 10, 2008 at 16:57

Say, I have a script that gets called with this line:

./myscript -vfd ./foo/bar/someFile -o /fizz/someOtherFile

or this one:

./myscript -v -f -d -o /fizz/someOtherFile ./foo/bar/someFile

What's the accepted way of parsing this such that in each case (or some combination of the two) $v, $f, and $d will all be set to true and $outFile will be equal to /fizz/someOtherFile?

See my very easy and no-dependency answer here: stackoverflow.com/a/33826763/115363 – Inanc Gumus Apr 15 '16 at 19:11
dezza, Aug 2, 2016 at 2:13

For zsh-users there's a great builtin called zparseopts which can do:

zparseopts -D -E -M -- d=debug -debug=d

And have both -d and --debug in the $debug array. echo $+debug[1] will return 0 or 1 if one of those is used. Ref: zsh.org/mla/users/2011/msg00350.html – dezza Aug 2 '16 at 2:13
Bruno Bronosky, Jan 7, 2013 at 20:01

Preferred Method: Using straight bash without getopt[s]

I originally answered the question as the OP asked. This Q/A is getting a lot of attention, so I should also offer the non-magic way to do this. I'm going to expand upon guneysus's answer to fix the nasty sed and include Tobias Kienzler's suggestion.

Two of the most common ways to pass key-value-pair arguments are:

Straight Bash Space Separated

Usage: ./myscript.sh -e conf -s /etc -l /usr/lib /etc/hosts

#!/bin/bash

POSITIONAL=()
while [[ $# -gt 0 ]]
do
key="$1"

case $key in
    -e|--extension)
    EXTENSION="$2"
    shift # past argument
    shift # past value
    ;;
    -s|--searchpath)
    SEARCHPATH="$2"
    shift # past argument
    shift # past value
    ;;
    -l|--lib)
    LIBPATH="$2"
    shift # past argument
    shift # past value
    ;;
    --default)
    DEFAULT=YES
    shift # past argument
    ;;
    *)    # unknown option
    POSITIONAL+=("$1") # save it in an array for later
    shift # past argument
    ;;
esac
done
set -- "${POSITIONAL[@]}" # restore positional parameters

echo FILE EXTENSION = "${EXTENSION}"
echo SEARCH PATH = "${SEARCHPATH}"
echo LIBRARY PATH = "${LIBPATH}"
echo DEFAULT = "${DEFAULT}"
echo "Number files in SEARCH PATH with EXTENSION:" $(ls -1 "${SEARCHPATH}"/*."${EXTENSION}" | wc -l)
if [[ -n $1 ]]; then
    echo "Last line of file specified as non-opt/last argument:"
    tail -1 "$1"
fi

Straight Bash Equals Separated

Usage: ./myscript.sh -e=conf -s=/etc -l=/usr/lib /etc/hosts

#!/bin/bash

for i in "$@"
do
case $i in
    -e=*|--extension=*)
    EXTENSION="${i#*=}"
    shift # past argument=value
    ;;
    -s=*|--searchpath=*)
    SEARCHPATH="${i#*=}"
    shift # past argument=value
    ;;
    -l=*|--lib=*)
    LIBPATH="${i#*=}"
    shift # past argument=value
    ;;
    --default)
    DEFAULT=YES
    shift # past argument with no value
    ;;
    *)    # unknown option
    ;;
esac
done
echo "FILE EXTENSION = ${EXTENSION}"
echo "SEARCH PATH = ${SEARCHPATH}"
echo "LIBRARY PATH = ${LIBPATH}"
echo "Number files in SEARCH PATH with EXTENSION:" $(ls -1 "${SEARCHPATH}"/*."${EXTENSION}" | wc -l)
if [[ -n $1 ]]; then
    echo "Last line of file specified as non-opt/last argument:"
    tail -1 $1
fi

To better understand ${i#*=} search for "Substring Removal" in this guide. It is functionally equivalent to `sed 's/[^=]*=//' <<< "$i"` which calls a needless subprocess or `echo "$i" | sed 's/[^=]*=//'` which calls two needless subprocesses.

Using getopt[s]

from: http://mywiki.wooledge.org/BashFAQ/035#getopts

Never use getopt(1). getopt cannot handle empty argument strings, or arguments with embedded whitespace. Please forget that it ever existed.

The POSIX shell (and others) offer getopts which is safe to use instead. Here is a simplistic getopts example:

#!/bin/sh

# A POSIX variable
OPTIND=1         # Reset in case getopts has been used previously in the shell.

# Initialize our own variables:
output_file=""
verbose=0

while getopts "h?vf:" opt; do
    case "$opt" in
    h|\?)
        show_help
        exit 0
        ;;
    v)  verbose=1
        ;;
    f)  output_file=$OPTARG
        ;;
    esac
done

shift $((OPTIND-1))

[ "${1:-}" = "--" ] && shift

echo "verbose=$verbose, output_file='$output_file', Leftovers: $@"

# End of file

The advantages of getopts are:

- It's portable, and will work in e.g. dash.
- It can handle things like -vf filename in the expected Unix way, automatically.

The disadvantage of getopts is that it can only handle short options (-h, not --help) without trickery.

There is a getopts tutorial which explains what all of the syntax and variables mean. In bash, there is also help getopts, which might be informative.
which includes all the functionality ofgetopts
and then some.man getopt
on Ubuntu 13.04 outputsgetopt - parse command options (enhanced)
as the name, so I presume this enhanced version is standard now. – Livven Jun 6 '13 at 21:19szablica ,Jul 17, 2013 at 15:23
That something is a certain way on your system is a very weak premise to base assumptions of "being standard" on. – szablica Jul 17 '13 at 15:23

@Livven, that getopt is not a GNU utility, it's part of util-linux. – Stephane Chazelas Aug 20 '14 at 19:55

If you use -gt 0, remove your shift after the esac, augment all the shift by 1 and add this case: *) break;; then you can handle non-optional arguments. Ex: pastebin.com/6DJ57HTc – Nicolas Mongrain-Lacombe Jun 19 '16 at 21:22

You do not echo --default. In the first example, I notice that if --default is the last argument, it is not processed (considered as non-opt), unless while [[ $# -gt 1 ]] is set as while [[ $# -gt 0 ]] – kolydart Jul 10 '17 at 8:11
Robert Siemer, Apr 20, 2015 at 17:47

No answer mentions enhanced getopt. And the top-voted answer is misleading: it ignores -vfd style short options (requested by the OP), options after positional arguments (also requested by the OP), and it ignores parsing errors. Instead:

- Use enhanced getopt from util-linux or formerly GNU glibc. (1)
- It works with getopt_long(), the C function of GNU glibc.
- Has all useful distinguishing features (the others don't have them):
  - handles spaces, quoting characters and even binary in arguments (2)
  - it can handle options at the end: script.sh -o outFile file1 file2 -v
  - allows =-style long options: script.sh --outfile=fileOut --infile fileIn
- Is so old already (3) that no GNU system is missing this (e.g. any Linux has it).
- You can test for its existence with: getopt --test → return value 4.
- Other getopt or the shell-builtin getopts are of limited use.

The following calls

myscript -vfd ./foo/bar/someFile -o /fizz/someOtherFile
myscript -v -f -d -o/fizz/someOtherFile -- ./foo/bar/someFile
myscript --verbose --force --debug ./foo/bar/someFile -o/fizz/someOtherFile
myscript --output=/fizz/someOtherFile ./foo/bar/someFile -vfd
myscript ./foo/bar/someFile -df -v --output /fizz/someOtherFile

all return

verbose: y, force: y, debug: y, in: ./foo/bar/someFile, out: /fizz/someOtherFile

with the following myscript

#!/bin/bash

getopt --test > /dev/null
if [[ $? -ne 4 ]]; then
    echo "I'm sorry, `getopt --test` failed in this environment."
    exit 1
fi

OPTIONS=dfo:v
LONGOPTIONS=debug,force,output:,verbose

# -temporarily store output to be able to check for errors
# -e.g. use "--options" parameter by name to activate quoting/enhanced mode
# -pass arguments only via   -- "$@"   to separate them correctly
PARSED=$(getopt --options=$OPTIONS --longoptions=$LONGOPTIONS --name "$0" -- "$@")
if [[ $? -ne 0 ]]; then
    # e.g. $? == 1
    #  then getopt has complained about wrong arguments to stdout
    exit 2
fi

# read getopt's output this way to handle the quoting right:
eval set -- "$PARSED"

# now enjoy the options in order and nicely split until we see --
while true; do
    case "$1" in
        -d|--debug)
            d=y
            shift
            ;;
        -f|--force)
            f=y
            shift
            ;;
        -v|--verbose)
            v=y
            shift
            ;;
        -o|--output)
            outFile="$2"
            shift 2
            ;;
        --)
            shift
            break
            ;;
        *)
            echo "Programming error"
            exit 3
            ;;
    esac
done

# handle non-option arguments
if [[ $# -ne 1 ]]; then
    echo "$0: A single input file is required."
    exit 4
fi

echo "verbose: $v, force: $f, debug: $d, in: $1, out: $outFile"

(1) enhanced getopt is available on most "bash-systems", including Cygwin; on OS X try brew install gnu-getopt
(2) the POSIX exec() conventions have no reliable way to pass binary NULL in command line arguments; those bytes prematurely end the argument
(3) first version released in 1997 or before (I only tracked it back to 1997)
Thanks for this. Just confirmed from the feature table at en.wikipedia.org/wiki/Getopts, if you need support for long options, and you're not on Solaris, getopt is the way to go. – johncip Jan 12 '17 at 2:00

I believe that the only caveat with getopt is that it cannot be used conveniently in wrapper scripts where one might have few options specific to the wrapper script, and then pass the non-wrapper-script options to the wrapped executable, intact. Let's say I have a grep wrapper called mygrep and I have an option --foo specific to mygrep, then I cannot do mygrep --foo -A 2, and have the -A 2 passed automatically to grep; I need to do mygrep --foo -- -A 2. Here is my implementation on top of your solution. – Kaushal Modi Apr 27 '17 at 14:02

Alex, I agree and there's really no way around that since we need to know the actual return value of getopt --test. I'm a big fan of "Unofficial Bash Strict mode" (which includes set -e), and I just put the check for getopt ABOVE set -euo pipefail and IFS=$'\n\t' in my script. – bobpaul Mar 20 at 16:45

@bobpaul Oh, there is a way around that. And I'll edit my answer soon to reflect my collections regarding this issue (set -e)... – Robert Siemer Mar 21 at 9:10

@bobpaul Your statement about util-linux is wrong and misleading as well: the package is marked "essential" on Ubuntu/Debian. As such, it is always installed. – Which distros are you talking about (where you say it needs to be installed on purpose)? – Robert Siemer Mar 21 at 9:16
guneysus, Nov 13, 2012 at 10:31

from digitalpeer.com with minor modifications

Usage: myscript.sh -p=my_prefix -s=dirname -l=libname

#!/bin/bash

for i in "$@"
do
case $i in
    -p=*|--prefix=*)
    PREFIX="${i#*=}"
    ;;
    -s=*|--searchpath=*)
    SEARCHPATH="${i#*=}"
    ;;
    -l=*|--lib=*)
    DIR="${i#*=}"
    ;;
    --default)
    DEFAULT=YES
    ;;
    *)
    # unknown option
    ;;
esac
done
echo PREFIX = ${PREFIX}
echo SEARCH PATH = ${SEARCHPATH}
echo DIRS = ${DIR}
echo DEFAULT = ${DEFAULT}

To better understand ${i#*=} search for "Substring Removal" in this guide. It is functionally equivalent to `sed 's/[^=]*=//' <<< "$i"` which calls a needless subprocess or `echo "$i" | sed 's/[^=]*=//'` which calls two needless subprocesses.

Neat! Though this won't work for space-separated arguments à la mount -t tempfs .... One can probably fix this via something like while [ $# -ge 1 ]; do param=$1; shift; case $param in; -p) prefix=$1; shift;; etc – Tobias Kienzler Nov 12 '13 at 12:48

This can't handle -vfd style combined short options. – Robert Siemer Mar 19 '16 at 15:23

link is broken! – bekur Dec 19 '17 at 23:27
Matt J, Oct 10, 2008 at 17:03

getopt()/getopts() is a good option. Stolen from here:

The simple use of "getopt" is shown in this mini-script:

#!/bin/bash
echo "Before getopt"
for i
do
  echo $i
done
args=`getopt abc:d $*`
set -- $args
echo "After getopt"
for i
do
  echo "-->$i"
done

What we have said is that any of -a, -b, -c or -d will be allowed, but that -c is followed by an argument (the "c:" says that).

If we call this "g" and try it out:

bash-2.05a$ ./g -abc foo
Before getopt
-abc
foo
After getopt
-->-a
-->-b
-->-c
-->foo
-->--

We start with two arguments, and "getopt" breaks apart the options and puts each in its own argument. It also added "--".

Using $* is broken usage of getopt. (It hoses arguments with spaces.) See my answer for proper usage. – Robert Siemer Apr 16 '16 at 14:37

Why would you want to make it more complicated? – SDsolar Aug 10 '17 at 14:07

@Matt J, the first part of the script (for i) would be able to handle arguments with spaces in them if you use "$i" instead of $i. The getopts does not seem to be able to handle arguments with spaces. What would be the advantage of using getopt over the for i loop? – thebunnyrules Jun 1 at 1:57
bronson, Jul 15, 2015 at 23:43

At the risk of adding another example to ignore, here's my scheme.

- handles -n arg and --name=arg
- allows arguments at the end
- shows sane errors if anything is misspelled
- compatible, doesn't use bashisms
- readable, doesn't require maintaining state in a loop

Hope it's useful to someone.

while [ "$#" -gt 0 ]; do
  case "$1" in
    -n) name="$2"; shift 2;;
    -p) pidfile="$2"; shift 2;;
    -l) logfile="$2"; shift 2;;

    --name=*) name="${1#*=}"; shift 1;;
    --pidfile=*) pidfile="${1#*=}"; shift 1;;
    --logfile=*) logfile="${1#*=}"; shift 1;;
    --name|--pidfile|--logfile) echo "$1 requires an argument" >&2; exit 1;;

    -*) echo "unknown option: $1" >&2; exit 1;;
    *) handle_argument "$1"; shift 1;;
  esac
done
What is the "handle_argument" function? – rhombidodecahedron Sep 11 '15 at 8:40bronson ,Oct 8, 2015 at 20:41
Sorry for the delay. In my script, the handle_argument function receives all the non-option arguments. You can replace that line with whatever you'd like, maybe*) die "unrecognized argument: $1"
or collect the args into a variable*) args+="$1"; shift 1;;
. – bronson Oct 8 '15 at 20:41Guilherme Garnier ,Apr 13 at 16:10
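To make that concrete, here is a minimal hypothetical handle_argument along the lines bronson describes, collecting non-option arguments into an array (the name args is illustrative):

args=()
handle_argument() {
  args+=("$1")   # collect each non-option argument for later use
}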
Amazing! I've tested a couple of answers, but this is the only one that worked for all cases, including many positional parameters (both before and after flags) – Guilherme Garnier Apr 13 at 16:10
Shane Day, Jul 1, 2014 at 1:20

I'm about 4 years late to this question, but want to give back. I used the earlier answers as a starting point to tidy up my old ad-hoc param parsing. I then refactored out the following template code. It handles both long and short params, using = or space separated arguments, as well as multiple short params grouped together. Finally it re-inserts any non-param arguments back into the $1,$2.. variables. I hope it's useful.

#!/usr/bin/env bash

# NOTICE: Uncomment if your script depends on bashisms.
#if [ -z "$BASH_VERSION" ]; then bash $0 $@ ; exit $? ; fi

echo "Before"
for i ; do echo - $i ; done

# Code template for parsing command line parameters using only portable shell
# code, while handling both long and short params, handling '-f file' and
# '-f=file' style param data and also capturing non-parameters to be inserted
# back into the shell positional parameters.

while [ -n "$1" ]; do
    # Copy so we can modify it (can't modify $1)
    OPT="$1"
    # Detect argument termination
    if [ x"$OPT" = x"--" ]; then
        shift
        for OPT ; do
            REMAINS="$REMAINS \"$OPT\""
        done
        break
    fi
    # Parse current opt
    while [ x"$OPT" != x"-" ] ; do
        case "$OPT" in
            # Handle --flag=value opts like this
            -c=* | --config=* )
                CONFIGFILE="${OPT#*=}"
                shift
                ;;
            # and --flag value opts like this
            -c* | --config )
                CONFIGFILE="$2"
                shift
                ;;
            -f* | --force )
                FORCE=true
                ;;
            -r* | --retry )
                RETRY=true
                ;;
            # Anything unknown is recorded for later
            * )
                REMAINS="$REMAINS \"$OPT\""
                break
                ;;
        esac
        # Check for multiple short options
        # NOTICE: be sure to update this pattern to match valid options
        NEXTOPT="${OPT#-[cfr]}" # try removing single short opt
        if [ x"$OPT" != x"$NEXTOPT" ] ; then
            OPT="-$NEXTOPT"  # multiple short opts, keep going
        else
            break  # long form, exit inner loop
        fi
    done
    # Done with that param. move to next
    shift
done
# Set the non-parameters back into the positional parameters ($1 $2 ..)
eval set -- $REMAINS

echo -e "After: \n configfile='$CONFIGFILE' \n force='$FORCE' \n retry='$RETRY' \n remains='$REMAINS'"
for i ; do echo - $i ; done

This code can't handle options with arguments like this: -c1. And the use of = to separate short options from their arguments is unusual... – Robert Siemer Dec 6 '15 at 13:47

I ran into two problems with this useful chunk of code: 1) the "shift" in the case of "-c=foo" ends up eating the next parameter; and 2) 'c' should not be included in the "[cfr]" pattern for combinable short options. – sfnd Jun 6 '16 at 19:28
Inanc Gumus, Nov 20, 2015 at 12:28

More succinct way

script.sh

#!/bin/bash

while [[ "$#" > 0 ]]; do case $1 in
  -d|--deploy) deploy="$2"; shift;;
  -u|--uglify) uglify=1;;
  *) echo "Unknown parameter passed: $1"; exit 1;;
esac; shift; done

echo "Should deploy? $deploy"
echo "Should uglify? $uglify"

Usage:

./script.sh -d dev -u

# OR:

./script.sh --deploy dev --uglify

This is what I am doing. Have to use while [[ "$#" > 1 ]] if I want to support ending the line with a boolean flag ./script.sh --debug dev --uglify fast --verbose. Example: gist.github.com/hfossli/4368aa5a577742c3c9f9266ed214aa58 – hfossli Apr 7 at 20:58

I sent an edit request. I just tested this and it works perfectly. – hfossli Apr 7 at 21:09

Wow! Simple and clean! This is how I'm using this: gist.github.com/hfossli/4368aa5a577742c3c9f9266ed214aa58 – hfossli Apr 7 at 21:10
My answer is largely based on the answer by Bruno Bronosky , but I sort of mashed his two pure bash implementations into one that I use pretty frequently.# As long as there is at least one more argument, keep looping while [[ $# -gt 0 ]]; do key="$1" case "$key" in # This is a flag type option. Will catch either -f or --foo -f|--foo) FOO=1 ;; # Also a flag type option. Will catch either -b or --bar -b|--bar) BAR=1 ;; # This is an arg value type option. Will catch -o value or --output-file value -o|--output-file) shift # past the key and to the value OUTPUTFILE="$1" ;; # This is an arg=value type option. Will catch -o=value or --output-file=value -o=*|--output-file=*) # No need to shift here since the value is part of the same string OUTPUTFILE="${key#*=}" ;; *) # Do whatever you want with extra options echo "Unknown option '$key'" ;; esac # Shift after checking all the cases to get the next option shift doneThis allows you to have both space separated options/values, as well as equal defined values.
So you could run your script using:
./myscript --foo -b -o /fizz/file.txtas well as:
./myscript -f --bar -o=/fizz/file.txtand both should have the same end result.
PROS:
- Allows for both -arg=value and -arg value
- Works with any arg name that you can use in bash
- Meaning -a or -arg or --arg or -a-r-g or whatever
- Pure bash. No need to learn/use getopt or getopts
CONS:
- Can't combine args
- Meaning no -abc. You must do -a -b -c
These are the only pros/cons I can think of off the top of my head
bubla ,Jul 10, 2016 at 22:40
I have found writing portable argument parsing in scripts so frustrating that I have written Argbash - a FOSS code generator that can generate the argument-parsing code for your script, plus it has some nice features.

RichVel, Aug 18, 2016 at 5:34

Thanks for writing argbash, I just used it and found it works well. I mostly went for argbash because it's a code generator supporting the older bash 3.x found on OS X 10.11 El Capitan. The only downside is that the code-generator approach means quite a lot of code in your main script, compared to calling a module. – RichVel Aug 18 '16 at 5:34

bubla, Aug 23, 2016 at 20:40

You can actually use Argbash in a way that it produces a tailor-made parsing library just for you that you can have included in your script, or you can have it in a separate file and just source it. I have added an example to demonstrate that and I have made it more explicit in the documentation, too. – bubla Aug 23 '16 at 20:40

RichVel, Aug 24, 2016 at 5:47

Good to know. That example is interesting but still not really clear - maybe you can change the name of the generated script to 'parse_lib.sh' or similar and show where the main script calls it (like in the wrapping script section which is a more complex use case). – RichVel Aug 24 '16 at 5:47

bubla, Dec 2, 2016 at 20:12

The issues were addressed in a recent version of argbash: documentation has been improved, a quickstart argbash-init script has been introduced, and you can even use argbash online at argbash.io/generate – bubla Dec 2 '16 at 20:12

Alek, Mar 1, 2012 at 15:15
I think this one is simple enough to use:

#!/bin/bash
#
readopt='getopts $opts opt;rc=$?;[ $rc$opt == 0? ]&&exit 1;[ $rc == 0 ]||{ shift $[OPTIND-1];false; }'

opts=vfdo:

# Enumerating options
while eval $readopt
do
    echo OPT:$opt ${OPTARG+OPTARG:$OPTARG}
done

# Enumerating arguments
for arg
do
    echo ARG:$arg
done

Invocation example:

./myscript -v -do /fizz/someOtherFile -f ./foo/bar/someFile
OPT:v
OPT:d
OPT:o OPTARG:/fizz/someOtherFile
OPT:f
ARG:./foo/bar/someFile

erm3nda, May 20, 2015 at 22:50
I read all and this one is my preferred one. I don't like to use -a=1 as argc style. I prefer to put first the main option -options and later the special ones with single spacing -o option. Im looking for the simplest-vs-better way to read argvs. – erm3nda May 20 '15 at 22:50

erm3nda, May 20, 2015 at 23:25

It's working really well but if you pass an argument to a non a: option all the following options would be taken as arguments. You can check this line ./myscript -v -d fail -o /fizz/someOtherFile -f ./foo/bar/someFile with your own script. -d option is not set as d: – erm3nda May 20 '15 at 23:25

unsynchronized, Jun 9, 2014 at 13:46
Expanding on the excellent answer by @guneysus, here is a tweak that lets the user use whichever syntax they prefer, e.g.

command -x=myfilename.ext --another_switch

vs

command -x myfilename.ext --another_switch

That is to say, the equals can be replaced with whitespace.

This "fuzzy interpretation" might not be to your liking, but if you are making scripts that are interchangeable with other utilities (as is the case with mine, which must work with ffmpeg), the flexibility is useful.

STD_IN=0
prefix=""
key=""
value=""
for keyValue in "$@"
do
  case "${prefix}${keyValue}" in
    -i=*|--input_filename=*)  key="-i";  value="${keyValue#*=}";;
    -ss=*|--seek_from=*)      key="-ss"; value="${keyValue#*=}";;
    -t=*|--play_seconds=*)    key="-t";  value="${keyValue#*=}";;
    -|--stdin)                key="-";   value=1;;
    *)                        value=$keyValue;;
  esac
  case $key in
    -i)  MOVIE=$(resolveMovie "${value}"); prefix=""; key="";;
    -ss) SEEK_FROM="${value}";             prefix=""; key="";;
    -t)  PLAY_SECONDS="${value}";          prefix=""; key="";;
    -)   STD_IN=${value};                  prefix=""; key="";;
    *)   prefix="${keyValue}=";;
  esac
done

vangorra, Feb 12, 2015 at 21:50
getopts works great if #1 you have it installed and #2 you intend to run it on the same platform. OSX and Linux (for example) behave differently in this respect.

Here is a (non getopts) solution that supports equals, non-equals, and boolean flags. For example you could run your script in this way:

./script --arg1=value1 --arg2 value2 --shouldClean

# parse the arguments.
COUNTER=0
ARGS=("$@")
while [ $COUNTER -lt $# ]
do
    arg=${ARGS[$COUNTER]}
    let COUNTER=COUNTER+1
    nextArg=${ARGS[$COUNTER]}

    if [[ $skipNext -eq 1 ]]; then
        echo "Skipping"
        skipNext=0
        continue
    fi

    argKey=""
    argVal=""
    if [[ "$arg" =~ ^\- ]]; then
        # if the format is: -key=value
        if [[ "$arg" =~ \= ]]; then
            argVal=$(echo "$arg" | cut -d'=' -f2)
            argKey=$(echo "$arg" | cut -d'=' -f1)
            skipNext=0
        # if the format is: -key value
        elif [[ ! "$nextArg" =~ ^\- ]]; then
            argKey="$arg"
            argVal="$nextArg"
            skipNext=1
        # if the format is: -key (a boolean flag)
        elif [[ "$nextArg" =~ ^\- ]] || [[ -z "$nextArg" ]]; then
            argKey="$arg"
            argVal=""
            skipNext=0
        fi
    # if the format has no flag, just a value.
    else
        argKey=""
        argVal="$arg"
        skipNext=0
    fi

    case "$argKey" in
        --source-scmurl)
            SOURCE_URL="$argVal"
        ;;
        --dest-scmurl)
            DEST_URL="$argVal"
        ;;
        --version-num)
            VERSION_NUM="$argVal"
        ;;
        -c|--clean)
            CLEAN_BEFORE_START="1"
        ;;
        -h|--help|-help|--h)
            showUsage
            exit
        ;;
    esac
done

akostadinov, Jul 19, 2013 at 7:50
This is how I do it in a function, to avoid breaking a getopts run at the same time somewhere higher in the stack:

function waitForWeb () {
   local OPTIND=1 OPTARG OPTION
   local host=localhost port=8080 proto=http

   while getopts "h:p:r:" OPTION; do
      case "$OPTION" in
      h)
         host="$OPTARG"
         ;;
      p)
         port="$OPTARG"
         ;;
      r)
         proto="$OPTARG"
         ;;
      esac
   done
   ...
}

Renato Silva, Jul 4, 2016 at 16:47
EasyOptions does not require any parsing:

## Options:
##   --verbose, -v  Verbose mode
##   --output=FILE  Output filename

source easyoptions || exit

if test -n "${verbose}"; then
    echo "output file is ${output}"
    echo "${arguments[@]}"
fi

Oleksii Chekulaiev, Jul 1, 2016 at 20:56
I give you The Function parse_params that will parse params:

- Without polluting global scope.
- Effortlessly returns to you ready-to-use variables so that you could build further logic on them.
- Amount of dashes before params does not matter (--all equals -all equals all=all).

The script below is a copy-paste working demonstration. See the show_use function to understand how to use parse_params.

Limitations:

- Does not support space delimited params (-d 1)
- Param names will lose dashes, so --any-param and -anyparam are equivalent
- eval $(parse_params "$@") must be used inside a bash function (it will not work in the global scope)
#!/bin/bash

# Universal Bash parameter parsing
# Parse equal sign separated params into named local variables
# Standalone named parameter value will equal its param name (--force creates variable $force=="force")
# Parses multi-valued named params into an array (--path=path1 --path=path2 creates ${path[*]} array)
# Parses un-named params into ${ARGV[*]} array
# Additionally puts all named params into ${ARGN[*]} array
# Additionally puts all standalone "option" params into ${ARGO[*]} array
# @author Oleksii Chekulaiev
# @version v1.3 (May-14-2018)
parse_params ()
{
    local existing_named
    local ARGV=() # un-named params
    local ARGN=() # named params
    local ARGO=() # options (--params)
    echo "local ARGV=(); local ARGN=(); local ARGO=();"
    while [[ "$1" != "" ]]; do
        # Escape asterisk to prevent bash asterisk expansion
        _escaped=${1/\*/\'\"*\"\'}
        # If equals delimited named parameter
        if [[ "$1" =~ ^..*=..* ]]; then
            # Add to named parameters array
            echo "ARGN+=('$_escaped');"
            # key is part before first =
            local _key=$(echo "$1" | cut -d = -f 1)
            # val is everything after key and = (protect from param==value error)
            local _val="${1/$_key=}"
            # remove dashes from key name
            _key=${_key//\-}
            # search for existing parameter name
            if (echo "$existing_named" | grep "\b$_key\b" >/dev/null); then
                # if name already exists then it's a multi-value named parameter
                # re-declare it as an array if needed
                if ! (declare -p _key 2> /dev/null | grep -q 'declare \-a'); then
                    echo "$_key=(\"\$$_key\");"
                fi
                # append new value
                echo "$_key+=('$_val');"
            else
                # single-value named parameter
                echo "local $_key=\"$_val\";"
                existing_named=" $_key"
            fi
        # If standalone named parameter
        elif [[ "$1" =~ ^\-. ]]; then
            # Add to options array
            echo "ARGO+=('$_escaped');"
            # remove dashes
            local _key=${1//\-}
            echo "local $_key=\"$_key\";"
        # non-named parameter
        else
            # Escape asterisk to prevent bash asterisk expansion
            _escaped=${1/\*/\'\"*\"\'}
            echo "ARGV+=('$_escaped');"
        fi
        shift
    done
}

#--------------------------- DEMO OF THE USAGE -------------------------------

show_use ()
{
    eval $(parse_params "$@")
    # --
    echo "${ARGV[0]}" # print first unnamed param
    echo "${ARGV[1]}" # print second unnamed param
    echo "${ARGN[0]}" # print first named param
    echo "${ARGO[0]}" # print first option param (--force)
    echo "$anyparam"  # print --anyparam value
    echo "$k"         # print k=5 value
    echo "${multivalue[0]}" # print first value of multi-value
    echo "${multivalue[1]}" # print second value of multi-value
    [[ "$force" == "force" ]] && echo "\$force is set so let the force be with you"
}

show_use "param 1" --anyparam="my value" param2 k=5 --force --multi-value=test1 --multi-value=test2

Oleksii Chekulaiev, Sep 28, 2016 at 12:55
To use the demo to parse params that come into your bash script you just do show_use "$@" – Oleksii Chekulaiev Sep 28 '16 at 12:55

Oleksii Chekulaiev, Sep 28, 2016 at 12:58

Basically I found out that github.com/renatosilva/easyoptions does the same in the same way but is a bit more massive than this function. – Oleksii Chekulaiev Sep 28 '16 at 12:58

galmok, Jun 24, 2015 at 10:54
I'd like to offer my version of option parsing, that allows for the following:

-s p1
--stage p1
-w somefolder
--workfolder somefolder
-sw p1 somefolder
-e=hello

Also allows for this (could be unwanted):

-s--workfolder p1 somefolder
-se=hello p1
-swe=hello p1 somefolder

You have to decide before use if = is to be used on an option or not. This is to keep the code clean(ish).

while [[ $# > 0 ]]
do
    key="$1"
    while [[ ${key+x} ]]
    do
        case $key in
            -s*|--stage)
                STAGE="$2"
                shift # option has parameter
                ;;
            -w*|--workfolder)
                workfolder="$2"
                shift # option has parameter
                ;;
            -e=*)
                EXAMPLE="${key#*=}"
                break # option has been fully handled
                ;;
            *)
                # unknown option
                echo Unknown option: $key #1>&2
                exit 10 # either this: my preferred way to handle unknown options
                break # or this: do this to signal the option has been handled (if exit isn't used)
                ;;
        esac
        # prepare for next option in this key, if any
        [[ "$key" = -? || "$key" == --* ]] && unset key || key="${key/#-?/-}"
    done
    shift # option(s) fully processed, proceed to next input argument
done

Luca Davanzo, Nov 14, 2016 at 17:56
what's the meaning for "+x" on ${key+x}? – Luca Davanzo Nov 14 '16 at 17:56

galmok, Nov 15, 2016 at 9:10

It is a test to see if 'key' is present or not. Further down I unset key and this breaks the inner while loop. – galmok Nov 15 '16 at 9:10
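For readers unfamiliar with that expansion: ${key+x} expands to "x" when key is set (even to the empty string) and to nothing when it is unset, which makes it a reliable set/unset test. A quick illustration at the prompt (my example):

$ unset key
$ [[ ${key+x} ]] && echo set || echo unset
unset
$ key=""
$ [[ ${key+x} ]] && echo set || echo unset
set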
Mark Fox, Apr 27, 2015 at 2:42

Mixing positional and flag-based arguments

--param=arg (equals delimited)

Freely mixing flags between positional arguments:

./script.sh dumbo 127.0.0.1 --environment=production -q -d
./script.sh dumbo --environment=production 127.0.0.1 --quiet -d

can be accomplished with a fairly concise approach:

# process flags
pointer=1
while [[ $pointer -le $# ]]; do
   param=${!pointer}
   if [[ $param != "-"* ]]; then ((pointer++)) # not a parameter flag so advance pointer
   else
      case $param in
         # parameter-flags with arguments
         -e=*|--environment=*) environment="${param#*=}";;
                  --another=*) another="${param#*=}";;

         # binary flags
         -q|--quiet) quiet=true;;
                 -d) debug=true;;
      esac

      # splice out pointer frame from positional list
      [[ $pointer -gt 1 ]] \
         && set -- ${@:1:((pointer - 1))} ${@:((pointer + 1)):$#} \
         || set -- ${@:((pointer + 1)):$#};
   fi
done

# positional remain
node_name=$1
ip_address=$2

--param arg (space delimited)

It's usually clearer to not mix --flag=value and --flag value styles.

./script.sh dumbo 127.0.0.1 --environment production -q -d

This is a little dicey to read, but is still valid:

./script.sh dumbo --environment production 127.0.0.1 --quiet -d

Source:

# process flags
pointer=1
while [[ $pointer -le $# ]]; do
   if [[ ${!pointer} != "-"* ]]; then ((pointer++)) # not a parameter flag so advance pointer
   else
      param=${!pointer}
      ((pointer_plus = pointer + 1))
      slice_len=1

      case $param in
         # parameter-flags with arguments
         -e|--environment) environment=${!pointer_plus}; ((slice_len++));;
                --another) another=${!pointer_plus}; ((slice_len++));;

         # binary flags
         -q|--quiet) quiet=true;;
                 -d) debug=true;;
      esac

      # splice out pointer frame from positional list
      [[ $pointer -gt 1 ]] \
         && set -- ${@:1:((pointer - 1))} ${@:((pointer + $slice_len)):$#} \
         || set -- ${@:((pointer + $slice_len)):$#};
   fi
done

# positional remain
node_name=$1
ip_address=$2

schily, Oct 19, 2015 at 13:59
Note that getopt(1) was a short-lived mistake from AT&T.

getopt was created in 1984 but already buried in 1986 because it was not really usable.

A proof for the fact that getopt is very outdated is that the getopt(1) man page still mentions "$*" instead of "$@", which was added to the Bourne Shell in 1986 together with the getopts(1) shell builtin in order to deal with arguments with spaces inside.

BTW: if you are interested in parsing long options in shell scripts, it may be of interest to know that the getopt(3) implementation from libc (Solaris) and ksh93 both added a uniform long option implementation that supports long options as aliases for short options. This causes ksh93 and the Bourne Shell to implement a uniform interface for long options via getopts.

An example for long options taken from the Bourne Shell man page:
getopts "f:(file)(input-file)o:(output-file)" OPTX "$@"
shows how long option aliases may be used in both Bourne Shell and ksh93.
See the man page of a recent Bourne Shell:
http://schillix.sourceforge.net/man/man1/bosh.1.html
and the man page for getopt(3) from OpenSolaris:
http://schillix.sourceforge.net/man/man3c/getopt.3c.html
and last, the getopt(1) man page to verify the outdated $*:
Volodymyr M. Lisivka ,Jul 9, 2013 at 16:51
Use module "arguments" from bash-modules

Example:

#!/bin/bash
. import.sh log arguments

NAME="world"

parse_arguments "-n|--name)NAME;S" -- "$@" || {
    error "Cannot parse command line."
    exit 1
}

info "Hello, $NAME!"

Mike Q, Jun 14, 2014 at 18:01
This also might be useful to know: you can set a value, and if someone provides input, override the default with that value.

myscript.sh -f ./serverlist.txt or just ./myscript.sh (and it takes defaults)

#!/bin/bash
# --- set the value, if there is inputs, override the defaults.

HOME_FOLDER="${HOME}/owned_id_checker"
SERVER_FILE_LIST="${HOME_FOLDER}/server_list.txt"

while [[ $# > 1 ]]
do
    key="$1"
    shift

    case $key in
        -i|--inputlist)
            SERVER_FILE_LIST="$1"
            shift
            ;;
    esac
done

echo "SERVER LIST = ${SERVER_FILE_LIST}"

phk, Oct 17, 2015 at 21:17
Another solution without getopt[s], POSIX, old Unix style

Similar to the solution Bruno Bronosky posted, this here is one without the usage of getopt(s).

Main differentiating feature of my solution is that it allows to have options concatenated together just like tar -xzf foo.tar.gz is equal to tar -x -z -f foo.tar.gz. And just like in tar, ps etc. the leading hyphen is optional for a block of short options (but this can be changed easily). Long options are supported as well (but when a block starts with one then two leading hyphens are required).

Code with example options

#!/bin/sh

echo
echo "POSIX-compliant getopt(s)-free old-style-supporting option parser from phk@[se.unix]"
echo

print_usage() {
  echo "Usage:

  $0 {a|b|c} [ARG...]

Options:

  --aaa-0-args
  -a
    Option without arguments.

  --bbb-1-args ARG
  -b ARG
    Option with one argument.

  --ccc-2-args ARG1 ARG2
  -c ARG1 ARG2
    Option with two arguments.

" >&2
}

if [ $# -le 0 ]; then
  print_usage
  exit 1
fi

opt=
while :; do

  if [ $# -le 0 ]; then
    # no parameters remaining -> end option parsing
    break
  elif [ ! "$opt" ]; then
    # we are at the beginning of a fresh block
    # remove optional leading hyphen and strip trailing whitespaces
    opt=$(echo "$1" | sed 's/^-\?\([a-zA-Z0-9\?-]*\)/\1/')
  fi

  # get the first character -> check whether long option
  first_chr=$(echo "$opt" | awk '{print substr($1, 1, 1)}')
  [ "$first_chr" = - ] && long_option=T || long_option=F

  # note to write the options here with a leading hyphen less
  # also do not forget to end short options with a star
  case $opt in

    -) # end of options
      shift
      break
      ;;

    a*|-aaa-0-args)
      echo "Option AAA activated!"
      ;;

    b*|-bbb-1-args)
      if [ "$2" ]; then
        echo "Option BBB with argument '$2' activated!"
        shift
      else
        echo "BBB parameters incomplete!" >&2
        print_usage
        exit 1
      fi
      ;;

    c*|-ccc-2-args)
      if [ "$2" ] && [ "$3" ]; then
        echo "Option CCC with arguments '$2' and '$3' activated!"
        shift 2
      else
        echo "CCC parameters incomplete!" >&2
        print_usage
        exit 1
      fi
      ;;

    h*|\?*|-help)
      print_usage
      exit 0
      ;;

    *)
      if [ "$long_option" = T ]; then
        opt=$(echo "$opt" | awk '{print substr($1, 2)}')
      else
        opt=$first_chr
      fi
      printf 'Error: Unknown option: "%s"\n' "$opt" >&2
      print_usage
      exit 1
      ;;

  esac

  if [ "$long_option" = T ]; then
    # if we had a long option then we are going to get a new block next
    shift
    opt=
  else
    # if we had a short option then just move to the next character
    opt=$(echo "$opt" | awk '{print substr($1, 2)}')
    # if block is now empty then shift to the next one
    [ "$opt" ] || shift
  fi

done

echo "Doing something..."

exit 0

For the example usage please see the examples further below.
Position of options with arguments

For what it's worth, the options with arguments don't have to be the last (only long options need to be). So while e.g. in tar (at least in some implementations) the f option needs to be last because the file name follows (tar xzf bar.tar.gz works but tar xfz bar.tar.gz does not), this is not the case here (see the later examples).

Multiple options with arguments

As another bonus, the option parameters are consumed in the order of the options by the parameters with required options. Just look at the output of my script here with the command line abc X Y Z (or -abc X Y Z):

Option AAA activated!
Option BBB with argument 'X' activated!
Option CCC with arguments 'Y' and 'Z' activated!

Long options concatenated as well

Also you can have long options in an option block, given that they occur last in the block. So the following command lines are all equivalent (including the order in which the options and their arguments are being processed):
-cba Z Y X
cba Z Y X
-cb-aaa-0-args Z Y X
-c-bbb-1-args Z Y X -a
--ccc-2-args Z Y -ba X
c Z Y b X a
-c Z Y -b X -a
--ccc-2-args Z Y --bbb-1-args X --aaa-0-args
All of these lead to:
Option CCC with arguments 'Z' and 'Y' activated!
Option BBB with argument 'X' activated!
Option AAA activated!
Doing something...

Not in this solution

Optional arguments

Options with optional arguments should be possible with a bit of work, e.g. by looking forward whether there is a block without a hyphen; the user would then need to put a hyphen in front of every block following a block with a parameter that has an optional parameter. Maybe this is too complicated to communicate to the user, so better just require a leading hyphen altogether in this case.

Things get even more complicated with multiple possible parameters. I would advise against making the options try to be smart by determining whether an argument might be for it or not (e.g. an option that just takes a number as an optional argument), because this might break in the future.

I personally favor additional options instead of optional arguments.

Option arguments introduced with an equal sign

Just like with optional arguments, I am not a fan of this (BTW, is there a thread for discussing the pros/cons of different parameter styles?), but if you want this you could probably implement it yourself just like done at http://mywiki.wooledge.org/BashFAQ/035#Manual_loop with a --long-with-arg=?* case statement and then stripping the equal sign (this is BTW the site that says that making parameter concatenation is possible with some effort but "left [it] as an exercise for the reader", which made me take them at their word, but I started from scratch).

Other notes

POSIX-compliant, works even on ancient Busybox setups I had to deal with (with e.g. cut, head and getopts missing).
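As a rough sketch of the BashFAQ/035 approach just mentioned (my illustration, with a hypothetical --file option, not part of the original answer):

while [ $# -gt 0 ]; do
  case $1 in
    --file=?*)        # equals-sign style: --file=NAME
      file=${1#*=}    # strip everything up to and including the first '='
      ;;
    --file)           # space-separated style: --file NAME
      file=$2
      shift
      ;;
  esac
  shift
done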
Solution that preserves unhandled arguments. Demos included.

Here is my solution. It is VERY flexible and, unlike others, shouldn't require external packages and handles leftover arguments cleanly.

Usage is:

./myscript -flag flagvariable -otherflag flagvar2

All you have to do is edit the validflags line. It prepends a hyphen and searches all arguments. It then defines the next argument as the flag name, e.g.:

./myscript -flag flagvariable -otherflag flagvar2
echo $flag $otherflag
flagvariable flagvar2

The main code (short version, verbose with examples further down, also a version with erroring out):
#!/usr/bin/env bash
#shebang.io
validflags="rate time number"
count=1
for arg in $@
do
    match=0
    argval=$1
    for flag in $validflags
    do
        sflag="-"$flag
        if [ "$argval" == "$sflag" ]
        then
            declare $flag=$2
            match=1
        fi
    done
    if [ "$match" == "1" ]
    then
        shift 2
    else
        leftovers=$(echo $leftovers $argval)
        shift
    fi
    count=$(($count+1))
done
#Cleanup then restore the leftovers
shift $#
set -- $leftovers

The verbose version with built-in echo demos:
#!/usr/bin/env bash
#shebang.io
rate=30
time=30
number=30
echo "all args $@"
validflags="rate time number"
count=1
for arg in $@
do
    match=0
    argval=$1
    # argval=$(echo $@ | cut -d ' ' -f$count)
    for flag in $validflags
    do
        sflag="-"$flag
        if [ "$argval" == "$sflag" ]
        then
            declare $flag=$2
            match=1
        fi
    done
    if [ "$match" == "1" ]
    then
        shift 2
    else
        leftovers=$(echo $leftovers $argval)
        shift
    fi
    count=$(($count+1))
done

#Cleanup then restore the leftovers
echo "pre final clear args: $@"
shift $#
echo "post final clear args: $@"
set -- $leftovers
echo "all post set args: $@"
echo arg1: $1 arg2: $2

echo leftovers: $leftovers
echo rate $rate time $time number $number

Final one, this one errors out if an invalid -argument is passed through.
#!/usr/bin/env bash
#shebang.io
rate=30
time=30
number=30
validflags="rate time number"
count=1
for arg in $@
do
    argval=$1
    match=0
    if [ "${argval:0:1}" == "-" ]
    then
        for flag in $validflags
        do
            sflag="-"$flag
            if [ "$argval" == "$sflag" ]
            then
                declare $flag=$2
                match=1
            fi
        done
        if [ "$match" == "0" ]
        then
            echo "Bad argument: $argval"
            exit 1
        fi
        shift 2
    else
        leftovers=$(echo $leftovers $argval)
        shift
    fi
    count=$(($count+1))
done
#Cleanup then restore the leftovers
shift $#
set -- $leftovers
echo rate $rate time $time number $number
echo leftovers: $leftovers

Pros: What it does, it handles very well. It preserves unused arguments, which a lot of the other solutions here don't. It also allows for variables to be called without being defined by hand in the script. It also allows prepopulation of variables if no corresponding argument is given. (See verbose example.)
Cons: Can't parse a single complex arg string e.g. -xcvf would process as a single argument. You could somewhat easily write additional code into mine that adds this functionality though.
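As a rough sketch of that additional code (my own illustration, not Noah's): clustered short options can be pre-expanded before the main loop runs:

# Sketch: expand clustered short options (-xcvf -> -x -c -v -f) up front.
expanded=()
for arg in "$@"; do
    if [[ $arg =~ ^-[a-zA-Z]+$ && ${#arg} -gt 2 ]]; then
        for ((i=1; i<${#arg}; i++)); do
            expanded+=("-${arg:i:1}")
        done
    else
        expanded+=("$arg")
    fi
done
set -- "${expanded[@]}"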
Daniel Bigham ,Aug 8, 2016 at 12:42
The top answer to this question seemed a bit buggy when I tried it -- here's my solution which I've found to be more robust:

boolean_arg=""
arg_with_value=""

while [[ $# -gt 0 ]]
do
key="$1"
case $key in
    -b|--boolean-arg)
    boolean_arg=true
    shift
    ;;
    -a|--arg-with-value)
    arg_with_value="$2"
    shift
    shift
    ;;
    -*)
    echo "Unknown option: $1"
    exit 1
    ;;
    *)
    arg_num=$(( $arg_num + 1 ))
    case $arg_num in
        1)
        first_normal_arg="$1"
        shift
        ;;
        2)
        second_normal_arg="$1"
        shift
        ;;
        *)
        bad_args=TRUE
    esac
    ;;
esac
done

# Handy to have this here when adding arguments to
# see if they're working. Just edit the '0' to be '1'.
if [[ 0 == 1 ]]; then
    echo "first_normal_arg: $first_normal_arg"
    echo "second_normal_arg: $second_normal_arg"
    echo "boolean_arg: $boolean_arg"
    echo "arg_with_value: $arg_with_value"
    exit 0
fi

if [[ $bad_args == TRUE || $arg_num < 2 ]]; then
    echo "Usage: $(basename "$0") <first-normal-arg> <second-normal-arg> [--boolean-arg] [--arg-with-value VALUE]"
    exit 1
fi

phyatt, Sep 7, 2016 at 18:25
This example shows how to use getopt and eval and HEREDOC and shift to handle short and long parameters with and without a required value that follows. Also the switch/case statement is concise and easy to follow.

#!/usr/bin/env bash

# usage function
function usage()
{
   cat << HEREDOC

   Usage: $progname [--num NUM] [--time TIME_STR] [--verbose] [--dry-run]

   optional arguments:
     -h, --help           show this help message and exit
     -n, --num NUM        pass in a number
     -t, --time TIME_STR  pass in a time string
     -v, --verbose        increase the verbosity of the bash script
     --dry-run            do a dry run, don't change any files

HEREDOC
}

# initialize variables
progname=$(basename $0)
verbose=0
dryrun=0
num_str=
time_str=

# use getopt and store the output into $OPTS
# note the use of -o for the short options, --long for the long name options
# and a : for any option that takes a parameter
OPTS=$(getopt -o "hn:t:v" --long "help,num:,time:,verbose,dry-run" -n "$progname" -- "$@")
if [ $? != 0 ] ; then echo "Error in command line arguments." >&2 ; usage; exit 1 ; fi
eval set -- "$OPTS"

while true; do
  # uncomment the next line to see how shift is working
  # echo "\$1:\"$1\" \$2:\"$2\""
  case "$1" in
    -h | --help ) usage; exit; ;;
    -n | --num ) num_str="$2"; shift 2 ;;
    -t | --time ) time_str="$2"; shift 2 ;;
    --dry-run ) dryrun=1; shift ;;
    -v | --verbose ) verbose=$((verbose + 1)); shift ;;
    -- ) shift; break ;;
    * ) break ;;
  esac
done

if (( $verbose > 0 )); then
   # print out all the parameters we read in
   cat <<-EOM
   num=$num_str
   time=$time_str
   verbose=$verbose
   dryrun=$dryrun
EOM
fi

# The rest of your script below

The most significant lines of the script above are these:

OPTS=$(getopt -o "hn:t:v" --long "help,num:,time:,verbose,dry-run" -n "$progname" -- "$@")
if [ $? != 0 ] ; then echo "Error in command line arguments." >&2 ; exit 1 ; fi
eval set -- "$OPTS"

while true; do
  case "$1" in
    -h | --help ) usage; exit; ;;
    -n | --num ) num_str="$2"; shift 2 ;;
    -t | --time ) time_str="$2"; shift 2 ;;
    --dry-run ) dryrun=1; shift ;;
    -v | --verbose ) verbose=$((verbose + 1)); shift ;;
    -- ) shift; break ;;
    * ) break ;;
  esac
done

Short, to the point, readable, and handles just about everything (IMHO).
Hope that helps someone.
Emeric Verschuur ,Feb 20, 2017 at 21:30
I have written a bash helper to write nice bash tools.

Project home: https://gitlab.mbedsys.org/mbedsys/bashopts

Example:

#!/bin/bash -ei

# load the library
. bashopts.sh

# Enable backtrace display on error
trap 'bashopts_exit_handle' ERR

# Initialize the library
bashopts_setup -n "$0" -d "This is myapp tool description displayed on help message" -s "$HOME/.config/myapprc"

# Declare the options
bashopts_declare -n first_name -l first -o f -d "First name" -t string -i -s -r
bashopts_declare -n last_name -l last -o l -d "Last name" -t string -i -s -r
bashopts_declare -n display_name -l display-name -t string -d "Display name" -e "\$first_name \$last_name"
bashopts_declare -n age -l number -d "Age" -t number
bashopts_declare -n email_list -t string -m add -l email -d "Email adress"

# Parse arguments
bashopts_parse_args "$@"

# Process argument
bashopts_process_args

will give help:

NAME:
    ./example.sh - This is myapp tool description displayed on help message

USAGE:
    [options and commands] [-- [extra args]]

OPTIONS:
    -h,--help                       Display this help
    -n,--non-interactive true       Non interactive mode - [$bashopts_non_interactive] (type:boolean, default:false)
    -f,--first "John"               First name - [$first_name] (type:string, default:"")
    -l,--last "Smith"               Last name - [$last_name] (type:string, default:"")
    --display-name "John Smith"     Display name - [$display_name] (type:string, default:"$first_name $last_name")
    --number 0                      Age - [$age] (type:number, default:0)
    --email                         Email adress - [$email_list] (type:string, default:"")

enjoy :)
Josh Wulf ,Jun 24, 2017 at 18:07
I get this on Mac OS X: lib/bashopts.sh: line 138: declare: -A: invalid option declare: usage: declare [-afFirtx] [-p] [name[=value] ...] Error in lib/bashopts.sh:138. 'declare -x -A bashopts_optprop_name' exited with status 2 Call tree: 1: lib/controller.sh:4 source(...) Exiting with status 1 – Josh Wulf Jun 24 '17 at 18:07

Josh Wulf, Jun 24, 2017 at 18:17

You need Bash version 4 to use this. On Mac, the default version is 3. You can use home brew to install bash 4. – Josh Wulf Jun 24 '17 at 18:17

a_z, Mar 15, 2017 at 13:24
Here is my approach - using regexp.

- no getopts
- it handles blocks of short parameters -qwerty
- it handles short parameters -q -w -e
- it handles long options --qwerty
- you can pass an attribute to a short or long option (if you are using a block of short options, the attribute is attached to the last option)
- you can use spaces or = to provide attributes, but the attribute matches until encountering the hyphen+space "delimiter", so in --q=qwe ty, qwe ty is one attribute
- it handles a mix of all the above, so -o a -op attr ibute --option=att ribu te --op-tion attribute --option att-ribute is valid

script:

#!/usr/bin/env sh

help_menu() {
  echo "Usage:

  ${0##*/} [-h][-l FILENAME][-d]

Options:

  -h, --help
    display this help and exit

  -l, --logfile=FILENAME
    filename

  -d, --debug
    enable debug
  "
}

parse_options() {
  case $opt in
    h|help)
      help_menu
      exit
      ;;
    l|logfile)
      logfile=${attr}
      ;;
    d|debug)
      debug=true
      ;;
    *)
      echo "Unknown option: ${opt}\nRun ${0##*/} -h for help.">&2
      exit 1
  esac
}
options=$@

until [ "$options" = "" ]; do
  if [[ $options =~ (^ *(--([a-zA-Z0-9-]+)|-([a-zA-Z0-9-]+))(( |=)(([\_\.\?\/\\a-zA-Z0-9]?[ -]?[\_\.\?a-zA-Z0-9]+)+))?(.*)|(.+)) ]]; then
    if [[ ${BASH_REMATCH[3]} ]]; then # for --option[=][attribute] or --option[=][attribute]
      opt=${BASH_REMATCH[3]}
      attr=${BASH_REMATCH[7]}
      options=${BASH_REMATCH[9]}
    elif [[ ${BASH_REMATCH[4]} ]]; then # for block options -qwert[=][attribute] or single short option -a[=][attribute]
      pile=${BASH_REMATCH[4]}
      while (( ${#pile} > 1 )); do
        opt=${pile:0:1}
        attr=""
        pile=${pile/${pile:0:1}/}
        parse_options
      done
      opt=$pile
      attr=${BASH_REMATCH[7]}
      options=${BASH_REMATCH[9]}
    else # leftovers that don't match
      opt=${BASH_REMATCH[10]}
      options=""
    fi
    parse_options
  fi
done

mauron85, Jun 21, 2017 at 6:03
Like this one. Maybe just add -e param to echo with new line. – mauron85 Jun 21 '17 at 6:03

John, Oct 10, 2017 at 22:49
Assume we create a shell script named test_args.sh as follows:

#!/bin/sh
until [ $# -eq 0 ]
do
  name=${1:1}; shift;
  if [[ -z "$1" || $1 == -* ]] ;
  then
      eval "export $name=true";
  else
      eval "export $name=$1"; shift;
  fi
done
echo "year=$year month=$month day=$day flag=$flag"

After we run the following command:

sh test_args.sh -year 2017 -flag -month 12 -day 22

The output would be:

year=2017 month=12 day=22 flag=true

Will Barnwell, Oct 10, 2017 at 23:57
This takes the same approach as Noah's answer , but has less safety checks / safeguards. This allows us to write arbitrary arguments into the script's environment and I'm pretty sure your use of eval here may allow command injection. – Will Barnwell Oct 10 '17 at 23:57Masadow ,Oct 6, 2015 at 8:53
Here is my improved solution of Bruno Bronosky's answer using variable arrays.

It lets you mix parameter positions and gives you a parameter array preserving the order without the options:

#!/bin/bash

echo $@

PARAMS=()
SOFT=0
SKIP=()
for i in "$@"
do
case $i in
    -n=*|--skip=*)
    SKIP+=("${i#*=}")
    ;;
    -s|--soft)
    SOFT=1
    ;;
    *)
        # unknown option
        PARAMS+=("$i")
    ;;
esac
done
echo "SKIP = ${SKIP[@]}"
echo "SOFT = $SOFT"
echo "Parameters:"
echo ${PARAMS[@]}

Will output for example:

$ ./test.sh parameter -s somefile --skip=.c --skip=.obj
parameter -s somefile --skip=.c --skip=.obj
SKIP = .c .obj
SOFT = 1
Parameters:
parameter somefile

Jason S, Dec 3, 2017 at 1:01
You use shift on the known arguments and not on the unknown ones, so your remaining $@ will be all but the first two arguments (in the order they are passed in), which could lead to some mistakes if you try to use $@ later. You don't need the shift for the = parameters, since you're not handling spaces and you're getting the value with the substring removal #*= – Jason S Dec 3 '17 at 1:01
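For readers unfamiliar with the expansion Jason mentions: ${var#*=} removes the shortest prefix matching *=, i.e. everything up to and including the first equals sign, which is why no extra shift is needed. A quick check at the prompt:

$ opt="--skip=.obj"
$ echo "${opt#*=}"
.obj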
You're right, in fact, since I build a PARAMS variable, I don't need to use shift at all – Masadow Dec 5 '17 at 9:17
Jun 09, 2018 | opensource.com
Changing an executed command

history also allows you to rerun a command with different syntax. For example, if I wanted to change my previous command history | grep dnf to history | grep ssh, I can execute the following at the prompt:

$ ^dnf^ssh^

history will rerun the command, but replace dnf with ssh, and execute it.
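As a side note (my addition, not from the article): the caret form replaces only the first occurrence; as far as I know, the equivalent history expansion with the gs modifier replaces every occurrence in the previous command:

$ !!:gs/dnf/ssh/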
Removing history

There may come a time that you want to remove some or all the commands in your history file. If you want to delete a particular command, enter history -d <line number>. To clear the entire contents of the history file, execute history -c.

The history file is stored in a file that you can modify, as well. Bash shell users will find it in their Home directory as .bash_history.

Next steps

There are a number of other things that you can do with history:
- Set the size of your history buffer to a certain number of commands
- Record the date and time for each line in history
- Prevent certain commands from being recorded in history
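All three of these behaviors are controlled by shell variables that you can set in ~/.bashrc; a minimal sketch (the values here are only examples):

# Keep up to 10000 commands in memory and in the history file
HISTSIZE=10000
HISTFILESIZE=10000

# Record a date/time stamp for each line shown by `history`
HISTTIMEFORMAT='%F %T '

# Skip duplicates and any command that starts with a space
HISTCONTROL=ignoredups:ignorespace

# Never record these commands at all
HISTIGNORE='ls:bg:fg:history'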
For more information about the history command and other interesting things you can do with it, take a look at the GNU Bash Manual.
Jun 01, 2018 | opensource.com
... ... ...
Looping through arrays
Although in the examples above we used integer indices in our arrays, let's consider two occasions when that won't be the case: First, if we wanted the $i-th element of the array, where $i is a variable containing the index of interest, we can retrieve that element using: echo ${allThreads[$i]}. Second, to output all the elements of an array, we replace the numeric index with the @ symbol (you can think of @ as standing for all): echo ${allThreads[@]}.

Looping through array elements

With that in mind, let's loop through $allThreads and launch the pipeline for each value of --threads:

for t in ${allThreads[@]}; do
  ./pipeline --threads $t
done

Looping through array indices
Next, let's consider a slightly different approach. Rather than looping over array elements, we can loop over array indices:

for i in ${!allThreads[@]}; do
  ./pipeline --threads ${allThreads[$i]}
done

Let's break that down: As we saw above, ${allThreads[@]} represents all the elements in our array. Adding an exclamation mark to make it ${!allThreads[@]} will return the list of all array indices (in our case 0 to 7). In other words, the for loop is looping through all indices $i and reading the $i-th element from $allThreads to set the value of the --threads parameter.

This is much harsher on the eyes, so you may be wondering why I bother introducing it in the first place. That's because there are times when you need to know both the index and the value within a loop, e.g., if you want to ignore the first element of an array, using indices saves you from creating an additional variable that you then increment inside the loop.
Populating arrays

So far, we've been able to launch the pipeline for each --threads of interest. Now, let's assume the output to our pipeline is the runtime in seconds. We would like to capture that output at each iteration and save it in another array so we can do various manipulations with it at the end.

Some useful syntax

But before diving into the code, we need to introduce some more syntax. First, we need to be able to retrieve the output of a Bash command. To do so, use the following syntax: output=$( ./my_script.sh ), which will store the output of our commands into the variable $output.

The second bit of syntax we need is how to append the value we just retrieved to an array. The syntax to do that will look familiar:

myArray+=( "newElement1" "newElement2" )

The parameter sweep

Putting everything together, here is our script for launching our parameter sweep:
allThreads=(1 2 4 8 16 32 64 128)
allRuntimes=()
for t in ${allThreads[@]}; do
  runtime=$(./pipeline --threads $t)
  allRuntimes+=( $runtime )
done

And voilà!
What else you got?

In this article, we covered the scenario of using arrays for parameter sweeps. But I promise there are more reasons to use Bash arrays -- here are two more examples.

Log alerting

In this scenario, your app is divided into modules, each with its own log file. We can write a cron job script to email the right person when there are signs of trouble in certain modules:
# List of logs and who should be notified of issues
logPaths=("api.log" "auth.log" "jenkins.log" "data.log")
logEmails=("jay@email" "emma@email" "jon@email" "sophia@email")

# Look for signs of trouble in each log
for i in ${!logPaths[@]};
do
  log=${logPaths[$i]}
  stakeholder=${logEmails[$i]}
  numErrors=$( tail -n 100 "$log" | grep "ERROR" | wc -l )

  # Warn stakeholders if recently saw > 5 errors
  if [[ "$numErrors" -gt 5 ]];
  then
    emailRecipient="$stakeholder"
    emailSubject="WARNING: ${log} showing unusual levels of errors"
    emailBody="${numErrors} errors found in log ${log}"
    echo "$emailBody" | mailx -s "$emailSubject" "$emailRecipient"
  fi
done

API queries

Say you want to generate some analytics about which users comment the most on your Medium posts. Since we don't have direct database access, SQL is out of the question, but we can use APIs!
To avoid getting into a long discussion about API authentication and tokens, we'll instead use JSONPlaceholder , a public-facing API testing service, as our endpoint. Once we query each post and retrieve the emails of everyone who commented, we can append those emails to our results array:
endpoint="https://jsonplaceholder.typicode.com/comments"
allEmails=()

# Query first 10 posts
for postId in {1..10};
do
  # Make API call to fetch emails of this post's commenters
  response=$(curl "${endpoint}?postId=${postId}")

  # Use jq to parse the JSON response into an array
  allEmails+=( $( jq '.[].email' <<< "$response" ) )
done

Note here that I'm using the
jq tool to parse JSON from the command line. The syntax of jq is beyond the scope of this article, but I highly recommend you look into it.
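For a quick taste (my example, not the author's): the same filter run directly against the test endpoint above prints one quoted email per line:

$ curl -s "https://jsonplaceholder.typicode.com/comments?postId=1" | jq '.[].email'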
But wait, there's more!

Since we covered quite a bit of array syntax in this article, here's a summary of what we covered, along with some more advanced tricks we did not cover:

Syntax             Result
arr=()             Create an empty array
arr=(1 2 3)        Initialize array
${arr[2]}          Retrieve third element
${arr[@]}          Retrieve all elements
${!arr[@]}         Retrieve array indices
${#arr[@]}         Calculate array size
arr[0]=3           Overwrite 1st element
arr+=(4)           Append value(s)
str=$(ls)          Save ls output as a string
arr=( $(ls) )      Save ls output as an array of files
${arr[@]:s:n}      Retrieve n elements starting at index s

One last thought

As we've discovered, Bash arrays sure have strange syntax, but I hope this article convinced you that they are extremely powerful. Once you get the hang of the syntax, you'll find yourself using Bash arrays quite often.
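The slice form in the last table row is the one most likely to trip you up; a quick check at the prompt (my example, not the author's):

$ arr=(a b c d e)
$ echo "${arr[@]:1:3}"
b c d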
... ... ...
Robert Aboukhalil is a Bioinformatics Software Engineer. In his work, he develops cloud applications for the analysis and interactive visualization of genomics data. Robert holds a Ph.D. in Bioinformatics from Cold Spring Harbor Laboratory and a B.Eng. in Computer Engineering from McGill.
May 28, 2018 | www.tecmint.com
In this article, we will share a number of Bash command-line shortcuts useful for any Linux user. These shortcuts allow you to easily and in a fast manner, perform certain activities such as accessing and running previously executed commands, opening an editor, editing/deleting/changing text on the command line, moving the cursor, controlling processes etc. on the command line.
Although this article will mostly benefit Linux beginners getting their way around with command line basics, those with intermediate skills and advanced users might also find it practically helpful. We will group the bash keyboard shortcuts according to categories as follows.
Launch an Editor

Open a terminal and press Ctrl+X and Ctrl+E to open an editor (nano editor) with an empty buffer. Bash will try to launch the editor defined by the $EDITOR environment variable.

Controlling The Screen

These shortcuts are used to control terminal screen output:
- Ctrl+L – clears the screen (same effect as the "clear" command).
- Ctrl+S – pause all command output to the screen. If you have executed a command that produces verbose, long output, use this to pause the output scrolling down the screen.
- Ctrl+Q – resume output to the screen after pausing it with Ctrl+S.

Move Cursor on The Command Line

The next shortcuts are used for moving the cursor within the command-line:
- Ctrl+A or Home – moves the cursor to the start of a line.
- Ctrl+E or End – moves the cursor to the end of the line.
- Ctrl+B or Left Arrow – moves the cursor back one character at a time.
- Ctrl+F or Right Arrow – moves the cursor forward one character at a time.
- Ctrl+Left Arrow or Alt+B or Esc and then B – moves the cursor back one word at a time.
- Ctrl+Right Arrow or Alt+F or Esc and then F – moves the cursor forward one word at a time.

Search Through Bash History

The following shortcuts are used for searching for commands in the bash history:
- Up arrow key – retrieves the previous command. If you press it constantly, it takes you through multiple commands in history, so you can find the one you want. Use the Down arrow to move in the reverse direction through the history.
- Ctrl+P and Ctrl+N – alternatives for the Up and Down arrow keys, respectively.
- Ctrl+R – starts a reverse search through the bash history; simply type characters that should be unique to the command you want to find in the history.
- Ctrl+S – launches a forward search through the bash history.
- Ctrl+G – quits reverse or forward search through the bash history.

Delete Text on the Command Line

The following shortcuts are used for deleting text on the command line:
- Ctrl+D or Delete – removes or deletes the character under the cursor.
- Ctrl+K – removes all text from the cursor to the end of the line.
- Ctrl+X and then Backspace – removes all the text from the cursor to the beginning of the line.

Transpose Text or Change Case on the Command Line

These shortcuts will transpose or change the case of letters or words on the command line:
- Ctrl+T – transposes the character before the cursor with the character under the cursor.
- Esc and then T – transposes the two words immediately before (or under) the cursor.
- Esc and then U – transforms the text from the cursor to the end of the word to uppercase.
- Esc and then L – transforms the text from the cursor to the end of the word to lowercase.
- Esc and then C – changes the letter under the cursor (or the first letter of the next word) to uppercase, leaving the rest of the word unchanged.

Working With Processes in Linux

The following shortcuts help you to control running Linux processes.
- Ctrl+Z – suspend the current foreground process. This sends the SIGTSTP signal to the process. You can get the process back to the foreground later using the fg process_name (or %bgprocess_number like %1, %2 and so on) command.
- Ctrl+C – interrupt the current foreground process, by sending the SIGINT signal to it. The default behavior is to terminate a process gracefully, but the process can either honor or ignore it.
- Ctrl+D – exit the bash shell (same as running the exit command).

Learn more about: All You Need To Know About Processes in Linux [Comprehensive Guide]
Bash Bang (!) Commands

In the final part of this article, we will explain some useful ! (bang) operations:
- !! – execute the last command.
- !top – execute the most recent command that starts with 'top' (e.g. !top).
- !top:p – displays the command that !top would run (also adds it as the latest command in the command history).
- !$ – execute the last word of the previous command (same as Alt+., e.g. if the last command is 'cat tecmint.txt', then !$ would try to run 'tecmint.txt').
- !$:p – displays the word that !$ would execute.
- !* – substitutes all the words of the previous command except the first.
- !*:p – displays the words that !* would substitute.
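A quick illustration of the first two expansions at the prompt (my own example session, not from the article):

$ echo hello
hello
$ !!
echo hello
hello
$ !ech:p
echo hello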
For more information, see the bash man page:

$ man bash

That's all for now! In this article, we shared some common and useful Bash command-line shortcuts and operations. Use the comment form below to make any additions or ask questions.
Apr 26, 2018 | linuxhint.com
Bash Range: How to iterate over sequences generated on the shell

You can iterate a sequence of numbers in bash in two ways. One is by using the seq command, and another is by specifying a range in a for loop. With the seq command, the sequence starts from one, the number increments by one in each step, and each number is printed on its own line up to the upper limit by default. If the number starts from the upper limit, then it decrements by one in each step. Normally, all numbers are interpreted as floating point, but if the sequence starts from an integer, then a list of decimal integers will print. If the seq command executes successfully, it returns 0; otherwise it returns a non-zero number. You can also iterate a sequence of numbers using a for loop with a range. Both the seq command and the for loop with a range are shown in this tutorial using examples.
The options of seq command:
You can use seq command by using the following options.
- -w This option is used to pad the numbers with leading zeros to print all numbers with equal width.
- -f format This option is used to print numbers with a particular format. Floating point numbers can be formatted by using %f, %g and %e as conversion characters. %g is used as default.
- -s string This option is used to separate the numbers with a string. The default value is a newline ('\n').

Examples of seq command:
You can apply the seq command in three ways. You can use only the upper limit, or the upper and lower limits, or the upper and lower limits with an increment or decrement value for each step. Different uses of the seq command with options are shown in the following examples.

Example-1: seq command without option

When only the upper limit is used, the number will start from 1 and increment by one in each step. The following command will print the numbers from 1 to 4:

$ seq 4

When two values are used with the seq command, the first value is used as the starting number and the second value as the ending number. The following command will print the numbers from 7 to 15:

$ seq 7 15

When you use three values with the seq command, the second value is used as the increment or decrement value for each step. For the following command, the starting number is 10, the ending number is 1, and each step decrements by 2:

$ seq 10 -2 1

Example-2: seq with -w option

The following command will print the output by adding leading zeros for the numbers from 1 to 9:

$ seq -w 10

Example-3: seq with -s option

The following command uses "-" as the separator for each sequence number:

$ seq -s - 8

Example-4: seq with -f option

The following command will print 10 date values starting from 1. Here, the "%g" option is used to combine the sequence number with another string value:

$ seq -f "%g/04/2018" 10

The following command generates a sequence of floating point numbers using "%f". Here, the number will start from 3 and increment by 0.8 in each step, and the last number will be less than or equal to 6:

$ seq -f "%f" 3 0.8 6

Example-5: Write the sequence in a file

If you want to save the sequence of numbers into a file without printing it to the console, you can use the following commands. The first command will print the numbers to a file named "seq.txt". The numbers will be generated from 5 to 20, incrementing by 10 in each step. The second command is used to view the content of the "seq.txt" file.

seq 5 10 20 | cat > seq.txt
cat seq.txt

Example-6: Using seq in for loop
Suppose you want to create files named fn.1 to fn.10 using a for loop with seq. Create a file named "sq1.bash" and add the following code. The for loop will iterate 10 times using the seq command and create 10 files in the sequence fn.1, fn.2, fn.3 ... fn.10.

#!/bin/bash
for i in `seq 10`; do
  touch fn.$i
done

Run the following commands to execute the code of the bash file and check that the files were created:

bash sq1.bash
ls

Examples of for loop with range:

Example-7: For loop with range

The alternative to the seq command is a range. You can use a range in a for loop to generate a sequence of numbers like seq. Write the following code in a bash file named "sq2.bash". The loop will iterate 5 times and print the square of each number in each step.

#!/bin/bash
for n in {1..5}; do
  (( result = n * n ))
  echo "$n square = $result"
done

Run the command to execute the script of the file.
bash sq2.bash

Example-8: For loop with range and increment value

By default, the number increments by one in each step in a range, like seq. You can also change the increment value in a range. Write the following code in a bash file named "sq3.bash". The for loop in the script will iterate 5 times, each step incrementing by 2, and print all odd numbers between 1 and 10.

#!/bin/bash
echo "all odd numbers from 1 to 10 are"
for i in {1..10..2}; do
  echo $i
done

Run the command to execute the script of the file:

bash sq3.bash

If you want to work with sequences of numbers, you can use any of the options shown in this tutorial. After completing this tutorial, you will be able to use the seq command and for loops with ranges more efficiently in your bash scripts.
Apr 26, 2018 | opensource.com
Bash completion is a functionality through which Bash helps users type their commands more quickly and easily. It does this by presenting possible options when users press the Tab key while typing a command.
$ git <tab><tab>
git                  git-receive-pack     git-upload-archive
gitk                 git-shell            git-upload-pack

$ git-s<tab>
$ git-shell

How it works
The completion script is code that uses the builtin Bash command complete to define which completion suggestions can be displayed for a given executable. The nature of the completion options varies, from simple static to highly sophisticated.

Why bother?

This functionality helps users by:
- saving them from typing text when it can be auto-completed
- helping them know the available continuations to their commands
- preventing errors and improving their experience by hiding or showing options based on what they have already typed

Hands-on

Here's what we will do in this tutorial:
We will first create a dummy executable script called dothis. All it does is execute the command that resides on the number that was passed as an argument in the user's history. For example, the following command will simply execute the ls -a command, given that it exists in history with number 235:

dothis 235

Then we will create a Bash completion script that will display commands along with their number from the user's history, and we will "bind" it to the dothis executable.

$ dothis <tab><tab>
215 ls
216 ls -la
217 cd ~
218 man history
219 git status
220 history | cut -c 8-
You can see a gif demonstrating the functionality at this tutorial's code repository on GitHub .
Let the show begin.
Creating the executable script

Create a file named dothis in your working directory and add the following code:

if [ -z "$1" ]; then
  echo "No command number passed"
  exit 2
fi

exists=$(fc -l -1000 | grep ^$1 -- 2>/dev/null)

if [ -n "$exists" ]; then
  fc -s -- "$1"
else
  echo "Command with number $1 was not found in recent history"
  exit 2
fi

Notes:
- We first check if the script was called with an argument
- We then check if the specific number is included in the last 1000 commands
- if it exists, we execute the command using the fc functionality
- otherwise, we display an error message
Make the script executable with:
chmod +x ./dothis

We will execute this script many times in this tutorial, so I suggest you place it in a folder that is included in your path so that we can access it from anywhere by typing dothis.

I installed it in my home bin folder using:

install ./dothis ~/bin/dothis

You can do the same given that you have a ~/bin folder and it is included in your PATH variable.

Check to see if it's working:

dothis

You should see this:

$ dothis
No command number passed

Done.
Creating the completion script

Create a file named dothis-completion.bash. From now on, we will refer to this file with the term completion script.

Once we add some code to it, we will source it to allow the completion to take effect. We must source this file every single time we change something in it.

Later in this tutorial, we will discuss our options for registering this script whenever a Bash shell opens.
Static completion

Suppose that the dothis program supported a list of commands, for example:

now
tomorrow
never

Let's use the complete command to register this list for completion. To use the proper terminology, we say we use the complete command to define a completion specification (compspec) for our program.

Add this to the completion script.
#!/usr/bin/env bash
complete -W "now tomorrow never" dothis

Here's what we specified with the complete command above:

- we used the -W (wordlist) option to provide a list of words for completion.
- we defined to which "program" these completion words will be used (the dothis parameter)

Source the file:
$ dothis < tab >< tab >
never now tomorrowTry again after typing the
$ dothis n < tab >< tab >n
:
never nowMagic! The completion options are automatically filtered to match only those starting with
n
.Note: The options are not displayed in the order that we defined them in the word list; they are automatically sorted.
There are many other options to be used instead of the
-W
that we used in this section. Most produce completions in a fixed manner, meaning that we don't intervene dynamically to filter their output.For example, if we want to have directory names as completion words for the
dothis
program, we would change the complete command to the following:complete -A directory dothisPressing Tab after the
$ dothis < tab >< tab >dothis
program would get us a list of the directories in the current directory from which we execute the script:
dir1 / dir2 / dir3 /Find the complete list of the available flags in the Bash Reference Manual .
Dynamic completionWe will be producing the completions of the
dothis
executable with the following logic:
- If the user presses the Tab key right after the command, we will show the last 50 executed commands along with their numbers in history
- If the user presses the Tab key after typing a number that matches more than one command from history, we will show only those commands along with their numbers in history
- If the user presses the Tab key after a number that matches exactly one command in history, we auto-complete the number without appending the command's literal (if this is confusing, no worries -- you will understand later)
Let's start by defining a function that will execute each time the user requests completion on a
#/usr/bin/env bashdothis
command. Change the completion script to this:
_dothis_completions ()
{
COMPREPLY+= ( "now" )
COMPREPLY+= ( "tomorrow" )
COMPREPLY+= ( "never" )
}complete -F _dothis_completions dothis
Note the following:
- we used the
-F
flag in the complete command defining that the_dothis_completions
is the function that will provide the completions of thedothis
executableCOMPREPLY
is an array variable used to store the completions -- the completion mechanism uses this variable to display its contents as completionsNow source the script and go for completion:
$ dothis < tab >< tab >
never now tomorrowPerfect. We produce the same completions as in the previous section with the word list. Or not? Try this:
$ dothis nev < tab >< tab >
never now tomorrowAs you can see, even though we type nev and then request for completion, the available options are always the same and nothing gets completed automatically. Why is this happening?
- The contents of the
COMPREPLY
variable are always displayed. The function is now responsible for adding/removing entries from there.- If the
COMPREPLY
variable had only one element, then that word would be automatically completed in the command. Since the current implementation always returns the same three words, this will not happen.Enter
compgen
: a builtin command that generates completions supporting most of the options of thecomplete
command (ex.-W
for word list,-d
for directories) and filtering them based on what the user has already typed.Don't worry if you feel confused; everything will become clear later.
Type the following in the console to better understand what
$ compgen -W "now tomorrow never"compgen
does:
now
tomorrow
never
$ compgen -W "now tomorrow never" n
now
never
$ compgen -W "now tomorrow never" t
tomorrowSo now we can use it, but we need to find a way to know what has been typed after the
dothis
command. We already have the way: The Bash completion facilities provide Bash variables related to the completion taking place. Here are the more important ones:
- COMP_WORDS: an array of all the words typed after the name of the program the compspec belongs to
- COMP_CWORD: an index of the COMP_WORDS array pointing to the word the current cursor is at -- in other words, the index of the word the cursor was at when the Tab key was pressed
- COMP_LINE: the current command line

To access the word just after the dothis word, we can use the value of COMP_WORDS[1].
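If you would like to watch these variables in action before moving on, a throwaway completion function can dump them to a file on every Tab press (a quick experiment sketch; the _debug_completions name and the /tmp/comp-debug path are arbitrary choices, not part of the tutorial):

_debug_completions()
{
  {
    echo "COMP_WORDS: ${COMP_WORDS[*]}"
    echo "COMP_CWORD: ${COMP_CWORD}"
    echo "COMP_LINE:  ${COMP_LINE}"
  } >> /tmp/comp-debug    # append one snapshot per completion request
}

complete -F _debug_completions dothis

Tail /tmp/comp-debug in another terminal while pressing Tab after dothis to see exactly what the mechanism passes in.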
Change the completion script again:
#!/usr/bin/env bash

_dothis_completions()
{
  COMPREPLY=($(compgen -W "now tomorrow never" "${COMP_WORDS[1]}"))
}

complete -F _dothis_completions dothis
Source, and there you are:

$ dothis <tab><tab>
never now tomorrow
$ dothis n<tab><tab>
never now

Now, instead of the words now, tomorrow, never, we would like to see actual numbers from the command history.
The fc -l command followed by a negative number -n displays the last n commands. So we will use fc -l -50, which lists the last 50 executed commands along with their numbers. The only manipulation we need to do is replace the tabs with spaces so that the entries display properly in the completion mechanism -- sed to the rescue.
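If you want to see the raw material first, you can run the pipeline on its own in the console (the history numbers and commands below are illustrative, and the exact whitespace of fc output varies slightly between systems):

$ fc -l -3
655	 dothis 654
656	 dothis 631
657	 clear
$ fc -l -3 | sed 's/\t/ /'
655  dothis 654
656  dothis 631
657  clear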
Change the completion script as follows:
#!/usr/bin/env bash

_dothis_completions()
{
  COMPREPLY=($(compgen -W "$(fc -l -50 | sed 's/\t//')" -- "${COMP_WORDS[1]}"))
}

complete -F _dothis_completions dothis
Source and test in the console:
$ dothis <tab><tab>
632 source dothis-completion.bash   649 source dothis-completion.bash   666 cat ~/.bash_profile
633 clear                           650 clear                           667 cat ~/.bashrc
634 source dothis-completion.bash   651 source dothis-completion.bash   668 clear
635 source dothis-completion.bash   652 source dothis-completion.bash   669 install ./dothis ~/bin/dothis
636 clear                           653 source dothis-completion.bash   670 dothis
637 source dothis-completion.bash   654 clear                           671 dothis 6546545646
638 clear                           655 dothis 654                      672 clear
639 source dothis-completion.bash   656 dothis 631                      673 dothis
640 source dothis-completion.bash   657 dothis 150                      674 dothis 651
641 source dothis-completion.bash   658 dothis                          675 source dothis-completion.bash
642 clear                           659 clear                           676 dothis 651
643 dothis 623 ls -la               660 dothis                          677 dothis 659
644 clear                           661 install ./dothis ~/bin/dothis   678 clear
645 source dothis-completion.bash   662 dothis                          679 dothis 665
646 clear                           663 install ./dothis ~/bin/dothis   680 clear
647 source dothis-completion.bash   664 dothis                          681 clear
648 clear                           665 cat ~/.bashrc

Not bad.
We do have a problem, though. Try typing a number as you see it in your completion list, and then press the Tab key again.
$ dothis 623<tab>
$ dothis 623 ls 623 ls -la<tab>
...
$ dothis 623 ls 623 ls 623 ls 623 ls 623 ls -la

This is happening because in our completion script, we used ${COMP_WORDS[1]} to always check the first typed word after the dothis command (the number 623 in the above snippet). Hence the completion continues to suggest the same completion again and again when the Tab key is pressed.

To fix this, we will not allow any kind of completion to take place if at least one argument has already been typed. We will add a condition in our function that checks the size of the aforementioned COMP_WORDS array.

#!/usr/bin/env bash
_dothis_completions()
{
  if [ "${#COMP_WORDS[@]}" != "2" ]; then
    return
  fi

  COMPREPLY=($(compgen -W "$(fc -l -50 | sed 's/\t//')" -- "${COMP_WORDS[1]}"))
}

complete -F _dothis_completions dothis
Source and retry.

$ dothis 623<tab>
$ dothis 623 ls -la<tab>   # SUCCESS: nothing happens here

There is another thing we don't like, though. We do want to display the numbers along with the corresponding commands to help users decide which one is desired, but when there is only one completion suggestion and it gets automatically picked by the completion mechanism, we shouldn't append the command literal too.

In other words, our dothis executable accepts only a number, and we haven't added any functionality to check or expect other arguments. When our completion function gives only one result, we should trim the command literal and respond only with the command number.

To accomplish this, we will keep the response of the compgen command in an array variable, and if its size is 1, we will trim the one and only element to keep just the number. Otherwise, we'll leave the array as is.

Change the completion script to this:
#!/usr/bin/env bash

_dothis_completions()
{
  if [ "${#COMP_WORDS[@]}" != "2" ]; then
    return
  fi

  # keep the suggestions in a local variable
  local suggestions=($(compgen -W "$(fc -l -50 | sed 's/\t/ /')" -- "${COMP_WORDS[1]}"))

  if [ "${#suggestions[@]}" == "1" ]; then
    # if there's only one match, we remove the command literal
    # to proceed with the automatic completion of the number
    local number=$(echo ${suggestions[0]/%\ */})
    COMPREPLY=("$number")
  else
    # more than one suggestion resolved,
    # respond with the suggestions intact
    COMPREPLY=("${suggestions[@]}")
  fi
}

complete -F _dothis_completions dothis
Registering the completion script

If you want to enable the completion just for you on your machine, all you have to do is add a line in your .bashrc file sourcing the script:

source <path-to-your-script>/dothis-completion.bash

If you want to enable the completion for all users, you can just copy the script under /etc/bash_completion.d/ and it will automatically be loaded by Bash.

Fine-tuning the completion script

Here are some extra steps for better results:
Displaying each entry on a new line

In the Bash completion script I was working on, I too had to present suggestions consisting of two parts. I wanted to display the first part in the default color and the second part in gray to distinguish it as help text. In this tutorial's example, it would be nice to present the numbers in the default color and the command literal in a less fancy one.

Unfortunately, this is not possible, at least for now, because the completions are displayed as plain text and color directives are not processed (for example: \e[34mBlue).

What we can do to improve the user experience (or not) is to display each entry on a new line. This solution is not that obvious, since we can't just append a newline character to each COMPREPLY entry. We will follow a rather hackish method and pad the suggestion literals to a width that fills the terminal.

Enter printf. If you want to display each suggestion on its own line, change the completion script to the following:

#!/usr/bin/env bash
_dothis_completions()
{
  if [ "${#COMP_WORDS[@]}" != "2" ]; then
    return
  fi

  local IFS=$'\n'
  local suggestions=($(compgen -W "$(fc -l -50 | sed 's/\t//')" -- "${COMP_WORDS[1]}"))

  if [ "${#suggestions[@]}" == "1" ]; then
    local number="${suggestions[0]/%\ */}"
    COMPREPLY=("$number")
  else
    for i in "${!suggestions[@]}"; do
      suggestions[$i]="$(printf '%*s' "-$COLUMNS" "${suggestions[$i]}")"
    done

    COMPREPLY=("${suggestions[@]}")
  fi
}

complete -F _dothis_completions dothis
Source and test:

$ dothis <tab><tab>
...
499 source dothis-completion.bash
500 clear
...
503 dothis 500

Customizable behavior

In our case, we hard-coded the completion to display the last 50 commands. This is not a good practice: we should first respect each user's preference, and only if none has been set should we default to 50.

To accomplish that, we will check whether the environment variable DOTHIS_COMPLETION_COMMANDS_NUMBER has been set.

Change the completion script one last time:
#!/usr/bin/env bash

_dothis_completions()
{
  if [ "${#COMP_WORDS[@]}" != "2" ]; then
    return
  fi

  local commands_number=${DOTHIS_COMPLETION_COMMANDS_NUMBER:-50}
  local IFS=$'\n'
  local suggestions=($(compgen -W "$(fc -l -$commands_number | sed 's/\t//')" -- "${COMP_WORDS[1]}"))

  if [ "${#suggestions[@]}" == "1" ]; then
    local number="${suggestions[0]/%\ */}"
    COMPREPLY=("$number")
  else
    for i in "${!suggestions[@]}"; do
      suggestions[$i]="$(printf '%*s' "-$COLUMNS" "${suggestions[$i]}")"
    done

    COMPREPLY=("${suggestions[@]}")
  fi
}

complete -F _dothis_completions dothis
Source and test:

$ export DOTHIS_COMPLETION_COMMANDS_NUMBER=5
$ dothis <tab><tab>
505 clear
506 source ./dothis-completion.bash
507 dothis clear
508 clear
509 export DOTHIS_COMPLETION_COMMANDS_NUMBER=5

Useful links
- Git's completion script
- Bash Reference Manual: Programmable Completion
- Bash Reference Manual: Programmable Completion Builtins
- Bash Reference Manual: A Programmable Completion Example
- Bash Reference Manual: Bash Variables
Code and comments

You can find the code of this tutorial on GitHub.
For feedback, comments, typos, etc., please open an issue in the repository.
Lazarus Lazaridis - I am an open source enthusiast and I like helping developers with tutorials and tools. I usually code in Ruby especially when it's on Rails, but I also speak Java, Go, bash & C#. I have studied CS at Athens University of Economics and Business and I live in Athens, Greece. My nickname is iridakos and I publish tech related posts on my personal blog iridakos.com.
Mar 19, 2018 | www.tecmint.com
For example, if you have a directory ~/Documents/Phone-Backup/Linux-Docs/Ubuntu/, using gogo you can create an alias (a shortcut name), for instance Ubuntu, to access it without typing the whole path anymore. No matter your current working directory, you can move into ~/Documents/Phone-Backup/Linux-Docs/Ubuntu/ by simply using the alias Ubuntu.

Read Also: bd – Quickly Go Back to a Parent Directory Instead of Typing "cd ../../.." Redundantly
Gogo also allows you to create aliases for connecting directly into directories on remote Linux servers.
How to Install Gogo in Linux Systems

To install Gogo, first clone the gogo repository from Github and then copy gogo.py to any directory in your PATH environment variable (if you already have a ~/bin/ directory, you can place it there; otherwise, create it):

$ git clone https://github.com/mgoral/gogo.git
$ cd gogo/
$ mkdir -p ~/bin        # run this if you do not have a ~/bin directory
$ cp gogo.py ~/bin/
To start using gogo, you need to log out and log back in. Gogo stores its configuration in the ~/.config/gogo/gogo.conf file (which should be auto-created if it doesn't exist) and has the following syntax:

# Comments are lines that start from '#' character.
default = ~/something
alias = /desired/path
alias2 = /desired/path with space
alias3 = "/this/also/works"
zażółć = "unicode/is/also/supported/zażółć gęślą jaźń"

If you run gogo without any arguments, it will go to the directory specified in default; this alias is always available, even if it's not in the configuration file, and points to the $HOME directory.
To display the current aliases, use the -l switch:
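For instance (a hypothetical session; the exact listing format of gogo -l may differ between versions, and the Ubuntu alias is the one from the example above):

$ gogo -l
default = ~/something
Ubuntu = ~/Documents/Phone-Backup/Linux-Docs/Ubuntu

$ gogo Ubuntu     # jump straight to the aliased directory from anywhere
$ gogo            # with no argument, go to the 'default' alias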
Jan 08, 2018 | www.linuxjournal.com
Triggering scripts with incron and systemd.
It is, at times, important to know when things change in the Linux OS. The uses to which systems are placed often include high-priority data that must be processed as soon as it is seen. The conventional method of finding and processing new file data is to poll for it, usually with cron. This is inefficient, and it can tax performance unreasonably if too many polling events are forked too often.
Linux has an efficient method for alerting user-space processes to changes impacting files of interest. The inotify Linux system calls were first discussed here in Linux Journal in a 2005 article by Robert Love who primarily addressed the behavior of the new features from the perspective of C.
However, there also are stable shell-level utilities and new classes of monitoring dæmons for registering filesystem watches and reporting events. Linux installations using systemd also can access basic inotify functionality with path units. The inotify interface does have limitations -- it can't monitor remote, network-mounted filesystems (that is, NFS); it does not report the userid involved in the event; it does not work with /proc or other pseudo-filesystems; and mmap() operations do not trigger it, among other concerns. Even with these limitations, it is a tremendously useful feature.
This article completes the work begun by Love and gives everyone who can write a Bourne shell script or set a crontab the ability to react to filesystem changes.
The inotifywait UtilityWorking under Oracle Linux 7 (or similar versions of Red Hat/CentOS/Scientific Linux), the inotify shell tools are not installed by default, but you can load them with yum:
# yum install inotify-tools
Loaded plugins: langpacks, ulninfo
ol7_UEKR4                                        | 1.2 kB   00:00
ol7_latest                                       | 1.4 kB   00:00
Resolving Dependencies
--> Running transaction check
---> Package inotify-tools.x86_64 0:3.14-8.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

==============================================================
 Package          Arch      Version        Repository    Size
==============================================================
Installing:
 inotify-tools    x86_64    3.14-8.el7     ol7_latest    50 k

Transaction Summary
==============================================================
Install  1 Package

Total download size: 50 k
Installed size: 111 k
Is this ok [y/d/N]: y
Downloading packages:
inotify-tools-3.14-8.el7.x86_64.rpm              |  50 kB   00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Warning: RPMDB altered outside of yum.
  Installing : inotify-tools-3.14-8.el7.x86_64     1/1
  Verifying  : inotify-tools-3.14-8.el7.x86_64     1/1

Installed:
  inotify-tools.x86_64 0:3.14-8.el7

Complete!

The package will include two utilities (inotifywait and inotifywatch), documentation and a number of libraries. The inotifywait program is of primary interest.
Some derivatives of Red Hat 7 may not include inotify in their base repositories. If you find it missing, you can obtain it from Fedora's EPEL repository , either by downloading the inotify RPM for manual installation or adding the EPEL repository to yum.
Any user on the system who can launch a shell may register watches -- no special privileges are required to use the interface. This example watches the /tmp directory:
$ inotifywait -m /tmp
Setting up watches.
Watches established.

If another session on the system performs a few operations on the files in /tmp:

$ touch /tmp/hello
$ cp /etc/passwd /tmp
$ rm /tmp/passwd
$ touch /tmp/goodbye
$ rm /tmp/hello /tmp/goodbye

those changes are immediately visible to the user running inotifywait:

/tmp/ CREATE hello
/tmp/ OPEN hello
/tmp/ ATTRIB hello
/tmp/ CLOSE_WRITE,CLOSE hello
/tmp/ CREATE passwd
/tmp/ OPEN passwd
/tmp/ MODIFY passwd
/tmp/ CLOSE_WRITE,CLOSE passwd
/tmp/ DELETE passwd
/tmp/ CREATE goodbye
/tmp/ OPEN goodbye
/tmp/ ATTRIB goodbye
/tmp/ CLOSE_WRITE,CLOSE goodbye
/tmp/ DELETE hello
/tmp/ DELETE goodbye
A few relevant sections of the manual page explain what is happening:

$ man inotifywait | col -b | sed -n '/diagnostic/,/helpful/p'

   inotifywait will output diagnostic information on standard error and
   event information on standard output. The event output can be
   configured, but by default it consists of lines of the following form:

   watched_filename EVENT_NAMES event_filename

   watched_filename is the name of the file on which the event occurred.
   If the file is a directory, a trailing slash is output.

   EVENT_NAMES are the names of the inotify events which occurred,
   separated by commas.

   event_filename is output only when the event occurred on a directory,
   and in this case the name of the file within the directory which
   caused this event is output.

   By default, any special characters in filenames are not escaped in
   any way. This can make the output of inotifywait difficult to parse
   in awk scripts or similar. The --csv and --format options will be
   helpful in this case.

It also is possible to filter the output by registering particular events of interest with the -e option, the list of which is shown here:
access          create          move_self
attrib          delete          moved_to
close_write     delete_self     moved_from
close_nowrite   modify          open
close           move            unmount

A common application is testing for the arrival of new files. Since inotify must be given the name of an existing filesystem object to watch, the directory containing the new files is provided. A trigger of interest is also easy to provide -- new files should be complete and ready for processing when the close_write trigger fires. Below is an example script to watch for these events:

#!/bin/sh
unset IFS                                 # default of space, tab and nl
# Wait for filesystem events
inotifywait -m -e close_write \
   /tmp /var/tmp /home/oracle/arch-orcl/ |
while read dir op file
do
   [[ "${dir}" == '/tmp/' && "${file}" == *.txt ]] &&
      echo "Import job should start on $file ($dir $op)."

   [[ "${dir}" == '/var/tmp/' && "${file}" == CLOSE_WEEK*.txt ]] &&
      echo Weekly backup is ready.

   [[ "${dir}" == '/home/oracle/arch-orcl/' && "${file}" == *.ARC ]] &&
      su - oracle -c 'ORACLE_SID=orcl ~oracle/bin/log_shipper' &

   [[ "${dir}" == '/tmp/' && "${file}" == SHUT ]] && break

   ((step+=1))
done

echo We processed $step events.

There are a few problems with the script as presented -- of all the available shells on Linux, only ksh93 (that is, the AT&T Korn shell) will report the "step" variable correctly at the end of the script. All the other shells will report this variable as null.
The reason for this behavior can be found in a brief explanation on the manual page for Bash: "Each command in a pipeline is executed as a separate process (i.e., in a subshell)." The MirBSD clone of the Korn shell has a slightly longer explanation:
# man mksh | col -b | sed -n '/The parts/,/do so/p'

   The parts of a pipeline, like below, are executed in subshells. Thus,
   variable assignments inside them fail. Use co-processes instead.

   foo | bar | read baz      # will not change $baz
   foo | bar |& read -p baz  # will, however, do so

And, the pdksh documentation in Oracle Linux 5 (from which MirBSD mksh emerged) has several more mentions of the subject:
General features of at&t ksh88 that are not (yet) in pdksh:
  - the last command of a pipeline is not run in the parent shell
  - `echo foo | read bar; echo $bar' prints foo in at&t ksh, nothing
    in pdksh (ie, the read is done in a separate process in pdksh).
  - in pdksh, if the last command of a pipeline is a shell builtin,
    it is not executed in the parent shell, so "echo a b | read foo
    bar" does not set foo and bar in the parent shell (at&t ksh
    will). This may get fixed in the future, but it may take a while.

$ man pdksh | col -b | sed -n '/BTW, the/,/aware/p'

   BTW, the most frequently reported bug is
        echo hi | read a; echo $a   # Does not print hi
   I'm aware of this and there is no need to report it.

This behavior is easy enough to demonstrate -- running the script above with the default bash shell and providing a sequence of example events:
$ cp /etc/passwd /tmp/newdata.txt
$ cp /etc/group /var/tmp/CLOSE_WEEK20170407.txt
$ cp /etc/passwd /tmp/SHUT

gives the following script output:
# ./inotify.sh
Setting up watches.
Watches established.
Import job should start on newdata.txt (/tmp/ CLOSE_WRITE,CLOSE).
Weekly backup is ready.
We processed  events.

Examining the process list while the script is running, you'll also see two shells, one forked for the control structure:
$ function pps { typeset a IFS=\| ; ps ax | while read a
  do case $a in *$1*|+([!0-9])) echo $a;; esac; done }

$ pps inot
  PID TTY      STAT   TIME COMMAND
 3394 pts/1    S+     0:00 /bin/sh ./inotify.sh
 3395 pts/1    S+     0:00 inotifywait -m -e close_write /tmp /var/tmp
 3396 pts/1    S+     0:00 /bin/sh ./inotify.sh

As it was manipulated in a subshell, the "step" variable above was null when control flow reached the echo. Switching the script from #!/bin/sh to #!/bin/ksh93 will correct the problem, and only one shell process will be seen:
# ./inotify.ksh93
Setting up watches.
Watches established.
Import job should start on newdata.txt (/tmp/ CLOSE_WRITE,CLOSE).
Weekly backup is ready.
We processed 2 events.

$ pps inot
  PID TTY      STAT   TIME COMMAND
 3583 pts/1    S+     0:00 /bin/ksh93 ./inotify.sh
 3584 pts/1    S+     0:00 inotifywait -m -e close_write /tmp /var/tmp

Although ksh93 behaves properly and in general handles scripts far more gracefully than all of the other Linux shells, it is rather large:
$ ll /bin/[bkm]+([aksh93]) /etc/alternatives/ksh
-rwxr-xr-x. 1 root root  960456 Dec  6 11:11 /bin/bash
lrwxrwxrwx. 1 root root      21 Apr  3 21:01 /bin/ksh -> /etc/alternatives/ksh
-rwxr-xr-x. 1 root root 1518944 Aug 31  2016 /bin/ksh93
-rwxr-xr-x. 1 root root  296208 May  3  2014 /bin/mksh
lrwxrwxrwx. 1 root root      10 Apr  3 21:01 /etc/alternatives/ksh -> /bin/ksh93

The mksh binary is the smallest of the Bourne implementations above (some of these shells may be missing on your system, but you can install them with yum). For a long-term monitoring process, mksh is likely the best choice for reducing both processing and memory footprint, and it does not launch multiple copies of itself when idle, assuming that a coprocess is used. Converting the script to use a Korn coprocess that is friendly to mksh is not difficult:
#!/bin/mksh
unset IFS                                 # default of space, tab and nl
# Wait for filesystem events
inotifywait -m -e close_write \
   /tmp/ /var/tmp/ /home/oracle/arch-orcl/ \
   2</dev/null |&                         # Launch as Korn coprocess

while read -p dir op file                 # Read from Korn coprocess
do
   [[ "${dir}" == '/tmp/' && "${file}" == *.txt ]] &&
      print "Import job should start on $file ($dir $op)."

   [[ "${dir}" == '/var/tmp/' && "${file}" == CLOSE_WEEK*.txt ]] &&
      print Weekly backup is ready.

   [[ "${dir}" == '/home/oracle/arch-orcl/' && "${file}" == *.ARC ]] &&
      su - oracle -c 'ORACLE_SID=orcl ~oracle/bin/log_shipper' &

   [[ "${dir}" == '/tmp/' && "${file}" == SHUT ]] && break

   ((step+=1))
done

echo We processed $step events.

Note that the Korn and Bolsky reference on the Korn shell outlines the following requirements for a program operating as a coprocess:
Caution: The co-process must:
- Send each output message to standard output.
- Have a Newline at the end of each message.
- Flush its standard output whenever it writes a message.
An fflush(NULL) is found in the main processing loop of the inotifywait source, and these requirements appear to be met.

The mksh version of the script is the most reasonable compromise for efficient use and correct behavior, and I have explained it at some length here to save readers trouble and frustration -- it is important to avoid control structures executing in subshells in most of the Bourne family. Hopefully, all of these ersatz shells will someday fix this basic flaw and implement the Korn behavior correctly.
A Practical Application -- Oracle Log Shipping

Oracle databases that are configured for hot backups produce a stream of "archived redo log files" that are used for database recovery. These are the most critical backup files that are produced in an Oracle database.
These files are numbered sequentially and are written to a log directory configured by the DBA. An inotifywatch can trigger activities to compress, encrypt and/or distribute the archived logs to backup and disaster recovery servers for safekeeping. You can configure Oracle RMAN to do most of these functions, but the OS tools are more capable, flexible and simpler to use.
There are a number of important design parameters for a script handling archived logs:
- A "critical section" must be established that allows only a single process to manipulate the archived log files at a time. Oracle will sometimes write bursts of log files, and inotify might cause the handler script to be spawned repeatedly in a short amount of time. Only one instance of the handler script can be allowed to run -- any others spawned during the handler's lifetime must immediately exit. This will be achieved with a textbook application of the flock program from the util-linux package.
- The optimum compression available for production applications appears to be lzip. The author claims that the integrity of his archive format is superior to many better-known utilities, both in compression ability and structural integrity. The lzip binary is not in the standard repository for Oracle Linux -- it is available in EPEL and is easily compiled from source.
- Note that 7-Zip uses the same LZMA algorithm as lzip, and it also will perform AES encryption on the data after compression. Encryption is a desirable feature, as it will exempt a business from breach disclosure laws in most US states if the backups are lost or stolen and they contain "Protected Personal Information" (PPI), such as birthdays or Social Security Numbers. The author of lzip does have harsh things to say regarding the quality of 7-Zip archives using LZMA2, and the openssl enc program can be used to apply AES encryption after compression to lzip archives or any other type of file, as I discussed in a previous article. I'm forgoing file encryption in the script below and using lzip for clarity.
- The current log number will be recorded in a dot file in the Oracle user's home directory. If a log is skipped for some reason (a rare occurrence for an Oracle database), log shipping will stop. A missing log requires an immediate and full database backup (either cold or hot) -- successful recoveries of Oracle databases cannot skip logs.
- The scp program will be used to copy the log to a remote server, and it should be called repeatedly until it returns successfully.
- I'm calling the genuine '93 Korn shell for this activity, as it is the most capable scripting shell and I don't want any surprises.
Given these design parameters, this is an implementation:
# cat ~oracle/archutils/process_logs

#!/bin/ksh93

set -euo pipefail
IFS=$'\n\t'  # http://redsymbol.net/articles/unofficial-bash-strict-mode/

(
  flock -n 9 || exit 1         # Critical section - allow only one process.

  ARCHDIR=~oracle/arch-${ORACLE_SID}
  APREFIX=${ORACLE_SID}_1_
  ASUFFIX=.ARC
  CURLOG=$(<~oracle/.curlog-$ORACLE_SID)

  File="${ARCHDIR}/${APREFIX}${CURLOG}${ASUFFIX}"

  [[ ! -f "$File" ]] && exit

  while [[ -f "$File" ]]
  do
    ((NEXTCURLOG=CURLOG+1))
    NextFile="${ARCHDIR}/${APREFIX}${NEXTCURLOG}${ASUFFIX}"
    [[ ! -f "$NextFile" ]] && sleep 60   # Ensure ARCH has finished

    nice /usr/local/bin/lzip -9q "$File"

    until scp "${File}.lz" "yourcompany.com:~oracle/arch-$ORACLE_SID"
    do
      sleep 5
    done

    CURLOG=$NEXTCURLOG
    File="$NextFile"
  done

  echo $CURLOG > ~oracle/.curlog-$ORACLE_SID

) 9>~oracle/.processing_logs-$ORACLE_SID

The above script can be executed manually for testing even while the inotify handler is running, as the flock protects it.
A standby server, or a DataGuard server in primitive standby mode, can apply the archived logs at regular intervals. The script below forces a 12-hour delay in log application for the recovery of dropped or damaged objects, so inotify cannot be easily used in this case -- cron is a more reasonable approach for delayed file processing, and a run every 20 minutes will keep the standby at the desired recovery point:
# cat ~oracle/archutils/delay-lock.sh

#!/bin/ksh93

(
  flock -n 9 || exit 1              # Critical section - only one process.

  WINDOW=43200                      # 12 hours
  LOG_DEST=~oracle/arch-$ORACLE_SID
  OLDLOG_DEST=$LOG_DEST-applied

  function fage { print $(( $(date +%s) - $(stat -c %Y "$1") ))
  } # File age in seconds - requires GNU extended date & stat

  cd $LOG_DEST

  of=$(ls -t | tail -1)             # Oldest file in directory

  [[ -z "$of" || $(fage "$of") -lt $WINDOW ]] && exit

  for x in $(ls -rt)                # Order by ascending file mtime
  do
    if [[ $(fage "$x") -ge $WINDOW ]]
    then
      y=$(basename $x .lz)          # lzip compression is optional
      [[ "$y" != "$x" ]] && /usr/local/bin/lzip -dkq "$x"

      $ORACLE_HOME/bin/sqlplus '/ as sysdba' > /dev/null 2>&1 <<-EOF
	recover standby database;
	$LOG_DEST/$y
	cancel
	quit
	EOF

      [[ "$y" != "$x" ]] && rm "$y"
      mv "$x" $OLDLOG_DEST
    fi
  done

) 9> ~oracle/.recovering-$ORACLE_SID

I've covered these specific examples here because they introduce tools to control concurrency, which is a common issue when using inotify, and they advance a few features that increase reliability and minimize storage requirements. Hopefully enthusiastic readers will introduce many improvements to these approaches.
The incron System

Lukas Jelinek is the author of the incron package, which allows users to specify tables of inotify events that are executed by the master incrond process. Despite the reference to "cron", the package does not schedule events at regular intervals -- it is a tool for filesystem events, and the cron reference is slightly misleading.
The incron package is available from EPEL . If you have installed the repository, you can load it with yum:
# yum install incron
Loaded plugins: langpacks, ulninfo
Resolving Dependencies
--> Running transaction check
---> Package incron.x86_64 0:0.5.10-8.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=================================================================
 Package     Arch        Version           Repository      Size
=================================================================
Installing:
 incron      x86_64      0.5.10-8.el7      epel            92 k

Transaction Summary
=================================================================
Install  1 Package

Total download size: 92 k
Installed size: 249 k
Is this ok [y/d/N]: y
Downloading packages:
incron-0.5.10-8.el7.x86_64.rpm                    |  92 kB  00:01
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : incron-0.5.10-8.el7.x86_64       1/1
  Verifying  : incron-0.5.10-8.el7.x86_64       1/1

Installed:
  incron.x86_64 0:0.5.10-8.el7

Complete!

On a systemd distribution with the appropriate service units, you can start and enable incron at boot with the following commands:
# systemctl start incrond
# systemctl enable incrond
Created symlink from /etc/systemd/system/multi-user.target.wants/incrond.service
to /usr/lib/systemd/system/incrond.service.

In the default configuration, any user can establish incron schedules. The incrontab format uses three fields:

<path> <mask> <command>

Below is an example entry that was set with the -e option:

$ incrontab -e    # vi session follows
$ incrontab -l
/tmp/ IN_ALL_EVENTS /home/luser/myincron.sh $@ $% $#

You can record a simple script and mark it with execute permission:
$ cat myincron.sh
#!/bin/sh

echo -e "path: $1 op: $2 \t file: $3" >> ~/op

$ chmod 755 myincron.sh
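The three wildcards passed to the script are incron's documented expansions: $@ is the watched path, $% is the event flags (textually), and $# is the name of the file involved; inside myincron.sh they arrive as $1, $2 and $3. A narrower table entry reacting only to completed writes might look like this (the directory and script name are hypothetical):

/var/tmp/incoming IN_CLOSE_WRITE /home/luser/process-upload.sh $@/$#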
Then, if you repeat the original /tmp file manipulations at the start of this article, the script will record the following output:

$ cat ~/op
path: /tmp/ op: IN_ATTRIB       file: hello
path: /tmp/ op: IN_CREATE       file: hello
path: /tmp/ op: IN_OPEN         file: hello
path: /tmp/ op: IN_CLOSE_WRITE  file: hello
path: /tmp/ op: IN_OPEN         file: passwd
path: /tmp/ op: IN_CLOSE_WRITE  file: passwd
path: /tmp/ op: IN_MODIFY       file: passwd
path: /tmp/ op: IN_CREATE       file: passwd
path: /tmp/ op: IN_DELETE       file: passwd
path: /tmp/ op: IN_CREATE       file: goodbye
path: /tmp/ op: IN_ATTRIB       file: goodbye
path: /tmp/ op: IN_OPEN         file: goodbye
path: /tmp/ op: IN_CLOSE_WRITE  file: goodbye
path: /tmp/ op: IN_DELETE       file: hello
path: /tmp/ op: IN_DELETE       file: goodbye

While the IN_CLOSE_WRITE event on a directory object is usually of greatest interest, most of the standard inotify events are available within incron, which also offers several unique amalgams:

$ man 5 incrontab | col -b | sed -n '/EVENT SYMBOLS/,/child process/p'

EVENT SYMBOLS

These basic event mask symbols are defined:

IN_ACCESS         File was accessed (read) (*)
IN_ATTRIB         Metadata changed (permissions, timestamps, extended
                  attributes, etc.) (*)
IN_CLOSE_WRITE    File opened for writing was closed (*)
IN_CLOSE_NOWRITE  File not opened for writing was closed (*)
IN_CREATE         File/directory created in watched directory (*)
IN_DELETE         File/directory deleted from watched directory (*)
IN_DELETE_SELF    Watched file/directory was itself deleted
IN_MODIFY         File was modified (*)
IN_MOVE_SELF      Watched file/directory was itself moved
IN_MOVED_FROM     File moved out of watched directory (*)
IN_MOVED_TO       File moved into watched directory (*)
IN_OPEN           File was opened (*)

When monitoring a directory, the events marked with an asterisk (*) above can occur for files in the directory, in which case the name field in the returned event data identifies the name of the file within the directory.

The IN_ALL_EVENTS symbol is defined as a bit mask of all of the above events. Two additional convenience symbols are IN_MOVE, which is a combination of IN_MOVED_FROM and IN_MOVED_TO, and IN_CLOSE, which combines IN_CLOSE_WRITE and IN_CLOSE_NOWRITE.

The following further symbols can be specified in the mask:

IN_DONT_FOLLOW    Don't dereference pathname if it is a symbolic link
IN_ONESHOT        Monitor pathname for only one event
IN_ONLYDIR        Only watch pathname if it is a directory

Additionally, there is a symbol which doesn't appear in the inotify symbol set. It is IN_NO_LOOP. This symbol disables monitoring events until the current one is completely handled (until its child process exits).

The incron system likely presents the most comprehensive interface to inotify of all the tools researched and listed here. Additional configuration options can be set in /etc/incron.conf to tweak incron's behavior for those who require a non-standard configuration.
Path Units under systemd

When your Linux installation is running systemd as PID 1, limited inotify functionality is available through "path units", as is discussed in a lighthearted article by Paul Brown at OCS-Mag.
The relevant manual page has useful information on the subject:
The relevant manual page has useful information on the subject:

$ man systemd.path | col -b | sed -n '/Internally,/,/systems./p'

   Internally, path units use the inotify(7) API to monitor file
   systems. Due to that, it suffers by the same limitations as inotify,
   and for example cannot be used to monitor files or directories
   changed by other machines on remote NFS file systems.

Note that when a systemd path unit spawns a shell script, the $HOME and tilde (~) operator for the owner's home directory may not be defined. Using the tilde operator to reference another user's home directory (for example, ~nobody/) does work, even when applied to the self-same user running the script. The Oracle script above was explicit and did not reference ~ without specifying the target user, so I'm using it as an example here.

Using inotify triggers with systemd path units requires two files. The first file specifies the filesystem location of interest:
$ cat /etc/systemd/system/oralog.path

[Unit]
Description=Oracle Archivelog Monitoring
Documentation=http://docs.yourserver.com

[Path]
PathChanged=/home/oracle/arch-orcl/

[Install]
WantedBy=multi-user.target
The PathChanged parameter above roughly corresponds to the close-write event used in my previous direct inotify calls. The full collection of inotify events is not (currently) supported by systemd -- it is limited to PathExists, PathChanged and PathModified, which are described in man systemd.path.

The second file is a service unit describing a program to be executed. It must have the same name, but a different extension, as the path unit:
$ cat /etc/systemd/system/oralog.service

[Unit]
Description=Oracle Archivelog Monitoring
Documentation=http://docs.yourserver.com

[Service]
Type=oneshot
Environment=ORACLE_SID=orcl
ExecStart=/bin/sh -c '/root/process_logs >> /tmp/plog.txt 2>&1'
The oneshot parameter above alerts systemd that the program that it forks is expected to exit and should not be respawned automatically -- the restarts are limited to triggers from the path unit. The above service configuration will provide the best options for logging -- divert the logs to /dev/null if they are not needed.

Use systemctl start on the path unit to begin monitoring -- a common error is using it on the service unit, which will directly run the handler only once. Enable the path unit if the monitoring should survive a reboot.

Although this limited functionality may be enough for some casual uses of inotify, it is a shame that the full functionality of inotifywait and incron is not represented here. Perhaps it will come in time.
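To put those activation steps in command form, a recap using the oralog unit names defined above:

# systemctl daemon-reload        # pick up the new unit files
# systemctl start oralog.path    # start the path unit, not the service
# systemctl enable oralog.path   # keep the watch across reboots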
Conclusion

Although the inotify tools are powerful, they do have limitations. To repeat them: inotify cannot monitor remote (NFS) filesystems; it cannot report the userid involved in a triggering event; it does not work with /proc or other pseudo-filesystems; mmap() operations do not trigger it; and the inotify queue can overflow, resulting in lost events, among other concerns.
Even with these weaknesses, the efficiency of inotify is superior to most other approaches for immediate notifications of filesystem activity. It also is quite flexible, and although the close-write directory trigger should suffice for most usage, it has ample tools for covering special use cases.
In any event, it is productive to replace polling activity with inotify watches, and system administrators should be liberal in educating the user community that the classic crontab is not an appropriate place to check for new files. Recalcitrant users should be confined to Ultrix on a VAX until they develop sufficient appreciation for modern tools and approaches, which should result in more efficient Linux systems and happier administrators.
Sidenote: Archiving /etc/passwd

Tracking changes to the password file involves many different types of inotify triggering events. The vipw utility commonly will make changes to a temporary file, then clobber the original with it. This can be seen when the inode number changes:

# ll -i /etc/passwd
199720973 -rw-r--r-- 1 root root 3928 Jul  7 12:24 /etc/passwd

# vipw
[ make changes ]
You are using shadow passwords on this system.
Would you like to edit /etc/shadow now [y/n]? n

# ll -i /etc/passwd
203784208 -rw-r--r-- 1 root root 3956 Jul  7 12:24 /etc/passwd

The destruction and replacement of /etc/passwd even occurs with setuid binaries called by unprivileged users:
$ ll -i /etc/passwd
203784196 -rw-r--r-- 1 root root 3928 Jun 29 14:55 /etc/passwd

$ chsh
Changing shell for fishecj.
Password:
New shell [/bin/bash]: /bin/csh
Shell changed.

$ ll -i /etc/passwd
199720970 -rw-r--r-- 1 root root 3927 Jul  7 12:23 /etc/passwd

For this reason, all inotify triggering events should be considered when tracking this file. If there is concern with an inotify queue overflow (in which events are lost), then the OPEN, ACCESS and CLOSE_NOWRITE,CLOSE triggers likely can be immediately ignored.

All other inotify events on /etc/passwd might run the following script to version the changes into an RCS archive and mail them to an administrator:
#!/bin/sh

# This script tracks changes to the /etc/passwd file from inotify.
# Uses RCS for archiving. Watch for UID zero.

[email protected]
TPDIR=~/track_passwd

cd $TPDIR

if diff -q /etc/passwd $TPDIR/passwd
then exit                               # they are the same
else sleep 5                            # let passwd settle
     diff /etc/passwd $TPDIR/passwd 2>&1 |   # they are DIFFERENT
     mail -s "/etc/passwd changes $(hostname -s)" "$PWMAILS"
     cp -f /etc/passwd $TPDIR           # copy for checkin

     # "SCCS, the source motel! Programs check in and never check out!"
     #  -- Ken Thompson

     rcs -q -l passwd                   # lock the archive
     ci -q -m_ passwd                   # check in new ver
     co -q passwd                       # drop the new copy
fi > /dev/null 2>&1
chfn
operation:-----Original Message----- From: root [mailto:[email protected]] Sent: Thursday, July 06, 2017 2:35 PM To: Fisher, Charles J. <[email protected]>; Subject: /etc/passwd changes myhost 57c57 < fishecj:x:123:456:Fisher, Charles J.:/home/fishecj:/bin/bash --- > fishecj:x:123:456:Fisher, Charles J.:/home/fishecj:/bin/cshFurther processing on the third column of /etc/passwd might detect UID zero (a root user) or other important user classes for emergency action. This might include a rollback of the file from RCS to /etc and/or SMS messages to security contacts. ______________________
Charles Fisher has an electrical engineering degree from the University of Iowa and works as a systems and database administrator for a Fortune 500 mining and manufacturing corporation.
Dec 09, 2017 | stackoverflow.com
That line defines what program will execute the given script. For sh, it should normally start with the #! characters, like so:

#!/bin/sh -e

The -e flag's long name is errexit; it causes the script to exit immediately on the first error.
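A minimal illustration (a hypothetical three-line script; false is used only because it is a command guaranteed to fail):

#!/bin/sh -e
false                   # the script exits here because of -e
echo "never reached"

The same behavior can also be enabled inside an already-running script with set -e.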
Dec 02, 2017 | www.cyberciti.biz
BASH Shell: How To Redirect stderr To stdout (redirect stderr to a File)

Q. How do I redirect stderr to stdout? How do I redirect stderr to a file?
A. Bash and other modern shells provide an I/O redirection facility. There are three standard files (standard streams) open by default:

[a] stdin – used to get input (keyboard), i.e. data going into a program.
[b] stdout – used to write information (screen).
[c] stderr – used to write error messages (screen).
Understanding I/O stream numbers

The Unix / Linux standard I/O streams with numbers:

Handle   Name     Description
0        stdin    Standard input
1        stdout   Standard output
2        stderr   Standard error

Redirecting the standard error stream to a file

The following will redirect program error messages to a file called error.log:

$ program-name 2> error.log
$ command1 2> error.log

Redirecting the standard error (stderr) and stdout to a file

Use the following syntax:

$ command-name &> file

OR

$ command > file-name 2>&1

Redirect stderr to stdout

Use the command as follows:

$ command-name 2>&1

Another useful example:

# find /usr/home -name .profile 2>&1 | more
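One detail worth remembering when combining these: redirections are processed left to right, so the relative order of > file and 2>&1 matters (file names here are illustrative):

$ ls nosuchfile > out.log 2>&1   # stdout and stderr both go to out.log
$ ls nosuchfile 2>&1 > out.log   # stderr goes to the terminal; only stdout is captured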
Nov 01, 2017 | sanctum.geek.nz
A more flexible method for defining custom commands for an interactive shell (or within a script) is to use a shell function. We could declare our ll function in a Bash startup file as a function instead of an alias like so:

# Shortcut to call ls(1) with the -l flag
ll() {
    command ls -l "$@"
}
Note the use of the command builtin here to specify that the ll function should invoke the program named ls, and not any function named ls. This is particularly important when writing a function wrapper around a command, to stop an infinite loop where the function calls itself indefinitely:

# Always add -q to invocations of gdb(1)
gdb() {
    command gdb -q "$@"
}

In both examples, note also the use of the "$@" expansion, to add to the final command line any arguments given to the function. We wrap it in double quotes to stop spaces and other shell metacharacters in the arguments from causing problems. This means that the ll command will work correctly if you were to pass it further options and/or one or more directories as arguments:

$ ll -a
$ ll ~/.config

Shell functions declared in this way are specified by POSIX for Bourne-style shells, so they should work in your shell of choice, including Bash, dash, Korn shell, and Zsh. They can also be used within scripts, allowing you to abstract away multiple instances of similar commands to improve the clarity of your script, in much the same way the basics of functions work in general-purpose programming languages.
Functions are a good and portable way to approach adding features to your interactive shell; written carefully, they even allow you to port features you might like from other shells into your shell of choice. I'm fond of taking commands I like from Korn shell or Zsh and implementing them in Bash or POSIX shell functions, such as Zsh's vared or its two-argument cd feature (a sketch of the latter appears below).

If you end up writing a lot of shell functions, you should consider putting them into separate configuration subfiles to keep your shell's primary startup file from becoming unmanageably large.
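As a small illustration of that kind of porting, here is a minimal Bash sketch of Zsh's two-argument cd, which substitutes the first argument with the second in the current path (this assumes Bash for the ${PWD/old/new} expansion, and adds no error handling beyond cd's own):

# cd old new -- replace 'old' with 'new' in $PWD and change there
cd() {
    if [ "$#" -eq 2 ]; then
        builtin cd "${PWD/$1/$2}"
    else
        builtin cd "$@"
    fi
}

With this defined, running cd share lib from /usr/local/share would move you to /usr/local/lib; note the use of builtin to stop the wrapper calling itself, in the same spirit as the command examples above.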
Examples from the author

You can take a look at some of the shell functions I have defined here that are useful to me in general shell usage; a lot of these amount to implementing convenience features that I wish my shell had, especially for quick directory navigation, or adding options to commands.

Variables in shell functions

You can manipulate variables within shell functions, too:

# Print the filename of a path, stripping off its leading path and
# extension
fn() {
    name=$1
    name=${name##*/}
    name=${name%.*}
    printf '%s\n' "$name"
}
This works fine, but the catch is that after the function is done, the value for name will still be defined in the shell, and will overwrite whatever was in there previously:

$ printf '%s\n' "$name"
foobar
$ fn /home/you/Task_List.doc
Task_List
$ printf '%s\n' "$name"
Task_List

This may be desirable if you actually want the function to change some aspect of your current shell session, such as managing variables or changing the working directory. If you don't want that, you will probably want to find some means of avoiding name collisions in your variables.
If your function is only for use with a shell that provides the local (Bash) or typeset (Ksh) features, you can declare the variable as local to the function to remove its global scope, preventing this from happening:

# Bash-like
fn() {
    local name
    name=$1
    name=${name##*/}
    name=${name%.*}
    printf '%s\n' "$name"
}

# Ksh-like
# Note different syntax for first line
function fn {
    typeset name
    name=$1
    name=${name##*/}
    name=${name%.*}
    printf '%s\n' "$name"
}

If you're using a shell that lacks these features, or you want to aim for POSIX compatibility, things are a little trickier, since local function variables aren't specified by the standard. One option is to use a subshell, so that the variables are only defined for the duration of the function:
# POSIX; note we're using plain parentheses rather than curly brackets,
# for a subshell
fn() (
    name=$1
    name=${name##*/}
    name=${name%.*}
    printf '%s\n' "$name"
)

# POSIX; alternative approach using command substitution:
fn() {
    printf '%s\n' "$(
        name=$1
        name=${name##*/}
        name=${name%.*}
        printf %s "$name"
    )"
}

This subshell method also allows you to change directory with cd within a function without changing the working directory of the user's interactive shell, or to change shell options with set or Bash options with shopt only temporarily for the purposes of the function.
$1
,$2
) withset
, since they are local to the function call too:# POSIX; using positional parameters fn() { set -- "${1##*/}" set -- "${1%.*}" printf '%s\n' "$1" }These methods work well, and can sometimes even be combined, but they're awkward to write, and harder to read than the modern shell versions. If you only need your functions to work with your modern shell, I recommend just using
Keeping functions for laterlocal
ortypeset
. The Bash Guide on Greg's Wiki has a very thorough breakdown of functions in Bash, if you want to read about this and other aspects of functions in more detail.As you get comfortable with defining and using functions during an interactive session, you might define them in ad-hoc ways on the command line for calling in a loop or some other similar circumstance, just to solve a task in that moment.
As an example, I recently made an ad-hoc function called
monit
to run a set of commands for its hostname argument that together established different types of monitoring system checks, using an existing script callednmfs
:$ monit() { nmfs "$1" Ping Y ; nmfs "$1" HTTP Y ; nmfs "$1" SNMP Y ; } $ for host in webhost{1..10} ; do > monit "$host" > doneAfter that task was done, I realized I was likely to use the
monit
command interactively again, so I decided to keep it. Shell functions only last as long as the current shell, so if you want to make them permanent, you need to store their definitions somewhere in your startup files. If you're using Bash, and you're content to just add things to the end of your~/.bashrc
file, you could just do something like this:$ declare -f monit >> ~/.bashrcThat would append the existing definition of
monit
in parseable form to your~/.bashrc
file, and themonit
function would then be loaded and available to you for future interactive sessions. Later on, I ended up convertingmonit
into a shell script, as its use wasn't limited to just an interactive shell.If you want a more robust approach to keeping functions like this for Bash permanently, I wrote a tool called Bashkeep , which allows you to quickly store functions and variables defined in your current shell into separate and appropriately-named files, including viewing and managing the list of names conveniently:
$ keep monit $ keep monit $ ls ~/.bashkeep.d monit.bash $ keep -d monit
February 27, 2012 sanctum.geek.nz
For tools like diff that work with multiple files as parameters, it can be useful to work with not just files on the filesystem, but also potentially with the output of arbitrary commands. Say, for example, you wanted to compare the output of ps and ps -e with diff -u. An obvious way to do this is to write files to compare the output:

$ ps > ps.out
$ ps -e > pse.out
$ diff -u ps.out pse.out

This works just fine, but Bash provides a shortcut in the form of process substitution, allowing you to treat the standard output of commands as files. This is done with the <() and >() operators. In our case, we want to direct the standard output of two commands into place as files:

$ diff -u <(ps) <(ps -e)

This is functionally equivalent, except it's a little tidier because it doesn't leave files lying around. This is also very handy for elegantly comparing files across servers, using ssh:

$ diff -u .bashrc <(ssh remote cat .bashrc)

Conversely, you can also use the >() operator to direct from a filename context to the standard input of a command. This is handy for setting up in-place filters for things like logs. In the following example, I'm making a call to rsync, specifying that it should make a log of its actions in log.txt, but filter it through grep -vF .tmp first to remove anything matching the fixed string .tmp:

$ rsync -arv --log-file=>(grep -vF .tmp >log.txt) src/ host::dst/

Combined with tee, this syntax is a way of simulating multiple filters for a stdout stream, transforming output from a command in as many ways as you see fit:

$ ps -ef | tee >(awk '$1=="tom"' >toms-procs.txt) \
               >(awk '$1=="root"' >roots-procs.txt) \
               >(awk '$1!="httpd"' >not-apache-procs.txt) \
               >(awk 'NR>1{print $1}' >pids-only.txt)

In general, the idea is that wherever on the command line you could specify a file to be read from or written to, you can instead use this syntax to make an implicit named pipe for the text stream.
Thanks to Reddit user Rhomboid for pointing out an incorrect assertion about this syntax necessarily abstracting mkfifo calls, which I've since removed.
Mar 05, 2012 | sanctum.geek.nz
With judicious use of tricks like pipes, redirects, and process substitution in modern shells, it's very often possible to avoid using temporary files, doing everything inline and keeping things quite neat. However, when manipulating a lot of data into various formats, you do find yourself occasionally needing a temporary file, just to hold data temporarily.
A common way to deal with this is to create a temporary file in your home directory, with some arbitrary name, something like test or working:

$ ps -ef >~/test

If you want to save the information indefinitely for later use, this makes sense, although it would be better to give it a slightly more instructive name than just test.

If you really only need the data temporarily, however, you're much better off using the temporary files directory. This is usually /tmp, but for good practice's sake it's better to check the value of TMPDIR first, and only use /tmp as a default:

$ ps -ef >"${TMPDIR:-/tmp}"/test

This is getting better, but there is still a significant problem: there's no built-in check that the test file doesn't already exist, perhaps being used by some other user or program, particularly another running instance of the same script.

To that end, we have the mktemp program, which creates an empty temporary file in the appropriate directory for you without overwriting anything, and prints the filename it created. This allows you to use the file inline in both shell scripts and one-liners, and is much safer than specifying hardcoded paths:

$ mktemp
/tmp/tmp.yezXn0evDf
$ procsfile=$(mktemp)
$ printf '%s\n' "$procsfile"
/tmp/tmp.9rBjzWYaSU
$ ps -ef >"$procsfile"

If you're going to create several such files for related purposes, you could also create a directory in which to put them, using the -d option:

$ procsdir=$(mktemp -d)
$ printf '%s\n' "$procsdir"
/tmp/tmp.HMAhM2RBSO
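A companion habit worth adopting in scripts that use mktemp (a sketch, not from the original article) is to remove the temporary file automatically with a trap, which fires on normal exit as well as on most terminating signals:

#!/bin/bash
tmpfile=$(mktemp) || exit 1
trap 'rm -f "$tmpfile"' EXIT    # clean up however the script ends

ps -ef >"$tmpfile"
# ... work with "$tmpfile" ...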
On GNU/Linux systems, files of a sufficient age in TMPDIR are cleared on boot (controlled in /etc/default/rcS on Debian-derived systems, /etc/cron.daily/tmpwatch on Red Hat ones), making /tmp useful as a general scratchpad as well as for a kind of relatively reliable inter-process communication, without cluttering up users' home directories.

In some cases, there may be additional advantages in using /tmp for its designed purpose, as some administrators choose to mount it as a tmpfs filesystem, so it operates in RAM and works very quickly. It's also common practice to set the noexec flag on the mount to prevent malicious users from executing any code they manage to find or save in the directory.
Jan 24, 2012 | sanctum.geek.nz
One of my favourite technical presentations I've read online has been Hal Pomeranz's Unix Command-Line Kung Fu , a catalogue of shortcuts and efficient methods of doing very clever things with the Bash shell. None of these are grand arcane secrets, but they're things that are often forgotten in the course of daily admin work, when you find yourself typing something you needn't, or pressing up repeatedly to find something you wrote for which you could simply search your command history.
I highly recommend reading the whole thing, as I think even the most experienced shell users will find there are useful tidbits in there that would make their lives easier and their time with the shell more productive, beyond simpler things like tab completion.
Here, I'll recap two of the things I thought were the most simple and useful items in the presentation for general shell usage, and see if I can add a little value to them with reference to the Bash manual.
History with Ctrl+R

For many shell users, finding a command in history means either pressing the up arrow key repeatedly, or perhaps piping a history call through grep. It turns out there's a much nicer way to do this, using Bash's built-in history searching functionality: if you press Ctrl+R and start typing a search pattern, the most recent command matching that pattern will automatically be inserted on your current line, at which point you can adapt it as you need, or simply press Enter to run it again. You can keep pressing Ctrl+R to move further back in your history to the next-most recent match. On my shell, if I search through my history for git, I can pull up what I typed for a previous commit:

(reverse-i-search)`git': git commit -am "Pulled up-to-date colors."

This functionality isn't actually exclusive to Bash; you can establish a history search function in quite a few tools that use GNU Readline, including the MySQL client command line.
You can search forward through history in the same way with Ctrl+S, but it's likely you'll have to fix up a couple of terminal annoyances first.
Additionally, if like me you're a Vim user and you don't really like having to reach for the arrow keys, or if you're on a terminal where those keys are broken for whatever reason, you can browse back and forth within your command history with Ctrl+P (previous) and Ctrl+N (next). These are just a few of the Emacs-style shortcuts that GNU Readline provides; check here for a more complete list .
Repeating commands with !!

The last command you ran in Bash can be abbreviated on the next line with two exclamation marks:

$ echo "Testing."
Testing.
$ !!
Testing.

You can use this to simply repeat a command over and over again (although for that you really should be using watch), but more interestingly, it turns out this is very handy for building complex pipes in stages. Suppose you were building a pipeline to digest some data generated from a program like netstat, perhaps to determine the top 10 IP addresses that are holding open the most connections to a server. You might be able to build a pipeline like this:

# netstat -ant
# !! | awk '{print $5}'
# !! | sort
# !! | uniq -c
# !! | sort -rn
# !! | sed 10q

Similarly, you can repeat the last argument from the previous command line using !$, which is useful if you're doing a set of operations on one file, such as checking it out via RCS, editing it, and checking it back in:

$ co -l file.txt
$ vim !$
$ ci -u !$

Or if you happen to want to work on a set of arguments, you can repeat all of the arguments from the previous command using !*:

$ touch a.txt b.txt c.txt
$ rm !*

When you remember to use these three together, they can save you a lot of typing, and will really increase your accuracy because you won't be at risk of mistyping any of the commands or arguments. Naturally, however, it pays to be careful what you're running through rm!
Mar 16, 2012 | sanctum.geek.nz
When you have some spare time, something instructive to do that can help fill gaps in your Unix knowledge and to get a better idea of the programs installed on your system and what they can do is a simple
whatis
call, run over all the executable files in your/bin
and/usr/bin
directories.This will give you a one-line summary of the file's function if available from
man
pages.tom@conan:/bin$ whatis * bash (1) - GNU Bourne-Again SHell bunzip2 (1) - a block-sorting file compressor, v1.0.4 busybox (1) - The Swiss Army Knife of Embedded Linux bzcat (1) - decompresses files to stdout ... tom@conan:/usr/bin$ whatis * [ (1) - check file types and compare values 2to3 (1) - Python2 to Python3 converter 2to3-2.7 (1) - Python2 to Python3 converter 411toppm (1) - convert Sony Mavica .411 image to ppm ...It also works on many of the files in other directories, such as
/etc
:

tom@conan:/etc$ whatis *
acpi (1)             - Shows battery status and other ACPI information
adduser.conf (5)     - configuration file for adduser(8) and addgroup(8)
adjtime (3)          - correct the time to synchronize the system clock
aliases (5)          - Postfix local alias database format
...

Because packages often install more than one binary and you're only in the habit of using one or two of them, this process can tell you about programs on your system that you may have missed, particularly standard tools that solve common problems. As an example, I first learned about
watch
this way, having used a clunky solution with for
loops with sleep
calls to do the same thing many times before.
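If you'd like to survey your whole $PATH this way rather than one directory at a time, a minimal sketch along these lines could work (assuming none of the directories in your PATH contain spaces):

for dir in ${PATH//:/ }; do
    ( cd "$dir" 2>/dev/null && whatis * 2>/dev/null )
done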
Oct 28, 2013 | sanctum.geek.nz
In Bash scripting (and shell scripting in general), we often want to check the exit value of a command to decide an action to take after it completes, likely for the purpose of error handling. For example, to determine whether a particular regular expression
regex
was present somewhere in a file options
, we might apply grep(1)
with its POSIX -q
option to suppress output and just use the exit value:

grep -q regex options

An approach sometimes taken is then to test the exit value with the
$?
parameter, using if
to check if it's non-zero, which is not very elegant and a bit hard to read:

# Bad practice
grep -q regex options
if (($? > 0)); then
    printf '%s\n' 'myscript: Pattern not found!' >&2
    exit 1
fi

Because the
if
construct by design tests the exit value of commands, it's better to test the command directly, making the expansion of $?
unnecessary:

# Better
if grep -q regex options; then
    # Do nothing
    :
else
    printf '%s\n' 'myscript: Pattern not found!' >&2
    exit 1
fi

We can precede the command to be tested with
!
to negate the test, preventing us from having to use else
at all:

# Best
if ! grep -q regex options; then
    printf '%s\n' 'myscript: Pattern not found!' >&2
    exit 1
fi

An alternative syntax is to use
&&
and ||
to perform if
and else
tests with grouped commands between braces, but these tend to be harder to read:

# Alternative
grep -q regex options || {
    printf '%s\n' 'myscript: Pattern not found!' >&2
    exit 1
}

With this syntax, the two commands in the block are only executed if the
grep(1)
call exits with a non-zero status. We can apply &&
instead to execute commands if it does exit with zero.

That syntax can be convenient for quickly short-circuiting failures in scripts, for example due to nonexistent commands, particularly if the command being tested already outputs its own error message. This therefore cuts the script off if the given command fails, likely due to
ffmpeg(1)
being unavailable on the system:

hash ffmpeg || exit 1

Note that the braces for a grouped command are not needed here, as there's only one command to be run in case of failure, the
exit
call.

Calls to
cd
are another good use case here, as having a script continue in the wrong directory after a call to cd
fails could have really nasty effects:

cd wherever || exit 1

In general, you'll probably only want to test
$?
when you have specific non-zero error conditions to catch. For example, if we were using the --max-delete
option for rsync(1)
, we could check a call's return value to see whether rsync(1)
hit the threshold for deleted file count and write a message to a logfile appropriately:

rsync --archive --delete --max-delete=5 source destination
if (($? == 25)); then
    printf '%s\n' 'Deletion limit was reached' >"$logfile"
fi

It may be tempting to use the
errexit
feature in the hopes of stopping a script as soon as it encounters any error, but there are some problems with its usage that make it a bit error-prone. It's generally more straightforward to simply write your own error handling using the methods above.

For a really thorough breakdown of dealing with conditionals in Bash, take a look at the relevant chapter of the Bash Guide .
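As a final note on these techniques: if you find yourself repeating the same printf-and-exit pair in several places, it can be tidier to wrap it in a small helper function. A minimal sketch (the name die is just a common convention, not a builtin):

die() {
    printf '%s\n' "myscript: $1" >&2
    exit "${2:-1}"
}

grep -q regex options || die 'Pattern not found!'
cd wherever || die 'Could not change directory!'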
Jan 30, 2015 | sanctum.geek.nz
Large shell startup scripts (
.bashrc
,.profile
) over about fifty lines or so with a lot of options, aliases, custom functions, and similar tweaks can get cumbersome to manage over time, and if you keep your dotfiles under version control it's not terribly helpful to see large sets of commits just editing the one file when it could be more instructive if broken up into files by section.Given that shell configuration is just shell code, we can apply the
source
builtin (or the.
builtin for POSIX sh
) to load several files at the end of a .bashrc
, for example:

source ~/.bashrc.options
source ~/.bashrc.aliases
source ~/.bashrc.functions

This is a better approach, but it still binds us into using those filenames; we still have to edit the
~/.bashrc
file if we want to rename them, or remove them, or add new ones.

Fortunately, UNIX-like systems have a common convention for this, the
.d
directory suffix, in which sections of configuration can be stored to be read by a main configuration file dynamically. In our case, we can create a new directory ~/.bashrc.d
:

$ ls ~/.bashrc.d
options.bash  aliases.bash  functions.bash

With a slightly more advanced snippet at the end of
~/.bashrc
, we can then load every file with the suffix .bash
in this directory:

# Load any supplementary scripts
for config in "$HOME"/.bashrc.d/*.bash ; do
    source "$config"
done
unset -v config

Note that we unset the
config
variable after we're done; otherwise it'll be left in the namespace of our shell where we don't need it. You may also wish to check for the existence of the ~/.bashrc.d
directory, check there's at least one matching file inside it, or check that the file is readable before attempting to source it, depending on your preference.The same method can be applied with
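A minimal sketch of the loop with those checks added might look like this (same directory layout as above; adjust to taste):

if [[ -d $HOME/.bashrc.d ]]; then
    for config in "$HOME"/.bashrc.d/*.bash; do
        [[ -r $config ]] && source "$config"
    done
    unset -v config
fi

The readability test also quietly covers the case where the glob matches nothing and is left unexpanded.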
.profile
to load all scripts with the suffix .sh
in ~/.profile.d
, if we want to write in POSIX sh
, with some slightly different syntax:

# Load any supplementary scripts
for config in "$HOME"/.profile.d/*.sh ; do
    . "$config"
done
unset -v config

Another advantage of this method is that if you have your dotfiles under version control, you can arrange to add extra snippets on a per-machine basis unversioned, without having to update your
.bashrc
file.

Here's my implementation of the above method, for both .bashrc
and .profile
. Thanks to commenter oylenshpeegul for correcting the syntax of the loops.
Feb 21, 2012 | sanctum.geek.nz
By default, the Bash shell keeps the history of your most recent session in the
.bash_history
file, and the commands you've issued in your current session are also available with a history
call. These defaults are useful for keeping track of what you've been up to in the shell on any given machine, but with disks much larger and faster than they were when Bash was designed, a little tweaking in your .bashrc
file can record history more permanently, consistently, and usefully.

Append history instead of rewriting it
You should start by setting the
histappend
option, which will mean that when you close a session, your history will be appended to the .bash_history
file rather than overwriting what's in there:

shopt -s histappend

Allow a larger history file
The default maximum number of commands saved into the
.bash_history
file is a rather meager 500. If you want to keep history further back than a few weeks or so, you may as well bump this up by explicitly setting $HISTSIZE
to a much larger number in your .bashrc
. We can do the same thing with the $HISTFILESIZE
variable:

HISTFILESIZE=1000000
HISTSIZE=1000000

The
man
page for Bash says that HISTFILESIZE
can be unset
to stop truncation entirely, but unfortunately this doesn't work in .bashrc
files due to the order in which variables are set; it's therefore more straightforward to simply set it to a very large number. If you're on a machine with resource constraints, it might be a good idea to occasionally archive old .bash_history
files to speed up login and reduce memory footprint.

Don't store specific lines
You can prevent commands that start with a space from going into history by setting
$HISTCONTROL
to ignorespace
. You can also ignore duplicate commands, for example repeated du
calls to watch a file grow, by adding ignoredups
. There's a shorthand to set both in ignoreboth
:

HISTCONTROL=ignoreboth

You might also want to remove the use of certain commands from your history, whether for privacy or readability reasons. This can be done with the
$HISTIGNORE
variable. It's common to use this to exclude ls
calls, job control builtins like bg
and fg
, and calls to history
itself:

HISTIGNORE='ls:bg:fg:history'

Record timestamps
If you set
$HISTTIMEFORMAT
to something useful, Bash will record the timestamp of each command in its history. In this variable you can specify the format in which you want this timestamp displayed when viewed with history
. I find the full date and time to be useful, because it can be sorted easily and works well with tools like cut
and awk
:

HISTTIMEFORMAT='%F %T '

Use one command per line
To make your
.bash_history
file a little easier to parse, you can force commands that you entered on more than one line to be adjusted to fit on only one with the cmdhist
option:

shopt -s cmdhist

Store history immediately
By default, Bash only records a session to the
.bash_history
file on disk when the session terminates. This means that if you crash or your session terminates improperly, you lose the history up to that point. You can fix this by recording each line of history as you issue it, through the $PROMPT_COMMAND
variable:

PROMPT_COMMAND='history -a'
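Putting all of the above together, the complete block for your .bashrc would look something like this:

shopt -s histappend cmdhist
HISTFILESIZE=1000000
HISTSIZE=1000000
HISTCONTROL=ignoreboth
HISTIGNORE='ls:bg:fg:history'
HISTTIMEFORMAT='%F %T '
PROMPT_COMMAND='history -a'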
Aug 16, 2012 | sanctum.geek.nz
Setting the Bash option
histexpand
allows some convenient typing shortcuts using Bash history expansion . The option can be set with either of these:$ set -H $ set -o histexpandIt's likely that this option is already set for all interactive shells, as it's on by default. The manual,
man bash
, describes these features as follows:-H Enable ! style history substitution. This option is on by default when the shell is interactive.You may have come across this before, perhaps to your annoyance, in the following error message that comes up whenever
!
is used in a double-quoted string, or without being escaped with a backslash:$ echo "Hi, this is Tom!" bash: !": event not foundIf you don't want the feature and thereby make
!
into a normal character, it can be disabled with either of these:$ set +H $ set +o histexpandHistory expansion is actually a very old feature of shells, having been available in
csh
before Bash usage became common.This article is a good followup to Better Bash history , which among other things explains how to include dates and times in
Basic history expansionhistory
output, as these examples do.Perhaps the best known and most useful of these expansions is using
!!
to refer to the previous command. This allows repeating commands quickly, perhaps to monitor the progress of a long process, such as disk space being freed while deleting a large file:$ rm big_file & [1] 23608 $ du -sh . 3.9G . $ !! du -sh . 3.3G .It can also be useful to specify the full filesystem path to programs that aren't in your
$PATH
:$ hdparm -bash: hdparm: command not found $ /sbin/!! /sbin/hdparmIn each case, note that the command itself is printed as expanded, and then run to print the output on the following line.
History by absolute index
However,
!!
is actually a specific example of a more general form of history expansion. For example, you can supply the history item number of a specific command to repeat it, after looking it up withhistory
:$ history | grep expand 3951 2012-08-16 15:58:53 set -o histexpand $ !3951 set -o histexpandYou needn't enter the
!3951
on a line by itself; it can be included as any part of the command, for example to add a prefix likesudo
:$ sudo !3850If you include the escape string
\!
as part of your Bash prompt , you can include the current command number in the prompt before the command, making repeating commands by index a lot easier as long as they're still visible on the screen.

History by relative index
It's also possible to refer to commands relative to the current command. To substitute the second-to-last command, we can type
!-2
. For example, to check whether truncating a file withsed
worked correctly:$ wc -l bigfile.txt 267 bigfile.txt $ printf '%s\n' '11,$d' w | ed -s bigfile.txt $ !-2 wc -l bigfile.txt 10 bigfile.txtThis works further back into history, with
!-3
,!-4
, and so on.

Expanding for historical arguments
In each of the above cases, we're substituting for the whole command line. There are also ways to get specific tokens, or words , from the command if we want that. To get the first argument of a particular command in the history, use the
!^
token:$ touch a.txt b.txt c.txt $ ls !^ ls a.txt a.txtTo get the last argument, add
!$
:$ touch a.txt b.txt c.txt $ ls !$ ls c.txt c.txtTo get all arguments (but not the command itself), use
!*
:$ touch a.txt b.txt c.txt $ ls !* ls a.txt b.txt c.txt a.txt b.txt c.txtThis last one is particularly handy when performing several operations on a group of files; we could run
du
andwc
over them to get their size and character count, and then perhaps decide to delete them based on the output:$ du a.txt b.txt c.txt 4164 a.txt 5184 b.txt 8356 c.txt $ wc !* wc a.txt b.txt c.txt 16689 94038 4250112 a.txt 20749 117100 5294592 b.txt 33190 188557 8539136 c.txt 70628 399695 18083840 total $ rm !* rm a.txt b.txt c.txtThese work not just for the preceding command in history, but also absolute and relative command numbers:
$ history 3 3989 2012-08-16 16:30:59 wc -l b.txt 3990 2012-08-16 16:31:05 du -sh c.txt 3991 2012-08-16 16:31:12 history 3 $ echo !3989^ echo -l -l $ echo !3990$ echo c.txt c.txt $ echo !-1* echo c.txt c.txtMore generally, you can use the syntax
!n:w
to refer to any specific argument in a history item by number. In this case, the first word, usually a command or builtin, is word0
:$ history | grep bash 4073 2012-08-16 20:24:53 man bash $ !4073:0 man What manual page do you want? $ !4073:1 bashYou can even select ranges of words by separating their indices with a hyphen:
$ history | grep apt-get
 3663  2012-08-15 17:01:30 sudo apt-get install gnome
$ !3663:0-1 purge !3663:3
sudo apt-get purge gnome

You can include ^
and $
as start and endpoints for these ranges, too. 3*
is a shorthand for 3-$
, meaning "all arguments from the third to the last."

Expanding history by string
You can also refer to a previous command in the history that starts with a specific string with the syntax
!string
:$ !echo echo c.txt c.txt $ !history history 3 4011 2012-08-16 16:38:28 rm a.txt b.txt c.txt 4012 2012-08-16 16:42:48 echo c.txt 4013 2012-08-16 16:42:51 history 3If you want to match any part of the command line, not just the start, you can use
!?string?
:

$ !?bash?
man bash

Be careful when using these, if you use them at all. By default it will run the most recent command matching the string immediately, with no prompting, so it might be a problem if it doesn't match the command you expect.

Checking history expansions before running
If you're paranoid about this, Bash allows you to audit the command as expanded before you enter it, with the
histverify
option:$ shopt -s histverify $ !rm $ rm a.txt b.txt c.txtThis option works for any history expansion, and may be a good choice for more cautious administrators. It's a good thing to add to one's
.bashrc
if so.If you don't need this set all the time, but you do have reservations at some point about running a history command, you can arrange to print the command without running it by adding a
:p
suffix:

$ !rm:p
rm important-file

In this instance, the command was expanded, but thankfully not actually run.

Substituting strings in history expansions
To get really in-depth, you can also perform substitutions on arbitrary commands from the history with
!!:gs/pattern/replacement/
. This is getting pretty baroque even for Bash, but it's possible you may find it useful at some point:$ !!:gs/txt/mp3/ rm a.mp3 b.mp3 c.mp3If you only want to replace the first occurrence, you can omit the
g
:

$ !!:s/txt/mp3/
rm a.mp3 b.txt c.txt

Stripping leading directories or trailing files
If you want to chop a filename off a long argument to work with the directory, you can do this by adding an :h
suffix, kind of like a dirname
call:

$ du -sh /home/tom/work/doc.txt
$ cd !$:h
cd /home/tom/work

To do the opposite, like a
basename
call, use :t
:

$ ls /home/tom/work/doc.txt
$ document=!$:t
document=doc.txt

Stripping extensions or base names
A bit more esoteric, but still possibly useful; to strip a file's extension, use
:r
:$ vi /home/tom/work/doc.txt $ stripext=!$:r stripext=/home/tom/work/docTo do the opposite, to get only the extension, use
:e
:

$ vi /home/tom/work/doc.txt
$ extonly=!$:e
extonly=.txt

Quoting history
If you're performing substitution not to execute a command or fragment but to use it as a string, it's likely you'll want to quote it. For example, if you've just found through experiment and trial and error an ideal
ffmpeg
command line to accomplish some task, you might want to save it for later use by writing it to a script:$ ffmpeg -f alsa -ac 2 -i hw:0,0 -f x11grab -r 30 -s 1600x900 \ > -i :0.0+1600,0 -acodec pcm_s16le -vcodec libx264 -preset ultrafast \ > -crf 0 -threads 0 "$(date +%Y%m%d%H%M%S)".mkvTo make sure all the escaping is done correctly, you can write the command into the file with the
:q
modifier:$ echo '#!/usr/bin/env bash' >ffmpeg.sh $ echo !ffmpeg:q >>ffmpeg.shIn this case, this will prevent Bash from executing the command expansion
"$(date ... )"
, instead writing it literally to the file as desired. If you build a lot of complex commands interactively that you later write to scripts once completed, this feature is really helpful and saves a lot of cutting and pasting.Thanks to commenter Mihai Maruseac for pointing out a bug in the examples.
Nov 07, 2014 | sanctum.geek.nz
The common default of some variant of
\h:\w\$
for a Bash promptPS1
string includes the\w
escape character, so that the user's current working directory appears in the prompt, but with$HOME
shortened to a tilde:tom@sanctum:~$ tom@sanctum:~/Documents$ tom@sanctum:/usr/local/nagios$This is normally very helpful, particularly if you leave your shell for a time and forget where you are, though of course you can always call the
pwd
shell builtin. However it can get annoying for very deep directory hierarchies, particularly if you're using a smaller terminal window:tom@sanctum:/chroot/apache/usr/local/perl/app-library/lib/App/Library/Class:~$If you're using Bash version 4.0 or above (
bash --version
), you can save a bit of terminal space by setting thePROMPT_DIRTRIM
variable for the shell. This limits the length of the tail end of the\w
and\W
expansions to that number of path elements:tom@sanctum:/chroot/apache/usr/local/app-library/lib/App/Library/Class$ PROMPT_DIRTRIM=3 tom@sanctum:.../App/Library/Class$This is a good thing to include in your
~/.bashrc
file if you often find yourself deep in directory trees where the upper end of the hierarchy isn't of immediate interest to you. You can remove the effect again by unsetting the variable:tom@sanctum:.../App/Library/Class$ unset PROMPT_DIRTRIM tom@sanctum:/chroot/apache/usr/local/app-library/lib/App/Library/Class$
Oct 25, 2017 | linuxconfig.org
Trap syntax is very simple and easy to understand: first we must call the trap builtin, followed by the action(s) to be executed, then we must specify the signal(s) we want to react to:
trap [-lp] [[arg] sigspec]Let's see what the possibletrap
options are for.When used with the
-l
flag, the trap command will just display a list of signals associated with their numbers. It's the same output you can obtain by running the kill -l
command:

$ trap -l
 1) SIGHUP       2) SIGINT       3) SIGQUIT      4) SIGILL       5) SIGTRAP
 6) SIGABRT      7) SIGBUS       8) SIGFPE       9) SIGKILL     10) SIGUSR1
11) SIGSEGV     12) SIGUSR2     13) SIGPIPE     14) SIGALRM     15) SIGTERM
16) SIGSTKFLT   17) SIGCHLD     18) SIGCONT     19) SIGSTOP     20) SIGTSTP
21) SIGTTIN     22) SIGTTOU     23) SIGURG      24) SIGXCPU     25) SIGXFSZ
26) SIGVTALRM   27) SIGPROF     28) SIGWINCH    29) SIGIO       30) SIGPWR
31) SIGSYS      34) SIGRTMIN    35) SIGRTMIN+1  36) SIGRTMIN+2  37) SIGRTMIN+3
38) SIGRTMIN+4  39) SIGRTMIN+5  40) SIGRTMIN+6  41) SIGRTMIN+7  42) SIGRTMIN+8
43) SIGRTMIN+9  44) SIGRTMIN+10 45) SIGRTMIN+11 46) SIGRTMIN+12 47) SIGRTMIN+13
48) SIGRTMIN+14 49) SIGRTMIN+15 50) SIGRTMAX-14 51) SIGRTMAX-13 52) SIGRTMAX-12
53) SIGRTMAX-11 54) SIGRTMAX-10 55) SIGRTMAX-9  56) SIGRTMAX-8  57) SIGRTMAX-7
58) SIGRTMAX-6  59) SIGRTMAX-5  60) SIGRTMAX-4  61) SIGRTMAX-3  62) SIGRTMAX-2
63) SIGRTMAX-1  64) SIGRTMAX

It's really important to note that a script can react only to signals that it is allowed to handle: the SIGKILL
and SIGSTOP
signals cannot be caught, blocked or ignored. Apart from signals, traps can also react to
pseudo-signals
such as EXIT, ERR or DEBUG, but we will see them in detail later. For now just remember that a signal can be specified either by its number or by its name, even without the SIG
prefix.

About the
-p
option now. This option only makes sense when a command is not provided (otherwise it will produce an error). When trap is used with it, a list of the previously set traps will be displayed. If a signal name or number is specified, only the trap set for that specific signal will be displayed; otherwise no distinction is made, and all the traps will be displayed:

$ trap 'echo "SIGINT caught!"' SIGINT

We set a trap to catch the SIGINT signal: it will just display the "SIGINT caught!" message onscreen when the given signal is received by the shell. If we now use trap with the -p option, it will display the trap we just defined:

$ trap -p
trap -- 'echo "SIGINT caught!"' SIGINT

By the way, the trap is now "active", so if we send a SIGINT signal, either using the kill command or with the CTRL-c shortcut, the associated command in the trap will be executed (^C is just printed because of the key combination):

^CSIGINT caught!

Trap in action
We now will write a simple script to show trap in action; here it is:

#!/usr/bin/env bash
#
# A simple script to demonstrate how trap works
#
set -e
set -u
set -o pipefail

trap 'echo "signal caught, cleaning..."; rm -i linux_tarball.tar.xz' SIGINT SIGTERM

echo "Downloading tarball..."
wget -O linux_tarball.tar.xz https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-4.13.5.tar.xz &> /dev/null

The above script just tries to download the latest linux kernel tarball into the directory from which it is launched, using wget
. During the task, if the SIGINT or SIGTERM signals are received (notice how you can specify more than one signal on the same line), the partially downloaded file will be deleted.

In this case there are actually two commands: the first is the echo
which prints the message onscreen, and the second is the actual rm
command (we provided the -i option to it, so it will ask for user confirmation before removing), and they are separated by a semicolon. Instead of specifying commands this way, you can also call functions: this would give you more re-usability. Notice that if you don't provide any command, the signal(s) will just be ignored!

This is the output of the script above when it receives a SIGINT signal:
$ ./fetchlinux.sh
Downloading tarball...
^Csignal caught, cleaning...
rm: remove regular file 'linux_tarball.tar.xz'?

A very important thing to remember is that when a script is terminated by a signal, like above, its exit status will be 128 + the signal number
. As you can see, the script above, being terminated by a SIGINT, has an exit status of 130
:

$ echo $?
130

Lastly, you can disable a trap just by calling trap
followed by the -
sign, followed by the signal(s) name or number:

trap - SIGINT SIGTERM

The signals will revert to the disposition they had upon entrance to the shell.

Pseudo-signals
As already mentioned above, trap can be set not only for signals that allow the script to respond but also for what we can call "pseudo-signals". They are not technically signals, but correspond to certain situations that can be specified:

EXIT
When
is specified in a trap, the command of the trap will be execute on exit from the shell. ERR This will cause the argument of the trap to be executed when a command returns a non-zero exit status, with some exceptions (the same of the shell errexit option): the command must not be part of awhile
oruntil
loop; it must not be part of anif
construct, nor part of a&&
or||
list, and its value must not be inverted by using the!
operator. DEBUG This will cause the argument of the trap to be executed before every simple command,for
,case
orselect
commands, and before the first command in shell functions RETURN The argument of the trap is executed after a function or a script sourced by usingsource
or the.
command.
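As a practical illustration of the EXIT pseudo-signal, here is a minimal sketch of the common cleanup idiom (the temporary file is purely illustrative):

#!/usr/bin/env bash
tmpfile=$(mktemp)
trap 'rm -f "$tmpfile"' EXIT

# work with "$tmpfile" here; it is removed whether the
# script exits normally or is interrupted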
Oct 20, 2017 | stackoverflow.com
Amit , Jun 7, 2011 at 19:18
I have a couple of variables and I want to check the following condition (written out in words, then my failed attempt at bash scripting):if varA EQUALS 1 AND ( varB EQUALS "t1" OR varB EQUALS "t2" ) then do something done.And in my failed attempt, I came up with:
if (($varA == 1)) && ( (($varB == "t1")) || (($varC == "t2")) ); then scale=0.05 fiBest answer Gilles
What you've written actually almost works (it would work if all the variables were numbers), but it's not an idiomatic way at all.
( )
parentheses indicate a subshell . What's inside them isn't an expression like in many other languages. It's a list of commands (just like outside parentheses). These commands are executed in a separate subprocess, so any redirection, assignment, etc. performed inside the parentheses has no effect outside the parentheses.
- With a leading dollar sign,
$( )
is a command substitution : there is a command inside the parentheses, and the output from the command is used as part of the command line (after extra expansions unless the substitution is between double quotes, but that's another story ).
{ }
braces are like parentheses in that they group commands, but they only influence parsing, not grouping. The programx=2; { x=4; }; echo $x
prints 4, whereasx=2; (x=4); echo $x
prints 2. (Also braces require spaces around them and a semicolon before closing, whereas parentheses don't. That's just a syntax quirk.)
- With a leading dollar sign,
${VAR}
is a parameter expansion , expanding to the value of a variable, with possible extra transformations.
(( ))
double parentheses surround an arithmetic instruction , that is, a computation on integers, with a syntax resembling other programming languages. This syntax is mostly used for assignments and in conditionals.
- The same syntax is used in arithmetic expressions
$(( ))
, which expand to the integer value of the expression.
[[ ]]
double brackets surround conditional expressions . Conditional expressions are mostly built on operators such as-n $variable
to test if a variable is empty and-e $file
to test if a file exists. There are also string equality operators:"$string1" = "$string2"
(beware that the right-hand side is a pattern, e.g.[[ $foo = a* ]]
tests if$foo
starts witha
while[[ $foo = "a*" ]]
tests if$foo
is exactlya*
), and the familiar!
,&&
and||
operators for negation, conjunction and disjunction as well as parentheses for grouping.
- Note that you need a space around each operator (e.g. [[ "$x" = "$y" ]]
, not [[ "$x"="$y" ]]
), and a space or a character like ;
both inside and outside the brackets (e.g. [[ -n $foo ]]
, not [[-n $foo]]
).
single brackets are an alternate form of conditional expressions with more quirks (but older and more portable). Don't write any for now; start worrying about them when you find scripts that contain them.This is the idiomatic way to write your test in bash:
if [[ $varA = 1 && ($varB = "t1" || $varC = "t2") ]]; thenIf you need portability to other shells, this would be the way (note the additional quoting and the separate sets of brackets around each individual test):
if [ "$varA" = 1 ] && { [ "$varB" = "t1" ] || [ "$varC" = "t2" ]; }; thenWill Sheppard , Jun 19, 2014 at 11:07
It's better to use==
to differentiate the comparison from assigning a variable (which is also=
) Will Sheppard Jun 19 '14 at 11:07Cbhihe , Apr 3, 2016 at 8:05
+1 @WillSheppard for yr reminder of proper style. Gilles, don't you need a semicolon after yr closing curly bracket and before "then" ? I always thoughtif
,then
,else
andfi
could not be on the same line... As in:
if [ "$varA" = 1 ] && { [ "$varB" = "t1" ] || [ "$varC" = "t2" ]; }; then
Rockallite , Jan 19 at 2:41
Backquotes (` `
) are old-style form of command substitution, with some differences: in this form, backslash retains its literal meaning except when followed by$
,`
, or\
, and the first backquote not preceded by a backslash terminates the command substitution; whereas in the$( )
form, all characters between the parentheses make up the command, none are treated specially.Peter A. Schneider , Aug 28 at 13:16
You could emphasize that single brackets have completely different semantics inside and outside of double brackets. (Because you start with explicitly pointing out the subshell semantics but then only as an aside mention the grouping semantics as part of conditional expressions. Was confusing to me for a second when I looked at your idiomatic example.) Peter A. Schneider Aug 28 at 13:16matchew , Jun 7, 2011 at 19:29
Very close:

if (( $varA == 1 )) && [[ $varB == 't1' || $varC == 't2' ]]; then
    scale=0.05
fi

should work.

Breaking it down:

(( $varA == 1 ))

is an integer comparison, whereas

$varB == 't1'

is a string comparison. Otherwise, I am just grouping the comparisons correctly.
Double square brackets delimit a Conditional Expression. And, I find the following to be a good reading on the subject: "(IBM) Demystify test, [, [[, ((, and if-then-else"
Peter A. Schneider , Aug 28 at 13:21
Just to be sure: The quoting in 't1' is unnecessary, right? Because as opposed to arithmetic instructions in double parentheses, where t1 would be a variable, t1 in a conditional expression in double brackets is just a literal string.I.e.,
[[ $varB == 't1' ]]
is exactly the same as[[ $varB == t1 ]]
, right? Peter A. Schneider Aug 28 at 13:21
Oct 20, 2017 | unix.stackexchange.com
OR in `expr match` up vote down vote favorite
stracktracer , Dec 14, 2015 at 13:54
I'm confused as to why this does not match:
expr match Unauthenticated123 '^(Unauthenticated|Authenticated).*'
it outputs 0.
Charles Duffy , Dec 14, 2015 at 18:22
As an aside, if you were using bash for this, the preferred alternative would be the=~
operator in[[ ]]
, ie.[[ Unauthenticated123 =~ ^(Unauthenticated|Authenticated) ]]
– Charles Duffy Dec 14 '15 at 18:22Charles Duffy , Dec 14, 2015 at 18:25
...and if you weren't targeting a known/fixed operating system, usingcase
rather than a regex match is very much the better practice, since the accepted answer depends on behavior POSIX doesn't define. – Charles Duffy Dec 14 '15 at 18:25Gilles , Dec 14, 2015 at 23:43
See Why does my regular expression work in X but not in Y? – Gilles Dec 14 '15 at 23:43Lambert , Dec 14, 2015 at 14:04
Your command should be:expr match Unauthenticated123 'Unauthenticated\|Authenticated'If you want the number of characters matched.
To have the part of the string (Unauthenticated) returned use:
expr match Unauthenticated123 '\(Unauthenticated\|Authenticated\)'From
info coreutils 'expr invocation'
:
`STRING : REGEX'
Perform pattern matching. The arguments are converted to strings and the second is considered to be a (basic, a la GNU `grep') regular expression, with a `^' implicitly prepended. The first argument is then matched against this regular expression.

If the match succeeds and REGEX uses `\(' and `\)', the `:' expression returns the part of STRING that matched the subexpression; otherwise, it returns the number of characters matched. If the match fails, the `:' operator returns the null string if `\(' and `\)' are used in REGEX, otherwise 0. Only the first `\( ... \)' pair is relevant to the return value; additional pairs are meaningful only for grouping the regular expression operators. In the regular expression, `\+', `\?', and `\|' are operators which respectively match one or more, zero or one, or separate alternatives. SunOS and other `expr''s treat these as regular characters. (POSIX allows either behavior.) *Note Regular Expression Library: (regex)Top, for details of regular expression syntax. Some examples are in *note Examples of expr::.
Thanks escaping the | worked. Weird, normally I'd expect it if I wanted to match the literal |... – stracktracer Dec 14 '15 at 14:18reinierpost , Dec 14, 2015 at 15:34
Regular expression syntax, including the use of backquoting, is different for different tools. Always look it up. – reinierpost Dec 14 '15 at 15:34Stéphane Chazelas , Dec 14, 2015 at 14:49
Note that bothmatch
and\|
are GNU extensions (and the behaviour for:
(thematch
standard equivalent) when the pattern starts with^
varies with implementations). Standardly, you'd do:expr " $string" : " Authenticated" '|' " $string" : " Unauthenticated"The leading space is to avoid problems with values of
$string
that start with-
or areexpr
operators, but that means it adds one to the number of characters being matched.With GNU
expr
, you'd write it:expr + "$string" : 'Authenticated\|Unauthenticated'The
+
forces$string
to be taken as a string even if it happens to be aexpr
operator.expr
regular expressions are basic regular expressions which don't have an alternation operator (and where|
is not special). The GNU implementation has it as\|
though as an extension.If all you want is to check whether
$string
starts withAuthenticated
orUnauthenticated
, you'd better use:case $string in (Authenticated* | Unauthenticated*) do-something esacnetmonk , Dec 14, 2015 at 14:06
$ expr match "Unauthenticated123" '^\(Unauthenticated\|Authenticated\).*'
you have to escape with\
the parenthesis and the pipe.mikeserv , Dec 14, 2015 at 14:18
and the^
may not mean what some would think depending on theexpr
. it is implied anyway. – mikeserv Dec 14 '15 at 14:18Stéphane Chazelas , Dec 14, 2015 at 14:34
@mikeserv,match
and\|
are GNU extensions anyway. This Q&A seems to be about GNUexpr
anyway (where^
is guaranteed to mean match at the beginning of the string ). – Stéphane Chazelas Dec 14 '15 at 14:34mikeserv , Dec 14, 2015 at 14:49
@StéphaneChazelas - i didn't know they were strictly GNU. i think i remember them being explicitly officially unspecified - but i don't useexpr
too often anyway and didn't know that. thank you. – mikeserv Dec 14 '15 at 14:49Random832 , Dec 14, 2015 at 16:13
It's not "strictly GNU" - it's present in a number of historical implementations (even System V had it, undocumented, though it didn't have the others like substr/length/index), which is why it's explicitly unspecified. I can't find anything about\|
being an extension. – Random832 Dec 14 '15 at 16:13
Oct 19, 2017 | www.bashoneliners.com
Kill a process running on port 8080 $ lsof -i :8080 | awk 'NR > 1 {print $2}' | xargs --no-run-if-empty kill-- by Janos on Sept. 1, 2017, 8:31 p.m.
Make a new folder and cd into it. $ mkcd(){ NAME=$1; mkdir -p "$NAME"; cd "$NAME"; }-- by PrasannaNatarajan on Aug. 3, 2017, 6:49 a.m.
Go up to a particular folder $ alias ph='cd ${PWD%/public_html*}/public_html'-- by Jab2870 on July 18, 2017, 6:07 p.m.
ExplanationI work on a lot of websites and often need to go up to the
public_html
folder.This command creates an alias so that however many folders deep I am, I will be taken up to the correct folder.
alias ph='....'
: This creates a shortcut so that when command ph is typed, the part between the quotes is executed
cd ...
: This changes directory to the directory specified
PWD
: This is a global bash variable that contains the current directory
${...%/public_html*}
: This removes/public_html
and anything after it from the specified stringFinally,
/public_html
at the end is appended onto the string.So, to sum up, when ph is run, we ask bash to change the directory to the current working directory with anything after public_html removed.
Open another terminal at current location $ $TERMINAL & disown-- by Jab2870 on July 18, 2017, 3:04 p.m.
ExplanationOpens another terminal window at the current location.
Use Case
I often cd into a directory and decide it would be useful to open another terminal in the same folder, maybe for an editor or something. Previously, I would open the terminal and repeat the CD command.
I have aliased this command to open so I just type
open
and I get a new terminal already in my desired folder.The
& disown
part of the command stops the new terminal from being dependant on the first meaning that you can still use the first and if you close the first, the second will remain open. LimitationsIt relied on you having the $TERMINAL global variable set. If you don't have this set you could easily change it to something like the following:
gnome-terminal & disown
orkonsole & disown
Preserve your fingers from cd ..; cd ..; cd..; cd..; $ up(){ DEEP=$1; for i in $(seq 1 ${DEEP:-"1"}); do cd ../; done; }-- by alireza6677 on June 28, 2017, 5:40 p.m.
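For example, in a hypothetical session (paths made up for illustration):

$ pwd
/home/tom/a/b/c
$ up 2
$ pwd
/home/tom/a

With no argument, up climbs a single level, thanks to the ${DEEP:-"1"} default expansion.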
Generate a sequence of numbers $ echo {01..10}-- by Elkku on March 1, 2015, 12:04 a.m.
ExplanationThis example will print:
01 02 03 04 05 06 07 08 09 10While the original one-liner is indeed IMHO the canonical way to loop over numbers, the brace expansion syntax of Bash 4.x has some kick-ass features such as correct padding of the number with leading zeros. Limitations
The zero-padding feature works only in Bash >=4.
Related one-liners
Generate a sequence of numbers $ for ((i=1; i<=10; ++i)); do echo $i; done-- by Janos on Nov. 4, 2014, 12:29 p.m.
ExplanationThis is similar to
seq
, but portable.seq
does not exist in all systems and is not recommended today anymore. Other variations to emulate various uses withseq
:# seq 1 2 10 for ((i=1; i<=10; i+=2)); do echo $i; done # seq -w 5 10 for ((i=5; i<=10; ++i)); do printf '%02d\n' $i; done
Find recent logs that contain the string "Exception" $ find . -name '*.log' -mtime -2 -exec grep -Hc Exception {} \; | grep -v :0$-- by Janos on July 19, 2014, 7:53 a.m.
ExplanationThe
find
:
-name '*.log'
-- match files ending with.log
-mtime -2
-- match files modified within the last 2 days-exec CMD ARGS \;
-- for each file found, execute command, where{}
inARGS
will be replaced with the file's pathThe
grep
:
-c
is to print the count of the matches instead of the matches themselves-H
is to print the name of the file, asgrep
normally won't print it when there is only one filename argument- The output lines will be in the format
path:count
. Files that didn't match "Exception" will still be printed, with 0 as count- The second
grep
filters the output of the first, excluding lines that end with:0
(= the files that didn't contain matches)Extra tips:
- Change "Exception" to the typical relevant failure indicator of your application
- Add
-i
forgrep
to make the search case insensitive- To make the
find
match strictly only files, add-type f
- Schedule this as a periodic job, and pipe the output to a mailer, for example
| mailx -s 'error counts' [email protected]
Remove offending key from known_hosts file with one swift move $ sed -i 18d .ssh/known_hosts-- by EvaggelosBalaskas on Jan. 16, 2013, 2:29 p.m.
ExplanationUsing sed to remove a specific line.
The
-i
parameter is to edit the file in-place. LimitationsThis works as posted in GNU
sed
. In BSDsed
, the-i
flag requires a parameter to use as the suffix of a backup file. You can set it to empty to not use a backup file.
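For example, on BSD sed the posted command would presumably become:

sed -i '' 18d .ssh/known_hosts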
Feb 15, 2010 | stackoverflow.com
assassin , Feb 15, 2010 at 7:02
Is there a way in bash to convert a string into a lower case string?For example, if I have:
a="Hi all"I want to convert it to:
"hi all"ghostdog74 , Feb 15, 2010 at 7:43
The are various ways: tr$ echo "$a" | tr '[:upper:]' '[:lower:]' hi allAWK$ echo "$a" | awk '{print tolower($0)}' hi allBash 4.0$ echo "${a,,}" hi allPerl$ echo "$a" | perl -ne 'print lc' hi allBashlc(){ case "$1" in [A-Z]) n=$(printf "%d" "'$1") n=$((n+32)) printf \\$(printf "%o" "$n") ;; *) printf "%s" "$1" ;; esac } word="I Love Bash" for((i=0;i<${#word};i++)) do ch="${word:$i:1}" lc "$ch" donejangosteve , Jan 14, 2012 at 21:58
Am I missing something, or does your last example (in Bash) actually do something completely different? It works for "ABX", but if you instead makeword="Hi All"
like the other examples, it returnsha
, nothi all
. It only works for the capitalized letters and skips the already-lowercased letters. jangosteve Jan 14 '12 at 21:58Richard Hansen , Feb 3, 2012 at 18:55
Note that only thetr
andawk
examples are specified in the POSIX standard. Richard Hansen Feb 3 '12 at 18:55Richard Hansen , Feb 3, 2012 at 18:58
tr '[:upper:]' '[:lower:]'
will use the current locale to determine uppercase/lowercase equivalents, so it'll work with locales that use letters with diacritical marks. Richard Hansen Feb 3 '12 at 18:58Adam Parkin , Sep 25, 2012 at 18:01
How does one get the output into a new variable? Ie say I want the lowercased string into a new variable? Adam Parkin Sep 25 '12 at 18:01Tino , Nov 14, 2012 at 15:39
@Adam:b="$(echo $a | tr '[A-Z]' '[a-z]')"
Tino Nov 14 '12 at 15:39Dennis Williamson , Feb 15, 2010 at 10:31
In Bash 4:To lowercase
$ string="A FEW WORDS" $ echo "${string,}" a FEW WORDS $ echo "${string,,}" a few words $ echo "${string,,[AEIUO]}" a FeW WoRDS $ string="A Few Words" $ declare -l string $ string=$string; echo "$string" a few wordsTo uppercase
$ string="a few words" $ echo "${string^}" A few words $ echo "${string^^}" A FEW WORDS $ echo "${string^^[aeiou]}" A fEw wOrds $ string="A Few Words" $ declare -u string $ string=$string; echo "$string" A FEW WORDSToggle (undocumented, but optionally configurable at compile time)
$ string="A Few Words" $ echo "${string~~}" a fEW wORDS $ string="A FEW WORDS" $ echo "${string~}" a FEW WORDS $ string="a few words" $ echo "${string~}" A few wordsCapitalize (undocumented, but optionally configurable at compile time)
$ string="a few words" $ declare -c string $ string=$string $ echo "$string" A few wordsTitle case:
$ string="a few words" $ string=($string) $ string="${string[@]^}" $ echo "$string" A Few Words $ declare -c string $ string=(a few words) $ echo "${string[@]}" A Few Words $ string="a FeW WOrdS" $ string=${string,,} $ string=${string~} $ echo "$string"To turn off a
declare
attribute, use+
. For example,declare +c string
. This affects subsequent assignments and not the current value.The
declare
options change the attribute of the variable, but not the contents. The reassignments in my examples update the contents to show the changes.Edit:
Added "toggle first character by word" (
${var~}
) as suggested by ghostdog74Edit: Corrected tilde behavior to match Bash 4.3.
ghostdog74 , Feb 15, 2010 at 10:52
there's also${string~}
ghostdog74 Feb 15 '10 at 10:52Hubert Kario , Jul 12, 2012 at 16:48
Quite bizzare, "^^" and ",," operators don't work on non-ASCII characters but "~~" does... Sostring="łdź"; echo ${string~~}
will return "ŁDŹ", butecho ${string^^}
returns "łDź". Even inLC_ALL=pl_PL.utf-8
. That's using bash 4.2.24. Hubert Kario Jul 12 '12 at 16:48Dennis Williamson , Jul 12, 2012 at 18:20
@HubertKario: That's weird. It's the same for me in Bash 4.0.33 with the same string inen_US.UTF-8
. It's a bug and I've reported it. Dennis Williamson Jul 12 '12 at 18:20Dennis Williamson , Jul 13, 2012 at 0:44
@HubertKario: Tryecho "$string" | tr '[:lower:]' '[:upper:]'
. It will probably exhibit the same failure. So the problem is at least partly not Bash's. Dennis Williamson Jul 13 '12 at 0:44Dennis Williamson , Jul 14, 2012 at 14:27
@HubertKario: The Bash maintainer has acknowledged the bug and stated that it will be fixed in the next release. Dennis Williamson Jul 14 '12 at 14:27shuvalov , Feb 15, 2010 at 7:13
echo "Hi All" | tr "[:upper:]" "[:lower:]"Richard Hansen , Feb 3, 2012 at 19:00
+1 for not assuming english Richard Hansen Feb 3 '12 at 19:00Hubert Kario , Jul 12, 2012 at 16:56
@RichardHansen:tr
doesn't work for me for non-ACII characters. I do have correct locale set and locale files generated. Have any idea what could I be doing wrong? Hubert Kario Jul 12 '12 at 16:56wasatchwizard , Oct 23, 2014 at 16:42
FYI: This worked on Windows/Msys. Some of the other suggestions did not. wasatchwizard Oct 23 '14 at 16:42Ignacio Vazquez-Abrams , Feb 15, 2010 at 7:03
tr :a="$(tr [A-Z] [a-z] <<< "$a")"AWK :{ print tolower($0) }sed :y/ABCDEFGHIJKLMNOPQRSTUVWXYZ/abcdefghijklmnopqrstuvwxyz/Sandeepan Nath , Feb 2, 2011 at 11:12
+1a="$(tr [A-Z] [a-z] <<< "$a")"
looks easiest to me. I am still a beginner... Sandeepan Nath Feb 2 '11 at 11:12Haravikk , Oct 19, 2013 at 12:54
I strongly recommend thesed
solution; I've been working in an environment that for some reason doesn't havetr
but I've yet to find a system withoutsed
, plus a lot of the time I want to do this I've just done something else insed
anyway so can chain the commands together into a single (long) statement. Haravikk Oct 19 '13 at 12:54Dennis , Nov 6, 2013 at 19:49
The bracket expressions should be quoted. In tr [A-Z] [a-z] A
, the shell may perform filename expansion if there are filenames consisting of a single letter or nullglob is set. tr "[A-Z]" "[a-z]" A
will behave properly. Dennis Nov 6 '13 at 19:49Haravikk , Jun 15, 2014 at 10:51
@CamiloMartin it's a BusyBox system where I'm having that problem, specifically Synology NASes, but I've encountered it on a few other systems too. I've been doing a lot of cross-platform shell scripting lately, and with the requirement that nothing extra be installed it makes things very tricky! However I've yet to encounter a system withoutsed
Haravikk Jun 15 '14 at 10:51fuz , Jan 31, 2016 at 14:54
Note thattr [A-Z] [a-z]
is incorrect in almost all locales. for example, in theen-US
locale,A-Z
is actually the intervalAaBbCcDdEeFfGgHh...XxYyZ
. fuz Jan 31 '16 at 14:54nettux443 , May 14, 2014 at 9:36
I know this is an oldish post but I made this answer for another site so I thought I'd post it up here:UPPER -> lower : use python:
b=`echo "print '$a'.lower()" | python`Or Ruby:
b=`echo "print '$a'.downcase" | ruby`Or Perl (probably my favorite):
b=`perl -e "print lc('$a');"`Or PHP:
b=`php -r "print strtolower('$a');"`Or Awk:
b=`echo "$a" | awk '{ print tolower($1) }'`Or Sed:
b=`echo "$a" | sed 's/./\L&/g'`Or Bash 4:
b=${a,,}Or NodeJS if you have it (and are a bit nuts...):
b=`echo "console.log('$a'.toLowerCase());" | node`You could also use
dd
(but I wouldn't!):b=`echo "$a" | dd conv=lcase 2> /dev/null`lower -> UPPER
use python:
b=`echo "print '$a'.upper()" | python`Or Ruby:
b=`echo "print '$a'.upcase" | ruby`Or Perl (probably my favorite):
b=`perl -e "print uc('$a');"`Or PHP:
b=`php -r "print strtoupper('$a');"`Or Awk:
b=`echo "$a" | awk '{ print toupper($1) }'`Or Sed:
b=`echo "$a" | sed 's/./\U&/g'`Or Bash 4:
b=${a^^}Or NodeJS if you have it (and are a bit nuts...):
b=`echo "console.log('$a'.toUpperCase());" | node`You could also use
dd
(but I wouldn't!):b=`echo "$a" | dd conv=ucase 2> /dev/null`Also when you say 'shell' I'm assuming you mean
bash
but if you can usezsh
it's as easy asb=$a:lfor lower case and
b=$a:ufor upper case.
JESii , May 28, 2015 at 21:42
Neither the sed command nor the bash command worked for me. JESii May 28 '15 at 21:42nettux443 , Nov 20, 2015 at 14:33
@JESii both work for me upper -> lower and lower-> upper. I'm using sed 4.2.2 and Bash 4.3.42(1) on 64bit Debian Stretch. nettux443 Nov 20 '15 at 14:33JESii , Nov 21, 2015 at 17:34
Hi, @nettux443... I just tried the bash operation again and it still fails for me with the error message "bad substitution". I'm on OSX using homebrew's bash: GNU bash, version 4.3.42(1)-release (x86_64-apple-darwin14.5.0) JESii Nov 21 '15 at 17:34tripleee , Jan 16, 2016 at 11:45
Do not use! All of the examples which generate a script are extremely brittle; if the value ofa
contains a single quote, you have not only broken behavior, but a serious security problem. tripleee Jan 16 '16 at 11:45Scott Smedley , Jan 27, 2011 at 5:37
In zsh:echo $a:uGotta love zsh!
Scott Smedley , Jan 27, 2011 at 5:39
or $a:l for lower case conversion Scott Smedley Jan 27 '11 at 5:39biocyberman , Jul 24, 2015 at 23:26
Add one more case:echo ${(C)a} #Upcase the first char only
biocyberman Jul 24 '15 at 23:26devnull , Sep 26, 2013 at 15:45
Using GNUsed
:sed 's/.*/\L&/'Example:
$ foo="Some STRIng"; $ foo=$(echo "$foo" | sed 's/.*/\L&/') $ echo "$foo" some stringtechnosaurus , Jan 21, 2012 at 10:27
For a standard shell (without bashisms) using only builtins:uppers=ABCDEFGHIJKLMNOPQRSTUVWXYZ lowers=abcdefghijklmnopqrstuvwxyz lc(){ #usage: lc "SOME STRING" -> "some string" i=0 while ([ $i -lt ${#1} ]) do CUR=${1:$i:1} case $uppers in *$CUR*)CUR=${uppers%$CUR*};OUTPUT="${OUTPUT}${lowers:${#CUR}:1}";; *)OUTPUT="${OUTPUT}$CUR";; esac i=$((i+1)) done echo "${OUTPUT}" }And for upper case:
uc(){ #usage: uc "some string" -> "SOME STRING" i=0 while ([ $i -lt ${#1} ]) do CUR=${1:$i:1} case $lowers in *$CUR*)CUR=${lowers%$CUR*};OUTPUT="${OUTPUT}${uppers:${#CUR}:1}";; *)OUTPUT="${OUTPUT}$CUR";; esac i=$((i+1)) done echo "${OUTPUT}" }Dereckson , Nov 23, 2014 at 19:52
I wonder if you didn't let some bashism in this script, as it's not portable on FreeBSD sh: ${1:$...}: Bad substitution Dereckson Nov 23 '14 at 19:52tripleee , Apr 14, 2015 at 7:09
Indeed; substrings with${var:1:1}
are a Bashism. tripleee Apr 14 '15 at 7:09Derek Shaw , Jan 24, 2011 at 13:53
Regular expressionI would like to take credit for the command I wish to share but the truth is I obtained it for my own use from http://commandlinefu.com . It has the advantage that if you
cd
to any directory within your own home folder that is it will change all files and folders to lower case recursively please use with caution. It is a brilliant command line fix and especially useful for those multitudes of albums you have stored on your drive.find . -depth -exec rename 's/(.*)\/([^\/]*)/$1\/\L$2/' {} \;You can specify a directory in place of the dot(.) after the find which denotes current directory or full path.
I hope this solution proves useful the one thing this command does not do is replace spaces with underscores - oh well another time perhaps.
Wadih M. , Nov 29, 2011 at 1:31
thanks for commandlinefu.com Wadih M. Nov 29 '11 at 1:31John Rix , Jun 26, 2013 at 15:58
This didn't work for me for whatever reason, though it looks fine. I did get this to work as an alternative though: find . -exec /bin/bash -c 'mv {} `tr [A-Z] [a-z] <<< {}`' \; John Rix Jun 26 '13 at 15:58Tino , Dec 11, 2015 at 16:27
This needsprename
fromperl
:dpkg -S "$(readlink -e /usr/bin/rename)"
givesperl: /usr/bin/prename
Tino Dec 11 '15 at 16:27c4f4t0r , Aug 21, 2013 at 10:21
In bash 4 you can use typesetExample:
A="HELLO WORLD" typeset -l A=$Acommunity wiki, Jan 16, 2016 at 12:26
Pre Bash 4.0Bash Lower the Case of a string and assign to variable
VARIABLE=$(echo "$VARIABLE" | tr '[:upper:]' '[:lower:]') echo "$VARIABLE"Tino , Dec 11, 2015 at 16:23
No need forecho
and pipes: use$(tr '[:upper:]' '[:lower:]' <<<"$VARIABLE")
Tino Dec 11 '15 at 16:23tripleee , Jan 16, 2016 at 12:28
@Tino The here string is also not portable back to really old versions of Bash; I believe it was introduced in v3. tripleee Jan 16 '16 at 12:28Tino , Jan 17, 2016 at 14:28
@tripleee You are right, it was introduced in bash-2.05b - however that's the oldest bash I was able to find on my systems Tino Jan 17 '16 at 14:28Bikesh M Annur , Mar 23 at 6:48
You can try thiss="Hello World!" echo $s # Hello World! a=${s,,} echo $a # hello world! b=${s^^} echo $b # HELLO WORLD!ref : http://wiki.workassis.com/shell-script-convert-text-to-lowercase-and-uppercase/
Orwellophile , Mar 24, 2013 at 13:43
For Bash versions earlier than 4.0, this version should be fastest (as it doesn't fork/exec any commands):

function string.monolithic.tolower {
    local __word=$1
    local __len=${#__word}
    local __char
    local __octal
    local __decimal
    local __result

    for (( i=0; i<__len; i++ ))
    do
        __char=${__word:$i:1}
        case "$__char" in
            [A-Z] )
                printf -v __decimal '%d' "'$__char"
                printf -v __octal '%03o' $(( $__decimal ^ 0x20 ))
                printf -v __char \\$__octal
                ;;
        esac
        __result+="$__char"
    done
    REPLY="$__result"
}

technosaurus's answer had potential too, although it did run properly for me.
Stephen M. Harris , Mar 22, 2013 at 22:42
If using v4, this is baked-in . If not, here is a simple, widely applicable solution. Other answers (and comments) on this thread were quite helpful in creating the code below.# Like echo, but converts to lowercase echolcase () { tr [:upper:] [:lower:] <<< "${*}" } # Takes one arg by reference (var name) and makes it lowercase lcase () { eval "${1}"=\'$(echo ${!1//\'/"'\''"} | tr [:upper:] [:lower:] )\' }Notes:
- Doing:
a="Hi All"
and then:lcase a
will do the same thing as:a=$( echolcase "Hi All" )
- In the lcase function, using
${!1//\'/"'\''"}
instead of${!1}
allows this to work even when the string has quotes.JaredTS486 , Dec 23, 2015 at 17:37
In spite of how old this question is and similar to this answer by technosaurus . I had a hard time finding a solution that was portable across most platforms (That I Use) as well as older versions of bash. I have also been frustrated with arrays, functions and use of prints, echos and temporary files to retrieve trivial variables. This works very well for me so far I thought I would share. My main testing environments are:
- GNU bash, version 4.1.2(1)-release (x86_64-redhat-linux-gnu)
- GNU bash, version 3.2.57(1)-release (sparc-sun-solaris2.10)
lcs="abcdefghijklmnopqrstuvwxyz" ucs="ABCDEFGHIJKLMNOPQRSTUVWXYZ" input="Change Me To All Capitals" for (( i=0; i<"${#input}"; i++ )) ; do : for (( j=0; j<"${#lcs}"; j++ )) ; do : if [[ "${input:$i:1}" == "${lcs:$j:1}" ]] ; then input="${input/${input:$i:1}/${ucs:$j:1}}" fi done doneSimple C-style for loop to iterate through the strings. For the line below if you have not seen anything like this before this is where I learned this . In this case the line checks if the char ${input:$i:1} (lower case) exists in input and if so replaces it with the given char ${ucs:$j:1} (upper case) and stores it back into input.
input="${input/${input:$i:1}/${ucs:$j:1}}"Gus Neves , May 16 at 10:04
Many answers using external programs, which is not really usingBash
.If you know you will have Bash4 available you should really just use the
${VAR,,}
notation (it is easy and cool). For Bash before 4 (My Mac still uses Bash 3.2 for example). I used the corrected version of @ghostdog74 's answer to create a more portable version.One you can call
lowercase 'my STRING'
and get a lowercase version. I read comments about setting the result to a var, but that is not really portable inBash
, since we can't return strings. Printing it is the best solution. Easy to capture with something likevar="$(lowercase $str)"
.How this works
The way this works is by getting the ASCII integer representation of each char with
printf
and thenadding 32
ifupper-to->lower
, orsubtracting 32
iflower-to->upper
. Then useprintf
again to convert the number back to a char. From'A' -to-> 'a'
we have a difference of 32 chars.Using
printf
to explain:$ printf "%d\n" "'a" 97 $ printf "%d\n" "'A" 65
97 - 65 = 32
And this is the working version with examples.
Please note the comments in the code, as they explain a lot of stuff:#!/bin/bash # lowerupper.sh # Prints the lowercase version of a char lowercaseChar(){ case "$1" in [A-Z]) n=$(printf "%d" "'$1") n=$((n+32)) printf \\$(printf "%o" "$n") ;; *) printf "%s" "$1" ;; esac } # Prints the lowercase version of a sequence of strings lowercase() { word="$@" for((i=0;i<${#word};i++)); do ch="${word:$i:1}" lowercaseChar "$ch" done } # Prints the uppercase version of a char uppercaseChar(){ case "$1" in [a-z]) n=$(printf "%d" "'$1") n=$((n-32)) printf \\$(printf "%o" "$n") ;; *) printf "%s" "$1" ;; esac } # Prints the uppercase version of a sequence of strings uppercase() { word="$@" for((i=0;i<${#word};i++)); do ch="${word:$i:1}" uppercaseChar "$ch" done } # The functions will not add a new line, so use echo or # append it if you want a new line after printing # Printing stuff directly lowercase "I AM the Walrus!"$'\n' uppercase "I AM the Walrus!"$'\n' echo "----------" # Printing a var str="A StRing WITH mixed sTUFF!" lowercase "$str"$'\n' uppercase "$str"$'\n' echo "----------" # Not quoting the var should also work, # since we use "$@" inside the functions lowercase $str$'\n' uppercase $str$'\n' echo "----------" # Assigning to a var myLowerVar="$(lowercase $str)" myUpperVar="$(uppercase $str)" echo "myLowerVar: $myLowerVar" echo "myUpperVar: $myUpperVar" echo "----------" # You can even do stuff like if [[ 'option 2' = "$(lowercase 'OPTION 2')" ]]; then echo "Fine! All the same!" else echo "Ops! Not the same!" fi exit 0And the results after running this:
$ ./lowerupper.sh i am the walrus! I AM THE WALRUS! ---------- a string with mixed stuff! A STRING WITH MIXED STUFF! ---------- a string with mixed stuff! A STRING WITH MIXED STUFF! ---------- myLowerVar: a string with mixed stuff! myUpperVar: A STRING WITH MIXED STUFF! ---------- Fine! All the same!This should only work for ASCII characters though .
For me it is fine, since I know I will only pass ASCII chars to it.
I am using this for some case-insensitive CLI options, for example.

nitinr708, Jul 8, 2016 at 9:20
To store the transformed string into a variable, the following worked for me, converting $SOURCE_NAME to $TARGET_NAME:

TARGET_NAME="`echo $SOURCE_NAME | tr '[:upper:]' '[:lower:]'`"
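A note on style: backticks work, but the $( ) form of command substitution nests and quotes more cleanly; a minimal equivalent sketch:

TARGET_NAME="$(echo "$SOURCE_NAME" | tr '[:upper:]' '[:lower:]')"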
Oct 16, 2017 | www.safaribooksonline.com
Indenting Here-Documents

Problem
The here-document is great, but it's messing up your shell script's formatting. You want to be able to indent for readability.

Solution
Use <<- and then you can use tab characters (only!) at the beginning of lines to indent this portion of your shell script.
$ cat myscript.sh
...
grep $1 <<-'EOF'
	lots of data
	can go here
	it's indented with tabs
	to match the script's indenting
	but the leading tabs are
	discarded when read
	EOF
ls
...
$

Discussion

The hyphen just after the << is enough to tell bash to ignore the leading tab characters. This is for tab characters only and not arbitrary white space. This is especially important with the EOF or any other marker designation. If you have spaces there, it will not recognize the EOF as your ending marker, and the "here" data will continue through to the end of the file (swallowing the rest of your script). Therefore, you may want to always left-justify the EOF (or other marker) just to be safe, and let the formatting go on this one line.
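To make the tab requirement concrete, here is a minimal runnable sketch (the text is invented for illustration); the first here-document must keep everything at column zero, while the second may be indented because of the hyphen, provided the indentation consists of tab characters, not spaces:

#!/bin/bash
# Plain <<: neither the body nor the terminator may be indented.
cat <<EOF
line one
line two
EOF

# <<-: leading tabs (only!) are stripped from the body and the terminator.
# The indentation below must be real tab characters.
if true; then
	cat <<-EOF
	line one
	line two
	EOF
fi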
Oct 16, 2017 | prefetch.net
The Bourne shell provides here documents to allow a block of data to be passed to a process through STDIN. The typical format for a here document is something similar to this:
command <<ARBITRARY_TAG
data to pass 1
data to pass 2
ARBITRARY_TAG

This will send the data between the ARBITRARY_TAG statements to the standard input of the process. In order for this to work, you need to make sure that the data is not indented. If you indent it for readability, you will get a syntax error similar to the following:
./test: line 12: syntax error: unexpected end of file
To allow your here documents to be indented, you can append a "-" to the redirection operator, like so:
if [ "${STRING}" = "SOMETHING" ] then somecommand <<-EOF this is a string1 this is a string2 this is a string3 EOF fiYou will need to use tabs to indent the data, but that is a small price to pay for added readability. Nice!
Oct 07, 2017 | www.tecmint.com
To enable automatic user logout, we will be using the TMOUT shell variable, which terminates a user's login shell when there is no activity for a given number of seconds that you can specify.

To enable this globally (system-wide for all users), set the above variable in the /etc/profile shell initialization file.
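As a minimal sketch (the 600-second value is an arbitrary choice for illustration), the lines added to /etc/profile could look like this:

# Log out idle interactive shells after 10 minutes
TMOUT=600
readonly TMOUT   # optional: prevents users from unsetting or changing it
export TMOUT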
Sep 27, 2017 | mywiki.wooledge.org
Bash has several different ways to say we want to do arithmetic instead of string operations. Let's look at them one by one.
The first way is the let command:
$ unset a; a=4+5
$ echo $a
4+5
$ let a=4+5
$ echo $a
9

You may use spaces, parentheses and so forth, if you quote the expression:
$ let a='(5+2)*3'

For a full list of operators available, see help let or the manual.
Next, the actual arithmetic evaluation compound command syntax:
$ ((a=(5+2)*3))

This is equivalent to let, but we can also use it as a command, for example in an if statement:
$ if (($a == 21)); then echo 'Blackjack!'; fi

Operators such as ==, <, > and so on cause a comparison to be performed inside an arithmetic evaluation. If the comparison is "true" (for example, 10 > 2 is true in arithmetic -- but not in strings!) then the compound command exits with status 0. If the comparison is false, it exits with status 1. This makes it suitable for testing things in a script.
Although not a compound command, an arithmetic substitution (or arithmetic expression ) syntax is also available:
$ echo "There are $(($rows * $columns)) cells"Inside $((...)) is an arithmetic context , just like with ((...)) , meaning we do arithmetic (multiplying things) instead of string manipulations (concatenating $rows , space, asterisk, space, $columns ). $((...)) is also portable to the POSIX shell, while ((...)) is not.
Readers who are familiar with the C programming language might wish to know that ((...)) has many C-like features. Among them are the ternary operator:
$ ((abs = (a >= 0) ? a : -a))

and the use of an integer value as a truth value:
$ if ((flag)); then echo "uh oh, our flag is up"; fi

Note that we used variables inside ((...)) without prefixing them with $-signs. This is a special syntactic shortcut that Bash allows inside arithmetic evaluations and arithmetic expressions.
There is one final thing we must mention about ((flag)) . Because the inside of ((...)) is C-like, a variable (or expression) that evaluates to zero will be considered false for the purposes of the arithmetic evaluation. Then, because the evaluation is false, it will exit with a status of 1. Likewise, if the expression inside ((...)) is non-zero , it will be considered true ; and since the evaluation is true, it will exit with status 0. This is potentially very confusing, even to experts, so you should take some time to think about this. Nevertheless, when things are used the way they're intended, it makes sense in the end:
$ flag=0  # no error
$ while read line; do
>   if [[ $line = *err* ]]; then flag=1; fi
> done < inputfile
$ if ((flag)); then echo "oh no"; fi
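One practical consequence of this exit-status rule deserves a warning: an arithmetic command whose value is 0 "fails", which interacts badly with set -e. A minimal sketch of the pitfall (the variable name is arbitrary):

#!/bin/bash
set -e
count=0
(( count++ ))   # post-increment evaluates to 0, so this returns status 1
                # and set -e aborts the script right here
echo "never reached"
# Safer alternatives: (( ++count )) or count=$((count + 1))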
Sep 01, 2017 | stackoverflow.com
Peter Mortensen, asked Oct 5 '09 at 17:52
How do I iterate through each line of a text file with Bash?

With this script
echo "Start!" for p in (peptides.txt) do echo "${p}" doneI get this output on the screen:
Start!
./runPep.sh: line 3: syntax error near unexpected token `('
./runPep.sh: line 3: `for p in (peptides.txt)'

(Later I want to do something more complicated with $p than just output to the screen.)
The environment variable SHELL is (from env):
SHELL=/bin/bash
/bin/bash --version
output:

GNU bash, version 3.1.17(1)-release (x86_64-suse-linux-gnu)
Copyright (C) 2005 Free Software Foundation, Inc.
cat /proc/version
output:

Linux version 2.6.18.2-34-default (geeko@buildhost) (gcc version 4.1.2 20061115 (prerelease) (SUSE Linux)) #1 SMP Mon Nov 27 11:46:27 UTC 2006

The file peptides.txt contains:
RKEKNVQ
IPKKLLQK
QYFHQLEKMNVK
IPKKLLQK
GDLSTALEVAIDCYEK
QYFHQLEKMNVKIPENIYR
RKEKNVQ
VLAKHGKLQDAIN
ILGFMK
LEDVALQILL

Bruno De Fraine, answered Oct 5 '09 at 18:00
One way to do it is:

while read p; do
    echo $p
done <peptides.txt
Exceptionally, if the loop body may read from standard input , you can open the file using a different file descriptor:
while read -u 10 p; do
    ...
done 10<peptides.txt

Here, 10 is just an arbitrary number (different from 0, 1, 2).
Warren Young , answered Oct 5 '09 at 17:54
cat peptides.txt | while read line
do
    # do something with $line here
done

Stan Graves, answered Oct 5 '09 at 18:18
Option 1a: While loop: Single line at a time: Input redirection

#!/bin/bash
filename='peptides.txt'
echo Start
while read p; do
    echo $p
done < $filename

Option 1b: While loop: Single line at a time:
Open the file, read from a file descriptor (in this case file descriptor #4).

#!/bin/bash
filename='peptides.txt'
exec 4<$filename
echo Start
while read -u4 p ; do
    echo $p
done

Option 2: For loop: Read file into single variable and parse.
This syntax will parse "lines" based on any white space between the tokens. This still works because the given input file lines are single work tokens. If there were more than one token per line, then this method would not work as well. Also, reading the full file into a single variable is not a good strategy for large files.#!/bin/bash filename='peptides.txt' filelines=`cat $filename` echo Start for line in $filelines ; do echo $line donemightypile , answered Oct 4 '13 at 13:30
This is no better than other answers, but is one more way to get the job done in a file without spaces (see comments). I find that I often need one-liners to dig through lists in text files without the extra step of using separate script files.

for word in $(cat peptides.txt); do echo $word; done

This format allows me to put it all in one command line. Change the "echo $word" portion to whatever you want and you can issue multiple commands separated by semicolons. The following example uses the file's contents as arguments into two other scripts you may have written.
for word in $(cat peptides.txt); do cmd_a.sh $word; cmd_b.py $word; done

Or if you intend to use this like a stream editor (learn sed), you can dump the output to another file as follows.
for word in $(cat peptides.txt); do cmd_a.sh $word; cmd_b.py $word; done > outfile.txt

I've used these as written above because I have used text files where I've created them with one word per line. (See comments.) If you have spaces that you don't want splitting your words/lines, it gets a little uglier, but the same command still works as follows:
OLDIFS=$IFS; IFS=$'\n'; for line in $(cat peptides.txt); do cmd_a.sh $line; cmd_b.py $line; done > outfile.txt; IFS=$OLDIFS

This just tells the shell to split on newlines only, not spaces, then returns the environment back to what it was previously. At this point, you may want to consider putting it all into a shell script rather than squeezing it all into a single line, though.
Best of luck!
Jahid , answered Jun 9 '15 at 15:09
Use a while loop, like this:

while IFS= read -r line; do
    echo "$line"
done <file

Notes:
- If you don't set IFS properly, you will lose indentation.
- You should almost always use the -r option with read.
- Don't read lines with for.

codeforester, answered Jan 14 at 3:30
A few more things not covered by other answers:

Reading from a delimited file

# ':' is the delimiter here, and there are three fields on each line in the file
# IFS set below is restricted to the context of `read`, it doesn't affect any other code
while IFS=: read -r field1 field2 field3; do
    # process the fields
    # if the line has less than three fields, the missing fields will be set to an empty string
    # if the line has more than three fields, `field3` will get all the values, including the third field plus the delimiter(s)
done < input.txt

Reading from more than one file at a time

while read -u 3 -r line1 && read -u 4 -r line2; do
    # process the lines
    # note that the loop will end when we reach EOF on either of the files, because of the `&&`
done 3< input1.txt 4< input2.txt

Reading a whole file into an array (Bash version 4+)

readarray -t my_array < my_file

or
mapfile -t my_array < my_file

And then
for line in "${my_array[@]}"; do # process the lines doneAnjul Sharma , answered Mar 8 '16 at 16:10
If you don't want your read to be broken by the newline character, use:

#!/bin/bash
while IFS='' read -r line || [[ -n "$line" ]]; do
    echo "$line"
done < "$1"

Then run the script with the file name as a parameter.
Sine , answered Nov 14 '13 at 14:23
#!/bin/bash
#
# Change the file name from "test" to desired input file
# (The comments in bash are prefixed with #'s)
for x in $(cat test.txt)
do
    echo $x
done

dawg, answered Feb 3 '16 at 19:15
Suppose you have this file:

$ cat /tmp/test.txt
Line 1
    Line 2 has leading space
Line 3 followed by blank line

Line 5 (follows a blank line) and has trailing space    
Line 6 has no ending CR

There are four elements that will alter the meaning of the file output read by many Bash solutions:
- The blank line 4;
- Leading or trailing spaces on two lines;
- Maintaining the meaning of individual lines (i.e., each line is a record);
- Line 6 is not terminated with a CR.
If you want to read the text file line by line, including blank lines and final lines without a terminating CR, you must use a while loop and you must have an alternate test for the final line.
Here are the methods that may change the file (in comparison to what cat returns):

1) Lose the last line and leading and trailing spaces:
$ while read -r p; do printf "%s\n" "'$p'"; done </tmp/test.txt
'Line 1'
'Line 2 has leading space'
'Line 3 followed by blank line'
''
'Line 5 (follows a blank line) and has trailing space'

(If you do
while IFS= read -r p; do printf "%s\n" "'$p'"; done </tmp/test.txt
instead, you preserve the leading and trailing spaces but still lose the last line if it is not terminated with CR.)

2) Using command substitution with
cat reads the entire file in one gulp and loses the meaning of individual lines:

$ for p in "$(cat /tmp/test.txt)"; do printf "%s\n" "'$p'"; done
'Line 1
    Line 2 has leading space
Line 3 followed by blank line

Line 5 (follows a blank line) and has trailing space    
Line 6 has no ending CR'

(If you remove the " from $(cat /tmp/test.txt) you read the file word by word rather than in one gulp. Also probably not what is intended...)
The most robust and simplest way to read a file line-by-line and preserve all spacing is:
$ while IFS= read -r line || [[ -n $line ]]; do printf "'%s'\n" "$line"; done </tmp/test.txt
'Line 1'
'    Line 2 has leading space'
'Line 3 followed by blank line'
''
'Line 5 (follows a blank line) and has trailing space    '
'Line 6 has no ending CR'

If you want to strip leading and trailing spaces, remove the IFS= part:

$ while read -r line || [[ -n $line ]]; do printf "'%s'\n" "$line"; done </tmp/test.txt
'Line 1'
'Line 2 has leading space'
'Line 3 followed by blank line'
''
'Line 5 (follows a blank line) and has trailing space'
'Line 6 has no ending CR'

(A text file without a terminating
\n, while fairly common, is considered broken under POSIX. If you can count on the trailing \n you do not need || [[ -n $line ]] in the while loop.)

More at the BASH FAQ.
Here is my real-life example of how to loop over the lines of another program's output, check for substrings, drop double quotes from a variable, and use that variable outside of the loop. I guess quite a few people ask these questions sooner or later.

## Parse FPS from first video stream, drop quotes from fps variable
## streams.stream.0.codec_type="video"
## streams.stream.0.r_frame_rate="24000/1001"
## streams.stream.0.avg_frame_rate="24000/1001"
FPS=unknown
while read -r line; do
    if [[ $FPS == "unknown" ]] && [[ $line == *".codec_type=\"video\""* ]]; then
        echo ParseFPS $line
        FPS=parse
    fi
    if [[ $FPS == "parse" ]] && [[ $line == *".r_frame_rate="* ]]; then
        echo ParseFPS $line
        FPS=${line##*=}
        FPS="${FPS%\"}"
        FPS="${FPS#\"}"
    fi
done <<< "$(ffprobe -v quiet -print_format flat -show_format -show_streams -i "$input")"
if [ "$FPS" == "unknown" ] || [ "$FPS" == "parse" ]; then
    echo ParseFPS Unknown frame rate
fi
echo Found $FPS

Declaring the variable outside of the loop, setting its value inside, and using it outside of the loop requires the done <<< "$(...)" syntax. The application needs to be run within the context of the current console. Quotes around the command keep the newlines of the output stream.
The loop matches for substrings, then reads the name=value pair, splits off the right-side part after the last = character, drops the first quote, drops the last quote, and we have a clean value to be used elsewhere.
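The quote-stripping used above is plain parameter expansion and can be tested in isolation; a minimal sketch with an invented sample value:

line='streams.stream.0.r_frame_rate="24000/1001"'
FPS=${line##*=}    # strip everything up to the last '=': "24000/1001" (quotes included)
FPS="${FPS%\"}"    # drop the trailing double quote
FPS="${FPS#\"}"    # drop the leading double quote
echo "$FPS"        # 24000/1001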
Aug 29, 2017 | askubuntu.com
If you actually need the output of the .bash_history file, replace history with cat ~/.bash_history in all of the commands below.

If you actually want the commands without numbers in front, use this command instead of history:

history | cut -d' ' -f 4-
Jul 29, 2017 | unix.stackexchange.com
Amelio Vazquez-Reina, asked May 19 '14

I read here that the purpose of export in a shell is to make the variable available to sub-processes started from the shell.

However, I have also read here and here that "Processes inherit their environment from their parent (the process which started them)."

If this is the case, why do we need export? What am I missing?

Are shell variables not part of the environment by default? What is the difference?
Your assumption is that all shell variables are in the environment. This is incorrect. The export command is what defines a name to be in the environment at all. Thus:

a=1
b=2
export b

results in the current shell knowing that $a expands to 1 and $b to 2, but subprocesses will not know anything about a because it is not part of the environment (even in the current shell).

Some useful tools:

- set : Useful for viewing the current shell's parameters, exported or not.
- set -k : Sets assigned args in the environment. Consider f() { set -k; env; }; f a=1
- export : Tells the shell to put a name in the environment. Export and assignment are two entirely different operations.
- env : As an external command, env can only tell you about the inherited environment; thus, it's useful for sanity checking.
- env -i : Useful for clearing the environment before starting a subprocess.

Alternatives to export:

- name=val command  # Assignment before a command exports that name to that command.
- declare/local -x name  # Exports name; particularly useful in shell functions when you want to avoid exposing the name to outside scope.

There's a difference between shell variables and environment variables. If you define a shell variable without exporting it, it is not added to the process's environment and thus not inherited by its children.

Using export you tell the shell to add the shell variable to the environment. You can test this using printenv (which just prints its environment to stdout; since it's a child process, you see the effect of exporting variables):

#!/bin/sh
MYVAR="my cool variable"
echo "Without export:"
printenv | grep MYVAR
echo "With export:"
export MYVAR
printenv | grep MYVAR

A variable, once exported, is part of the environment. PATH is exported in the shell itself, while custom variables can be exported as needed.

... ... ..
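A minimal sketch of the name=val command alternative listed above (the variable and command are arbitrary choices): the assignment is placed in the environment of that single command only and does not persist in the calling shell:

GREETING=hello bash -c 'echo "child sees: $GREETING"'   # child sees: hello
echo "parent sees: ${GREETING:-<unset>}"                # parent sees: <unset>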
Jul 29, 2017 | superuser.com
I am using startx to start the graphical environment. I have a very simple .xinitrc which I will add things to as I set up the environment, but for now it is as follows:

catwm &   # Just a basic window manager, for testing.
xterm

The reason I background the WM and foreground the terminal, and not the other way around as is often done, is because I would like to be able to come back to the virtual text console after typing exit in xterm. This appears to work as described.

The problem is that the PS1 variable, which currently is set to my preference in /etc/profile.d/user.sh (which is sourced from /etc/profile supplied by the distro), does not appear to propagate to the environment of the xterm mentioned above. The relevant process tree is as follows:

\_ bash
    \_ xinit /home/user/.xinitrc -- /etc/X11/xinit/xserverrc -auth /tmp/serverauth.ggJna3I0vx
        \_ /usr/bin/X -nolisten tcp -auth /tmp/serverauth.ggJna3I0vx vt1
            \_ sh /home/user/.xinitrc
                \_ /home/user/catwm
                \_ xterm
                    \_ bash
The shell started by xterm appears to be interactive; the shell executing .xinitrc, however, is not. I am OK with both, since the assumptions about interactivity seem to be perfectly valid, but now I have a non-interactive shell that spawns an interactive shell indirectly, and the interactive shell has no chance to automatically inherit the prompt, because the prompt was unset or otherwise made unavailable higher up the process tree.

How do I go about getting my prompt back?
amn, asked Oct 21 '13 at 9:51

Commands env and export list only variables which are exported. $PS1 is usually not exported. Try echo $PS1 in your shell to see the actual value of $PS1.

Non-interactive shells usually do not have $PS1. Non-interactive bash explicitly unsets $PS1. You can check whether bash is interactive with echo $-. If the output contains i, then it is interactive. You can explicitly start an interactive shell by using the option on the command line: bash -i. A shell started with -c is not interactive.

The /etc/profile script is read for a login shell. You can start the shell as a login shell with: bash -l.

With the bash shell, the scripts /etc/bash.bashrc and ~/.bashrc are usually used to set $PS1. Those scripts are sourced when an interactive non-login shell is started. This is the case for your shell in the xterm.

See Setting the PS? Strings Permanently
Possible solutions
- Start the shell inside xterm as a login shell: bash -l. Check that /etc/profile and ~/.profile do not contain code which should be executed only after login. Slight modifications of the scripts may be needed.
- Use a different shell. For example, dash does not unset $PS1. You can use such a shell just as the non-interactive shell which will run the scripts up to xterm.
- Give up the strict POSIX compliance and use the bash-standard place for setting $PS1: /etc/bash.bashrc or ~/.bashrc.
- Give up the strict POSIX compliance and source your own startup script like: bash --rcfile <(echo "PS1=$PS1save") -i
- Start the intermediate shells from startx till xterm as interactive shells (bash -i). Unfortunately this can have some side effects and I would not do this.
pabouk, answered Oct 21 '13 at 11:19
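As a quick way to observe the interactivity and login rules discussed above (a sketch; the echoed text is arbitrary), you can source a probe like this from any of the startup files mentioned and watch which shells print it:

# Probe: report whether this shell is interactive and/or a login shell
case $- in
    *i*) echo "interactive shell" ;;
    *)   echo "non-interactive shell" ;;
esac
shopt -q login_shell && echo "login shell" || echo "non-login shell"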
I am specifically avoiding setting PS1 in .bashrc or /etc/bash.bashrc (which is executed as well), to retain POSIX shell compatibility. These do not set or unset PS1. PS1 is set in /etc/profile.d/user.sh, which is sourced by /etc/profile. Indeed, this file is only executed for login shells; however, I do export PS1 from /etc/profile.d/user.sh exactly because I want propagation of my preferred value down the process tree. So it shouldn't matter which subshells are login and/or interactive ones then, should it?

amn, Oct 21 '13 at 11:32
It seems that bash removes the PS1 variable. What exactly do you want to achieve by "POSIX shell compatibility"? Do you want to be able to replace bash with a different POSIX-compliant shell and retain the same functionality? Based on my tests, bash removes PS1 when it is started as non-interactive. I can think of two simple solutions: 1. start the shell as a login shell with the -l option (pay attention to actions in the startup scripts which should be run only at login); 2. start the intermediate shells as interactive with the -i option.

pabouk, Oct 21 '13 at 12:00
I try to follow interfaces and specifications, not implementations, hence POSIX compatibility. That's important (to me). I already have one login shell: the one started by /usr/bin/login. I understand that a non-interactive shell doesn't need a prompt, but unsetting the variable is too much; I need the prompt in an interactive shell (spawned and used by xterm) later on. What am I doing wrong? I guess most people set their prompt in .bashrc, which is sourced by bash anyway, and so the prompt survives. I try to avoid .bashrc, however.

amn, Oct 22 '13 at 12:12
@amn: I have added various possible solutions to the reply. pabouk Oct 22 '13 at 16:46
Jul 29, 2017 | stackoverflow.com
user3718463 , asked Sep 27 '14 at 21:41
The Learning Bash Book mentions that a subshell will inherit only environment variables and file descriptors, etc., and that it will not inherit variables that are not exported:

$ var=15
$ (echo $var)
15
$ ./file   # this file includes the same command: echo $var
$

As I understand it, the shell will create two subshells, one for the () case and one for ./file. But why does the subshell in the () case see the var variable although it is not exported, while in the ./file case it does not?
I tried to use strace to figure out how this happens, and surprisingly I found that bash uses the same arguments for the clone system call, which means that both forked processes in the () and ./file cases should have the same process address space as the parent. So why is the variable visible to the subshell in the () case, while the same does not happen in the ./file case, although the same arguments are passed to the clone system call?
Alfe, answered Sep 27 '14 at 23:16

The subshell created using parentheses does not use an execve() call for the new process; the calling of the script does. At this point the variables from the parent shell are handled differently: execve() passes a deliberate set of variables (the script-calling case), while not calling execve() (the parentheses case) leaves the complete set of variables intact.

Your probing with strace should have shown exactly that difference; if you did not see it, I can only assume that you made one of several possible mistakes. I will just strip down what I did to show the difference, then you can decide for yourself where your error was.

... ... ...
Nicolas Albert , answered Sep 27 '14 at 21:43
You have to export your var for the child process:

export var=15

Once exported, the variable is available to all child processes at their launch time (not at export time).

var=15
export var

is the same as

export var
var=15

is the same as

export var=15

Export can be cancelled using unset. Sample: unset var.

user3718463, answered Sep 27 '14 at 23:11
The solution to this mystery is that subshells inherit everything from the parent shell, including all shell variables, because they are simply created with fork or clone, so they start with a copy of the parent shell's memory. That's why this works:

$ var=15
$ (echo $var)
15

But in the ./file case, the fork is followed by an exec or execv system call, which clears all the previous parent variables while keeping the environment variables. You can check this with strace, using -f to monitor the child subshell, and you will find that there is a call to execv.
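A minimal sketch reproducing the difference (the script path is an arbitrary choice): the parenthesized subshell is a fork without exec, while running a script is a fork followed by exec:

var=15                          # shell variable, not exported

( echo "subshell sees: $var" )  # fork only: prints 15

cat > /tmp/demo.sh <<'EOF'
#!/bin/bash
echo "script sees: '$var'"
EOF
chmod +x /tmp/demo.sh
/tmp/demo.sh                    # fork + exec: prints ''
export var
/tmp/demo.sh                    # now prints '15', since var is in the environment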
Mar 03, 2014 | www.digitalocean.com
Introduction
When interacting with your server through a shell session, there are many pieces of information that your shell compiles to determine its behavior and access to resources. Some of these settings are contained within configuration settings and others are determined by user input.
One way that the shell keeps track of all of these settings and details is through an area it maintains called the environment . The environment is an area that the shell builds every time that it starts a session that contains variables that define system properties.
In this guide, we will discuss how to interact with the environment and read or set environmental and shell variables interactively and through configuration files. We will be using an Ubuntu 12.04 VPS as an example, but these details should be relevant on any Linux system.
How the Environment and Environmental Variables Work
Every time a shell session spawns, a process takes place to gather and compile information that should be available to the shell process and its child processes. It obtains the data for these settings from a variety of different files and settings on the system.
Basically the environment provides a medium through which the shell process can get or set settings and, in turn, pass these on to its child processes.
The environment is implemented as strings that represent key-value pairs. If multiple values are passed, they are typically separated by colon (:) characters. Each pair will generally look something like this:

KEY=value1:value2:...

If the value contains significant white space, quotations are used:

KEY="value with spaces"

The keys in these scenarios are variables. They can be one of two types, environmental variables or shell variables.
Environmental variables are variables that are defined for the current shell and are inherited by any child shells or processes. Environmental variables are used to pass information into processes that are spawned from the shell.
Shell variables are variables that are contained exclusively within the shell in which they were set or defined. They are often used to keep track of ephemeral data, like the current working directory.
By convention, these types of variables are usually defined using all capital letters. This helps users distinguish environmental variables within other contexts.
Printing Shell and Environmental Variables
Each shell session keeps track of its own shell and environmental variables. We can access these in a few different ways.
We can see a list of all of our environmental variables by using the env or printenv commands. In their default state, they should function exactly the same:

printenv
SHELL=/bin/bash
TERM=xterm
USER=demouser
LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:ca:...
MAIL=/var/mail/demouser
PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
PWD=/home/demouser
LANG=en_US.UTF-8
SHLVL=1
HOME=/home/demouser
LOGNAME=demouser
LESSOPEN=| /usr/bin/lesspipe %s
LESSCLOSE=/usr/bin/lesspipe %s %s
_=/usr/bin/printenv
This is fairly typical of the output of both printenv and env. The difference between the two commands is only apparent in their more specific functionality. For instance, with printenv, you can request the values of individual variables:

printenv SHELL
/bin/bash
On the other hand, env lets you modify the environment that programs run in by passing a set of variable definitions into a command, like this:

env VAR1="blahblah" command_to_run command_options
Since, as we learned above, child processes typically inherit the environmental variables of the parent process, this gives you the opportunity to override values or add additional variables for the child.
As you can see from the output of our printenv command, there are quite a few environmental variables set up through our system files and processes without our input.

These show the environmental variables, but how do we see shell variables?
The set command can be used for this. If we type set without any additional parameters, we will get a list of all shell variables, environmental variables, local variables, and shell functions:

set
BASH=/bin/bash
BASHOPTS=checkwinsize:cmdhist:expand_aliases:extglob:extquote:force_fignore:histappend:interactive_comments:login_shell:progcomp:promptvars:sourcepath
BASH_ALIASES=()
BASH_ARGC=()
BASH_ARGV=()
BASH_CMDS=()
. . .
This is usually a huge list. You probably want to pipe it into a pager program to deal with the amount of output easily:
set | less
The amount of additional information that we receive back is a bit overwhelming. We probably do not need to know all of the bash functions that are defined, for instance.
We can clean up the output by specifying that set should operate in POSIX mode, which won't print the shell functions. We can execute this in a sub-shell so that it does not change our current environment:

(set -o posix; set)
This will list all of the environmental and shell variables that are defined.
We can attempt to compare this output with the output of the env or printenv commands to try to get a list of only shell variables, but this will be imperfect due to the different ways that these commands output information:

comm -23 <(set -o posix; set | sort) <(env | sort)
This will likely still include a few environmental variables, due to the fact that the set command outputs quoted values, while the printenv and env commands do not quote the values of strings.

This should still give you a good idea of the environmental and shell variables that are set in your session.
These variables are used for all sorts of things. They provide an alternative way of setting persistent values for the session between processes, without writing changes to a file.
Common Environmental and Shell Variables
Some environmental and shell variables are very useful and are referenced fairly often.
Here are some common environmental variables that you will come across:
- SHELL : This describes the shell that will be interpreting any commands you type in. In most cases, this will be bash by default, but other values can be set if you prefer other options.
- TERM : This specifies the type of terminal to emulate when running the shell. Different hardware terminals can be emulated for different operating requirements. You usually won't need to worry about this though.
- USER : The current logged in user.
- PWD : The current working directory.
- OLDPWD : The previous working directory. This is kept by the shell in order to switch back to your previous directory by running cd -.
- LS_COLORS : This defines color codes that are used to optionally add colored output to the ls command. This is used to distinguish different file types and provide more info to the user at a glance.
- MAIL : The path to the current user's mailbox.
- PATH : A list of directories that the system will check when looking for commands. When a user types in a command, the system will check directories in this order for the executable.
- LANG : The current language and localization settings, including character encoding.
- HOME : The current user's home directory.
- _ : The most recent previously executed command.
In addition to these environmental variables, some shell variables that you'll often see are:
- BASHOPTS : The list of options that were used when bash was executed. This can be useful for finding out if the shell environment will operate in the way you want it to.
- BASH_VERSION : The version of bash being executed, in human-readable form.
- BASH_VERSINFO : The version of bash, in machine-readable output.
- COLUMNS : The number of columns wide that are being used to draw output on the screen.
- DIRSTACK : The stack of directories that are available with the pushd and popd commands.
- HISTFILESIZE : Number of lines of command history stored to a file.
- HISTSIZE : Number of lines of command history allowed in memory.
- HOSTNAME : The hostname of the computer at this time.
- IFS : The internal field separator to separate input on the command line. By default, this is a space.
- PS1 : The primary command prompt definition. This is used to define what your prompt looks like when you start a shell session. PS2 is used to declare secondary prompts for when a command spans multiple lines.
- SHELLOPTS : Shell options that can be set with the set option.
- UID : The UID of the current user.
Setting Shell and Environmental Variables

To better understand the difference between shell and environmental variables, and to introduce the syntax for setting these variables, we will do a small demonstration.
Creating Shell Variables
We will begin by defining a shell variable within our current session. This is easy to accomplish; we only need to specify a name and a value. We'll adhere to the convention of keeping all caps for the variable name, and set it to a simple string.
TEST_VAR='Hello World!'
Here, we've used quotations since the value of our variable contains a space. Furthermore, we've used single quotes because the exclamation point is a special character in the bash shell that normally expands to the bash history if it is not escaped or put into single quotes.
We now have a shell variable. This variable is available in our current session, but will not be passed down to child processes.
We can see this by grepping for our new variable within the set output:

set | grep TEST_VAR
TEST_VAR='Hello World!'
We can verify that this is not an environmental variable by trying the same thing with printenv:

printenv | grep TEST_VAR
No output should be returned.
Let's take this as an opportunity to demonstrate a way of accessing the value of any shell or environmental variable.
echo $TEST_VAR
Hello World!
As you can see, you reference the value of a variable by preceding it with a $ sign. The shell takes this to mean that it should substitute the value of the variable when it comes across this.

So now we have a shell variable. It shouldn't be passed on to any child processes. We can spawn a new bash shell from within our current one to demonstrate:
bash
echo $TEST_VAR
If we type bash to spawn a child shell, and then try to access the contents of the variable, nothing will be returned. This is what we expected.

Get back to our original shell by typing exit:

exit

Creating Environmental Variables
Now, let's turn our shell variable into an environmental variable. We can do this by exporting the variable. The command to do so is appropriately named:
export TEST_VAR
This will change our variable into an environmental variable. We can check this by checking our environmental listing again:
printenv | grep TEST_VAR
TEST_VAR=Hello World!
This time, our variable shows up. Let's try our experiment with our child shell again:
bash
echo $TEST_VAR
Hello World!
Great! Our child shell has received the variable set by its parent. Before we exit this child shell, let's try to export another variable. We can set environmental variables in a single step like this:
export NEW_VAR="Testing export"
Test that it's exported as an environmental variable:
printenv | grep NEW_VAR
NEW_VAR=Testing export
Now, let's exit back into our original shell:
exit
Let's see if our new variable is available:
echo $NEW_VAR
Nothing is returned.
This is because environmental variables are only passed to child processes. There isn't a built-in way of setting environmental variables of the parent shell. This is good in most cases and prevents programs from affecting the operating environment from which they were called.
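A minimal sketch of the standard workaround (the file and variable names are invented): since a child process cannot modify its parent's environment, you source a file of assignments instead, which runs it in the current shell rather than in a child:

printf 'export DEPLOY_ENV="staging"\n' > vars.sh

bash vars.sh            # runs in a child process; the parent is unchanged
echo "${DEPLOY_ENV:-}"  # prints nothing

. ./vars.sh             # sourced: runs in the current shell itself
echo "$DEPLOY_ENV"      # staging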
The NEW_VAR variable was set as an environmental variable in our child shell. This variable would be available to itself and any of its child shells and processes. When we exited back into our main shell, that environment was destroyed.

Demoting and Unsetting Variables
We still have our TEST_VAR variable defined as an environmental variable. We can change it back into a shell variable by typing:

export -n TEST_VAR
It is no longer an environmental variable:
printenv | grep TEST_VAR
However, it is still a shell variable:
set | grep TEST_VAR
TEST_VAR='Hello World!'
If we want to completely unset a variable, either shell or environmental, we can do so with the unset command:

unset TEST_VAR
We can verify that it is no longer set:
echo $TEST_VAR
Nothing is returned because the variable has been unset.
Setting Environmental Variables at Login
We've already mentioned that many programs use environmental variables to decide the specifics of how to operate. We do not want to have to set important variables up every time we start a new shell session, and we have already seen how many variables are already set upon login, so how do we make and define variables automatically?
This is actually a more complex problem than it initially seems, due to the numerous configuration files that the bash shell reads depending on how it is started.
The Difference between Login, Non-Login, Interactive, and Non-Interactive Shell Sessions
The bash shell reads different configuration files depending on how the session is started.
One distinction between different sessions is whether the shell is being spawned as a "login" or "non-login" session.
A login shell is a shell session that begins by authenticating the user. If you are signing into a terminal session or through SSH and authenticate, your shell session will be set as a "login" shell.
If you start a new shell session from within your authenticated session, like we did by calling the bash command from the terminal, a non-login shell session is started. You were not asked for your authentication details when you started your child shell.
An interactive shell session is a shell session that is attached to a terminal. A non-interactive shell session is one that is not attached to a terminal session.
So each shell session is classified as either login or non-login and interactive or non-interactive.
A normal session that begins with SSH is usually an interactive login shell. A script run from the command line is usually run in a non-interactive, non-login shell. A terminal session can be any combination of these two properties.
Whether a shell session is classified as a login or non-login shell has implications on which files are read to initialize the shell session.
A session started as a login session will read configuration details from the /etc/profile file first. It will then look for the first login shell configuration file in the user's home directory to get user-specific configuration details.

It reads the first file that it can find out of ~/.bash_profile, ~/.bash_login, and ~/.profile, and does not read any further files.

In contrast, a session defined as a non-login shell will read /etc/bash.bashrc and then the user-specific ~/.bashrc file to build its environment.

Non-interactive shells read the environmental variable called BASH_ENV and read the file specified to define the new environment.

Implementing Environmental Variables
As you can see, there are a variety of different files that we would usually need to look at for placing our settings.
This provides a lot of flexibility that can help in specific situations where we want certain settings in a login shell, and other settings in a non-login shell. However, most of the time we will want the same settings in both situations.
Fortunately, most Linux distributions configure the login configuration files to source the non-login configuration files. This means that you can define environmental variables that you want in both inside the non-login configuration files. They will then be read in both scenarios.
We will usually be setting user-specific environmental variables, and we usually will want our settings to be available in both login and non-login shells. This means that the place to define these variables is in the ~/.bashrc file.

Open this file now:
nano ~/.bashrc
This will most likely contain quite a bit of data already. Most of the definitions here are for setting bash options, which are unrelated to environmental variables. You can set environmental variables just like you would from the command line:
export VARNAME=value
We can then save and close the file. The next time you start a shell session, your environmental variable declaration will be read and passed on to the shell environment. You can force your current session to read the file now by typing:
source ~/.bashrc
If you need to set system-wide variables, you may want to think about adding them to /etc/profile, /etc/bash.bashrc, or /etc/environment.

Conclusion
Environmental and shell variables are always present in your shell sessions and can be very useful. They are an interesting way for a parent process to set configuration details for its children, and are a way of setting options outside of files.
This has many advantages in specific situations. For instance, some deployment mechanisms rely on environmental variables to configure authentication information. This is useful because it does not require keeping these in files that may be seen by outside parties.
There are plenty of other, more mundane, but more common scenarios where you will need to read or alter the environment of your system. These tools and techniques should give you a good foundation for making these changes and using them correctly.
By Justin Ellingwood
Jul 29, 2017 | stackoverflow.com
Adam Rosenfield , asked Jan 6 '09 at 3:58
I've used a number of different *nix-based systems over the years, and it seems like every flavor of Bash I use has a different algorithm for deciding which startup scripts to run. For the purposes of tasks like setting up environment variables and aliases and printing startup messages (e.g. MOTDs), which startup script is the appropriate place to do these?

What's the difference between putting things in .bashrc, .bash_profile, and .environment? I've also seen other files such as .login, .bash_login, and .profile; are these ever relevant? What are the differences in which ones get run when logging in physically, logging in remotely via ssh, and opening a new terminal window? Are there any significant differences across platforms (including Mac OS X (and its Terminal.app) and Cygwin Bash)?

Cos, answered Jan 6 '09 at 4:18
The main difference with shell config files is that some are only read by "login" shells (e.g. when you login from another host, or login at the text console of a local unix machine). These are the ones called, say, .login or .profile or .zlogin (depending on which shell you're using).

Then you have config files that are read by "interactive" shells (as in, ones connected to a terminal, or pseudo-terminal in the case of, say, a terminal emulator running under a windowing system). These are the ones with names like .bashrc, .tcshrc, .zshrc, etc.

bash complicates this in that .bashrc is only read by a shell that's both interactive and non-login, so you'll find most people end up telling their .bash_profile to also read .bashrc with something like

[[ -r ~/.bashrc ]] && . ~/.bashrc

Other shells behave differently: e.g. with zsh, .zshrc is always read for an interactive shell, whether it's a login one or not.

The manual page for bash explains the circumstances under which each file is read. Yes, behaviour is generally consistent between machines.
.profile is simply the login script filename originally used by /bin/sh. bash, being generally backwards-compatible with /bin/sh, will read .profile if one exists.

Johannes Schaub - litb, answered Jan 6 '09 at 15:21
That's simple. It's explained in man bash:

... ... ...
Login shells are the ones you log in with (so they are not started when merely starting up xterm, for example). There are other ways to login. For example, using an X display manager. Those have other ways to read and export environment variables at login time.
Also read the INVOCATION chapter in the manual. It says "The following paragraphs describe how bash executes its startup files."; I think that's spot-on :) It explains what an "interactive" shell is too.
.environment
. I suspect that's a file of your distribution, to set environment variables independent of the shell that you drive.Jonathan Leffler , answered Jan 6 '09 at 4:13
Classically, ~/.profile is used by Bourne Shell, and is probably supported by Bash as a legacy measure. Again, ~/.login and ~/.cshrc were used by C Shell; I'm not sure that Bash uses them at all.

The ~/.bash_profile would be used once, at login. The ~/.bashrc script is read every time a shell is started. This is analogous to ~/.cshrc for C Shell.

One consequence is that stuff in ~/.bashrc should be as lightweight (minimal) as possible to reduce the overhead when starting a non-login shell.

I believe the ~/.environment file is a compatibility file for Korn Shell.

Filip Ekberg, answered Jan 6 '09 at 4:03
I found information about .bashrc and .bash_profile here; to sum it up:

.bash_profile is executed when you login. Stuff you put in there might be your PATH and other important environment variables.
.bashrc is used for non-login shells. I'm not sure what that means. I know that RedHat executes it every time you start another shell (su to this user or simply calling bash again). You might want to put aliases in there, but again I am not sure what that means. I simply ignore it myself.
.profile is the equivalent of .bash_profile for the root. I think the name is changed to let other shells (csh, sh, tcsh) use it as well. (you don't need one as a user)
There is also .bash_logout which executes at, yeah, good guess... logout. You might want to stop daemons or even do a little housekeeping. You can also add "clear" there if you want to clear the screen when you log out.
Also there is a complete follow up on each of the configurations files here
These are probably even distro-dependent; not all distros choose to have each configuration file, and some have even more. But when they have the same name, they usually include the same content.
Rose Perrone , answered Feb 27 '12 at 0:22
According to Josh Staiger, Mac OS X's Terminal.app actually runs a login shell rather than a non-login shell by default for each new terminal window, calling .bash_profile instead of .bashrc.

He recommends:
Most of the time you don't want to maintain two separate config files for login and non-login shells; when you set a PATH, you want it to apply to both. You can fix this by sourcing .bashrc from your .bash_profile file, then putting PATH and common settings in .bashrc.
To do this, add the following lines to .bash_profile:
if [ -f ~/.bashrc ]; then
    source ~/.bashrc
fi
Now when you login to your machine from a console .bashrc will be called.
PolyThinker , answered Jan 6 '09 at 4:06
A good place to look is the man page of bash. Here's an online version. Look for the "INVOCATION" section.

seismick, answered May 21 '12 at 10:42
I have used Debian-family distros which appear to execute .profile, but not .bash_profile, whereas RHEL derivatives execute .bash_profile before .profile.

It seems to be a mess when you have to set up environment variables to work in any Linux OS.
Jul 29, 2017 | unix.stackexchange.com
Oli , asked Aug 26 '10 at 13:04
I consistently have more than one terminal open, anywhere from two to ten, doing various bits and bobs. Now let's say I restart and open up another set of terminals. Some remember certain things, some forget.

I want a history that:
- Remembers everything from every terminal
- Is instantly accessible from every terminal (e.g. if I ls in one, switch to another already-running terminal and then press up, ls shows up)
- Doesn't forget a command if there are spaces at the front of the command.
Anything I can do to make bash work more like that?
Pablo R. , answered Aug 26 '10 at 14:37
# Avoid duplicates
export HISTCONTROL=ignoredups:erasedups

# When the shell exits, append to the history file instead of overwriting it
shopt -s histappend

# After each command, append to the history file and reread it
export PROMPT_COMMAND="${PROMPT_COMMAND:+$PROMPT_COMMAND$'\n'}history -a; history -c; history -r"

kch, answered Sep 19 '08 at 17:49
So, this is all my history-related .bashrc stuff:

export HISTCONTROL=ignoredups:erasedups  # no duplicate entries
export HISTSIZE=100000                   # big big history
export HISTFILESIZE=100000               # big big history
shopt -s histappend                      # append to history, don't overwrite it

# Save and reload the history after each command finishes
export PROMPT_COMMAND="history -a; history -c; history -r; $PROMPT_COMMAND"

Tested with bash 3.2.17 on Mac OS X 10.5, bash 4.1.7 on 10.6.
lesmana , answered Jun 16 '10 at 16:11
Here is my attempt at Bash session history sharing. This will enable history sharing between bash sessions in a way that the history counter does not get mixed up and history expansion like !number will work (with some constraints).

Using Bash version 4.1.5 under Ubuntu 10.04 LTS (Lucid Lynx).

HISTSIZE=9000
HISTFILESIZE=$HISTSIZE
HISTCONTROL=ignorespace:ignoredups

_bash_history_sync() {
    builtin history -a         #1
    HISTFILESIZE=$HISTSIZE     #2
    builtin history -c         #3
    builtin history -r         #4
}

history() {                    #5
    _bash_history_sync
    builtin history "$@"
}

PROMPT_COMMAND=_bash_history_sync

Explanation:
- Append the just-entered line to $HISTFILE (default is .bash_history). This will cause $HISTFILE to grow by one line.
- Setting the special variable $HISTFILESIZE to some value will cause Bash to truncate $HISTFILE to be no longer than $HISTFILESIZE lines by removing the oldest entries.
- Clear the history of the running session. This will reduce the history counter by the amount of $HISTSIZE.
- Read the contents of $HISTFILE and insert them into the current running session history. This will raise the history counter by the number of lines in $HISTFILE. Note that the line count of $HISTFILE is not necessarily $HISTFILESIZE.
- The history() function overrides the builtin history to make sure that the history is synchronised before it is displayed. This is necessary for the history expansion by number (more about this later).

More explanation:
- Step 1 ensures that the command from the current running session gets written to the global history file.
- Step 4 ensures that the commands from the other sessions get read into the current session history.
- Because step 4 will raise the history counter, we need to reduce the counter in some way. This is done in step 3.
- In step 3 the history counter is reduced by $HISTSIZE. In step 4 the history counter is raised by the number of lines in $HISTFILE. In step 2 we make sure that the line count of $HISTFILE is exactly $HISTSIZE (this means that $HISTFILESIZE must be the same as $HISTSIZE).

About the constraints of the history expansion:

When using history expansion by number, you should always look up the number immediately before using it. That means no bash prompt display between looking up the number and using it. That usually means no enter and no ctrl+c.
Generally, once you have more than one Bash session, there is no guarantee whatsoever that a history expansion by number will retain its value between two Bash prompt displays, because when PROMPT_COMMAND is executed the history from all other Bash sessions is integrated into the history of the current session. If any other bash session has a new command, then the history numbers of the current session will be different.
Usually I use the history expansion by number like this
$ history | grep something    # note the number
$ !number

I recommend using the following Bash options.
## reedit a history substitution line if it failed
shopt -s histreedit
## edit a recalled history line before executing
shopt -s histverify

Strange bugs:

Running the history command piped to anything will result in that command being listed in the history twice. For example:
$ history | head
$ history | tail
$ history | grep foo
$ history | true
$ history | false

All will be listed in the history twice. I have no idea why.
Ideas for improvements:
- Modify the function _bash_history_sync() so it does not execute every time. For example, it should not execute after a CTRL+C on the prompt. I often use CTRL+C to discard a long command line when I decide that I do not want to execute that line. Sometimes I have to use CTRL+C to stop a Bash completion script.
- Commands from the current session should always be the most recent in the history of the current session. This will also have the side effect that a given history number keeps its value for history entries from this session.
Maciej Piechotka , answered Aug 26 '10 at 13:20
I'm not aware of any way to do this using bash. But it's one of the most popular features of zsh. Personally I prefer zsh over bash, so I recommend trying it.

Here's the part of my .zshrc that deals with history:

SAVEHIST=10000 # Number of entries
HISTSIZE=10000
HISTFILE=~/.zsh/history # File
setopt APPEND_HISTORY # Don't erase history
setopt EXTENDED_HISTORY # Add additional data to history like timestamp
setopt INC_APPEND_HISTORY # Add immediately
setopt HIST_FIND_NO_DUPS # Don't show duplicates in search
setopt HIST_IGNORE_SPACE # Don't preserve spaces. You may want to turn it off
setopt NO_HIST_BEEP # Don't beep
setopt SHARE_HISTORY # Share history between session/terminals

Chris Down, answered Nov 25 '11 at 15:46
To do this, you'll need to add two lines to your ~/.bashrc:

shopt -s histappend
PROMPT_COMMAND="history -a;history -c;history -r;$PROMPT_COMMAND"

From man bash:

If the histappend shell option is enabled (see the description of shopt under SHELL BUILTIN COMMANDS below), the lines are appended to the history file, otherwise the history file is overwritten.
Schof , answered Sep 19 '08 at 19:38
You can edit your BASH prompt to run the "history -a" and "history -r" that Muerr suggested:

savePS1=$PS1

(in case you mess something up, which is almost guaranteed)
PS1=$savePS1`history -a;history -r`

(Note that these are back-ticks; they'll run history -a and history -r on every prompt. Since they don't output any text, your prompt will be unchanged.)
Once you've got your PS1 variable set up the way you want, set it permanently it in your ~/.bashrc file.
If you want to go back to your original prompt while testing, do:
PS1=$savePS1

I've done basic testing on this to ensure that it sort of works, but can't speak to any side effects from running
history -a;history -r on every prompt.

pts, answered Mar 25 '11 at 17:40
If you need a bash or zsh history synchronizing solution which also solves the problem below, then see it at http://ptspts.blogspot.com/2011/03/how-to-automatically-synchronize-shell.html

The problem is the following: I have two shell windows A and B. In shell window A, I run sleep 9999, and (without waiting for the sleep to finish) in shell window B, I want to be able to see sleep 9999 in the bash history.

The reason why most other solutions here won't solve this problem is that they are writing their history changes to the history file using PROMPT_COMMAND or PS1, both of which execute too late, only after the sleep 9999 command has finished.

jtimberman, answered Sep 19 '08 at 17:38
You can use history -a to append the current session's history to the histfile, then use history -r on the other terminals to read the histfile.
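If you prefer syncing on demand rather than on every prompt, a minimal sketch (the alias name is an arbitrary choice) that combines the two steps; history -n reads only the lines appended to the history file since the last read:

# In ~/.bashrc: share history between terminals manually, on request
alias hsync='history -a; history -n'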
jmanning2k, answered Aug 26 '10 at 13:59

I can offer a fix for that last one: make sure the env variable HISTCONTROL does not specify "ignorespace" (or "ignoreboth").
Toby , answered Nov 20 '14 at 14:53
Here's an alternative that I use. It's cumbersome but it addresses the issue that @axel_c mentioned where sometimes you may want to have a separate history instance in each terminal (one for make, one for monitoring, one for vim, etc.).

I keep a separate appended history file that I constantly update. I have the following mapped to a hotkey:
history | grep -v history >> ~/master_history.txtThis appends all history from the current terminal to a file called master_history.txt in your home dir.
I also have a separate hotkey to search through the master history file:
cat /home/toby/master_history.txt | grep -iI use cat | grep because it leaves the cursor at the end to enter my regex. A less ugly way to do this would be to add a couple of scripts to your path to accomplish these tasks, but hotkeys work for my purposes. I also periodically will pull history down from other hosts I've worked on and append that history to my master_history.txt file.
It's always nice to be able to quickly search and find that tricky regex you used or that weird perl one-liner you came up with 7 months ago.
Yarek T , answered Jul 23 '15 at 9:05
Right, so finally this annoyed me enough to find a decent solution:

# Write history after each command
_bash_history_append() {
    builtin history -a
}
PROMPT_COMMAND="_bash_history_append; $PROMPT_COMMAND"

What this does is sort of an amalgamation of what was said in this thread, except that I don't understand why you would reload the global history after every command. I very rarely care about what happens in other terminals, but I always run a series of commands, say in one terminal:

make
ls -lh target/*.foo
scp target/artifact.foo vm:~/

(Simplified example)

And in another:

pv ~/test.data | nc vm:5000 >> output
less output
mv output output.backup1

No way I'd want the command to be shared.
rouble , answered Apr 15 at 17:43
Here is my enhancement to @lesmana's answer. The main difference is that concurrent windows don't share history. This means you can keep working in your windows without having context from other windows getting loaded into your current window.

If you explicitly type 'history', OR if you open a new window, then you get the history from all previous windows.

Also, I use this strategy to archive every command ever typed on my machine.

# Consistent and forever bash history
HISTSIZE=100000
HISTFILESIZE=$HISTSIZE
HISTCONTROL=ignorespace:ignoredups

_bash_history_sync() {
    builtin history -a         #1
    HISTFILESIZE=$HISTSIZE     #2
}

_bash_history_sync_and_reload() {
    builtin history -a         #1
    HISTFILESIZE=$HISTSIZE     #2
    builtin history -c         #3
    builtin history -r         #4
}

history() {                    #5
    _bash_history_sync_and_reload
    builtin history "$@"
}

export HISTTIMEFORMAT="%y/%m/%d %H:%M:%S "
PROMPT_COMMAND='history 1 >> ${HOME}/.bash_eternal_history'
PROMPT_COMMAND="_bash_history_sync;$PROMPT_COMMAND"

simotek , answered Jun 1 '14 at 6:02
I have written a script for setting a history file per session or task. It's based off the following:

# write existing history to the old file
history -a

# set new historyfile
export HISTFILE="$1"
export HISET=$1

# touch the new file to make sure it exists
touch $HISTFILE

# load new history file
history -r $HISTFILE

It doesn't necessarily save every history command, but it saves the ones that I care about, and it's easier to retrieve them than going through every command. My version also lists all history files and provides the ability to search through them all.
Full source: https://github.com/simotek/scripts-config/blob/master/hiset.sh
Litch , answered Aug 11 '15 at 0:15
I chose to put history in a file-per-tty, as multiple people can be working on the same server - separating each session's commands makes it easier to audit.

# Convert /dev/nnn/X or /dev/nnnX to "nnnX"
HISTSUFFIX=`tty | sed 's/\///g;s/^dev//g'`
# History file is now .bash_history_pts0
HISTFILE=".bash_history_$HISTSUFFIX"
HISTTIMEFORMAT="%y-%m-%d %H:%M:%S "
HISTCONTROL=ignoredups:ignorespace
shopt -s histappend
HISTSIZE=1000
HISTFILESIZE=5000

History now looks like:

user@host:~# test 123
user@host:~# test 5451
user@host:~# history
1  15-08-11 10:09:58 test 123
2  15-08-11 10:10:00 test 5451
3  15-08-11 10:10:02 history

With the files looking like:

user@host:~# ls -la .bash*
-rw------- 1 root root 4275 Aug 11 09:42 .bash_history_pts0
-rw------- 1 root root   75 Aug 11 09:49 .bash_history_pts1
-rw-r--r-- 1 root root 3120 Aug 11 10:09 .bashrc

fstang , answered Sep 10 '16 at 19:30
Here I will point out one problem with

export PROMPT_COMMAND="${PROMPT_COMMAND:+$PROMPT_COMMAND$'\n'}history -a; history -c; history -r"

and

PROMPT_COMMAND="$PROMPT_COMMAND;history -a; history -n"

If you run source ~/.bashrc, the $PROMPT_COMMAND will be like

"history -a; history -c; history -r history -a; history -c; history -r"

and

"history -a; history -n history -a; history -n"

This repetition occurs each time you run 'source ~/.bashrc'. You can check PROMPT_COMMAND after each time you run 'source ~/.bashrc' by running 'echo $PROMPT_COMMAND'.

You can see that some commands are apparently broken: "history -n history -a". But the good news is that it still works, because the other parts still form a valid command sequence (it just involves some extra cost due to executing some commands repeatedly, and is not so clean).

Personally I use the following simple version:

shopt -s histappend
PROMPT_COMMAND="history -a; history -c; history -r"

which has most of the functionality without the issue mentioned above.
Another point to make is that there is really nothing magic here. PROMPT_COMMAND is just a plain bash variable. The commands in it get executed before you get the bash prompt (the $ sign). For example, if your PROMPT_COMMAND is "echo 123" and you run "ls" in your terminal, the effect is like running "ls; echo 123".

$ PROMPT_COMMAND="echo 123"

output (just like running 'PROMPT_COMMAND="echo 123"; $PROMPT_COMMAND'):

123

Run the following:

$ echo 3

output:

3
123

"history -a" is used to write the history commands in memory to ~/.bash_history.
"history -c" is used to clear the history commands in memory.
"history -r" is used to read history commands from ~/.bash_history into memory.
See history command explanation here: http://ss64.com/bash/history.html
PS: As other users have pointed out, export is unnecessary. See: using export in .bashrc
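One way to avoid the duplication described above is to make the assignment idempotent, so re-sourcing ~/.bashrc is harmless; a minimal sketch:

shopt -s histappend
case "$PROMPT_COMMAND" in
    *"history -a"*) ;;  # already installed - sourcing .bashrc again changes nothing
    *) PROMPT_COMMAND="history -a; history -c; history -r${PROMPT_COMMAND:+; $PROMPT_COMMAND}" ;;
esac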
Hopping Bunny , answered May 13 '15 at 4:48
Here is the snippet from my .bashrc, with short explanations wherever needed:

# The following line ensures that history logs screen commands as well
shopt -s histappend

# This line makes the history file be rewritten and reread at each bash prompt
PROMPT_COMMAND="$PROMPT_COMMAND;history -a; history -n"

# Have lots of history
HISTSIZE=100000        # remember the last 100000 commands
HISTFILESIZE=100000    # start truncating commands after 100000 lines
HISTCONTROL=ignoreboth # ignoreboth is shorthand for ignorespace and ignoredups

The HISTFILESIZE and HISTSIZE are personal preferences and you can change them as per your tastes.
Mulki , answered Jul 24 at 20:49
This works for ZSH:

##############################################################################
# History Configuration for ZSH
##############################################################################
HISTSIZE=10000              # How many lines of history to keep in memory
HISTFILE=~/.zsh_history     # Where to save history to disk
SAVEHIST=10000              # Number of history entries to save to disk
#HISTDUP=erase              # Erase duplicates in the history file
setopt appendhistory        # Append history to the history file (no overwriting)
setopt sharehistory         # Share history across terminals
setopt incappendhistory     # Immediately append to the history file, not just when a term is killed
Jul 29, 2017 | stackoverflow.com
user1284631 , asked Jun 5 '13 at 8:44
Following some issues with scp (it did not like the presence of the bash bind command in my .bashrc file, apparently), I followed the advice of a clever guy on the Internet (I just cannot find that post right now) who put this at the top of his .bashrc file:

[[ ${-#*i} != ${-} ]] || return

in order to make sure that the bash initialization is NOT executed unless in an interactive session.

Now, that works. However, I am not able to figure out how it works. Could you enlighten me?

According to this answer, the $- is the current options set for the shell, and I know that the ${} is the so-called "substring" syntax for expanding variables.

However, I do not understand the ${-#*i} part. And why $-#*i is not the same as ${-#*i}.

blue , answered Jun 5 '13 at 8:49
${parameter#word}
${parameter##word}

The word is expanded to produce a pattern just as in filename expansion. If the pattern matches the beginning of the expanded value of parameter, then the result of the expansion is the expanded value of parameter with the shortest matching pattern (the '#' case) or the longest matching pattern (the '##' case) deleted.

If parameter is '@' or '*', the pattern removal operation is applied to each positional parameter in turn, and the expansion is the resultant list. If parameter is an array variable subscripted with '@' or '*', the pattern removal operation is applied to each member of the array in turn, and the expansion is the resultant list.
Source: http://www.gnu.org/software/bash/manual/html_node/Shell-Parameter-Expansion.html
So basically what happens in ${-#*i} is that *i is expanded, and if it matches the beginning of the value of $-, then the result of the whole expansion is $- with the shortest matching pattern between *i and $- deleted.

Example:

VAR="baioasd"
echo ${VAR#*i};

outputs oasd.
In your case:

If the shell is interactive, $- will contain the letter 'i', so when you strip the variable $- of the pattern *i you will get a string that is different from the original $- ([[ ${-#*i} != ${-} ]] yields true). If the shell is not interactive, $- does not contain the letter 'i', so the pattern *i does not match anything in $- and [[ ${-#*i} != $- ]] yields false, and the return statement is executed.

perreal , answered Jun 5 '13 at 8:53
See this:

To determine within a startup script whether or not Bash is running interactively, test the value of the '-' special parameter. It contains i when the shell is interactive.

Your substitution removes the string up to, and including, the i, and tests whether the substituted version is equal to the original string. They will be different if there is an i in ${-}.
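The same interactivity test is often written with case instead of pattern-removal expansion; a minimal sketch:

case $- in
    *i*) ;;        # interactive shell - carry on with the rest of .bashrc
    *) return ;;   # non-interactive - stop sourcing here
esac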
Jul 28, 2017 | stackoverflow.com
Possible Duplicate: What's the difference between .bashrc, .bash_profile, and .environment?
It seems that if I use

alias ls='ls -F'

inside of .bashrc on Mac OS X, then the newly created shell will not have that alias. I need to type bash again and that alias will be in effect.

And if I log into Linux on the hosting company, the .bashrc file has a comment line that says:

For non-login shell

and the .bash_profile file has a comment that says

for login shell

So where should aliases be written in? How come we separate the login shell and non-login shell?

Some webpages say to use .bash_aliases, but it doesn't work on Mac OS X, it seems.

Maggyero edited Apr 25 '16 at 16:24
The reason you separate the login and non-login shell is because the .bashrc file is reloaded every time you start a new copy of Bash. The .profile file is loaded only when you either log in or use the appropriate flag to tell Bash to act as a login shell.

Personally,

- I put my PATH setup into a .profile file (because I sometimes use other shells);
- I put my Bash aliases and functions into my .bashrc file;
- I put this

#!/bin/bash
# CRM .bash_profile Time-stamp: "2008-12-07 19:42"
#
echo "Loading ${HOME}/.bash_profile"
source ~/.profile  # get my PATH setup
source ~/.bashrc   # get my Bash aliases

in my .bash_profile file.

Oh, and the reason you need to type bash again to get the new alias is that Bash loads your .bashrc file when it starts but it doesn't reload it unless you tell it to. You can reload the .bashrc file (and not need a second shell) by typing

source ~/.bashrc

which loads the .bashrc file as if you had typed the commands directly to Bash.

lhunath answered May 24 '09 at 6:22
Check out http://mywiki.wooledge.org/DotFiles for an excellent resource on the topic aside from man bash.

Summary:
- You only log in once, and that's when ~/.bash_profile or ~/.profile is read and executed. Since everything you run from your login shell inherits the login shell's environment, you should put all your environment variables in there. Like LESS, PATH, MANPATH, LC_*, ... For an example, see: My .profile
- Once you log in, you can run several more shells. Imagine logging in, running X, and in X starting a few terminals with bash shells. That means your login shell started X, which inherited your login shell's environment variables, which started your terminals, which started your non-login bash shells. Your environment variables were passed along in the whole chain, so your non-login shells don't need to load them anymore. Non-login shells only execute ~/.bashrc, not ~/.profile or ~/.bash_profile, for this exact reason, so in there define everything that only applies to bash. That's functions, aliases, bash-only variables like HISTSIZE (this is not an environment variable, don't export it!), shell options with set and shopt, etc. For an example, see: My .bashrc
- Now, as part of UNIX peculiarity, a login shell does NOT execute ~/.bashrc but only ~/.profile or ~/.bash_profile, so you should source that one manually from the latter. You'll see me do that in my ~/.profile too: source ~/.bashrc.
From the bash manpage:

When bash is invoked as an interactive login shell, or as a non-interactive shell with the --login option, it first reads and executes commands from the file /etc/profile, if that file exists. After reading that file, it looks for ~/.bash_profile, ~/.bash_login, and ~/.profile, in that order, and reads and executes commands from the first one that exists and is readable. The --noprofile option may be used when the shell is started to inhibit this behavior.

When a login shell exits, bash reads and executes commands from the file ~/.bash_logout, if it exists.

When an interactive shell that is not a login shell is started, bash reads and executes commands from ~/.bashrc, if that file exists. This may be inhibited by using the --norc option. The --rcfile file option will force bash to read and execute commands from file instead of ~/.bashrc.

Thus, if you want to get the same behavior for both login shells and interactive non-login shells, you should put all of your commands in either .bashrc or .bash_profile, and then have the other file source the first one.
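In its simplest form, that arrangement is a ~/.bash_profile that just delegates to ~/.bashrc; a minimal sketch:

# ~/.bash_profile
[ -f ~/.bashrc ] && . ~/.bashrc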
Adam Rosenfield May 24 '09 at 2:46

.bash_profile is loaded for a "login shell". I am not sure what that would be on OS X, but on Linux that is either X11 or a virtual terminal.

.bashrc is loaded every time you run Bash. That is where you should put stuff you want loaded whenever you open a new Terminal.app window.

I personally put everything in .bashrc so that I don't have to restart the application for changes to take effect.
www.linuxquestions.org
Mohtek
I feel stupid: declare not found in bash scripting? I was anxious to get my feet wet, and I'm only up to my toes before I'm stuck...this seems very very easy but I'm not sure what I've done wrong. Below is the script and its output. What the heck am I missing?
______________________________________________________
#!/bin/bash
declare -a PROD[0]="computers" PROD[1]="HomeAutomation"
printf "${ PROD[*]}"
_______________________________________________________

products.sh: 6: declare: not found
products.sh: 8: Syntax error: Bad substitution

I ran what you posted (but at the command line, not in a script, though that should make no significant difference), and got this:
Code:
-bash: ${ PROD[*]}: bad substitution
In other words, I couldn't reproduce your first problem, the "declare: not found" error. Try the declare command by itself, on the command line.
And I got rid of the "bad substitution" problem when I removed the space which is between the ${ and the PROD on the printf line.
Hope this helps.
blackhole54
The previous poster identified your second problem.
As far as your first problem goes ... I am not a bash guru although I have written a number of bash scripts. So far I have found no need for declare statements. I suspect that you might not need it either. But if you do want to use it, the following does work:
Code:
#!/bin/bash

declare -a PROD
PROD[0]="computers"
PROD[1]="HomeAutomation"
printf "${PROD[*]}\n"EDIT: My original post was based on an older version of bash. When I tried the declare statement you posted I got an error message, but one that was different from yours. I just tried it on a newer version of bash, and your declare statement worked fine. So it might depend on the version of bash you are running. What I posted above runs fine on both versions.
Jul 26, 2017 | unix.stackexchange.com
Ron Burk :
Obviously cut out of a much more complex script that was more meaningful:
bash array readonly

#!/bin/bash
function InitializeConfig(){
    declare -r -g -A SHCFG_INIT=( [a]=b )
    declare -r -g -A SHCFG_INIT=( [c]=d )
    echo "This statement never gets executed"
}
set -o xtrace
InitializeConfig
echo "Back from function"

The output looks like this:

ronburk@ubuntu:~/ubucfg$ bash bug.sh
+ InitializeConfig
+ SHCFG_INIT=([a]=b)
+ declare -r -g -A SHCFG_INIT
+ SHCFG_INIT=([c]=d)
+ echo 'Back from function'
Back from function

Bash seems to silently execute a function return upon the second declare statement. Starting to think this really is a new bug, but happy to learn otherwise.

Other details:
Machine: x86_64
OS: linux-gnu
Compiler: gcc
Compilation CFLAGS: -DPROGRAM='bash' -DCONF_HOSTTYPE='x86_64' -DCONF_OSTYPE='linux-gnu' -DCONF_MACHTYPE='x86_64-pc-linux-gn$
uname output: Linux ubuntu 3.16.0-38-generic #52~14.04.1-Ubuntu SMP Fri May 8 09:43:57 UTC 2015 x86_64 x86_64 x86_64 GNU/Lin$
Machine Type: x86_64-pc-linux-gnu
Bash Version: 4.3
Patch Level: 11
Release Status: release
Weird. Doesn't happen in bash 4.2.53(1). – choroba Jun 14 '15 at 7:22
I can reproduce this problem with bash version 4.3.11 (Ubuntu 14.04.1 LTS). It works fine with bash 4.2.8 (Ubuntu 11.04). – Cyrus Jun 14 '15 at 7:34
Maybe related: unix.stackexchange.com/q/56815/116972 I can get the expected result with declare -r -g -A 'SHCFG_INIT=( [a]=b )'. – yaegashi Jun 14 '15 at 23:22

By gum, you're right! Then I get a readonly warning on the second declare, which is reasonable, and the function completes. The xtrace output is also interesting; it implies declare without single quotes is really treated as two steps. Ready to become superstitious about always single-quoting the argument to declare. Hard to see how popping the function stack can be anything but a bug, though. – Ron Burk Jun 14 '15 at 23:58

I found this thread in [email protected] related to test -v on an assoc array. In short, bash implicitly did test -v SHCFG_INIT[0] in your script. I'm not sure this behavior got introduced in 4.3.

You might want to use declare -p to work around this...

if ! declare -p SHCFG_INIT >/dev/null 2>&1; then
    echo "looks like SHCFG_INIT not defined"
fi
Well, rats. I think your answer is correct, but also reveals I'm really asking two separate questions when I thought they were probably the same issue. Since the title better reflects what turns out to be the "other" question, I'll leave this up for a while and see if anybody knows what's up with the mysterious implicit function return... Thanks! – Ron Burk Jun 14 '15 at 17:01
Edited question to focus on the remaining issue. Thanks again for the answer on the "-v" issue with associative arrays. – Ron Burk Jun 14 '15 at 17:55
Accepting this answer. Complete answer is here plus your comments above plus (IMHO) there's a bug in this version of bash (can't see how there can be any excuse for popping the function stack without warning). Thanks for your excellent research on this! – Ron Burk Jun 21 '15 at 19:31
Jul 26, 2017 | www.tldp.org
The declare or typeset builtins, which are exact synonyms, permit modifying the properties of variables. This is a very weak form of the typing [1] available in certain programming languages. The declare command is specific to version 2 or later of Bash. The typeset command also works in ksh scripts.

declare/typeset options

Example 9-10. Using declare to type variables
- -r readonly
- ( declare -r var1 works the same as readonly var1 )
This is the rough equivalent of the C const type qualifier. An attempt to change the value of a readonly variable fails with an error message.
declare -r var1=1
echo "var1 = $var1"   # var1 = 1
(( var1++ ))          # x.sh: line 4: var1: readonly variable

- -i integer
declare -i number
# The script will treat subsequent occurrences of "number" as an integer.

number=3
echo "Number = $number"     # Number = 3

number=three
echo "Number = $number"     # Number = 0
# Tries to evaluate the string "three" as an integer.

Certain arithmetic operations are permitted for declared integer variables without the need for expr or let.
n=6/3
echo "n = $n"       # n = 6/3

declare -i n
n=6/3
echo "n = $n"       # n = 2

- -a array
declare -a indices

The variable indices will be treated as an array.
- -f function(s)
declare -f

A declare -f line with no arguments in a script causes a listing of all the functions previously defined in that script.
declare -f function_name

A declare -f function_name in a script lists just the function named.
- -x export
declare -x var3

This declares a variable as available for exporting outside the environment of the script itself.
- -x var=$value
declare -x var3=373

The declare command permits assigning a value to a variable in the same statement as setting its properties.
#!/bin/bash

func1 () {
    echo This is a function.
}

declare -f        # Lists the function above.
echo

declare -i var1   # var1 is an integer.
var1=2367
echo "var1 declared as $var1"
var1=var1+1       # Integer declaration eliminates the need for 'let'.
echo "var1 incremented by 1 is $var1."

# Attempt to change variable declared as integer.
echo "Attempting to change var1 to floating point value, 2367.1."
var1=2367.1       # Results in error message, with no change to variable.
echo "var1 is still $var1"

echo

declare -r var2=13.36   # 'declare' permits setting a variable property
                        #+ and simultaneously assigning it a value.
echo "var2 declared as $var2"

# Attempt to change readonly variable.
var2=13.37              # Generates error message, and exit from script.

echo "var2 is still $var2"  # This line will not execute.

exit 0                      # Script will not exit here.

9.2.1. Another use for declare
Using the declare builtin restricts the scope of a variable.
foo () {
    FOO="bar"
}

bar () {
    foo
    echo $FOO
}

bar   # Prints bar.

However . . .

foo () {
    declare FOO="bar"
}

bar () {
    foo
    echo $FOO
}

bar   # Prints nothing.
      # Thank you, Michael Iatrou, for pointing this out.

The declare command can be helpful in identifying variables, environmental or otherwise. This can be especially useful with arrays.
Notes
bash$ declare | grep HOME
HOME=/home/bozo

bash$ zzy=68
bash$ declare | grep zzy
zzy=68

bash$ Colors=([0]="purple" [1]="reddish-orange" [2]="light green")
bash$ echo ${Colors[@]}
purple reddish-orange light green
bash$ declare | grep Colors
Colors=([0]="purple" [1]="reddish-orange" [2]="light green")
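On reasonably modern Bash you can also ask for one variable's definition directly with declare -p instead of grepping the full listing; a sketch (the exact output format varies between versions):

bash$ declare -p Colors
declare -a Colors='([0]="purple" [1]="reddish-orange" [2]="light green")'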
[1] In this context, typing a variable means to classify it and restrict its properties. For example, a variable declared or typed as an integer is no longer available for string operations .
declare -i intvar

intvar=23
echo "$intvar"    # 23

intvar=stringval
echo "$intvar"    # 0
Jul 25, 2017 | wiki.bash-hackers.org
Script execution

Your perfect Bash script executes with syntax errors

If you write Bash scripts with Bash-specific syntax and features, run them with Bash, and run them with Bash in native mode.
Wrong
- no shebang
- the interpreter used depends on the OS implementation and current shell
- can be run by calling bash with the script name as an argument, e.g.
bash myscript
#!/bin/sh
shebang
- depends on what
/bin/sh
actually is; for Bash it means compatibility mode, not native mode.

See also:
Your script named "test" doesn't execute Give it another name. The executabletest
already exists.In Bash it's a builtin. With other shells, it might be an executable file. Either way, it's bad name choice!
Workaround: You can call it using the pathname:
/home/user/bin/test

Globbing

Brace expansion is not globbing

The following command line is not related to globbing (filename expansion):

# YOU EXPECT
# -i1.vob -i2.vob -i3.vob ....
echo -i{*.vob,}

# YOU GET
# -i*.vob -i

Why? The brace expansion is simple text substitution. All possible text formed by the prefix, the postfix, and the braces themselves is generated. In the example, these are only two: -i*.vob and -i. The filename expansion happens after that, so there is a chance that -i*.vob is expanded to a filename - if you have files like -ihello.vob. But it definitely doesn't do what you expected.

Please see:
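Since the goal in the example is one -i option per existing .vob file, a glob plus an array is the usual fix; a minimal sketch (the echo stands in for whatever command actually consumes the options):

# build one -i option per .vob file via globbing, not brace expansion
# (with no .vob files present, the unexpanded pattern remains;
#  shopt -s nullglob avoids that)
opts=()
for f in *.vob; do
    opts+=("-i$f")
done
echo "${opts[@]}"    # e.g.: -i1.vob -i2.vob -i3.vob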
Test-command

if [ $foo ]
if [-d $dir]

Both lines above are broken: the unquoted $foo disappears or word-splits when its value is empty or contains whitespace, and [-d is parsed as a single unknown command name because the [ builtin and the closing ] need surrounding spaces (corrected forms are sketched below).

Please see:
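A minimal sketch of the corrected forms:

foo="some value"
dir="/tmp/some dir"

if [ "$foo" ]; then        # quotes keep empty or multi-word values intact
    echo "foo is non-empty"
fi

if [ -d "$dir" ]; then     # spaces around [ and ] are mandatory
    echo "directory exists"
fi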
Variables

Setting variables

The Dollar-Sign

There is no $ (dollar-sign) when you reference the name of a variable! Bash is not PHP!

# THIS IS WRONG!
$myvar="Hello world!"

A variable name preceded with a dollar-sign always means that the variable gets expanded. In the example above, it might expand to nothing (because it wasn't set), effectively resulting in

="Hello world!"

which definitely is wrong!

When you need the name of a variable, you write only the name, for example
- (as shown above) to set variables:
picture=/usr/share/images/foo.png
- to name variables to be used by the
read
builtin command:read picture
- to name variables to be unset:
unset picture
When you need the content of a variable, you prefix its name with a dollar-sign , like
- echo "The used picture is: $picture"

Whitespace

Putting spaces on either or both sides of the equal-sign (=) when assigning a value to a variable will fail.

# INCORRECT 1
example = Hello

# INCORRECT 2
example= Hello

# INCORRECT 3
example =Hello

The only valid form is no spaces between the variable name and assigned value:

# CORRECT 1
example=Hello

# CORRECT 2
example=" Hello"

Expanding (using) variables

A typical beginner's trap is quoting.
As noted above, when you want to expand a variable i.e. "get the content", the variable name needs to be prefixed with a dollar-sign. But, since Bash knows various ways to quote and does word-splitting, the result isn't always the same.
Let's define an example variable containing text with spaces:
example="Hello world"
Used form     result        number of words
$example      Hello world   2
"$example"    Hello world   1
\$example     $example      1
'$example'    $example      1

If you use parameter expansion, you must use the name (PATH) of the referenced variables/parameters, i.e. not ($PATH):

# WRONG!
echo "The first character of PATH is ${$PATH:0:1}"

# CORRECT
echo "The first character of PATH is ${PATH:0:1}"

Note that if you are using variables in arithmetic expressions, then the bare name is allowed:
((a=$a+7))        # Add 7 to a
((a = a + 7))     # Add 7 to a. Identical to the previous command.
((a += 7))        # Add 7 to a. Identical to the previous command.
a=$((a+7))        # POSIX-compatible version of previous code.

Please see:
Exporting

Exporting a variable means to give newly created (child-)processes a copy of that variable. It does not copy a variable created in a child process back to the parent process. The following example does not work, since the variable hello is set in a child process (the process you execute to start that script, ./script.sh):

$ cat script.sh
export hello=world

$ ./script.sh
$ echo $hello
$

Exporting is one-way. The direction is parent process to child process, not the reverse. The above example will work when you don't execute the script, but include ("source") it:

$ source ./script.sh
$ echo $hello
world
$

In this case, the export command is of no use.

Please see:
Exit codes

Reacting to exit codes

If you just want to react to an exit code, regardless of its specific value, you don't need to use $? in a test command like this:

grep ^root: /etc/passwd >/dev/null 2>&1
if [ $? -ne 0 ]; then
    echo "root was not found - check the pub at the corner"
fi

This can be simplified to:

if ! grep ^root: /etc/passwd >/dev/null 2>&1; then
    echo "root was not found - check the pub at the corner"
fi

Or, simpler yet:

grep ^root: /etc/passwd >/dev/null 2>&1 || echo "root was not found - check the pub at the corner"

If you need the specific value of $?, there's no other choice. But if you need only a "true/false" exit indication, there's no need for $?.

See also:

Output vs. Return Value

It's important to remember the different ways to run a child command, and whether you want the output, the return value, or neither.

When you want to run a command (or a pipeline) and save (or print) the output, whether as a string or an array, you use Bash's $(command) syntax:

$(ls -l /tmp)
newvariable=$(printf "foo")

When you want to use the return value of a command, just use the command, or add ( ) to run a command or pipeline in a subshell:

if grep someuser /etc/passwd ; then
    # do something
fi

if ( w | grep someuser | grep sqlplus ) ; then
    # someuser is logged in and running sqlplus
fi

Make sure you're using the form you intended:

# WRONG!
if $(grep ERROR /var/log/messages) ; then
    # send alerts
fi
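Tying the two sections above together, a minimal sketch contrasting the corrected form of that last example with capturing $? when its exact value matters:

# correct: let the return value drive the if directly
if grep -q ERROR /var/log/messages; then
    echo "send alerts"
fi

# when the exact code matters, capture $? immediately
grep -q ERROR /var/log/messages
rc=$?
echo "grep exited with code $rc"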
Jul 25, 2017 | wiki.bash-hackers.org
Intro The day will come when you want to give arguments to your scripts. These arguments are known as positional parameters . Some relevant special parameters are described below:
Parameter(s)     Description
$0               the first positional parameter, equivalent to argv[0] in C, see the first argument
$FUNCNAME        the function name (attention: inside a function, $0 is still the $0 of the shell, not the function name)
$1 ... $9        the argument list elements from 1 to 9
${10} ... ${N}   the argument list elements beyond 9 (note the parameter expansion syntax!)
$*               all positional parameters except $0, see mass usage
$@               all positional parameters except $0, see mass usage
$#               the number of arguments, not counting $0
These positional parameters reflect exactly what was given to the script when it was called.
Option-switch parsing (e.g.
-h
for displaying help) is not performed at this point.

See also the dictionary entry for "parameter".

The first argument

The very first argument you can access is referenced as
$0
. It is usually set to the script's name exactly as called, and it's set on shell initialization.

Testscript - it just echoes
$0
:

#!/bin/bash
echo "$0"

You see, $0
is always set to the name the script is called with ($
is the prompt):

> ./testscript
./testscript

> /usr/bin/testscript
/usr/bin/testscript

However, this isn't true for login shells:
> echo "$0"
-bash

In other terms,
$0
is not a positional parameter; it's a special parameter independent of the positional parameter list. It can be set to anything. In the ideal case it's the pathname of the script, but since this gets set on invocation, the invoking program can easily influence it (the
program does that for login shells, by prefixing a dash, for example).Inside a function,
$0
still behaves as described above. To get the function name, use$FUNCNAME
. Shifting The builtin commandshift
is used to change the positional parameter values:
$1
will be discarded$2
will become$1
$3
will become$2
- in general:
$N
will become$N-1
The command can take a number as argument: Number of positions to shift. e.g.
shift 4
shifts$5
to$1
.

Using them

Enough theory; you want to access your script arguments. Well, here we go.

One by one

One way is to access specific parameters:

#!/bin/bash
echo "Total number of arguments: $#"
echo "Argument 1: $1"
echo "Argument 2: $2"
echo "Argument 3: $3"
echo "Argument 4: $4"
echo "Argument 5: $5"

While useful in some situations, this way lacks flexibility. The maximum number of arguments is a fixed value - which is a bad idea if you write a script that takes many filenames as arguments.
⇒ forget that one Loops There are several ways to loop through the positional parameters.
You can code a C-style for-loop using
$#
as the end value. On every iteration, theshift
-command is used to shift the argument list:

numargs=$#
for ((i=1 ; i <= numargs ; i++))
do
    echo "$1"
    shift
done

Not very stylish, but usable. The
numargs
variable is used to store the initial value of$#
because the shift command will change it as the script runs.
Another way to iterate one argument at a time is the
for
loop without a given wordlist. The loop uses the positional parameters as a wordlist:

for arg
do
    echo "$arg"
done

Advantage: The positional parameters will be preserved.
The next method is similar to the first example (the
for
loop), but it doesn't test for reaching$#
. It shifts and checks if$1
still expands to something, using the test command:

while [ "$1" ]
do
    echo "$1"
    shift
done

Looks nice, but has the disadvantage of stopping when
$1
is empty (null-string). Let's modify it to run as long as$1
is defined (but may be null), using parameter expansion for an alternate value:

while [ "${1+defined}" ]; do
    echo "$1"
    shift
done

Getopts

There is a small tutorial dedicated to ''getopts'' (under construction).

Mass usage

All Positional Parameters

Sometimes it's necessary to just "relay" or "pass" given arguments to another program. It's very inefficient to do that in one of these loops, as you will destroy integrity, most likely (spaces!).
The shell developers created
$*
and$@
for this purpose.As overview:
Syntax Effective result $*
$1 $2 $3 ${N}
$@
$1 $2 $3 ${N}
"$*"
"$1c$2c$3c c${N}"
"$@"
"$1" "$2" "$3" "${N}"
Without being quoted (double quotes), both have the same effect: All positional parameters from
$1
to the last one used are expanded without any special handling.When the
$*
special parameter is double quoted, it expands to the equivalent of:"$1c$2c$3c$4c ..$N"
, where 'c' is the first character ofIFS
.But when the
$@
special parameter is used inside double quotes, it expands to the equivanent of
"$1" "$2" "$3" "$4" .. "$N"
which reflects all positional parameters as they were set initially and passed to the script or function. If you want to re-use your positional parameters to call another program (for example in a wrapper-script), then this is the choice for you, use double quoted
"$@"
.Well, let's just say: You almost always want a quoted
"$@"
! Range Of Positional Parameters Another way to mass expand the positional parameters is similar to what is possible for a range of characters using substring expansion on normal parameters and the mass expansion range of arrays .
${@:START:COUNT}
${*:START:COUNT}
"${@:START:COUNT}"
"${*:START:COUNT}"
The rules for using
@
or*
and quoting are the same as above. This will expandCOUNT
number of positional parameters beginning atSTART
.COUNT
can be omitted (${@:START}
), in which case, all positional parameters beginning atSTART
are expanded.If
START
is negative, the positional parameters are numbered in reverse starting with the last one.
COUNT
may not be negative, i.e. the element count may not be decremented.Example: START at the last positional parameter:
echo "${@: -1}"Attention : As of Bash 4, a
START
of0
includes the special parameter$0
, i.e. the shell name or whatever $0 is set to, when the positional parameters are in use. ASTART
of1
begins at$1
. In Bash 3 and older, both0
and1
began at$1
. Setting Positional Parameters Setting positional parameters with command line arguments, is not the only way to set them. The builtin command, set may be used to "artificially" change the positional parameters from inside the script or function:set "This is" my new "set of" positional parameters # RESULTS IN # $1: This is # $2: my # $3: new # $4: set of # $5: positional # $6: parametersIt's wise to signal "end of options" when setting positional parameters this way. If not, the dashes might be interpreted as an option switch by
set
itself:# both ways work, but behave differently. See the article about the set command! set -- ... set - ...Alternately this will also preserve any verbose (-v) or tracing (-x) flags, which may otherwise be reset by
set
set -$- ...Production examples Using a while loop To make your program accept options as standard command syntax:
COMMAND [options] <params>
# Like 'cat -A file.txt'See simple option parsing code below. It's not that flexible. It doesn't auto-interpret combined options (-fu USER) but it works and is a good rudimentary way to parse your arguments.
#!/bin/sh
# Keeping options in alphabetical order makes it easy to add more.

while :
do
    case "$1" in
      -f | --file)
          file="$2"       # You may want to check validity of $2
          shift 2
          ;;
      -h | --help)
          display_help    # Call your function
          # no shifting needed here, we're done.
          exit 0
          ;;
      -u | --user)
          username="$2"   # You may want to check validity of $2
          shift 2
          ;;
      -v | --verbose)
          # It's better to assign a string, than a number like "verbose=1",
          # because if you're debugging the script with "bash -x", code like
          #
          #   if [ "$verbose" ] ...
          #
          # will show
          #
          #   if [ "verbose" ] ...
          #
          # instead of the cryptic
          #
          #   if [ "1" ] ...
          verbose="verbose"
          shift
          ;;
      --) # End of all options
          shift
          break
          ;;
      -*)
          echo "Error: Unknown option: $1" >&2
          exit 1
          ;;
      *)  # No more options
          break
          ;;
    esac
done
# End of file

Filter unwanted options with a wrapper script

This simple wrapper enables filtering unwanted options (here:
-a and --all for ls) out of the command line. It reads the positional parameters and builds a filtered array consisting of them, then calls ls with the new option set. It also respects the -- as "end of options" for ls and doesn't change anything after it:

#!/bin/bash

# simple ls(1) wrapper that doesn't allow the -a option

options=()  # the buffer array for the parameters
eoo=0       # end of options reached

while [[ $1 ]]
do
    if ! ((eoo)); then
        case "$1" in
          -a)
              shift
              ;;
          --all)
              shift
              ;;
          -[^-]*a*|-a?*)
              options+=("${1//a}")
              shift
              ;;
          --)
              eoo=1
              options+=("$1")
              shift
              ;;
          *)
              options+=("$1")
              shift
              ;;
        esac
    else
        options+=("$1")
        # Another (worse) way of doing the same thing:
        # options=("${options[@]}" "$1")
        shift
    fi
done

/bin/ls "${options[@]}"

Using getopts

There is a small tutorial dedicated to ''getopts'' (under construction).

See also
Discussion 2010/04/14 14:20
- Internal: Small getopts tutorial
- Internal: The while-loop
- Internal: The C-style for-loop
- Internal: Arrays (for equivalent syntax for mass-expansion)
- Internal: Substring expansion on a parameter (for equivalent syntax for mass-expansion)
- Dictionary, internal: Parameter
The shell developers invented $* and $@ for this purpose. Without being quoted (double-quoted), both have the same effect: all positional parameters from $1 to the last used one are expanded, separated by the first character of IFS (represented by "c" here, but usually a space):

$1c$2c$3c$4c........$N

Without double quotes, $* and $@ expand the positional parameters separated by only a space, not by IFS.
#!/bin/bash
export IFS='-'
echo -e $*
echo -e $@

$ ./test "This is" 2 3
This is 2 3
This is 2 3

2011/02/18 16:11

#!/bin/bash

OLDIFS="$IFS"
IFS='-'
#export IFS='-'

#echo -e $*
#echo -e $@
#should be
echo -e "$*"
echo -e "$@"

IFS="$OLDIFS"
2011/02/18 16:14

#should be echo -e "$*"

2012/04/20 10:32

Here's yet another non-getopts way.
http://bsdpants.blogspot.de/2007/02/option-ize-your-shell-scripts.html
2012/07/16 14:48 Hi there!What if I use "$@" in subsequent function calls, but arguments are strings?
I mean, having:
#!/bin/bash
echo "$@"
echo n: $#

If you use it:

mypc$ script arg1 arg2 "asd asd" arg4
arg1 arg2 asd asd arg4
n: 4

But having:

#!/bin/bash
myfunc() {
    echo "$@"
    echo n: $#
}
echo "$@"
echo n: $#
myfunc "$@"

you get:

mypc$ myscrpt arg1 arg2 "asd asd" arg4
arg1 arg2 asd asd arg4
4
arg1 arg2 asd asd arg4
5

As you can see, there is no way to make the function know that a parameter is a string and not a space-separated list of arguments.

Any idea of how to solve it? I've tested calling functions and doing expansion in almost all ways with no results.
2012/08/12 09:11

I don't know why it fails for you. It should work if you use "$@", of course. See the example; I used your second script with:

$ ./args1 a b c "d e" f
a b c d e f
n: 5
a b c d e f
n: 5
Jul 25, 2017 | wiki.bash-hackers.org
Purpose An array is a parameter that holds mappings from keys to values. Arrays are used to store a collection of parameters into a parameter. Arrays (in any programming language) are a useful and common composite data structure, and one of the most important scripting features in Bash and other shells.
Here is an abstract representation of an array named
NAMES
. The indexes go from 0 to 3.

NAMES
 0: Peter
 1: Anna
 2: Greg
 3: Jan

Instead of using 4 separate variables, multiple related variables are grouped together into elements of the array, accessible by their key. If you want the second name, ask for index 1 of the array
NAMES
.

Indexing

Bash supports two different types of ksh-like one-dimensional arrays. Multidimensional arrays are not implemented.

Syntax

Referencing

To accommodate referring to array variables and their individual elements, Bash extends the parameter naming scheme with a subscript suffix. Any valid ordinary scalar parameter name is also a valid array name:
- Indexed arrays use positive integer numbers as keys. Indexed arrays are always sparse , meaning indexes are not necessarily contiguous. All syntax used for both assigning and dereferencing indexed arrays is an arithmetic evaluation context (see Referencing ). As in C and many other languages, the numerical array indexes start at 0 (zero). Indexed arrays are the most common, useful, and portable type. Indexed arrays were first introduced to Bourne-like shells by ksh88. Similar, partially compatible syntax was inherited by many derivatives including Bash. Indexed arrays always carry the
-a
attribute.- Associative arrays (sometimes known as a "hash" or "dict") use arbitrary nonempty strings as keys. In other words, associative arrays allow you to look up a value from a table based upon its corresponding string label. Associative arrays are always unordered , they merely associate key-value pairs. If you retrieve multiple values from the array at once, you can't count on them coming out in the same order you put them in. Associative arrays always carry the
-A
attribute, and unlike indexed arrays, Bash requires that they always be declared explicitly (as indexed arrays are the default, see declaration ). Associative arrays were first introduced in ksh93, and similar mechanisms were later adopted by Zsh and Bash version 4. These three are currently the only POSIX-compatible shells with any associative array support.[[:alpha:]_][[:alnum:]_]*
. The parameter name may be followed by an optional subscript enclosed in square brackets to refer to a member of the array.The overall syntax is
arrname[subscript]
- where for indexed arrays,subscript
is any valid arithmetic expression, and for associative arrays, any nonempty string. Subscripts are first processed for parameter and arithmetic expansions, and command and process substitutions. When used within parameter expansions or as an argument to the unset builtin, the special subscripts*
and@
are also accepted which act upon arrays analogously to the way the@
and*
special parameters act upon the positional parameters. In parsing the subscript, bash ignores any text that follows the closing bracket up to the end of the parameter name.With few exceptions, names of this form may be used anywhere ordinary parameter names are valid, such as within arithmetic expressions , parameter expansions , and as arguments to builtins that accept parameter names. An array is a Bash parameter that has been given the
-a
(for indexed) or-A
(for associative) attributes . However, any regular (non-special or positional) parameter may be validly referenced using a subscript, because in most contexts, referring to the zeroth element of an array is synonymous with referring to the array name without a subscript.# "x" is an ordinary non-array parameter. $ x=hi; printf '%s ' "$x" "${x[0]}"; echo "${_[0]}" hi hi hiThe only exceptions to this rule are in a few cases where the array variable's name refers to the array as a whole. This is the case for the
unset
builtin (see destruction ) and when declaring an array without assigning any values (see declaration ). Declaration The following explicitly give variables array attributes, making them arrays:Storing values Storing values in arrays is quite as simple as storing values in normal variables.
Syntax Description ARRAY=()
Declares an indexed array ARRAY
and initializes it to be empty. This can also be used to empty an existing array.ARRAY[0]=
Generally sets the first element of an indexed array. If no array ARRAY
existed before, it is created.declare -a ARRAY
Declares an indexed array ARRAY
. An existing array is not initialized.declare -A ARRAY
Declares an associative array ARRAY
. This is the one and only way to create associative arrays.
Syntax Description ARRAY[N]=VALUE
Sets the element N
of the indexed arrayARRAY
toVALUE
.N
can be any valid arithmetic expressionARRAY[STRING]=VALUE
Sets the element indexed by STRING
of the associative arrayARRAY
.ARRAY=VALUE
As above. If no index is given, as a default the zeroth element is set to VALUE
. Careful, this is even true of associative arrays - there is no error if no key is specified, and the value is assigned to string index "0".ARRAY=(E1 E2 )
Compound array assignment - sets the whole array ARRAY
to the given list of elements indexed sequentially starting at zero. The array is unset before assignment unless the += operator is used. When the list is empty (ARRAY=()
), the array will be set to an empty array. This method obviously does not use explicit indexes. An associative array can not be set like that! Clearing an associative array usingARRAY=()
works.ARRAY=([X]=E1 [Y]=E2 )
Compound assignment for indexed arrays with index-value pairs declared individually (here for example X
andY
). X and Y are arithmetic expressions. This syntax can be combined with the above - elements declared without an explicitly specified index are assigned sequentially starting at either the last element with an explicit index, or zero.ARRAY=([S1]=E1 [S2]=E2 )
Individual mass-setting for associative arrays . The named indexes (here: S1
andS2
) are strings.ARRAY+=(E1 E2 )
Append to ARRAY. As of now, arrays can't be exported. Getting values article about parameter expansion and check the notes about arrays.
Syntax Description ${ARRAY[N]}
Expands to the value of the index N
in the indexed arrayARRAY
. IfN
is a negative number, it's treated as the offset from the maximum assigned index (can't be used for assignment) - 1${ARRAY[S]}
Expands to the value of the index S
in the associative arrayARRAY
."${ARRAY[@]}"
${ARRAY[@]}
"${ARRAY[*]}"
${ARRAY[*]}Similar to mass-expanding positional parameters , this expands to all elements. If unquoted, both subscripts *
and@
expand to the same result, if quoted,@
expands to all elements individually quoted,*
expands to all elements quoted as a whole."${ARRAY[@]:N:M}"
${ARRAY[@]:N:M}
"${ARRAY[*]:N:M}"
${ARRAY[*]:N:M}Similar to what this syntax does for the characters of a single string when doing substring expansion , this expands to M
elements starting with elementN
. This way you can mass-expand individual indexes. The rules for quoting and the subscripts*
and@
are the same as above for the other mass-expansions.For clarification: When you use the subscripts
@
or*
for mass-expanding, then the behaviour is exactly what it is for$@
and$*
when mass-expanding the positional parameters . You should read this article to understand what's going on. MetadataDestruction The unset builtin command is used to destroy (unset) arrays or individual elements of arrays.
Syntax Description ${#ARRAY[N]}
Expands to the length of an individual array member at index N
(stringlength)

${#ARRAY[STRING]}
Expands to the length of an individual associative array member at index STRING
( stringlength )${#ARRAY[@]}
${#ARRAY[*]}
Expands to the number of elements in ARRAY
${!ARRAY[@]}
${!ARRAY[*]}
Expands to the indexes in ARRAY
since BASH 3.0
Syntax Description unset -v ARRAY
unset -v ARRAY[@]
unset -v ARRAY[*]
Destroys a complete array unset -v ARRAY[N]
Destroys the array element at index N
unset -v ARRAY[STRING]
Destroys the array element of the associative array at index STRING
It is best to explicitly specify -v when unsetting variables with unset.
Unquoted subscripts can cause pathname expansion to occur due to the presence of glob characters.

Example: You are in a directory with a file named
x1
, and you want to destroy an array elementx[1]
, withunset x[1]then pathname expansion will expand to the filenamex1
and break your processing!Even worse, if
nullglob
is set, your array/index will disappear.To avoid this, always quote the array name and index:
unset -v 'x[1]'

This applies generally to all commands which take variable names as arguments. Single quotes preferred.
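The quoting matters because an unquoted subscript is itself a valid glob; a minimal sketch of the failure mode:

x=(zero one)
touch x1           # a file whose name matches the glob x[1]
unset x[1]         # may be expanded to: unset x1 - the array element survives
unset -v 'x[1]'    # quoted: the element is reliably destroyed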
Usage Numerical Index Numerical indexed arrays are easy to understand and easy to use. The Purpose and Indexing chapters above more or less explain all the needed background theory.
Now, some examples and comments for you.
Let's say we have an array
sentence
which is initialized as follows:sentence=(Be liberal in what you accept, and conservative in what you send)Since no special code is there to prevent word splitting (no quotes), every word there will be assigned to an individual array element. When you count the words you see, you should get 12. Now let's see if Bash has the same opinion:
$ echo ${#sentence[@]}
12

Yes, 12. Fine. You can take this number to walk through the array. Just subtract 1 from the number of elements, and start your walk at 0 (zero):
((n_elements=${#sentence[@]}, max_index=n_elements - 1))

for ((i = 0; i <= max_index; i++)); do
    echo "Element $i: '${sentence[i]}'"
done

You always have to remember that; it seems newbies have problems sometimes. Please understand that numerical array indexing begins at 0 (zero).
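For arrays that might have holes, a variant that iterates over the assigned indexes directly is safer; a minimal sketch:

# walk the array via its actual indexes; also works for sparse arrays
for i in "${!sentence[@]}"; do
    echo "Element $i: '${sentence[i]}'"
done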
The counting method above, walking through an array by just knowing its number of elements, only works for arrays where all elements are set, of course. If one element in the middle is removed, then the calculation is nonsense, because the number of elements doesn't correspond to the highest used index anymore (we call them "sparse arrays").

Associative (Bash 4)

Associative arrays (or hash tables) are not much more complicated than numerical indexed arrays. The numerical index value (in Bash a number starting at zero) is just replaced with an arbitrary string:
# declare -A, introduced with Bash 4 to declare an associative array
declare -A sentence

sentence[Begin]='Be liberal in what'
sentence[Middle]='you accept, and conservative'
sentence[End]='in what you send'
sentence['Very end']=...

Beware: don't rely on the fact that the elements are ordered in memory like they were declared; it could look like this:
# output from 'set' command
sentence=([End]="in what you send" [Middle]="you accept, and conservative " [Begin]="Be liberal in what " ["Very end"]="...")

This effectively means you can get the data back with "${sentence[@]}"
, of course (just like with numerical indexing), but you can't rely on a specific order. If you want to store ordered data, or re-order data, go with numerical indexes. For associative arrays, you usually query known index values:

for element in Begin Middle End "Very end"; do
    printf "%s" "${sentence[$element]}"
done
printf "\n"

A nice code example: Checking for duplicate files using an associative array indexed with the SHA sum of the files:
# Thanks to Tramp in #bash for the idea and the code

unset flist; declare -A flist
while read -r sum fname; do
    if [[ ${flist[$sum]} ]]; then
        printf 'rm -- "%s" # Same as >%s<\n' "$fname" "${flist[$sum]}"
    else
        flist[$sum]="$fname"
    fi
done < <(find . -type f -exec sha256sum {} +) > rmdups

Integer arrays

Any type attributes applied to an array apply to all elements of the array. If the integer attribute is set for either indexed or associative arrays, then values are considered as arithmetic for both compound and ordinary assignment, and the += operator is modified in the same way as for ordinary integer variables.
~ $ ( declare -ia 'a=(2+4 [2]=2+2 [a[2]]="a[2]")' 'a+=(42 [a[4]]+=3)'; declare -p a ) declare -ai a='([0]="6" [2]="4" [4]="7" [5]="42")'
a[0]
is assigned to the result of2+4
.a[1]
gets the result of2+2
. The last index in the first assignment is the result ofa[2]
, which has already been assigned as4
, and its value is also givena[2]
.This shows that even though any existing arrays named
a
in the current scope have already been unset by using=
instead of+=
to the compound assignment, arithmetic variables within keys can self-reference any elements already assigned within the same compound-assignment. With integer arrays this also applies to expressions to the right of the=
. (See evaluation order , the right side of an arithmetic assignment is typically evaluated first in Bash.)The second compound assignment argument to declare uses
+=
, so it appends after the last element of the existing array rather than deleting it and creating a new array, soa[5]
gets42
.Lastly, the element whose index is the value of
a[4]
(4
), gets3
added to its existing value, makinga[4]
==7
. Note that having the integer attribute set this time causes += to add, rather than append a string, as it would for a non-integer array.The single quotes force the assignments to be evaluated in the environment of
declare
. This is important because attributes are only applied to the assignment after assignment arguments are processed. Without them the+=
compound assignment would have been invalid, and strings would have been inserted into the integer array without evaluating the arithmetic. A special-case of this is shown in the next section.eval
, but there are differences.)'Todo:
' Discuss this in detail.Indirection Arrays can be expanded indirectly using the indirect parameter expansion syntax. Parameters whose values are of the form:
name[index]
,name[@]
, orname[*]
when expanded indirectly produce the expected results. This is mainly useful for passing arrays (especially multiple arrays) by name to a function.This example is an "isSubset"-like predicate which returns true if all key-value pairs of the array given as the first argument to isSubset correspond to a key-value of the array given as the second argument. It demonstrates both indirect array expansion and indirect key-passing without eval using the aforementioned special compound assignment expansion.
isSubset() { local -a 'xkeys=("${!'"$1"'[@]}")' 'ykeys=("${!'"$2"'[@]}")' set -- "${@/%/[key]}" (( ${#xkeys[@]} <= ${#ykeys[@]} )) || return 1 local key for key in "${xkeys[@]}"; do [[ ${!2+_} && ${!1} == ${!2} ]] || return 1 done } main() { # "a" is a subset of "b" local -a 'a=({0..5})' 'b=({0..10})' isSubset a b echo $? # true # "a" contains a key not in "b" local -a 'a=([5]=5 {6..11})' 'b=({0..10})' isSubset a b echo $? # false # "a" contains an element whose value != the corresponding member of "b" local -a 'a=([5]=5 6 8 9 10)' 'b=({0..10})' isSubset a b echo $? # false } mainThis script is one way of implementing a crude multidimensional associative array by storing array definitions in an array and referencing them through indirection. The script takes two keys and dynamically calls a function whose name is resolved from the array.
callFuncs() { # Set up indirect references as positional parameters to minimize local name collisions. set -- "${@:1:3}" ${2+'a["$1"]' "$1"'["$2"]'} # The only way to test for set but null parameters is unfortunately to test each individually. local x for x; do [[ $x ]] || return 0 done local -A a=( [foo]='([r]=f [s]=g [t]=h)' [bar]='([u]=i [v]=j [w]=k)' [baz]='([x]=l [y]=m [z]=n)' ) ${4+${a["$1"]+"${1}=${!3}"}} # For example, if "$1" is "bar" then define a new array: bar=([u]=i [v]=j [w]=k) ${4+${a["$1"]+"${!4-:}"}} # Now just lookup the new array. for inputs: "bar" "v", the function named "j" will be called, which prints "j" to stdout. } main() { # Define functions named {f..n} which just print their own names. local fun='() { echo "$FUNCNAME"; }' x for x in {f..n}; do eval "${x}${fun}" done callFuncs "$@" } main "$@"Bugs and Portability Considerations
Bugs
- Arrays are not specified by POSIX. One-dimensional indexed arrays are supported using similar syntax and semantics by most Korn-like shells.
- Associative arrays are supported via
typeset -A
in Bash 4, Zsh, and Ksh93.- In Ksh93, arrays whose types are not given explicitly are not necessarily indexed. Arrays defined using compound assignments which specify subscripts are associative by default. In Bash, associative arrays can only be created by explicitly declaring them as associative, otherwise they are always indexed. In addition, ksh93 has several other compound structures whose types can be determined by the compound assignment syntax used to create them.
- In Ksh93, using the
=
compound assignment operator unsets the array, including any attributes that have been set on the array prior to assignment. In order to preserve attributes, you must use the+=
operator. However, declaring an associative array, then attempting ana=( )
style compound assignment without specifying indexes is an error. I can't explain this inconsistency.$ ksh -c 'function f { typeset -a a; a=([0]=foo [1]=bar); typeset -p a; }; f' # Attribute is lost, and since subscripts are given, we default to associative. typeset -A a=([0]=foo [1]=bar) $ ksh -c 'function f { typeset -a a; a+=([0]=foo [1]=bar); typeset -p a; }; f' # Now using += gives us the expected results. typeset -a a=(foo bar) $ ksh -c 'function f { typeset -A a; a=(foo bar); typeset -p a; }; f' # On top of that, the reverse does NOT unset the attribute. No idea why. ksh: f: line 1: cannot append index array to associative array a- Only Bash and mksh support compound assignment with mixed explicit subscripts and automatically incrementing subscripts. In ksh93, in order to specify individual subscripts within a compound assignment, all subscripts must be given (or none). Zsh doesn't support specifying individual subscripts at all.
- Appending to a compound assignment is a fairly portable way to append elements after the last index of an array. In Bash, this also sets append mode for all individual assignments within the compound assignment, such that if a lower subscript is specified, subsequent elements will be appended to previous values. In ksh93, it causes subscripts to be ignored, forcing appending everything after the last element. (Appending has a different meaning due to support for multi-dimensional arrays and nested compound datastructures.)

    $ ksh -c 'function f { typeset -a a; a+=(foo bar baz); a+=([3]=blah [0]=bork [1]=blarg [2]=zooj); typeset -p a; }; f'
    # ksh93 forces appending to the array, disregarding subscripts
    typeset -a a=(foo bar baz '[3]=blah' '[0]=bork' '[1]=blarg' '[2]=zooj')

    $ bash -c 'function f { typeset -a a; a+=(foo bar baz); a+=(blah [0]=bork blarg zooj); typeset -p a; }; f'
    # Bash applies += to every individual subscript.
    declare -a a='([0]="foobork" [1]="barblarg" [2]="bazzooj" [3]="blah")'

    $ mksh -c 'function f { typeset -a a; a+=(foo bar baz); a+=(blah [0]=bork blarg zooj); typeset -p a; }; f'
    # Mksh does like Bash, but clobbers previous values rather than appending.
    set -A a
    typeset a[0]=bork
    typeset a[1]=blarg
    typeset a[2]=zooj
    typeset a[3]=blah

- In Bash and Zsh, the alternate value assignment parameter expansion ( ${arr[idx]:=foo} ) evaluates the subscript twice, first to determine whether to expand the alternate, and second to determine the index to assign the alternate to. See evaluation order.

    $ : ${_[$(echo $RANDOM >&2)1]:=$(echo hi >&2)}
    13574
    hi
    14485

- In Zsh, arrays are indexed starting at 1 in its default mode. Emulation modes are required in order to get any kind of portability.
- Zsh and mksh do not support compound assignment arguments to typeset .
- Ksh88 didn't support modern compound array assignment syntax. The original (and most portable) way to assign multiple elements is to use the set -A name arg1 arg2 syntax. This is supported by almost all shells that support ksh-like arrays except for Bash. Additionally, these shells usually support an optional -s argument to set which performs lexicographic sorting on either array elements or the positional parameters. Bash has no built-in sorting ability other than the usual comparison operators.

    $ ksh -c 'set -A arr -- foo bar bork baz; typeset -p arr'
    # Classic array assignment syntax
    typeset -a arr=(foo bar bork baz)

    $ ksh -c 'set -sA arr -- foo bar bork baz; typeset -p arr'
    # Native sorting!
    typeset -a arr=(bar baz bork foo)

    $ mksh -c 'set -sA arr -- foo "[3]=bar" "[2]=baz" "[7]=bork"; typeset -p arr'
    # Probably a bug. I think the maintainer is aware of it.
    set -A arr
    typeset arr[2]=baz
    typeset arr[3]=bar
    typeset arr[7]=bork
    typeset arr[8]=foo

- Evaluation order for assignments involving arrays varies significantly depending on context. Notably, the order of evaluating the subscript or the value first can change in almost every shell for both expansions and arithmetic variables. See evaluation order for details.
- Bash 4.1.* and below cannot use negative subscripts to address array indexes relative to the highest-numbered index. You must use the subscript expansion, i.e. "${arr[@]:(-n):1}" , to expand the nth-last element (or the next-highest indexed after n if arr[n] is unset). In Bash 4.2, you may expand (but not assign to) a negative index. In Bash 4.3, ksh93, and zsh, you may both assign and expand negative offsets.
"${arr[n..m]}"
wheren
andm
are arithmetic expressions. These are needed for use with multi-dimensional arrays.- Assigning or referencing negative indexes in mksh causes wrap-around. The max index appears to be
UINT_MAX
, which would be addressed byarr[-1]
.- So far, Bash's
-v var
test doesn't support individual array subscripts. You may supply an array name to test whether an array is defined, but can't check an element. ksh93's-v
supports both. Other shells lack a-v
test.Evaluation order Here are some of the nasty details of array assignment evaluation order. You can use this testcase code to generate these results.
- Fixed in 4.3: Bash 4.2.* and earlier considers each chunk of a compound assignment, including the subscript, for globbing. The subscript part is considered quoted, but any unquoted glob characters on the right-hand side of the [ ]= will be clumped with the subscript and counted as a glob. Therefore, you must quote anything on the right of the = sign. This is fixed in 4.3, so that each subscript assignment statement is expanded following the same rules as an ordinary assignment. This also works correctly in ksh93.

    $ touch '[1]=a'; bash -c 'a=([1]=*); echo "${a[@]}"'
    [1]=a

  mksh has a similar but even worse problem in that the entire subscript is considered a glob.

    $ touch 1=a; mksh -c 'a=([123]=*); print -r -- "${a[@]}"'
    1=a

- Fixed in 4.3: In addition to the above globbing issue, assignments preceding "declare" have an additional effect on brace and pathname expansion.
    $ set -x; foo=bar declare arr=( {1..10} )
    + foo=bar
    + declare 'a=(1)' 'a=(2)' 'a=(3)' 'a=(4)' 'a=(5)'

    $ touch xy=foo
    $ declare x[y]=*
    + declare 'x[y]=*'
    $ foo=bar declare x[y]=*
    + foo=bar
    + declare xy=foo

  Each word (the entire assignment) is subject to globbing and brace expansion. This appears to trigger the same strange expansion mode as let , eval , other declaration commands, and maybe more.

- Fixed in 4.3: Indirection combined with another modifier expands arrays to a single word.
    $ a=({a..c}) b=a[@]; printf '<%s> ' "${!b}"; echo; printf '<%s> ' "${!b/%/foo}"; echo
    <a> <b> <c>
    <a b cfoo>

- Fixed in 4.3: Process substitutions are evaluated within array indexes. Zsh and ksh don't do this in any arithmetic context.
# print "moo" dev=fd=1 _[1<(echo moo >&2)]= # Fork bomb ${dev[${dev='dev[1>(${dev[dev]})]'}]}Each testcase prints evaluation order for indexed array assignment contexts. Each context is tested for expansions (represented by digits) and arithmetic (letters), ordered from left to right within the expression. The output corresponds to the way evaluation is re-ordered for each shell: a[ $1 a ]=${b[ $2 b ]:=${c[ $3 c ]}} No attributes a[ $1 a ]=${b[ $2 b ]:=c[ $3 c ]} typeset -ia a a[ $1 a ]=${b[ $2 b ]:=c[ $3 c ]} typeset -ia b a[ $1 a ]=${b[ $2 b ]:=c[ $3 c ]} typeset -ia a b (( a[ $1 a ] = b[ $2 b ] ${c[ $3 c ]} )) No attributes (( a[ $1 a ] = ${b[ $2 b ]:=c[ $3 c ]} )) typeset -ia b a+=( [ $1 a ]=${b[ $2 b ]:=${c[ $3 c ]}} [ $4 d ]=$(( $5 e )) ) typeset -a a a+=( [ $1 a ]=${b[ $2 b ]:=c[ $3 c ]} [ $4 d ]=${5}e ) typeset -ia a bash: 4.2.42(1)-release 2 b 3 c 2 b 1 a 2 b 3 2 b 1 a c 2 b 3 2 b c 1 a 2 b 3 2 b c 1 a c 1 2 3 c b a 1 2 b 3 2 b c c a 1 2 b 3 c 2 b 4 5 e a d 1 2 b 3 2 b 4 5 a c d e ksh93: Version AJM 93v- 2013-02-22 1 2 b b a 1 2 b b a 1 2 b b a 1 2 b b a 1 2 3 c b a 1 2 b b a 1 2 b b a 4 5 e d 1 2 b b a 4 5 d e mksh: @(#)MIRBSD KSH R44 2013/02/24 2 b 3 c 1 a 2 b 3 1 a c 2 b 3 c 1 a 2 b 3 c 1 a 1 2 3 c a b 1 2 b 3 c a 1 2 b 3 c 4 5 e a d 1 2 b 3 4 5 a c d e zsh: 5.0.2 2 b 3 c 2 b 1 a 2 b 3 2 b 1 a c 2 b 1 a 2 b 1 a 1 2 3 c b a 1 2 b a 1 2 b 3 c 2 b 4 5 e 1 2 b 3 2 b 4 5See also
- Parameter expansion (contains sections for arrays)
- The classic for-loop (contains some examples to iterate over arrays)
- The declare builtin command
- BashFAQ 005 - How can I use array variables? - A very detailed discussion on arrays with many examples.
- BashSheet - Arrays - Bashsheet quick-reference on Greycat's wiki.
Jul 25, 2017 | artofsoftware.org
Sep 2, 2011
Posted by craig in Tools

Once upon a time I was playing with Windows PowerShell (WPSH) and discovered a very useful function for changing to commonly visited directories. The function, called "go", which was written by Peter Provost, grew on me as I used WPSH - so much so that I decided to implement it in bash after my WPSH experiments ended.

The problem is simple. Users of command line interfaces tend to visit the same directories repeatedly over the course of their work, and having a way to get to these oft-visited places without a lot of typing is nice.
The solution entails maintaining a map of key-value pairs, where each key is an alias to a value, which is itself a commonly visited directory. The "go" function will, when given a string input, look that string up in the map, and if the key is found, move to the directory indicated by the value.
The map itself is just a specially formatted text file with one key-value entry per line, while each entry is separated into key-value components by the first encountered colon, with the left side being interpreted as the entry's key and the right side as its value.
Keys are typically short easily typed strings, while values can be arbitrary path names, and even contain references to environment variables. The effect of this is that "go" can respond dynamically to the environment.
Finally, the "go" function finds the map file by referring to an environment variable called "GO_FILE", which should have as its value the full path to the map.
Before I ran into this idea I had maintained a number of shell aliases, (i.e. alias dwork='cd $WORK_DIR'), to achieve a similar end, but every time I wanted to add a new location I was forced to edit my .bashrc file. Then I would subsequently have to resource it or enter the alias again on the command line. Since I typically keep multiple shells open this is just a pain, and so I didn't add new aliases very often. With this method, a new entry in the "go file" is immediately available to all open shells without any extra finagling.
This functionality is related to CDPATH, but they are not replacements for one another. Indeed CDPATH is the more appropriate solution when you want to be able to "cd" to all or most of the sub-directories of some parent. On the other hand, "go" works very well for getting to a single directory easily. For example you might not want "/usr/local" in your CDPATH and still want an abbreviated way of getting to "/usr/local/share".
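To make the contrast concrete, here is a minimal CDPATH sketch (my own illustration, not from the original article):

# With CDPATH set, plain "cd" also searches the listed parent directories,
# so "cd share" works from anywhere once /usr/local is on the list.
export CDPATH=.:/usr/local
cd share    # changes to /usr/local/share if there is no ./share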
The code for the go function, as well as some brief documentation follows.
##############################################
# GO
#
# Inspired by some Windows Power Shell code
# from Peter Provost (peterprovost.org)
#
# Here are some example entries:
# work:${WORK_DIR}
# source:${SOURCE_DIR}
# dev:/c/dev
# object:${USER_OBJECT_DIR}
# debug:${USER_OBJECT_DIR}/debug
###############################################
export GO_FILE=~/.go_locations

function go
{
    if [ -z "$GO_FILE" ]
    then
        echo "The variable GO_FILE is not set."
        return
    fi

    if [ ! -e "$GO_FILE" ]
    then
        echo "The 'go file': '$GO_FILE' does not exist."
        return
    fi

    dest=""
    oldIFS=${IFS}
    IFS=$'\n'

    for entry in `cat ${GO_FILE}`
    do
        if [ "$1" = ${entry%%:*} ]
        then
            #echo $entry
            dest=${entry##*:}
            break
        fi
    done

    if [ -n "$dest" ]
    then
        # Expand variables in the go file.
        #echo $dest
        cd `eval echo $dest`
    else
        echo "Invalid location, valid locations are:"
        cat $GO_FILE
    fi

    export IFS=${oldIFS}
}
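A hypothetical session (the keys and paths below are invented for illustration) might look like this:

$ cat ~/.go_locations
work:${WORK_DIR}
dev:/c/dev
$ go dev       # cd to /c/dev
$ go bogus     # unknown key: prints "Invalid location, ..." and lists the go file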
Jul 25, 2017 | wiki.bash-hackers.org
There are two ways to declare a variable local to a function:

- Using the local keyword, or
- Using declare (which will detect when it was called from within a function and make the variable(s) local).

myfunc() {
    local var=VALUE

    # alternative, only when used INSIDE a function
    declare var=VALUE

    ...
}

The local keyword (or declaring a variable using the declare command) tags a variable to be treated completely local and separate inside the function where it was declared:

foo=external

printvalue() {
    local foo=internal

    echo $foo    # this will print "internal"
}

echo $foo        # this will print "external"
printvalue       # this will print - again - "internal"
echo $foo        # this will print - again - "external"
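One subtlety worth knowing (a minimal sketch of my own, not from the wiki page): Bash locals are dynamically scoped, so a function called from another function sees the caller's local value:

foo=global

inner() { echo "$foo"; }

outer() {
    local foo=from_outer
    inner            # inner sees the caller's local
}

inner    # prints "global"
outer    # prints "from_outer"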
Jul 25, 2017 | wiki.bash-hackers.org
The environment space is not directly related to the topic about scope, but it's worth mentioning.
Every UNIX® process has a so-called environment . Other items, in addition to variables, are saved there, the so-called environment variables . When a child process is created (in Bash e.g. by simply executing another program, say ls to list files), the whole environment including the environment variables is copied to the new process. Reading that from the other side means: Only variables that are part of the environment are available in the child process.

A variable can be tagged to be part of the environment using the export command:

# create a new variable and set it:
# -> This is a normal shell variable, not an environment variable!
myvariable="Hello world."

# make the variable visible to all child processes:
# -> Make it an environment variable: "export" it
export myvariable

Remember that the exported variable is a copy . There is no provision to "copy it back to the parent." See the article about Bash in the process tree!
1) under specific circumstances, also by the shell itself
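A quick sketch of my own that demonstrates the copy semantics - a child may change its copy, but the parent's value is untouched:

myvariable="Hello world."
export myvariable
bash -c 'myvariable="changed"; echo "child sees: $myvariable"'   # child sees: changed
echo "parent still has: $myvariable"                             # Hello world.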
Jul 25, 2017 | wiki.bash-hackers.org
: (colon) and input redirection. The : does nothing, it's a pseudo command, so it does not care about standard input. In the following code example, you want to test mail and logging, but not dump the database, or execute a shutdown:

#!/bin/bash
# Write info mails, do some tasks and bring down the system in a safe way
echo "System halt requested"
mail -s "System halt" netadmin@example.com
logger -t SYSHALT "System halt requested"

##### The following "code block" is effectively ignored
: << "SOMEWORD"
/etc/init.d/mydatabase clean_stop
mydatabase_dump /var/db/db1 /mnt/fsrv0/backups/db1
logger -t SYSHALT "System halt: pre-shutdown actions done, now shutting down the system"
shutdown -h NOW
SOMEWORD
##### The ignored codeblock ends here

What happened? The : pseudo command was given some input by redirection (a here-document) - the pseudo command didn't care about it, effectively, the entire block was ignored.

The here-document-tag was quoted here to avoid substitutions in the "commented" text! Check redirection with here-documents for more.
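To see why the quoting matters, compare with an unquoted tag (a small sketch of my own): substitutions inside the "ignored" block still run:

: << UNQUOTED
$(echo "this command substitution still runs!" >&2)
UNQUOTED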
Jul 25, 2017 | wiki.bash-hackers.org
- Simple locking (against parallel run)
- Rudimentary config files for your scripts
- Editing files with ed(1)
- Collapsing Functions
- Illustrated Redirection Tutorial
- Calculate with dc(1)
- Introduction to pax - the POSIX archiver
- Small getopts tutorial ( under construction! )
- Dissect a bad oneliner An example of a bad oneliner, breakdown and fix (by
kojoro
)- Write tests for ./your-script.sh by using bashtest util
Jul 25, 2017 | wiki.bash-hackers.org
See the Bash changes page for the new stuff introduced.
Besides many bugfixes since Bash 3.2, Bash 4 will bring some interesting new features for shell users and scripters. See also Bash changes for a small general overview with more details.
Not all of the changes and news are included here, just the biggest or most interesting ones. The changes to completion, and the readline component are not covered. Though, if you're familiar with these parts of Bash (and Bash 4), feel free to write a chapter here.
The complete list of fixes and changes is in the CHANGES or NEWS file of your Bash 4 distribution.
The current available stable version is the 4.2 release (February 13, 2011):

New or changed commands and keywords

The new "coproc" keyword

Bash 4 introduces the concept of coprocesses, a well-known feature of other shells. The basic concept is simple: it will start any command in the background and set up an array that is populated with accessible files that represent the file descriptors of the started process.

In other words: it lets you start a process in the background and communicate with its input and output data streams.
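For instance, a minimal illustration (my own example, not from the overview):

coproc BC { bc -l; }
echo "4*a(1)" >&"${BC[1]}"    # write to the coprocess's stdin
read -r pi <&"${BC[0]}"       # read from its stdout
echo "$pi"                    # 3.14159265358979323844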
See The coproc keyword

The new "mapfile" builtin

The mapfile builtin is able to map the lines of a file directly into an array. This avoids having to fill an array yourself using a loop. It enables you to define the range of lines to read, and optionally call a callback, for example to display a progress bar.

See: The mapfile builtin command
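A quick sketch of the basic usage (my own example):

mapfile -t lines < /etc/hosts    # one array element per line, trailing newlines stripped
echo "read ${#lines[@]} lines; first: ${lines[0]}"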
Changes to the "case" keyword

The case construct understands two new action list terminators:

The ;& terminator causes execution to continue with the next action list (rather than terminate the case construct).

The ;;& terminator causes the case construct to test the next given pattern instead of terminating the whole execution.

See The case statement
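A small demonstration of both terminators (my own example; all three branches print):

case linux in
    l*) echo "starts with l" ;;&   # ;;& : also test the next pattern
    *x) echo "ends with x"   ;&    # ;&  : run the next action list unconditionally
    *z) echo "runs although the pattern does not match" ;;
esac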
Changes to the "declare" builtin

The -p option now prints all attributes and values of declared variables (or functions, when used with -f ). The output is fully re-usable as input.

The new option -l declares a variable in a way that the content is converted to lowercase on assignment. For uppercase, the same applies to -u . The option -c causes the content to be capitalized before assignment.

declare -A declares associative arrays (see below).
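For example (my own sketch):

declare -l lower
declare -u upper
lower="Bash Four"; upper="Bash Four"
echo "$lower"    # bash four
echo "$upper"    # BASH FOUR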
Changes to the "read" builtin

The read builtin command has some interesting new features.

The -t option to specify a timeout value has been slightly tuned. It now accepts fractional values and the special value 0 (zero). When -t 0 is specified, read immediately returns with an exit status indicating if there's data waiting or not. However, when a timeout is given and the read builtin times out, any partial data received up to the timeout is stored in the given variable, rather than lost. When a timeout is hit, read exits with a code greater than 128.

A new option, -i , was introduced to be able to preload the input buffer with some text (when Readline is used, with -e ). The user is able to change the text, or press return to accept it.

See The read builtin command
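Two quick sketches of the new behaviour (my own examples):

# poll: succeed only if input is already waiting on stdin
if read -t 0; then echo "data available"; fi

# prompt with an editable default (-i needs readline, i.e. -e)
read -e -i "yes" -p "Continue? " answer
echo "you answered: $answer"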
Changes to the "help" builtin

The builtin itself didn't change much, but the data displayed is more structured now. The help texts are in a better format, much easier to read.

There are two new options: -d displays the summary of a help text, and -m displays a manpage-like format.

Changes to the "ulimit" builtin

Besides the use of the 512-byte blocksize everywhere in POSIX mode, ulimit supports two new limits: -b for max socket buffer size and -T for max number of threads.

Expansions

Brace Expansion

The brace expansion was tuned to provide expansion results with leading zeros when requesting a row of numbers.

See Brace expansion

Parameter Expansion

Methods to modify the case on expansion time have been added.
At expansion time you can modify the syntax by adding operators to the parameter name.

See Case modification on parameter expansion

Substring expansion

When using substring expansion on the positional parameters, a starting index of 0 now causes $0 to be prepended to the list (if the positional parameters are used). Before, this expansion started with $1:

# this should display $0 on Bash v4, $1 on Bash v3
echo ${@:0:1}

Globbing

There's a new shell option
globstar . When enabled, Bash will perform recursive globbing on ** - this means it matches all directories and files from the current position in the filesystem, rather than only the current level.

The new shell option dirspell enables spelling corrections on directory names during globbing.

See Pathname expansion (globbing)
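For instance (my own sketch):

shopt -s globstar
printf '%s\n' **/*.sh    # every .sh file here and in all subdirectories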
Associative Arrays

Besides the classic method of integer indexed arrays, Bash 4 supports associative arrays. An associative array is an array indexed by an arbitrary string, something like:

declare -A ASSOC

ASSOC[First]="first element"
ASSOC[Hello]="second element"
ASSOC[Peter Pan]="A weird guy"

See Arrays

Redirection

There is a new &>> redirection operator, which appends the standard output and standard error to the named file. This is the same as the good old >>FILE 2>&1 notation.

The parser now understands |& as a synonym for 2>&1 | , which redirects the standard error for a command through a pipe.

See Redirection
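For example (my own sketch):

make &>> build.log       # append stdout and stderr to build.log
make |& tee build.log    # same as: make 2>&1 | tee build.log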
Interesting new shell variables
Variable          Description
BASHPID           contains the PID of the current shell (this is different than what $$ does!)
PROMPT_DIRTRIM    specifies the max. level of unshortened pathname elements in the prompt
FUNCNEST          controls the maximum number of shell function recursions

See Special parameters and shell variables

Interesting new Shell Options

The mentioned shell options are off by default unless otherwise mentioned.
Option       Description
checkjobs    check for and report any running jobs at shell exit
compat*      set compatibility modes for older shell versions (influences regular expression matching in [[ ... ]] )
dirspell     enables spelling corrections on directory names during globbing
globstar     enables recursive globbing with **
lastpipe     (4.2) execute the last command in a pipeline in the current environment

See List of shell options

Misc
- If a command is not found, the shell attempts to execute a shell function named command_not_found_handle , supplying the command words as the function arguments. This can be used to display user-friendly messages or perform different command searches.
- The behaviour of the set -e ( errexit ) mode was changed; it now acts more intuitively (and is better documented in the manpage).
- The output target for the xtrace ( set -x / set +x ) feature is configurable since Bash 4.1 (previously, it was fixed to stderr ): a variable named BASH_XTRACEFD can be set to the file descriptor that should get the output.
- Bash 4.1 is able to log the history to syslog (only to be enabled at compile time in config-top.h ).
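A short sketch of the xtrace redirection (my own example):

exec 7> /tmp/trace.log    # open a dedicated file descriptor
BASH_XTRACEFD=7           # xtrace output now goes to /tmp/trace.log
set -x
echo "traced"
set +x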
Jul 25, 2017 | eli.thegreenplace.net
June 11, 2013 at 19:27 Tags Linux , Software & Tools
Update (Jan 26, 2016): I posted a short update about my usage of persistent history.
For someone spending most of his time in front of a Linux terminal, history is very important. But traditional bash history has a number of limitations, especially when multiple terminals are involved (I sometimes have dozens open). Also it's not very good at preserving just the history you're interested in across reboots.
There are many approaches to improve the situation; here I want to discuss one I've been using very successfully in the past few months - a simple "persistent history" that keeps track of history across terminal instances, saving it into a dot-file in my home directory ( ~/.persistent_history ). All commands, from all terminal instances, are saved there, forever. I found this tremendously useful in my work - it saves me time almost every day.
Why does it go into a separate history and not the main one which is accessible by all the existing history manipulation tools? Because IMHO the latter is still worthwhile to be kept separate for the simple need of bringing up recent commands in a single terminal, without mixing up commands from other terminals. While the terminal is open, I want to press "Up" and get the previous command, even if I've executed a 1000 other commands in other terminal instances in the meantime.
Persistent history is very easy to set up. Here's the relevant portion of my ~/.bashrc :
log_bash_persistent_history()
{
  [[ $(history 1) =~ ^\ *[0-9]+\ +([^\ ]+\ [^\ ]+)\ +(.*)$ ]]
  local date_part="${BASH_REMATCH[1]}"
  local command_part="${BASH_REMATCH[2]}"
  if [ "$command_part" != "$PERSISTENT_HISTORY_LAST" ]
  then
    echo $date_part "|" "$command_part" >> ~/.persistent_history
    export PERSISTENT_HISTORY_LAST="$command_part"
  fi
}

# Stuff to do on PROMPT_COMMAND
run_on_prompt_command()
{
  log_bash_persistent_history
}

PROMPT_COMMAND="run_on_prompt_command"
2013-06-09 17:48:11 | cat ~/.persistent_history
2013-06-09 17:49:17 | vi /home/eliben/.bashrc
2013-06-09 17:49:23 | ls
OK, so we have ~/.persistent_history , how do we use it? First, I should say that it's not used very often, which kind of connects to the point I made earlier about separating it from the much higher-use regular command history. Sometimes I just look into the file with vi or tail , but mostly this alias does the trick for me:
alias phgrep='cat ~/.persistent_history|grep --color'

The alias name mirrors another alias I've been using for ages:
alias hgrep='history|grep --color'

Another tool for managing persistent history is a trimmer. I said earlier this file keeps the history "forever", which is a scary word - what if it grows too large? Well, first of all - worry not. At work my history file grew to about 2 MB after 3 months of heavy usage, and 2 MB is pretty small these days. Appending to the end of a file is very, very quick (I'm pretty sure it's a constant-time operation) so the size doesn't matter much. But trimming is easy:
tail -20000 ~/.persistent_history | tee ~/.persistent_history

Trims to the last 20000 lines. This should be sufficient for at least a couple of months of history, and your workflow should not really rely on more than that :-)
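If the in-place tail | tee pipeline makes you nervous (the two processes race on the same file), a more conservative variant of the same trim (my own sketch) is:

tail -20000 ~/.persistent_history > ~/.persistent_history.tmp &&
mv ~/.persistent_history.tmp ~/.persistent_history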
Finally, what's the use of having a tool like this without employing it to collect some useless statistics. Here's a histogram of the 15 most common commands I've used on my home machine's terminal over the past 3 months:
ls      : 865
vi      : 863
hg      : 741
cd      : 512
ll      : 289
pss     : 245
hst     : 200
python  : 168
make    : 167
git     : 148
time    : 94
python3 : 88
./python: 88
hpu     : 82
cat     : 80

Some explanation: hst is an alias for hg st . hpu is an alias for hg pull -u . pss is my awesome pss tool, and is the reason why you don't see any calls to grep and find in the list. The proportion of Mercurial vs. git commands is likely to change in the near future.

Use history -a to preserve history from multiple terminals. This is a very neat trick!!!
Bash history handling with multiple terminals
The bash session that is saved is the one for the terminal that is closed the latest. If you want to save the commands for every session, you could use the trick explained here.
export PROMPT_COMMAND='history -a'

To quote the manpage: "If set, the value is executed as a command prior to issuing each primary prompt."
So every time my command has finished, it appends the unwritten history item to ~/.bash_history .

ATTENTION: If you use multiple shell sessions and do not use this trick, you need to write the history manually to preserve it, using the command history -a
See also:
Jul 07, 2017 | opensource.com
Anyone who has started a terminal in Linux is familiar with the default Bash prompt:
[user@host ~]$
But did you know is that this is completely customizable and can contain some very useful information? Here are a few hidden treasures you can use to customize your Bash prompt.
How is the Bash prompt set?

The Bash prompt is set by the environment variable PS1 (Prompt String 1), which is used for interactive shell prompts. There is also a PS2 variable, which is used when more input is required to complete a Bash command.
[dneary@dhcp-41-137 ~]$ export PS1="[Linux Rulez]$ "
[Linux Rulez]$ export PS2="... "
[Linux Rulez]$ if true; then
... echo "Success!"
... fi
Success!

Where is the value of PS1 set?

PS1 is a regular environment variable.
The system default value is set in /etc/bashrc . On my system, the default prompt is set with this line:
[ " $PS1 " = "\\s-\ \v \\ \$ " ] && PS1 = "[\u@\h \W]\ \$ "
This tests whether the value of PS1 is \s-\v$ (the system default value), and if it is, it sets PS1 to the value [\u@\h \W]\\$ .
If you want to see a custom prompt, however, you should not be editing /etc/bashrc . You should instead add it to .bashrc in your Home directory.
What do \u, \h, \W, \s, and \v mean?
In the PROMPTING section of man bash , you can find a description of all the special characters in PS1 and PS2 . The following are the default options:
- \u : Username
- \h : Short hostname
- \W : Basename of the current working directory ( ~ for home, the basename of the current directory elsewhere)
- \s : Shell name ( bash or sh , depending on how the shell is called)
- \v : The shell's version

What other special strings can I use in the prompts?

There are a number of special strings that can be useful.
- \d : Expands to the date in the format "Tue Jun 27"
- \D{fmt} : Allows custom date formats - see man strftime for the available options
- \D{%c} : Gives the date and time in the current locale
- \n : Include a new line (see multi-line prompts below)
- \w : The full path of the current working directory
- \H : The full hostname for the current machine
- \! : History number - you can run any previous command with its history number by using the shell history event designator ! followed by the number for the specific command you are interested in. (Using Linux history is yet another tutorial...)
There are many other special characters - you can see the full list in the PROMPTING section of the Bash man page .
Multi-line prompts

If you use longer prompts (say if you include \H or \w or a full date-time), you may want to break things over two lines. Here is an example of a multi-line prompt, with the date, time, and current working directory on one line, and username@hostname on the second line:

PS1="\D{%c} \w \n[\u@\H]$ "

Are there any other interesting things I can do?
One thing people occasionally do is create colorful prompts. While I find them annoying and distracting, you may like them. For example, to change the date-time above to display in red text, the directory in cyan, and your username on a yellow background, you could try this:
PS1 = "\[\e[31m\]\D{%c}\[\e[0m\]
\[\e[36m\]\w\[\e[0m\] \n [\[\e[1;43m\]\u\[\e[0m\]@\H]$ "To dissect this:
- \[..\] declares some non-printed characters
- \e[.. is an escape character. What follows is a special escape sequence to change the color (or other characteristic) in the terminal
- 31m is red text ( 41m would be a red background)
- 36m is cyan text
- 1;43m declares a yellow background ( 1;33m would be yellow text)
- \[\e[0m\] at the end resets the colors to the terminal defaults
You can find more colors and tips in the Bash prompt HOWTO . You can even make text inverted or blinking! Why on earth anyone would want to do this, I don't know. But you can!
What are your favorite Bash prompt customizations? And which ones have you seen that drive you crazy? Let me know in the comments.

Ben Cotton on 07 Jul 2017:

I really like the Bash-Beautify setup by Chris Albrecht:
https://github.com/KeyboardCowboy/Bash-Beautify/blob/master/.bash_beautify

When you're in a version-controlled directory, it includes the VCS information (e.g. the git branch and status), which is really handy if you do development.

Victorhck on 07 Jul 2017:

An easy drag and drop interface to build your own .bashrc/PS1 configuration.
've phun!
How Docker Is Growing Its Container Business (Apr 21, 2017, 07:00)
VIDEO: Ben Golub, CEO of Docker Inc., discusses the business of containers and where Docker is headed.
Understanding Shell Initialization Files and User Profiles in Linux (Apr 22, 2017, 10:00)
tecmint: Learn about shell initialization files in relation to user profiles for local user management in Linux.

Cockpit: An Easy Way to Administer Multiple Remote Linux Servers via a Web Browser (Apr 23, 2017, 18:00)
Cockpit is a free and open source web-based system management tool where users can easily monitor and manage multiple remote Linux servers.

The Story of Getting SSH Port 22 (Apr 24, 2017, 13:00)
It's no coincidence that the SSH protocol got assigned to port 22.

How To Suspend A Process And Resume It Later In Linux (Apr 24, 2017, 11:00)
This brief tutorial describes how to suspend or pause a running process and resume it later in Unix-like operating systems.
ShellCheck - A Tool That Shows Warnings and Suggestions for Shell Scripts (Apr 25, 2017, 06:00)
tecmint: ShellCheck is a static analysis tool that shows warnings and suggestions concerning bad code in bash/sh shell scripts.

Quick guide for Linux check disk space (Apr 26, 2017, 14:00)
Do you know how much space is left on your Linux system?
Jul 16, 2017 | www.ostechnix.com
Today, I have stumbled upon a collection of useful BASH scripts for heavy command-line users. These scripts, known as Bash-Snippets, might be quite helpful for those who live in the Terminal all day. Want to check the weather in the place where you live? There's a script that will do that for you. Wondering about stock prices? You can run a script that displays the current details of a stock. Feel bored? You can watch some YouTube videos - all from the command line, without installing any heavyweight, memory-hungry GUI applications.

Bash-Snippets provides the following 12 useful tools:
Bash-Snippets – A Collection Of Useful BASH Scripts For Heavy Commandline Users
- currency – Currency converter.
- stocks – Provides certain Stock details.
- weather – Displays weather details of your place.
- crypt – Encrypt and decrypt files.
- movies – Search and display a movie details.
- taste – Recommendation engine that provides three similar items like the supplied item (The items can be books, music, artists, movies, and games etc).
- short – URL Shortener
- geo – Provides the details of wan, lan, router, dns, mac, and ip.
- cheat – Provides cheat-sheets for various Linux commands .
- ytview – Watch YouTube from Terminal.
- cloudup – A tool to backup your GitHub repositories to bitbucket.
- qrify – Turns the given string into a qr code.
Installation

You can install these scripts on any OS that supports BASH.
First, clone the GIT repository using command:
git clone https://github.com/alexanderepstein/Bash-SnippetsSample output would be:
Cloning into 'Bash-Snippets'...
remote: Counting objects: 1103, done.
remote: Compressing objects: 100% (45/45), done.
remote: Total 1103 (delta 40), reused 55 (delta 23), pack-reused 1029
Receiving objects: 100% (1103/1103), 1.92 MiB | 564.00 KiB/s, done.
Resolving deltas: 100% (722/722), done.

Go to the cloned directory:
cd Bash-Snippets/

Git checkout to the latest stable release:
git checkout v1.11.0

Finally, install the Bash-Snippets using command:
sudo ./install.sh

This will ask you which scripts to install. Just type Y and press the ENTER key to install the respective script. If you don't want to install a particular script, type N and hit ENTER.
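Assuming the installer put the selected scripts on your PATH, a quick smoke test might look like this (invocations inferred from the tool descriptions above, so treat them as a sketch):

$ qrify "https://example.com"    # render the string as a QR code in the terminal
$ cheat tar                      # show a cheat-sheet for the tar command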
Jul 16, 2017 | github.com
Download directories tend to get sloppy: most downloaded files just sit there, and we can't delete them blindly without risking the loss of important files. Creating a bunch of folders by hand and moving the appropriate files into them isn't practical either.

So, what to do to avoid this? It's better to organize files with the help of a classifier; afterwards, unnecessary files are easy to delete. The Classifier app is written in Python.

How to organize a directory? Simply navigate to the directory whose files you want to organize/classify and run the classifier command; it will take a few minutes or more depending on the number of files in the directory.

Make a note: there is no undo option if you want to go back, so finalize your decision before running classifier in a directory. Also, it won't move folders.

Install Classifier in Linux through pip

pip is the recommended tool for installing Python packages in Linux. Use the pip command instead of your package manager to get the latest build.
For Debian based systems.
$ sudo apt-get install python-pipFor RHEL/CentOS based systems.
$ sudo yum install python-pipFor Fedora
$ sudo dnf install python-pipFor openSUSE
$ sudo zypper install python-pipFor Arch Linux based systems
$ sudo pacman -S python-pipFinally run the pip tool to install Classifier on Linux.
$ sudo pip install classifier

Organize pattern files into specific folders

First I will go with the default option, which organizes files into specific folders by pattern. This will create a bunch of directories based on the file types and move the files into them.
See how my directory looks now (before running the classifier command):
$ pwd
/home/magi/classifier

$ ls -lh
total 139M
-rw-r--r-- 1 magi magi 4.5M Mar 21 21:21 Aaluma_Doluma.mp3
-rw-r--r-- 1 magi magi  26K Mar 21 21:12 battery-monitor_0.4-xenial_all.deb
-rw-r--r-- 1 magi magi  24K Mar 21 21:12 buku-command-line-bookmark-manager-linux.png
-rw-r--r-- 1 magi magi    0 Mar 21 21:43 config.php
-rw-r--r-- 1 magi magi   25 Mar 21 21:13 core.py
-rw-r--r-- 1 magi magi 101K Mar 21 21:12 drawing.svg
-rw-r--r-- 1 magi magi  86M Mar 21 21:12 go1.8.linux-amd64.tar.gz
-rw-r--r-- 1 magi magi   28 Mar 21 21:13 index.html
-rw-r--r-- 1 magi magi   27 Mar 21 21:13 index.php
-rw-r--r-- 1 magi magi  48M Apr 30  2016 Kabali Tamil Movie _ Official Teaser _ Rajinikanth _ Radhika Apte _ Pa Ranjith-9mdJV5-eias.webm
-rw-r--r-- 1 magi magi   28 Mar 21 21:12 magi1.txt
-rw-r--r-- 1 magi magi   66 Mar 21 21:12 ppa.py
-rw-r--r-- 1 magi magi 1.1K Mar 21 21:12 Release.html
-rw-r--r-- 1 magi magi  45K Mar 21 21:12 v0.4.zip

Navigate to the corresponding directory where you want to organize files, then run the classifier command without any option to achieve it:

$ classifier
Scanning Files
Done!

See how the directory looks after running the classifier command:
$ ls -lh
total 44K
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 Archives
-rw-r--r-- 1 magi magi    0 Mar 21 21:43 config.php
-rw-r--r-- 1 magi magi   25 Mar 21 21:13 core.py
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 DEBPackages
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 Documents
-rw-r--r-- 1 magi magi   28 Mar 21 21:13 index.html
-rw-r--r-- 1 magi magi   27 Mar 21 21:13 index.php
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 Music
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 Pictures
-rw-r--r-- 1 magi magi   66 Mar 21 21:12 ppa.py
-rw-r--r-- 1 magi magi 1.1K Mar 21 21:12 Release.html
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 Videos

Make a note: this will organize only general category files such as docs, audio, video, pictures, archives, etc., and won't organize .py, .html, .php, etc.
Classify specific file types into a specific folder

To classify specific file types into a specific folder, just add -st (the file types) and -sf (the folder name) to the classifier command. For a concrete illustration, I'm going to move the .py , .html , and .php files into a Development folder. Here is the exact command to achieve it:

$ classifier -st .py .html .php -sf "Development"
Scanning Files
Done!

If the folder doesn't exist, classifier will create a new one and organize the files into it. See the following output: it created a Development directory and moved all the matching files into it.

$ ls -lh
total 28K
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 Archives
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 DEBPackages
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:51 Development
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 Documents
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 Music
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 Pictures
drwxr-xr-x 2 magi magi 4.0K Mar 21 21:28 Videos

For better clarification, I have listed the Development folder's files:

$ ls -lh Development/
total 12K
-rw-r--r-- 1 magi magi  0 Mar 21 21:43 config.php
-rw-r--r-- 1 magi magi 25 Mar 21 21:13 core.py
-rw-r--r-- 1 magi magi 28 Mar 21 21:13 index.html
-rw-r--r-- 1 magi magi 27 Mar 21 21:13 index.php
-rw-r--r-- 1 magi magi  0 Mar 21 21:43 ppa.py
-rw-r--r-- 1 magi magi  0 Mar 21 21:43 Release.html

To organize files by date (this arranges the current directory's files based on their dates):

$ classifier -dt

To save organized files in a different location, add -d (source directory) and -o (destination directory) to the classifier command:

$ classifier -d /home/magi/organizer -o /home/magi/2g
Jul 06, 2017 | www.ibm.com
Demystify test, [, [[, ((, and if-then-else
Ian Shields
Published on February 20, 2007
Do you sometimes wonder how to use parameters with your scripts, and how to pass them to internal functions or other scripts? Do you need to do simple validity tests on parameters or options, or perform simple extraction and replacement operations on the parameter strings? This tip helps you with parameter use and the various parameter expansions available in the bash shell.
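As a taste of what the article covers, here is a minimal sketch of my own contrasting the constructs in its title:

x="a b"
[ -n "$x" ] && echo "classic test: non-empty"        # [ is the POSIX test builtin
[[ $x == a* ]] && echo "[[ matches glob patterns"    # [[ is a bash keyword
(( ${#x} > 2 )) && echo "(( )) evaluates arithmetic"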
Jun 18, 2017 | opensource.com
About conditional, substring, and substitution parameter expansion operators

Conditional parameter expansion

Conditional parameter expansion allows branching on whether the parameter is unset, empty, or has content. Based on these conditions, the parameter can be expanded to its value, a default value, or an alternate value; throw a customizable error; or reassign the parameter to a default value. The following table shows the conditional parameter expansions - each row shows a parameter expansion using an operator to potentially modify the expansion, with the columns showing the result of that expansion given the parameter's status as indicated in the column headers. Operators with the ':' prefix treat parameters with empty values as if they were unset.
parameter expansion    unset var    var=""       var="gnu"
${var-default}         default      -            gnu
${var:-default}        default      default      gnu
${var+alternate}       -            alternate    alternate
${var:+alternate}      -            -            alternate
${var?error}           error        -            gnu
${var:?error}          error        error        gnu
As an example, let's try opening a user's editor on a file specified by the OUT_FILE variable. If either the EDITOR environment variable or our OUT_FILE variable is not specified, we will have a problem. Using a conditional expansion, we can ensure that when the EDITOR variable is expanded, we get the specified value or at least a sane default:
As an example, let's try opening a user's editor on a file specified by the OUT_FILE variable. If either the EDITOR environment variable or our OUT_FILE variable is not specified, we will have a problem. Using a conditional expansion, we can ensure that when the EDITOR variable is expanded, we get the specified value or at least a sane default:

$ echo ${EDITOR}
/usr/bin/vi
$ echo ${EDITOR:-$(which nano)}
/usr/bin/vi
$ unset EDITOR
$ echo ${EDITOR:-$(which nano)}
/usr/bin/nano

Building on the above, we can run the editor command and abort with a helpful error at runtime if there's no filename specified:
$ ${EDITOR:-$(which nano)} ${OUT_FILE:?Missing filename}
bash: OUT_FILE: Missing filename

Substring parameter expansion

Parameters can be expanded to just part of their contents, either by offset or by removing content matching a pattern. When specifying a substring offset, a length may optionally be specified. If running Bash version 4.2 or greater, negative numbers may be used as offsets from the end of the string. Note the parentheses used around the negative offset, which ensure that Bash does not parse the expansion as having the conditional default expansion operator from above:
$ location="CA 90095"
$ echo "Zip Code: ${location:3}"
Zip Code: 90095
$ echo "Zip Code: ${location:(-5)}"
Zip Code: 90095
$ echo "State: ${location:0:2}"
State: CA

Another way to take a substring is to remove characters from the string matching a pattern, either from the left edge with the # and ## operators or from the right edge with the % and %% operators. A useful mnemonic is that # appears left of a comment and % appears right of a number. When the operator is doubled, it matches greedily, as opposed to the single version, which removes the most minimal set of characters matching the pattern.
var="open source" parameter expansion offset of 5
length of 4${var:offset} source ${var:offset:length} sour pattern of *o? ${var#pattern} en source ${var##pattern} rce pattern of ?e* ${var%pattern} open sour ${var%%pattern} o The pattern-matching used is the same as with filename globbing: * matches zero or more of any character, ? matches exactly one of any character, [...] brackets introduce a character class match against a single character, supporting negation ( ^ ), as well as the posix character classes, e.g. . By excising characters from our string in this manner, we can take a substring without first knowing the offset of the data we need:
$ echo $PATH
/usr/local/bin:/usr/bin:/bin
$ echo "Lowest priority in PATH: ${PATH##*:}"
Lowest priority in PATH: /bin
$ echo "Everything except lowest priority: ${PATH%:*}"
Everything except lowest priority: /usr/local/bin:/usr/bin
$ echo "Highest priority in PATH: ${PATH%%:*}"
Highest priority in PATH: /usr/local/bin

Substitution in parameter expansion

The same types of patterns are used for substitution in parameter expansion. Substitution is introduced with the / or // operators, followed by two arguments separated by another / representing the pattern and the string to substitute. The pattern matching is always greedy, so the doubled version of the operator, in this case, causes all matches of the pattern to be replaced in the variable's expansion, while the singleton version replaces only the leftmost.
var="free and open" parameter expansion pattern of
string of _${var/pattern/string} free_and open ${var//pattern/string} free_and_open The wealth of parameter expansion modifiers transforms Bash variables and other parameters into powerful tools beyond simple value stores. At the very least, it is important to understand how parameter expansion works when reading Bash scripts, but I suspect that not unlike myself, many of you will enjoy the conciseness and expressiveness that these expansion modifiers bring to your scripts as well as your interactive sessions.
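As a runnable transcript of the table above (my own, using a plain space as the pattern):

$ var="free and open"
$ echo "${var/ /_}"     # single / : leftmost match only
free_and open
$ echo "${var// /_}"    # double // : every match
free_and_open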
Apr 26, 2017 | www.tecmint.com
by Aaron Kili | Published: April 24, 2017

ShellCheck is a static analysis tool that shows warnings and suggestions concerning bad code in bash/sh shell scripts. It can be used in several ways: from the web, by pasting your shell script into the online editor (Ace, a standalone code editor written in JavaScript) at https://www.shellcheck.net for instant feedback - it is always synchronized to the latest git commit, and is the simplest way to give ShellCheck a go. Alternatively, you can install it on your machine and run it from the terminal, integrate it with your text editor, or include it in your build or test suites.
There are three things ShellCheck does primarily:
- It points out and explains typical beginner's syntax issues that cause a shell to give cryptic error messages.
- It points out and explains typical intermediate level semantic problems that cause a shell to behave strangely and counter-intuitively.
- It also points out subtle caveats, corner cases and pitfalls that may cause an advanced user's otherwise working script to fail under future circumstances.
In this article, we will show how to install and use ShellCheck in the various ways to find bugs or bad code in your shell scripts in Linux.
How to Install and Use ShellCheck in LinuxShellCheck can be easily installed locally through your package manager as shown.
On Debian/Ubuntu:

# apt-get install shellcheck

On RHEL/CentOS:

# yum -y install epel-release
# yum install ShellCheck

On Fedora:

# dnf install ShellCheck

Once ShellCheck is installed, let's take a look at how to use it in the various methods we mentioned before.
Using ShellCheck From the Web

Go to https://www.shellcheck.net and paste your script into the Ace editor provided; you will view the output at the bottom of the editor, as shown in the screenshot below.
In the following example, the test shell script consists of the following lines:
#!/bin/bash
#declare variables
MINARGS=2
E_NOTROOT=50
E_MINARGS=100
#echo values of variables
echo $MINARGS
echo $E_NONROOT
exit 0;

ShellCheck Online Shell Script Analysis Tool
From the screenshot above, the first two variables E_NOTROOT and E_MINARGS have been declared but are unused; ShellCheck reports these as "suggestive errors":
SC2034: E_NOTROOT appears unused. Verify it or export it.
SC2034: E_MINARGS appears unused. Verify it or export it.

Then secondly, the wrong name (in the statement echo $E_NONROOT ) was used to echo the variable E_NOTROOT , which is why ShellCheck shows the error:
SC2153: Possible misspelling: E_NONROOT may not be assigned, but E_NOTROOT is

Again, when you look at the echo commands, the variables have not been double quoted (which helps to prevent globbing and word splitting), therefore ShellCheck shows the warning:
SC2086: Double quote to prevent globbing and word splitting.
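For reference, one possible cleanup of the test script that addresses all three findings (my own sketch, not from the article):

#!/bin/bash
#declare variables
MINARGS=2
E_NOTROOT=50
E_MINARGS=100
#echo values of variables, quoted and with the right names
echo "$MINARGS" "$E_MINARGS"
echo "$E_NOTROOT"
exit 0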
Using ShellCheck From the Terminal

You can also run ShellCheck from the command line; we'll use the same shell script above, as follows:

$ shellcheck test.sh

ShellCheck Checks Bad Code in Shell Scripts

Using ShellCheck From the Text Editor
You can also view ShellCheck suggestions and warnings directly in a variety of editors; this is probably a more efficient way of using ShellCheck - once you save a file, it shows you any errors in the code.
In Vim , use ALE or Syntastic (we will use this):
Start by installing Pathogen so that it's easy to install syntastic. Run the commands below to get the pathogen.vim file and the directories it needs:
# mkdir -p ~/.vim/autoload ~/.vim/bundle && curl -LSso ~/.vim/autoload/pathogen.vim https://tpo.pe/pathogen.vimThen add this to your ~/.vimrc file:
execute pathogen#infect()

Once you have installed pathogen, you can put syntastic into ~/.vim/bundle as follows:
# cd ~/.vim/bundle && git clone --depth=1 https://github.com/vim-syntastic/syntastic.gitNext, close vim and start it back up to reload it, then type the command below:
:Helptags

If all goes well, you should have ShellCheck integrated with Vim; the following screenshots show how it works using the same script above.
Check Bad Shell Script Code in Vim
In case you get an error after following the steps above, then you possibly didn't install Pathogen correctly. Redo the steps but this ensure that you did the following:
- Created both the ~/.vim/autoload and ~/.vim/bundle directories.
- Added the execute pathogen#infect() line to your ~/.vimrc file.
- Did the git clone of syntastic inside ~/.vim/bundle .
- Use appropriate permissions to access all of the above directories.
You can also use other editors to check bad code in shell scripts like:
- In Emacs , use Flycheck .
- In Sublime , employ SublimeLinter.
- In Atom , make use of Linter.
- In most other editors, use GCC error compatibility.
Note : Use the gallery of bad code to carry out more ShellChecking.
ShellCheck Github Repository: https://github.com/koalaman/shellcheck
That's it! In this article, we showed how to install and use ShellCheck to find bugs or bad code in your shell scripts in Linux. Share your thoughts with us via the comment section below.
Do you know of any other similar tools out there? If yes, then share info about them in the comments as well.
Mar 20, 2017 | www.ibm.com
Build intelligent, unattended scripts
Martin Brown
Published on July 03, 2007

This content is part of the series: System Administration Toolkit
The typical UNIX administrator has a key range of utilities, tricks, and systems he or she uses regularly to aid in the process of administration. There are key utilities, command-line chains, and scripts that are used to simplify different processes. Some of these tools come with the operating system, but a majority of the tricks come through years of experience and a desire to ease the system administrator's life. The focus of this series is on getting the most from the available tools across a range of different UNIX environments, including methods of simplifying administration in a heterogeneous environment.
The unattended script problem

There are many issues around executing unattended scripts - that is, scripts that you run either automatically through a service like cron or at commands.

The default mode of cron and at commands, for example, is for the output of the script to be captured and then emailed to the user that ran the script. You don't always want the user to get the email that cron sends by default (especially if everything ran fine) - sometimes the user who ran the script and the person actually responsible for monitoring that output are different.

Therefore, you need better methods for trapping and identifying errors within the script, better methods for communicating problems, and optional successes to the appropriate person.
Getting the scripts set up correctly is vital; you need to ensure that the script is configured in such a way that it's easy to maintain and that the script runs effectively. You also need to be able to trap errors and output from programs and ensure the security and validity of the environment in which the script executes. Read along to find out how to do all of this.
Setting up the environment

Before getting into the uses of unattended scripts, you need to make sure that you have set up your environment properly. There are various elements that need to be explicitly configured as part of your script, and taking the time to do this not only ensures that your script runs properly, but it also makes the script easier to maintain.
Some things you might need to think about include:
- Search path for applications
- Search path for libraries
- Directory locations
- Creating directories or paths
- Common files
Some of these elements are straightforward enough to organize. For example, you can set the path using the following in most Bourne-compatible shells (sh, Bash, ksh, and zsh):
PATH=/usr/bin:/bin:/usr/sbin
For directory and file locations, just set a variable at the header of the script. You can then use the variable in each place where you would have used the filename. For example, when writing to a log file, you might use Listing 1 .
Listing 1. Writing a log file
LOGFILE=/tmp/output.log
do_something >>$LOGFILE
do_another >>$LOGFILE
By setting the name once and then using the variable, you ensure that you don't get the filename wrong, and if you need to change the filename, you only need to change the name once.
Using a single filename and variable also makes it very easy to create a complex filename. For example, adding a date to your log filename is made easier by using the date command with a format specification:
DATE=`date +%Y%m%d.%H%M`
The above command creates a string containing the date in the format YYYYMMDD.HHMM, for example, 20070524.2359. You can insert that date variable into a filename so that your log file is tagged according to the date it was created.
If you are not using a date/time unique identifier in the log filename, it's a good idea to insert some other unique identifier in case two scripts are run simultaneously. If your script is writing to the same file from two different processes, you will end up either with corrupted information or missing information.
All shells support a unique shell ID, based on the shell process ID, and are accessible through the special $$ variable name. By using a global log variable, you can easily create a unique file to be used for logging:
LOGFILE=/tmp/$$.err
You can also apply the same global variable principles to directories:
LOGDIR=/var/log/my_app
To ensure that the directories are created, use the -p option for mkdir to create the entire path of the directory you want to use:
mkdir -p $LOGDIR
Fortunately, this format won't complain if the directories already exist, which makes it ideal for running in an unattended script.
Finally, it is generally a good idea to use full path names rather than localized paths in your unattended scripts so that you can use the previous principles together.
Listing 2. Using full path names in unattended scripts
DATE=`date +%Y%m%d.%H%M`
LOGDIR=/usr/local/mcslp/logs/rsynclog
mkdir -p $LOGDIR
LOGNAME=$LOGDIR/$DATE.log
Now that you've set up the environment, let's look at how you can use these principles to help with the general, unattended scripts.
Writing a log file

Probably the simplest improvement you can make to your scripts is to write the output from your script to a log file. You might not think this is necessary, but the default operation of cron is to save the output from the script or command that was executed, and then email it to the user who owned the crontab or at job.
This is less than perfect for a number of reasons. First of all, the configured user that might be running the script might not be the same as the real person that needs to handle the output. You might be running the script as root, even though the output of the script or command when run needs to go to somebody else. Setting up a general filter or redirection won't work if you want to send the output of different commands to different users.
The second reason is a more fundamental one. Unless something goes wrong, it's not necessary to receive the output from a script. The cron daemon sends you the output from stdout and stderr, which means that you get a copy of the output even if the script executed successfully.
The final reason is about the management and organization of the information and output generated. Email is not always an efficient way of recording and tracking the output from scripts that are run automatically. Maybe you just want to keep an archive of the log from a successful run, or email a copy of the error log in the event of a problem.
Writing out to a log file can be handled in a number of different ways. The most straightforward way is to redirect output to a file for each command (see Listing 3 ).
Listing 3. Redirecting output to a file
cd /shared
rsync --delete --recursive . /backups/shared >$LOGFILE
If you want to combine error and standard output into a single file, use numbered redirection (see Listing 4 ).
Listing 4. Combining error and standard output into a single file
cd /shared
rsync --delete --recursive . /backups/shared >$LOGFILE 2>&1
Listing 4 writes out the information to the same log file.
You might also want to write out the information to separate files (see Listing 5 ).
Listing 5. Writing out information to separate files
cd /shared
rsync --delete --recursive . /backups/shared >$LOGFILE 2>$ERRFILE
For multiple commands, the redirections can get complex and repetitive. You must ensure, for example, that you are appending, not overwriting, information to the log file (see Listing 6 ).
Listing 6. Appending information to the log file
cd /etc
rsync --delete --recursive . /backups/etc >>$LOGFILE 2>>$ERRFILE
A simpler solution, if your shell supports it, is to use an inline block for a group of commands, and then to redirect the output from the block as a whole. The result is that you can rewrite the lines in Listing 7 using the structure in Listing 8 .
Listing 7. Logging in long form
cd /shared
rsync --delete --recursive . /backups/shared >$LOGFILE 2>$ERRFILE
cd /etc
rsync --delete --recursive . /backups/etc >>$LOGFILE 2>>$ERRFILE
Listing 8 shows an inline block for grouping commands.
Listing 8. Logging using a block
{
cd /shared
rsync --delete --recursive . /backups/shared
cd /etc
rsync --delete --recursive . /backups/etc
} >$LOGFILE 2>$ERRFILE
The enclosing braces group the commands into a single block; note that no subshell or secondary process is created, the block is simply treated as one logical unit within the current shell (parentheses, by contrast, would run the commands in a true subshell). Using the block, you can collectively redirect standard and error output for all the commands at once instead of for each individual command.
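To see the difference between the two forms, compare a brace group with a true subshell; a minimal sketch:

{ cd /shared; pwd; } >group.log 2>&1     # brace group: runs in the current shell; the cd persists afterwards
( cd /shared; pwd ) >subshell.log 2>&1   # subshell: runs in a child process; the cd does not persist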
Trapping errors and reporting them
One of the main advantages of the block is that you can place a wrapper around the main content of the script, redirect the errors, and then send a formatted email with the status of the script execution.
For example, Listing 9 shows a more complete script that sets up the environment, executes the actual commands and bulk of the process, traps the output, and then sends an email with the output and error information.
Listing 9. Using a subshell for emailing a more useful log
LOGFILE=/tmp/$$.log
ERRFILE=/tmp/$$.err
ERRORFMT=/tmp/$$.fmt
{
set -e
cd /shared
rsync --delete --recursive . /backups/shared
cd /etc
rsync --delete --recursive . /backups/etc
} >$LOGFILE 2>$ERRFILE
{
echo "Reported output"
echo
cat /tmp/$$.log
echo "Error output"
echo
cat /tmp/$$.err
} >$ERRORFMT 2>&1
mailx -s 'Log output for backup' root <$ERRORFMT
rm -f $LOGFILE $ERRFILE $ERRORFMT
If you use the block trick and your shell supports shell options (Bash, ksh, and zsh), then you might want to set some shell options to ensure that the block is terminated correctly on an error. For example, the -e (errexit) option within Bash causes the shell to terminate immediately when a simple command (for example, any external command called through the script) fails. In Listing 9, for example, without errexit the block would just continue and run the next command if the first rsync failed. However, there are times when you want to stop the moment a command fails because continuing could be more damaging. By setting errexit, the block immediately terminates when the first command fails.
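A minimal illustration of errexit, using parentheses here so that only the subshell terminates rather than the whole script:

(
    set -e
    false                  # errexit: the subshell terminates here with a non-zero status
    echo "not reached"
)
echo "script continues; subshell exit status was $?"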
Setting options and ensuring security
Another issue with automated scripts is ensuring the security of the script and, in particular, ensuring that the script does not fail because of bad configuration. You can use shell options for this process.
There are other options you might want to set in a shell-independent manner (and the richer the shell, the better it is, as a rule, at trapping these instances). In the Bash shell, for example, -u ensures that any unset variables are treated as an error. This can be useful to ensure that an unattended script does not try to execute when a required variable has not been configured correctly. The -C (noclobber) option ensures that files are not overwritten if they already exist, which can prevent the script from overwriting files it shouldn't have access to (for example, the system files), unless the script explicitly deletes the original file first. Each of these options can be set using the set command (see Listing 10).
Listing 10. Using the set command to set options
set -e
set -C
You can use a plus sign before the option to disable it.
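For example:

set -C               # enable noclobber
date > /tmp/out      # fails if /tmp/out already exists
date >| /tmp/out     # >| explicitly overrides noclobber
set +C               # disable noclobber again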
Another area where you might want to improve the security and environment of your script is to use resource limits. Resource limits can be set by the ulimit command, which is generally specific to the shell, and enable you to limit the size of files, cores, memory use, and even the duration of the script to ensure that the script does not run away with itself. For example, you can set CPU time in seconds using the following command:
ulimit -t 600
Although ulimit does not offer complete protection, it helps in those scripts where the potential for the script to run away with itself, or a program to suddenly use a large amount of memory, might become a problem.
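A short sketch of limits you might set at the top of an unattended script (the units of some limits differ between shells, so check your shell's man page):

ulimit -t 600    # limit CPU time to 600 seconds
ulimit -c 0      # disable core dumps
ulimit -f 10240  # limit the size of files the script can create (in blocks)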
Capturing faults
You have already seen how to trap errors, capture output, and create logs that can be emailed to the appropriate person when problems occur, but what if you want to be more specific about the errors and responses?
Two tools are useful here. The first is the return status from a command, and the second is the trap command within your shell. The return status from a command can be used to identify whether a particular command ran correctly, or whether it generated some sort of error. The exact meaning of a specific return status code is unique to a particular command (check the man pages), but a generally accepted principle is that a status of zero means the command executed correctly.
For example, imagine that you want to trap an error when trying to create a directory. You can check the $? variable after mkdir and then email the output, as shown in Listing 11 .
Listing 11. Trapping return status
ERRLOG=/tmp/$$.err
mkdir /tmp 2>>$ERRLOG
if [ $? -ne 0 ]
then
mailx -s "Script failed when making directory" admin <$ERRLOG
exit 1
fi
Incidentally, you can use the return status information inline by chaining commands with the && or || symbols, which act as 'and' and 'or' operators. For example, say you want to ensure that the directory gets created and the command gets executed but, if the directory is not created, the command does not get executed. You could do that using an if statement (see Listing 12).
Listing 12. Ensuring that a directory is created before executing a command
mkdir /tmp/out
if [ $? -eq 0 ]
then
do_something
fi
You can modify Listing 12 into a single line:
mkdir /tmp/out && do_something
The above statement basically reads, "Make a directory and, if it completes successfully, also run the command." In essence, only do the second command if the first completes correctly.
The || symbol works in the opposite way; if the first command does not complete successfully, then execute the second. This can be useful for trapping situations where a command would raise an error, but instead provides an alternative solution. For example, when changing to a directory, you might use the line:
cd /tmp/out || mkdir /tmp/out
This line of code tries to change the directory and, if it fails (probably because the directory does not exist), you make it. Furthermore, you can combine these statements. In the previous example, of course, what you want to do is change to the directory, or create it and then change to it if it doesn't already exist. You can write that in one line as:
cd /tmp/out || mkdir /tmp/out && cd /tmp/out
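Because && and || chain left to right with equal precedence, the first cd runs a second time when the directory already exists; harmless here, but worth knowing. Where mkdir -p is available, an arguably simpler equivalent is:

mkdir -p /tmp/out && cd /tmp/out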
The trap command is a more generalized solution for trapping more serious errors based on the signals raised when a command fails, such as a core dump or memory error, or when a command has been forcibly terminated by a kill command. To use trap, you specify the command or function to be executed when the signal is trapped, and the signal number or numbers that you want to trap, as shown in Listing 13.
Listing 13. Trapping signals
function catch_trap
{
    echo "killed" | mailx -s "Signal trapped" admin
}
trap catch_trap 1 2 3 4 5 6 7 8 9 10 11
sleep 9000
You can trap any signal in this way; it can be a good way of ensuring that a program that crashes is caught, trapped, and reported effectively.
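A common related use, in shells that support it, is trapping EXIT so that temporary files are cleaned up no matter how the script ends; a minimal sketch (do_something is a placeholder):

TMPFILE=/tmp/$$.tmp
trap 'rm -f $TMPFILE' EXIT    # runs on normal exit and after trapped signals
do_something >$TMPFILE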
Identifying reportable errors
Throughout this article, you've looked at ways of trapping errors, saving the output, and recording issues so that they can be dealt with and reported. However, what if the script or commands that you are using naturally output error information that you want to be able to use and report on, but that you don't always want to know about?
There is no easy solution to this problem, but you can use a combination of the techniques shown in this article to log errors and information, read or filter the information, and mail and report or display it accordingly.
A simple way to do this is to choose which parts of the command that you output and report to the logs. Alternatively, you can post-process the logs to select or filter out the output that you need.
For example, say you have a script that builds a document in the background using the Formatting Objects Processor (FOP) system from Apache to generate a PDF version of the document. Unfortunately in the process, a number of errors are generated about hyphenation. These are errors that you know about, but they don't affect the output quality. In the script that generates the file, just filter out these lines from the error log:
sed -e '/hyphenation/d' <error.log >mailerror.log
If there were no other errors, the mailerror.log file will be empty, and no email needs to be sent; otherwise, you can email the filtered log with the remaining error information.
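A minimal sketch that only sends the email when the filtered log is non-empty (the -s test checks for a file with a size greater than zero):

sed -e '/hyphenation/d' <error.log >mailerror.log
if [ -s mailerror.log ]; then
    mailx -s "Build errors" admin <mailerror.log
fi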
Summary
In this article, you've looked at how to run commands in an unattended script, capture their output, and monitor the execution of different commands in the script. You can log the information in many ways, for example, on a command-by-command or global basis, and check and report on the progress.
For error trapping, you can monitor output and result codes, and you can even set up global traps that identify problems and trap them during execution for reporting purposes. The result is a range of options that handle and report problems for scripts that are running on their own and where their ability to recover from errors and problems is critical.
Mar 13, 2017 | docstore.mik.ua
So far we have seen two types of variables: character strings and integers. The third type of variable the Korn shell supports is an array . As you may know, an array is like a list of things; you can refer to specific elements in an array with integer indices , so that a[i] refers to the i th element of array a .
The Korn shell provides an array facility that, while useful, is much more limited than analogous features in conventional programming languages. In particular, arrays can be only one-dimensional (i.e., no arrays of arrays), and they are limited to 1024 elements. Indices begin at 0.
There are two ways to assign values to elements of an array. The first is the most intuitive: you can use the standard shell variable assignment syntax with the array index in brackets ( [] ). For example:
nicknames[2]=bob nicknames[3]=ed
puts the values bob and ed into the elements of the array nicknames with indices 2 and 3, respectively. As with regular shell variables, values assigned to array elements are treated as character strings unless the assignment is preceded by let .
The second way to assign values to an array is with a variant of the set statement, which we saw in Chapter 3, Customizing Your Environment . The statement:
set -A aname val1 val2 val3 ...
creates the array aname (if it doesn't already exist) and assigns val1 to aname[0] , val2 to aname[1] , etc. As you would guess, this is more convenient for loading up an array with an initial set of values.
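For example:

set -A weekdays Mon Tue Wed Thu Fri
print ${weekdays[0]}    # prints "Mon"
print ${weekdays[4]}    # prints "Fri"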
To extract a value from an array, use the syntax ${aname[i]}. For example, ${nicknames[2]} has the value "bob". The index i can be an arithmetic expression (see above). If you use * in place of the index, the value will be all elements, separated by spaces. Omitting the index is the same as specifying index 0.

Now we come to the somewhat unusual aspect of Korn shell arrays. Assume that the only values assigned to nicknames are the two we saw above. If you type print "${nicknames[*]}", you will see the output:
bob ed
In other words, nicknames[0] and nicknames[1] don't exist. Furthermore, if you were to type:
nicknames[9]=pete nicknames[31]=ralph
and then type print "${nicknames[*]}", the output would look like this:
bob ed pete ralph
This is why we said "the elements of nicknames with indices 2 and 3" earlier, instead of "the 2nd and 3rd elements of nicknames ". Any array elements with unassigned values just don't exist; if you try to access their values, you will get null strings.
You can preserve whatever whitespace you put in your array elements by using "${aname[@]}" (with the double quotes) instead of "${aname[*]}", just as you can with "$@" instead of $*.

The shell provides an operator that tells you how many elements an array has defined: ${#aname[*]}. Thus ${#nicknames[*]} has the value 4. Note that you need the [*] because the name of the array alone is interpreted as the 0th element. This means, for example, that ${#nicknames} equals the length of nicknames[0] (see Chapter 4). Since nicknames[0] doesn't exist, the value of ${#nicknames} is 0, the length of the null string.

To be quite frank, we feel that the Korn shell's array facility is of little use to shell programmers. This is partially because it is so limited, but mainly because shell programming tasks are much more often oriented toward character strings and text than toward numbers. If you think of an array as a mapping from integers to values (i.e., put in a number, get out a value), then you can see why arrays are "number-dominated" data structures.
Nevertheless, we can find useful things to do with arrays. For example, here is a cleaner solution to Task 5-4, in which a user can select his or her terminal type ( TERM environment variable) at login time. Recall that the "user-friendly" version of this code used select and a case statement:
print 'Select your terminal type:'
PS3='terminal? '
select term in 'Givalt GL35a' \
               'Tsoris T-2000' \
               'Shande 531' \
               'Vey VT99'
do
    case $REPLY in
        1 ) TERM=gl35a ;;
        2 ) TERM=t2000 ;;
        3 ) TERM=s531 ;;
        4 ) TERM=vt99 ;;
        * ) print "invalid." ;;
    esac
    if [[ -n $term ]]; then
        print "TERM is $TERM"
        break
    fi
done
We can eliminate the entire case construct by taking advantage of the fact that the select construct stores the user's number choice in the variable REPLY . We just need a line of code that stores all of the possibilities for TERM in an array, in an order that corresponds to the items in the select menu. Then we can use $REPLY to index the array. The resulting code is:
set -A termnames gl35a t2000 s531 vt99
print 'Select your terminal type:'
PS3='terminal? '
select term in 'Givalt GL35a' \
               'Tsoris T-2000' \
               'Shande 531' \
               'Vey VT99'
do
    if [[ -n $term ]]; then
        TERM=${termnames[REPLY-1]}
        print "TERM is $TERM"
        break
    fi
done
This code sets up the array termnames so that ${termnames[0]} is "gl35a", ${termnames[1]} is "t2000", etc. The line TERM=${termnames[REPLY-1]} essentially replaces the entire case construct by using REPLY to index the array.
Notice that the shell knows to interpret the text in an array index as an arithmetic expression, as if it were enclosed in (( and )), which in turn means that variables need not be preceded by a dollar sign ($). We have to subtract 1 from the value of REPLY because array indices start at 0, while select menu item numbers start at 1.
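For example, a quick sketch:

set -A termnames gl35a t2000 s531 vt99
REPLY=3
print ${termnames[REPLY-1]}    # the index is evaluated arithmetically; prints "s531"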
6.3.1 typeset
The final Korn shell feature that relates to the kinds of values that variables can hold is the typeset command. If you are a programmer, you might guess that typeset is used to specify the type of a variable (integer, string, etc.); you'd be partially right.
typeset is a rather ad hoc collection of things that you can do to variables that restrict the kinds of values they can take. Operations are specified by options to typeset ; the basic syntax is:
typeset -o varname[=value]
Options can be combined; multiple varname s can be used. If you leave out varname , the shell prints a list of variables for which the given option is turned on.
The options available break down into two basic categories:
- String formatting operations, such as right- and left-justification, truncation, and letter case control.
- Type and attribute functions that are of primary interest to advanced programmers.
6.3.2 Local Variables in Functions
typeset without options has an important meaning: if a typeset statement is inside a function definition, then the variables involved all become local to that function (in addition to any properties they may take on as a result of typeset options). The ability to define variables that are local to "subprogram" units (procedures, functions, subroutines, etc.) is necessary for writing large programs, because it helps keep subprograms independent of the main program and of each other.
If you just want to declare a variable local to a function, use typeset without any options. For example:
function afunc {
    typeset diffvar
    samevar=funcvalue
    diffvar=funcvalue
    print "samevar is $samevar"
    print "diffvar is $diffvar"
}

samevar=globvalue
diffvar=globvalue
print "samevar is $samevar"
print "diffvar is $diffvar"
afunc
print "samevar is $samevar"
print "diffvar is $diffvar"
This code will print the following:
samevar is globvalue
diffvar is globvalue
samevar is funcvalue
diffvar is funcvalue
samevar is funcvalue
diffvar is globvalue
Figure 6.1 shows this graphically.
Figure 6.1: Local variables in functions
Mar 13, 2017 | docstore.mik.ua
6.2 Integer Variables and Arithmetic
The expression $(($OPTIND - 1)) in the last example gives a clue as to how the shell can do integer arithmetic. As you might guess, the shell interprets words surrounded by $(( and )) as arithmetic expressions. Variables in arithmetic expressions do not need to be preceded by dollar signs, though it is not wrong to do so.
Arithmetic expressions are evaluated inside double quotes, like tildes, variables, and command substitutions. We're finally in a position to state the definitive rule about quoting strings: When in doubt, enclose a string in single quotes, unless it contains tildes or any expression involving a dollar sign, in which case you should use double quotes.
The date(1) command on System V-derived versions of UNIX accepts arguments that tell it how to format its output. The argument +%j tells it to print the day of the year, i.e., the number of days since December 31st of the previous year.
We can use +%j to print a little holiday anticipation message:
print "Only $(( (365-$(date +%j)) / 7 )) weeks until the New Year!"We'll show where this fits in the overall scheme of command-line processing in Chapter 7, Input/Output and Command-line Processing .
The arithmetic expression feature is built in to the Korn shell's syntax, and was available in the Bourne shell (most versions) only through the external command expr (1). Thus it is yet another example of a desirable feature provided by an external command (i.e., a syntactic kludge) being better integrated into the shell. [[ / ]] and getopts are also examples of this design trend.
Korn shell arithmetic expressions are equivalent to their counterparts in the C language. [5] Precedence and associativity are the same as in C. Table 6.2 shows the arithmetic operators that are supported. Although some of these are (or contain) special characters, there is no need to backslash-escape them, because they are within the $(( ... )) syntax.
[5] The assignment forms of these operators are also permitted. For example, $((x += 2)) adds 2 to x and stores the result back in x .
Table 6.2: Arithmetic Operators
Operator   Meaning
+          Plus
-          Minus
*          Times
/          Division (with truncation)
%          Remainder
<<         Bit-shift left
>>         Bit-shift right
&          Bitwise and
|          Bitwise or
~          Bitwise not
^          Bitwise exclusive or

Parentheses can be used to group subexpressions. The arithmetic expression syntax also (like C) supports relational operators as "truth values" of 1 for true and 0 for false. Table 6.3 shows the relational operators and the logical operators that can be used to combine relational expressions.
Table 6.3: Relational Operators
Operator   Meaning
<          Less than
>          Greater than
<=         Less than or equal
>=         Greater than or equal
==         Equal
!=         Not equal
&&         Logical and
||         Logical or

For example, $((3 > 2)) has the value 1; $(( (3 > 2) || (4 <= 1) )) also has the value 1, since at least one of the two subexpressions is true.
The shell also supports base N numbers, where N can be up to 36. The notation B#N means "N base B". Of course, if you omit the B#, the base defaults to 10.
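For example:

print $((16#ff))     # 255
print $((8#777))     # 511
print $((2#1010))    # 10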
6.2.1 Arithmetic Conditionals
Another construct, closely related to $((...)), is ((...)) (without the leading dollar sign). We use this for evaluating arithmetic condition tests, just as [[...]] is used for string, file attribute, and other types of tests.
((...)) evaluates relational operators differently from $((...)) so that you can use it in if and while constructs. Instead of producing a textual result, it just sets its exit status according to the truth of the expression: 0 if true, 1 otherwise. So, for example, ((3 > 2)) produces exit status 0, as does (( (3 > 2) || (4 <= 1) )) , but (( (3 > 2) && (4 <= 1) )) has exit status 1 since the second subexpression isn't true.
You can also use numerical values for truth values within this construct. It's like the analogous concept in C, which means that it's somewhat counterintuitive to non-C programmers: a value of 0 means false (i.e., returns exit status 1), and a non-0 value means true (returns exit status 0), e.g., (( 14 )) is true. See the code for the kshdb debugger in Chapter 9 for two more examples of this.
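For example, a minimal sketch of (( ... )) in if and while constructs:

x=14
if (( x > 5 )); then
    print "x is big"
fi
while (( x > 0 )); do
    (( x -= 5 ))
done
print $x    # prints -1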
6.2.2 Arithmetic Variables and Assignment
The (( ... )) construct can also be used to define integer variables and assign values to them. The statement:
(( intvar=expression ))

creates the integer variable intvar (if it doesn't already exist) and assigns to it the result of expression.
That syntax isn't intuitive, so the shell provides a better equivalent: the built-in command let . The syntax is:
let intvar=expression

It is not necessary (because it's actually redundant) to surround the expression with $(( and )) in a let statement. As with any variable assignment, there must not be any space on either side of the equal sign (=). It is good practice to surround expressions with quotes, since many characters are treated as special by the shell (e.g., *, #, and parentheses); furthermore, you must quote expressions that include whitespace (spaces or TABs). See Table 6.4 for examples.

Table 6.4: Sample Integer Expression Assignments

Assignment            Value of $x
let x=1+4             5
let x='1 + 4'         5
let x='(2+3) * 5'     25
let x='2 + 3 * 5'     17
let x='17 / 3'        5
let x='17 % 3'        2
let x='1<<4'          16
let x='48>>3'         6
let x='17 & 3'        1
let x='17 | 3'        19
let x='17 ^ 3'        18

Here is a small task that makes use of integer arithmetic.
Task 6.1
Write a script called pages that, given the name of a text file, tells how many pages of output it contains. Assume that there are 66 lines to a page but provide an option allowing the user to override that.
We'll make our option -N, a la head. The syntax for this single option is so simple that we need not bother with getopts. Here is the code:
if [[ $1 = -+([0-9]) ]]; then
    let page_lines=${1#-}
    shift
else
    let page_lines=66
fi
let file_lines="$(wc -l < $1)"
let pages=file_lines/page_lines
if (( file_lines % page_lines > 0 )); then
    let pages=pages+1
fi
print "$1 has $pages pages of text."

Notice that we use the integer conditional (( file_lines % page_lines > 0 )) rather than the [[ ... ]] form.
At the heart of this code is the UNIX utility wc(1) , which counts the number of lines, words, and characters (bytes) in its input. By default, its output looks something like this:
8 34 161 bob

wc's output means that the file bob has 8 lines, 34 words, and 161 characters. wc recognizes the options -l, -w, and -c, which tell it to print only the number of lines, words, or characters, respectively.
wc normally prints the name of its input file (given as argument). Since we want only the number of lines, we have to do two things. First, we give it input from file redirection instead, as in wc -l < bob instead of wc -l bob . This produces the number of lines preceded by a single space (which would normally separate the filename from the number).
Unfortunately, that space complicates matters: the statement let file_lines=$(wc -l < $1) becomes "let file_lines= N " after command substitution; the space after the equal sign is an error. That leads to the second modification, the quotes around the command substitution expression. The statement let file_lines=" N " is perfectly legal, and let knows how to remove the leading space.
The first if clause in the pages script checks for an option and, if it was given, strips the dash ( - ) off and assigns it to the variable page_lines . wc in the command substitution expression returns the number of lines in the file whose name is given as argument.
The next group of lines calculates the number of pages and, if there is a remainder after the division, adds 1. Finally, the appropriate message is printed.
As a bigger example of integer arithmetic, we will complete our emulation of the C shell's pushd and popd functions (Task 4-8). Remember that these functions operate on DIRSTACK , a stack of directories represented as a string with the directory names separated by spaces. The C shell's pushd and popd take additional types of arguments, which are:
pushd +n takes the n th directory in the stack (starting with 0), rotates it to the top, and cd s to it.
pushd without arguments, instead of complaining, swaps the two top directories on the stack and cd s to the new top.
popd +n takes the n th directory in the stack and just deletes it.
The most useful of these features is the ability to get at the n th directory in the stack. Here are the latest versions of both functions:
function pushd {        # push current directory onto stack
    dirname=$1
    if [[ -d $dirname && -x $dirname ]]; then
        cd $dirname
        DIRSTACK="$dirname ${DIRSTACK:-$PWD}"
        print "$DIRSTACK"
    else
        print "still in $PWD."
    fi
}

function popd {         # pop directory off the stack, cd to new top
    if [[ -n $DIRSTACK ]]; then
        DIRSTACK=${DIRSTACK#* }
        cd ${DIRSTACK%% *}
        print "$PWD"
    else
        print "stack empty, still in $PWD."
    fi
}

To get at the n th directory, we use a while loop that transfers the top directory to a temporary copy of the stack n times. We'll put the loop into a function called getNdirs that looks like this:
function getNdirs {
    stackfront=''
    let count=0
    while (( count < $1 )); do
        stackfront="$stackfront ${DIRSTACK%% *}"
        DIRSTACK=${DIRSTACK#* }
        let count=count+1
    done
}

The argument passed to getNdirs is the n in question. The variable stackfront is the temporary copy that will contain the first n directories when the loop is done. stackfront starts as null; count, which counts the number of loop iterations, starts as 0.
The first line of the loop body appends the top of the stack (${DIRSTACK%% *}) to stackfront; the second line deletes the top from the stack. The last line increments the counter for the next iteration. The entire loop executes N times, for values of count from 0 to N-1.

When the loop finishes, the last directory in $stackfront is the Nth directory. The expression ${stackfront##* } extracts this directory. Furthermore, DIRSTACK now contains the "back" of the stack, i.e., the stack without the first n directories. With this in mind, we can now write the code for the improved versions of pushd and popd:

function pushd {
    if [[ $1 = ++([0-9]) ]]; then
        # case of pushd +n: rotate n-th directory to top
        let num=${1#+}
        getNdirs $num
        newtop=${stackfront##* }
        stackfront=${stackfront%$newtop}
        DIRSTACK="$newtop $stackfront $DIRSTACK"
        cd $newtop
    elif [[ -z $1 ]]; then
        # case of pushd without args; swap top two directories
        firstdir=${DIRSTACK%% *}
        DIRSTACK=${DIRSTACK#* }
        seconddir=${DIRSTACK%% *}
        DIRSTACK=${DIRSTACK#* }
        DIRSTACK="$seconddir $firstdir $DIRSTACK"
        cd $seconddir
    else
        # normal case of pushd dirname
        dirname=$1
        if [[ -d $dirname && -x $dirname ]]; then
            cd $dirname
            DIRSTACK="$dirname ${DIRSTACK:-$PWD}"
            print "$DIRSTACK"
        else
            print "still in $PWD."
        fi
    fi
}

function popd {         # pop directory off the stack, cd to new top
    if [[ $1 = ++([0-9]) ]]; then
        # case of popd +n: delete n-th directory from stack
        let num=${1#+}
        getNdirs $num
        stackfront=${stackfront% *}
        DIRSTACK="$stackfront $DIRSTACK"
    else
        # normal case of popd without argument
        if [[ -n $DIRSTACK ]]; then
            DIRSTACK=${DIRSTACK#* }
            cd ${DIRSTACK%% *}
            print "$PWD"
        else
            print "stack empty, still in $PWD."
        fi
    fi
}

These functions have grown rather large; let's look at them in turn. The if at the beginning of pushd checks if the first argument is an option of the form +N. If so, the first body of code is run. The first let simply strips the plus sign (+) from the argument and assigns the result - as an integer - to the variable num. This, in turn, is passed to the getNdirs function.
The next two assignment statements set newtop to the N th directory - i.e., the last directory in $stackfront - and delete that directory from stackfront . The final two lines in this part of pushd put the stack back together again in the appropriate order and cd to the new top directory.
The elif clause tests for no argument, in which case pushd should swap the top two directories on the stack. The first four lines of this clause assign the top two directories to firstdir and seconddir , and delete these from the stack. Then, as above, the code puts the stack back together in the new order and cd s to the new top directory.
The else clause corresponds to the usual case, where the user supplies a directory name as argument.
popd works similarly. The if clause checks for the + N option, which in this case means delete the N th directory. A let extracts the N as an integer; the getNdirs function puts the first n directories into stackfront . Then the line stackfront=${stackfront% *} deletes the last directory (the N th directory) from stackfront . Finally, the stack is put back together with the N th directory missing.
The else clause covers the usual case, where the user doesn't supply an argument.
Before we leave this subject, here are a few exercises that should test your understanding of this code:
Add code to pushd that exits with an error message if the user supplies no argument and the stack contains fewer than two directories.
Verify that when the user specifies + N and N exceeds the number of directories in the stack, both pushd and popd use the last directory as the N th directory.
Modify the getNdirs function so that it checks for the above condition and exits with an appropriate error message if true.
Change getNdirs so that it uses cut (with command substitution), instead of the while loop, to extract the first N directories. This uses less code but runs more slowly because of the extra processes generated.
Feb 14, 2017 | bash.cyberciti.biz
# MS-DOS / XP cmd like stuff
alias edit=$VISUAL
alias copy='cp'
alias cls='clear'
alias del='rm'
alias dir='ls'
alias md='mkdir'
alias move='mv'
alias rd='rmdir'
alias ren='mv'
alias ipconfig='ifconfig'
Feb 04, 2017 | www.cyberciti.biz
The diff command compares files line by line. It can also compare two directories:
# Compare two folders using diff ##
diff /etc /tmp/etc_old

Rafal Matczak September 29, 2015, 7:36 am
Quickly find differences between two directories
And quicker:
diff -y <(ls -l ${DIR1}) <(ls -l ${DIR2})
Feb 04, 2017 | hints.macworld.com
The variable CDPATH defines the search path for the cd command, so it serves much like a "home for directories". The danger is in creating too complex a CDPATH; often a single directory works best. For example, export CDPATH=/srv/www/public_html. Now, instead of typing cd /srv/www/public_html/CSS I can simply type: cd CSS
Use CDPATH to access frequent directories in bash
Mar 21, '05 10:01:00AM Contributed by: jonbauman

I often find myself wanting to cd to the various directories beneath my home directory (i.e. ~/Library, ~/Music, etc.), but being lazy, I find it painful to have to type the ~/ if I'm not in my home directory already. Enter CDPATH, as described in man bash:
The search path for the cd command. This is a colon-separated list of directories in which the shell looks for destination directories specified by the cd command. A sample value is ".:~:/usr".

Personally, I use the following command (either on the command line for use in just that session, or in .bash_profile for permanent use):

CDPATH=".:~:~/Library"

This way, no matter where I am in the directory tree, I can just cd dirname, and it will take me to the directory that is a subdirectory of any of the ones in the list. For example:
$ cd
$ cd Documents
/Users/baumanj/Documents
$ cd Pictures
/Users/username/Pictures
$ cd Preferences
/Users/username/Library/Preferences
etc...

[robg adds: No, this isn't some deeply buried treasure of OS X, but I'd never heard of the CDPATH variable, so I'm assuming it will be of interest to some other readers as well.]
cdable_vars is also nice
Check out the bash command shopt -s cdable_vars
Authored by: clh on Mar 21, '05 08:16:26PM
From the man bash page:

cdable_vars
If set, an argument to the cd builtin command that is not a directory is assumed to be the name of a variable whose value is the directory to change to.

With this set, if I give the following bash command:
export d="/Users/chap/Desktop"
I can then simply type
cd d
to change to my Desktop directory.
I put the shopt command and the various export commands in my .bashrc file.
Feb 04, 2017 | www.cyberciti.biz
Instead of running:

cp /path/to/file /usr/dir1
cp /path/to/file /var/dir2
cp /path/to/file /nas/dir3

Run the following command to copy the file into multiple dirs:
echo /usr/dir1 /var/dir2 /nas/dir3 | xargs -n 1 cp -v /path/to/file
Feb 04, 2017 | www.cyberciti.biz
Locking a directory
For privacy of my data I wanted to lock down /downloads on my file server. So I ran:
chmod 0000 /downloads
The root user still has access, but for regular users the ls and cd commands will not work. To go back:
chmod 0755 /downloads
Clear gibberish all over the screen
Just type:
reset
Becoming human
Pass the -h or -H (and other options) command line option to GNU or BSD utilities to get the output of commands like ls, df, and du in human-readable formats:
ls -lh # print sizes in human readable format (e.g., 1K 234M 2G)
df -h
df -k # show output in bytes, KB, MB, or GB
free -b
free -k
free -m
free -g
# print sizes in human readable format (e.g., 1K 234M 2G)
du -h
# get file system perms in human readable format
stat -c %A /boot
# compare human readable numbers
sort -h -a file
# display the CPU information in human readable format on a Linux
lscpu
lscpu -e
lscpu -e=cpu,node
# Show the size of each file but in a more human readable way
tree -h
tree -h /boot
Just type:
## linux version ##
lslogins

## BSD version ##
logins
Sample outputs:
UID  USER      PWD-LOCK PWD-DENY LAST-LOGIN GECOS
  0  root      0        0        22:37:59   root
  1  bin       0        1                   bin
  2  daemon    0        1                   daemon
  3  adm       0        1                   adm
  4  lp        0        1                   lp
  5  sync      0        1                   sync
  6  shutdown  0        1        2014-Dec17 shutdown
  7  halt      0        1                   halt
  8  mail      0        1                   mail
 10  uucp      0        1                   uucp
 11  operator  0        1                   operator
 12  games     0        1                   games
 13  gopher    0        1                   gopher
 14  ftp       0        1                   FTP User
 27  mysql     0        1                   MySQL Server
 38  ntp       0        1
 48  apache    0        1                   Apache
 68  haldaemon 0        1                   HAL daemon
 69  vcsa      0        1                   virtual console memory owner
 72  tcpdump   0        1
 74  sshd      0        1                   Privilege-separated SSH
 81  dbus      0        1                   System message bus
 89  postfix   0        1
 99  nobody    0        1                   Nobody
173  abrt      0        1
497  vnstat    0        1                   vnStat user
498  nginx     0        1                   nginx user
499  saslauth  0        1                   "Saslauthd user"

Confused on a top command output?
Seriously, you need to try out htop instead of top:
sudo htop
Want to run the same command again?
Just type !! . For example:
/myhome/dir/script/name arg1 arg2

# To run the same command again
!!

## To run the last command again as root user
sudo !!
The !! repeats the most recent command. To run the most recent command beginning with "foo":
!foo

# Run the most recent command beginning with "service" as root
sudo !service
The !$ is used to run a command with the last argument of the most recent command:
# Edit nginx.conf
sudo vi /etc/nginx/nginx.conf

# Test nginx.conf for errors
/sbin/nginx -t -c /etc/nginx/nginx.conf

# After testing a file with "/sbin/nginx -t -c /etc/nginx/nginx.conf", you
# can edit file again with vi
sudo vi !$
If you need a reminder to leave your terminal, type the following command:
leave +hhmm
Where,
- hhmm The time of day is in the form hhmm where hh is a time in hours (on a 12 or 24 hour clock), and mm are minutes. All times are converted to a 12 hour clock, and assumed to be in the next 12 hours.
Home sweet home
Want to go the directory you were just in? Run:
cd -
Need to quickly return to your home directory? Enter:
cd
The variable CDPATH defines the search path for the cd command:
export CDPATH=/var/www:/nas10
Now, instead of typing cd /var/www/html/ I can simply type the following to cd into /var/www/html path:
cd html
Editing a file being viewed with less pager
To edit a file being viewed with less pager, press v . You will have the file for edit under $EDITOR:
less *.c
less foo.html
## Press v to edit file ##
## Quit from editor and you would return to the less pager again ##
List all files or directories on your system
To see all of the directories on your system, run:
find / -type d | less

# List all directories in your $HOME
find $HOME -type d -ls | less
To see all of the files, run:
find / -type f | less

# List all files in your $HOME
find $HOME -type f -ls | less
Build directory trees in a single command
You can create directory trees one at a time using mkdir command by passing the -p option:
mkdir -p /jail/{dev,bin,sbin,etc,usr,lib,lib64}
ls -l /jail/
Copy file into multiple directories
Instead of running:
cp /path/to/file /usr/dir1
cp /path/to/file /var/dir2
cp /path/to/file /nas/dir3
Run the following command to copy file into multiple dirs:
echo /usr/dir1 /var/dir2 /nas/dir3 | xargs -n 1 cp -v /path/to/file
Creating a shell function is left as an exercise for the reader
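Here is a minimal sketch of what such a function could look like (the name cpto is made up here, and directories with spaces in their names would need more care):

cpto() {
    local file=$1
    shift
    echo "$@" | xargs -n 1 cp -v "$file"
}
# usage: cpto /path/to/file /usr/dir1 /var/dir2 /nas/dir3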
Quickly find differences between two directories
The diff command compares files line by line. It can also compare two directories:
ls -l /tmp/r
ls -l /tmp/s
# Compare two folders using diff ##
diff /tmp/r/ /tmp/s/
Nov 04, 2016 | github.com
Relax-and-Recover is written in Bash (at least bash version 3 is needed), a language that can be used in many styles. We want to make it easier for everybody to understand the Relax-and-Recover code and subsequently to contribute fixes and enhancements.
Here is a collection of coding hints that should help to get a more consistent code base.
Don't be afraid to contribute to Relax-and-Recover even if your contribution does not fully match all these coding hints. Currently large parts of the Relax-and-Recover code are not yet in compliance with these coding hints. This is an ongoing step by step process. Nevertheless try to understand the idea behind these coding hints so that you know how to break them properly (i.e. "learn the rules so you know how to break them properly").
The overall idea behind these coding hints is:
Make yourself understood
Make yourself understood to enable others to fix and enhance your code properly as needed.
From this overall idea the following coding hints are derived.
For the fun of it, here is an extreme example of what coding style should be avoided:
#!/bin/bash for i in `seq 1 2 $((2*$1-1))`;do echo $((j+=i));done
Try to find out what that code is about - it does a useful thing.
Code must be easy to read
- Variables and functions must have names that explain what they do, even if it makes them longer. Avoid too short names, in particular do not use one-letter-names (like a variable named i - just try to 'grep' for it over the whole code to find code that is related to i). In general names should consist of two parts, a generic part plus a specific part to make them meaningful. For example dev is basically meaningless because there are so many different kinds of device-like thingies. Use names like boot_dev or even better boot_partition versus bootloader_install_device to make it unambiguous what that thingy actually is about. Use different names for different things so that others can 'grep' over the whole code and get a correct overview of what actually belongs to a particular name.
- Introduce intermediate variables with meaningful names to tell what is going on. For example instead of running commands with obfuscated arguments like
rm -f $( ls ... | sed ... | grep ... | awk ... )
which looks scary (what the heck gets deleted here?) better use
foo_dirs="..."
foo_files=$( ls $foo_dirs | sed ... | grep ... )
obsolete_foo_files=$( echo $foo_files | awk ... )
rm -f $obsolete_foo_files
which tells the intent behind (regardless whether or not that code is the best way to do it - but now others can easily improve it).
- Use functions to structure longer programs into code blocks that can be understood independently.
- Don't use || and && one-liners, write proper if-then-else-fi blocks. Exceptions are simple do-or-die statements like
COMMAND || Error "meaningful error message"
and only if it aids readability compared to a full if-then-else clause.
- Use $( COMMAND ) instead of backticks `COMMAND`.
- Use spaces when possible to aid readability like
output=( $( COMMAND1 OPTION1 | COMMAND2 OPTION2 ) )
instead of
output=($(COMMAND1 OPTION1|COMMAND2 OPTION2))
Code should be easy to understand
Do not only tell what the code does (i.e. the implementation details) but also explain the intent behind it (i.e. why) to make the code maintainable.
- Provide meaningful comments that tell what the computer should do and also explain why it should do it so that others understand the intent behind so that they can properly fix issues or adapt and enhance it as needed.
- If there is a GitHub issue or another URL available for a particular piece of code provide a comment with the GitHub issue or any other URL that tells about the reasoning behind current implementation details.
Here is the initial example again so that one can understand what it is about:
#!/bin/bash
# output the first N square numbers
# by summing up the first N odd numbers 1 3 ... 2*N-1
# where each nth partial sum is the nth square number
# see https://en.wikipedia.org/wiki/Square_number#Properties
# this way it is a little bit faster for big N compared to
# calculating each square number on its own via multiplication
N=$1
if ! [[ $N =~ ^[0-9]+$ ]] ; then
    echo "Input must be non-negative integer." 1>&2
    exit 1
fi
square_number=0
for odd_number in $( seq 1 2 $(( 2 * N - 1 )) ) ; do
    (( square_number += odd_number )) && echo $square_number
done
Now the intent behind is clear and now others can easily decide if that code is really the best way to do it and easily improve it if needed.
Try to care about possible errors
By default bash proceeds with the next command when something failed. Do not let your code blindly proceed in case of errors, because that could make it hard to find the root cause of a failure when it errors out somewhere later at an unrelated place with a weird error message, which could lead to false fixes that cure only a particular symptom but not the root cause.
- In case of errors better abort than blindly proceed.
- At least test mandatory conditions before proceeding. If a mandatory condition is not fulfilled abort with Error "meaningful error message", see 'Relax-and-Recover functions' below.
- Preferably in new scripts use set -ue to die from unset variables and unhandled errors and use set -o pipefail to better notice failures in a pipeline. When leaving the script restore the Relax-and-Recover default bash flags and options with apply_bash_flags_and_options_commands "$DEFAULT_BASH_FLAGS_AND_OPTIONS_COMMANDS" (see usr/sbin/rear).
- TODO: Use set -eu and set -o pipefail also in existing scripts, see "make rear working with set -ue -o pipefail".
also in existing scripts, see make rear working with ''set -ue -o pipefail" .Implement adaptions and enhancements in a backward compatible way so that your changes do not cause regressions for others.
- One same Relax-and-Recover code must work on various different systems. On older systems as well as on newest systems and on various different Linux distributions.
- Preferably use simple generic functionality that works on any Linux system. Better very simple code than oversophisticated (possibly fragile) constructs. In particular avoid special bash version 4 features (Relax-and-Recover code should also work with bash version 3).
- When there are incompatible differences on different systems distinction of cases with separated code is needed because it is more important that the Relax-and-Recover code works everywhere than having generic code that sometimes fails.
Dirty hacks welcome
When there are special issues on particular systems it is more important that the Relax-and-Recover code works than having nice looking clean code that sometimes fails. In such special cases any dirty hacks that intend to make it work everywhere are welcome. But for dirty hacks the above listed coding hints become mandatory rules:
- Provide explanatory comments that tell what a dirty hack does together with a GitHub issue or any other URL that tell about the reasoning behind the dirty hack to enable others to properly adapt or clean up a dirty hack at any time later when the reason for it had changed or gone away.
- Try as well as you can to foresee possible errors or failures of a dirty hack and error out with meaningful error messages if things go wrong to enable others to understand the reason behind a failure.
- Implement the dirty hack in a way so that it does not cause regressions for others.
For example a dirty hack like the following is perfectly acceptable:
# FIXME: Dirty hack to make it work
# on "FUBAR Linux version 666"
# where COMMAND sometimes inexplicably fails
# but always works after at most 3 attempts
# see http://example.org/issue12345
# Retries should have no bad effect on other systems
# where the first run of COMMAND works.
COMMAND || COMMAND || COMMAND || Error "COMMAND failed."
Character Encoding
Use only traditional (7-bit) ASCII characters. In particular do not use UTF-8 encoded multi-byte characters.
- Non-ASCII characters in scripts may cause arbitrary unexpected failures on systems that do not support other locales than POSIX/C. During "rear recover" only the POSIX/C locale works (the ReaR rescue/recovery system has no support for non-ASCII locales) and /usr/sbin/rear sets the C locale so that non-ASCII characters are invalid in scripts. Have in mind that basically all files in ReaR are scripts. E.g. also /usr/share/rear/conf/default.conf and /etc/rear/local.conf are sourced (and executed) as scripts.
- English documentation texts do not need non-ASCII characters. Using non-ASCII characters in documentation texts makes it needlessly hard to display the documentation correctly for any user on any system. When non-ASCII characters are used but the user does not have the exact right matching locale set arbitrary nonsense can happen, cf. https://en.opensuse.org/SDB:Plain_Text_versus_Locale
Text Layout
- Indentation with 4 blanks, not tabs.
- Block level statements in same line: if CONDITION ; then
Variables
- Curly braces only where really needed: $FOO instead of ${FOO}, but ${FOO:-default_foo}.
- All variables that are used in more than a single script must be all-caps: $FOO instead of $foo or $Foo.
- Variables that are used only locally should be lowercased and should be marked with local like:
local foo="default_value"
Functions
- Use the function keyword to define a function.
- Function names are lower case, words separated by underline (_).
Relax-and-Recover functions
Use the available Relax-and-Recover functions when possible instead of re-implementing basic functionality again and again. The Relax-and-Recover functions are implemented in various lib/*-functions.sh files.
- is_true and is_false: see lib/global-functions.sh for how to use them. For example instead of using
if [[ ! "$FOO" =~ ^[yY1] ]] ; then
use
if ! is_true "$FOO" ; then
test, [, [[, ((
- Use [[ where it is required (e.g. for pattern matching or complex conditionals) and [ or test everywhere else.
- (( is the preferred way for numeric comparison; variables don't need to be prefixed with $ there.
Paired parenthesis
- Use paired parenthesis for case patterns as in
case WORD in (PATTERN) COMMANDS ;; esac
so that editor commands (like '%' in 'vi') that check for matching opening and closing parenthesis work everywhere in the code.
June 24, 2015 | cyberciti.biz
... ... ...
Bash v4.0+ has inbuilt support for setting up a step value using {START..END..INCREMENT} syntax:
#!/bin/bash
echo "Bash version ${BASH_VERSION}..."
for i in {0..10..2}
do
    echo "Welcome $i times"
done

Sample outputs:
Bash version 4.0.33(0)-release...
Welcome 0 times
Welcome 2 times
Welcome 4 times
Welcome 6 times
Welcome 8 times
Welcome 10 times

... ... ...
Three-expression bash for loops syntax
This type of for loop shares a common heritage with the C programming language. It is characterized by a three-parameter loop control expression, consisting of an initializer (EXP1), a loop-test or condition (EXP2), and a counting expression (EXP3).
for (( EXP1; EXP2; EXP3 ))
do
    command1
    command2
    command3
done

A representative three-expression example in bash is as follows:
#!/bin/bash
for (( c=1; c<=5; c++ ))
do
    echo "Welcome $c times"
done

... ... ...

Jadu Saikia, November 2, 2008, 3:37 pm
Nice one. All the examples are explained well, thanks Vivek.

seq 1 2 20
output can also be produced using jot:
jot 1 20 2

The infinite loops as everyone knows have the following alternatives:
while(true)
or
while :

//Jadu

Andi Reinbrech, November 18, 2010, 7:42 pm
I know this is an ancient thread, but thought this trick might be helpful to someone:
set `echo $line`
This will split line into positional parameters and you can after the set simply say
F1=$1; F2=$2; F3=$3
I used this a lot many years ago on solaris with "set `date`", it neatly splits the whole date string into variables and saves lots of messy cutting :-)
no, you can't change the FS; if it's not space, you can't use this method
Peko, July 16, 2009, 6:11 pm

Hi Vivek,
Thanks for this useful topic. IMNSHO, there may be something to modify here:
=======================
Latest bash version 3.0+ has inbuilt support for setting up a step value:
#!/bin/bash
for i in {1..5}
=======================
1) The increment feature seems to belong to the version 4 of bash.
Reference: http://bash-hackers.org/wiki/doku.php/syntax/expansion/brace
Accordingly, my bash v3.2 does not include this feature.

BTW, where did you read that it was 3.0+ ?
(I ask because you may know some good website of interest on the subject.)

2) The syntax is {from..to..step} where from, to, step are 3 integers.
Your code is missing the increment.

Note that GNU Bash documentation may be bugged at this time,
because on GNU Bash manual, you will find the syntax {x..y[incr]}
which may be a typo. (missing the second ".." between y and increment).see http://www.gnu.org/software/bash/manual/bashref.html#Brace-Expansion
The Bash Hackers page
again, see http://bash-hackers.org/wiki/doku.php/syntax/expansion/brace
seems to be more accurate,
but who knows ? Anyway, at least one of them may be right ;-)

Keep on the good work of your own,
Thanks a million.
- Peko
Michal Kaut July 22, 2009, 6:12 am

Hello, is there a simple way to control the number formatting? I use several computers, some of which have non-US settings with comma as a decimal point. This means that
for x in $(seq 0 0.1 1)
gives 0 0.1 0.2 1 on some machines and 0 0,1 0,2 1 on others.
Is there a way to force the first variant, regardless of the language settings? Can I, for example, set the keyboard to US inside the script? Or perhaps some alternative to $x that would convert commas to points?
(I am sending these as parameters to another code and it won't accept numbers with commas.)

The best thing I could think of is adding
x=`echo $x | sed s/,/./`
as a first line inside the loop, but there should be a better solution? (Interestingly, the sed command does not seem to be upset by me rewriting its variable.)

Thanks,
Michal

Peko July 22, 2009, 7:27 am
To Michal Kaut:
Hi Michal,
Such output format is configured through LOCALE settings.
I tried :
export LC_CTYPE="en_EN.UTF-8"; seq 0 0.1 1
and it works as desired.
You just have to find the exact value for LC_CTYPE that fits to your systems and your needs.
Peko
Peko July 22, 2009, 2:29 pm
To Michal Kaus [2]
Ooops ;-)
Instead of LC_CTYPE,
LC_NUMERIC should be more appropriate
(Although LC_CTYPE is actually yielding to the same result I tested both)By the way, Vivek has already documented the matter : http://www.cyberciti.biz/tips/linux-find-supportable-character-sets.html
Philippe Petrinko October 30, 2009, 8:35 am
To Vivek:
Regarding your last example, that is: running a loop through arguments given to the script on the command line, there is a simpler way of doing this:
# instead of:
# FILES="$@"
# for f in $FILES# use the following syntax
for arg
do
# whatever you need here try : echo "$arg"
doneOf course, you can use any variable name, not only "arg".
Philippe Petrinko, November 11, 2009, 11:25 am

To tdurden:

Why wouldn't you use
1) either a [for] loop:

for old in * ; do mv ${old} ${old}.new; done

2) or the [rename] command?

Excerpt from "man rename":

RENAME(1)        Perl Programmers Reference Guide        RENAME(1)

NAME
    rename - renames multiple files

SYNOPSIS
    rename [ -v ] [ -n ] [ -f ] perlexpr [ files ]

DESCRIPTION
    "rename" renames the filenames supplied according to the rule specified
    as the first argument. The perlexpr argument is a Perl expression
    which is expected to modify the $_ string in Perl for at least some of
    the filenames specified. If a given filename is not modified by the
    expression, it will not be renamed. If no filenames are given on the
    command line, filenames will be read via standard input.

    For example, to rename all files matching "*.bak" to strip the
    extension, you might say

        rename 's/\.bak$//' *.bak

    To translate uppercase names to lower, you'd use

        rename 'y/A-Z/a-z/' *

- Philippe
Philippe Petrinko November 11, 2009, 9:27 pm
If you set the shell option extglob, Bash understands some more powerful patterns. Here, "a" is one or more patterns, separated by the pipe symbol (|).

?(a)  Matches zero or one occurrence of the given patterns
*(a)  Matches zero or more occurrences of the given patterns
+(a)  Matches one or more occurrences of the given patterns
@(a)  Matches one of the given patterns
!(a)  Matches anything except one of the given patterns

Source: http://www.bash-hackers.org/wiki/doku.php/syntax/pattern
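To make the table concrete, here is a small hedged sketch (the filenames are invented; extglob must be switched on before the patterns are parsed):

#!/bin/bash
shopt -s extglob            # extended patterns are off by default

# List everything that is NOT a .jpg or a .png:
echo !(*.jpg|*.png)

# Match "log", "log.1", "log.1.2", ... - zero or more ".digit" groups:
echo log*(.[0-9])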
Philippe Petrinko, November 12, 2009, 3:44 pm

To Sean:

Right - the sharper a knife is, the easier it can cut your fingers.

I mean: there are side effects to the use of file globbing (like in [ for f in * ]) when the globbing expression matches nothing: the globbing expression is not substituted.

Then you might want to consider using the [ nullglob ] shell extension to prevent this.
See: http://www.bash-hackers.org/wiki/doku.php/syntax/expansion/globs#customization

The devil hides in the detail ;-)
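A short sketch of the failure mode and the nullglob fix (the empty directory is hypothetical):

#!/bin/bash
cd /some/empty/dir || exit 1    # hypothetical directory with no .txt files

# Without nullglob, an unmatched glob stays literal:
# the loop runs once with f set to the string "*.txt".
for f in *.txt; do echo "got: $f"; done

# With nullglob, an unmatched glob expands to nothing,
# so the loop body is simply skipped.
shopt -s nullglob
for f in *.txt; do echo "got: $f"; done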
Dominic January 14, 2010, 10:04 am
There is an interesting difference between the exit value for two different for looping structures (hope this comes out right):
for (( c=1; c<=2; c++ )) do echo -n "inside (( )) loop c is $c, "; done; echo "done (( )) loop c is $c"
for c in {1..2}; do echo -n "inside { } loop c is $c, "; done; echo "done { } loop c is $c"
You see that the first structure does a final increment of c, the second does not. The first is more useful IMO because if you have a conditional break in the for loop, then you can subsequently test the value of $c to see if the for loop was broken or not; with the second structure you can't know whether the loop was broken on the last iteration or continued to completion.

Dominic, January 14, 2010, 10:09 am
sorry, my previous post would have been clearer if I had shown the output of my code snippet, which is:
inside (( )) loop c is 1, inside (( )) loop c is 2, done (( )) loop c is 3
inside { } loop c is 1, inside { } loop c is 2, done { } loop c is 2

Philippe Petrinko, March 9, 2010, 2:34 pm
@Dmitry
And, again, as stated many times up there, using [seq] is counterproductive, because it requires a call to an external program, when you should Keep It Short and Simple, using only bash internal functions:

for ((c=1; c<21; c+=2)); do echo "Welcome $c times" ; done

(And I wonder why Vivek is sticking to that old solution, which should be presented only for historical reasons, from when there was no way of using bash internals. By the way, this historical recall should be placed only at the topic's end, not at the top of the topic, which makes newbies stick to the not-up-to-date technique ;-) )

Sean, March 9, 2010, 11:15 pm
Andi Reinbrech, November 18, 2010, 8:35 pm

I have a comment to add about using the builtin for (( )) syntax. I would agree the builtin method is cleaner, but from what I've noticed with other builtin functionality, I had to check the speed advantage for myself. I wrote the following files:
builtin_count.sh:
#!/bin/bash
for ((i=1;i<=1000000;i++))
do
echo "Output $i"
done
seq_count.sh:
#!/bin/bash
for i in $(seq 1 1000000)
do
echo "Output $i"
done
And here were the results that I got:
time ./builtin_count.sh

real    0m22.122s
user    0m18.329s
sys     0m3.166s

time ./seq_count.sh

real    0m19.590s
user    0m15.326s
sys     0m2.503s

The performance difference isn't too significant, especially when you are probably going to be doing something a little more interesting inside the for loop, but it does show that builtin commands are not necessarily faster.
The reason why the external seq is faster is because it is executed only once, and returns a huge splurge of space-separated integers which need no further processing, apart from the for loop advancing to the next one for the variable substitution.

The internal loop is a nice and clean/readable construct, but it has a lot of overhead. The check expression is re-evaluated on every iteration, and a variable on the interpreter's heap gets incremented, possibly checked for overflow, etc.

Note that the check expression cannot be simplified or internally optimised by the interpreter, because the value may change inside the loop's body (yes, there are cases where you'd want to do this, however rare and stupid they may seem), hence the variables are volatile and get re-evaluated.

I.e., bottom line: the internal one has more overhead, while the "seq" version is equivalent to either having 1000000 integers inside the script (hard coded), or reading them once from a text file with a cat. The point being that it gets executed only once and becomes static.
OK, blah blah fishpaste, past my bed time :-)
Cheers,
Andi

Anthony Thyssen, June 4, 2010, 6:53 am

The {1..10} syntax is pretty useful, as you can use a variable with it!

limit=10
echo {1..${limit}}
{1..10}

You need to eval it to get it to work!

limit=10
eval "echo {1..${limit}}"
1 2 3 4 5 6 7 8 9 10

'seq' is not available on ALL systems (MacOSX for example), and BASH is not available on all systems either. You are better off using the old while-expr method for maximum compatibility!

limit=10; n=1; while [ $n -le 10 ]; do echo $n; n=`expr $n + 1`; done

Alternatively, use a seq() function replacement:

# seq_count 10
seq_count() {
  i=1
  while [ $i -le $1 ]; do echo $i; i=`expr $i + 1`; done
}

# simple_seq 1 2 10
simple_seq() {
  i=$1
  while [ $i -le $3 ]; do echo $i; i=`expr $i + $2`; done
}

seq_integer() {
  if [ "X$1" = "X-f" ]
  then format="$2"; shift; shift
  else format="%d"
  fi
  case $# in
    1) i=1;  inc=1;  end=$1 ;;
    2) i=$1; inc=1;  end=$2 ;;
    *) i=$1; inc=$2; end=$3 ;;
  esac
  while [ $i -le $end ]; do printf "$format\n" $i; i=`expr $i + $inc`; done
}

(Edited by Admin: added code tags.)

TheBonsai, June 4, 2010, 9:57 am
The Bash C-style for loop was taken from KSH93, thus I guess it's at least portable towards Korn and Z.
The seq-function above could use i=$((i + inc)), if only POSIX matters. expr is obsolete for those things, even in POSIX.
Philippe Petrinko, June 4, 2010, 10:15 am

Right, Bonsai,
( http://www.opengroup.org/onlinepubs/009695399/utilities/xcu_chap02.html#tag_02_06_04 )

But the C-style FOR does not seem to be POSIXly correct.

Read the on-line reference, issue 6/2004.
The top is here: http://www.opengroup.org/onlinepubs/009695399/mindex.html
The Shell and Utilities volume (XCU) table of contents is here:
http://www.opengroup.org/onlinepubs/009695399/utilities/toc.html
The doc is:
http://www.opengroup.org/onlinepubs/009695399/basedefs/xbd_chap01.html
And the FOR command:
http://www.opengroup.org/onlinepubs/009695399/utilities/xcu_chap02.html#tag_02_09_04_03

Anthony Thyssen, June 6, 2010, 7:18 am
TheBonsai wrote: "The seq-function above could use i=$((i + inc)), if only POSIX matters. expr is obsolete for those things, even in POSIX."

I am not certain it is in POSIX. It was NOT part of the original Bourne Shell, and on some machines I deal with plain Bourne Shell - not Ksh, Bash, or anything else.

Bourne Shell syntax works everywhere! But as 'expr' is a builtin in more modern shells, it is not a big loss or slowdown.

This is especially important if writing a replacement command, such as for "seq", where you want your "just-paste-it-in" function to work as widely as possible.

I have been shell programming pretty well all the time since 1988, so I know what I am talking about! Believe me.

MacOSX has in this regard been the worst, and a very big backward step in UNIX compatibility. Two years after it came out, its shell still did not even understand most of the normal 'test' functions. A major pain to write shell scripts that need to also work on this system.
TheBonsai, June 6, 2010, 12:35 pm

Yea, the question was if it's POSIX, not if it's 100% portable (which is a difference). The POSIX base more or less is a subset of the Korn features (88, 93); pure Bourne is something "else", I know. Real portability, which means a program can go wherever UNIX went, only exists in C ;)

Philippe Petrinko, November 22, 2010, 8:23 am
And if you want to get rid of double-quotes, use:
one-liner code:
while read; do record=${REPLY}; echo ${record}|while read -d ","; do field="${REPLY#\"}"; field="${field%\"}"; echo ${field}; done; done<data
script code, added of some text to better see record and field breakdown:
#!/bin/bash
while read
do
echo "New record"
record=${REPLY}
echo ${record}|while read -d ,
do
field="${REPLY#\"}"
field="${field%\"}"
echo "Field is :${field}:"
done
done<data
Does it work with your data?
- PP
Philippe Petrinko November 22, 2010, 9:01 am
Of course, all the above code was assuming that your CSV file is named "data".
If you want to use anyname with the script, replace:
done<data
With:
done
And then use your script file (named for instance "myScript") with standard input redirection:
myScript < anyFileNameYouWant
Enjoy!
Philippe Petrinko, November 22, 2010, 11:28 am

Well, no - there is a bug: the last field of each record is not read. It needs a workaround, maybe an IFS modification! After all, that's what it was built for :O)
Anthony Thyssen, November 22, 2010, 11:31 pm

Another bug is that the inner loop is a pipeline, so you can't assign variables for use later in the script. But you can use '<<<' to break the pipeline and avoid the echo.

But this does not help when you have commas within the quotes! Which is why you needed quotes in the first place.

In any case, it is a little off topic. Perhaps a new thread for reading CSV files in shell should be created.
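Anthony's '<<<' point in practice: feeding the record to the inner loop with a here-string keeps the loop in the current shell, so variables assigned inside it survive. A minimal hedged sketch (the record is invented, and a trailing comma is appended so the last field is read - one workaround for the bug noted above):

#!/bin/bash
record='"alpha","beta","gamma"'
count=0
# A here-string instead of  echo ${record} | while ...  avoids the subshell.
while read -r -d ','; do
    field="${REPLY#\"}"; field="${field%\"}"
    echo "Field is :${field}:"
    count=$((count + 1))
done <<< "${record},"
echo "parsed $count fields"    # works: the loop ran in this shell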
Philippe Petrinko November 24, 2010, 6:29 pm
Anthony,
Would you try this one-liner script on your CSV file?

This one-liner assumes that the CSV file named [data] has __every__ field double-quoted.
while read; do r="${REPLY#\"}";echo "${r//\",\"/\"}"|while read -d \";do echo "Field is :${REPLY}:";done;done<data
Here is the same code, but for a script file, not a one-liner tweak.
#!/bin/bash
# script csv01.sh
#
# 1) Usage
# This script reads from standard input
# any CSV with double-quoted data fields
# and breaks down each field on standard output
#
# 2) Within each record (line), _every_ field MUST:
# - Be surrounded by double quotes,
# - and be separated from the preceding field by a comma
# (not the first field of course, no comma before the first field)
#
while read
do
echo "New record" # this is not mandatory-just for explanation
#
#
# store REPLY and remove opening double quote
record="${REPLY#\"}"
#
#
# replace every "," by a single double quote
record=${record//\",\"/\"}
#
#
echo ${record}|while read -d \"
do
# store REPLY into variable "field"
field="${REPLY}"
#
#
echo "Field is :${field}:" # just for explanation
done
done
This script, named here [csv01.sh], must be used so:
csv01.sh < my-csv-file-with-doublequotes
Philippe Petrinko, November 24, 2010, 6:35 pm

@Anthony,
By the way, using [REPLY] in the outer loop _and_ the inner loop is not a bug. As long as you know what you are doing, this is no problem; you just have to store the [REPLY] value conveniently, as this script shows.

TheBonsai, March 8, 2011, 6:26 am

for ((i=1; i<=20; i++)); do printf "%02d\n" "$i"; done

nixCraft, March 8, 2011, 6:37 am

+1 for printf due to portability, but you can use the bashy .. syntax too:

for i in {01..20}; do echo "$i"; done

TheBonsai, March 8, 2011, 6:48 am

Well, it isn't portable per se; it makes it portable to pre-4 Bash versions.

I think a more or less "portable" (in terms of POSIX, at least) code would be:

i=0
while [ "$((i >= 20))" -eq 0 ]; do
    printf "%02d\n" "$i"
    i=$((i+1))
done

Philip Ratzsch, April 20, 2011, 5:53 am
I didn't see this in the article or any of the comments so I thought I'd share. While this is a contrived example, I find that nesting two groups can help squeeze a two-liner (once for each range) into a one-liner:
for num in {{1..10},{15..20}};do echo $num;done
Great reference article!
Philippe Petrinko, April 20, 2011, 8:23 am

@Philip
Nice thing to think of, using brace nesting - thanks for sharing.

Philippe Petrinko, May 6, 2011, 10:13 am

Hello Sanya,

That would be because brace expansion does not support variables. I have to check this.
Anyway, Keep It Short and Simple (KISS). Here is a simple solution I already gave above:

xstart=1; xend=10; xstep=1
for (( x = $xstart; x <= $xend; x += $xstep )); do echo $x; done

Actually, as said before, POSIX-style arithmetic allows you to drop the $ inside (( )); you could also write:

xstart=1; xend=10; xstep=1
for (( x = xstart; x <= xend; x += xstep )); do echo $x; done

Philippe Petrinko, May 6, 2011, 10:48 am
Sanya,

Actually, brace expansion happens __before__ $ parameter expansion, so you cannot use it this way.

Nevertheless, you could overcome this this way:

max=10; for i in $(eval echo {1..$max}); do echo $i; done

Sanya, May 6, 2011, 11:42 am

Hello Philippe,

Thanks for your suggestions.

You basically confirmed my findings that bash constructions are not as simple as zsh ones. But since I don't care about POSIX compliance, and want to keep my scripts "readable" for less experienced people, I would prefer to stick to zsh, where my simple for-loop works.

Cheers, Sanya
Philippe Petrinko May 6, 2011, 12:07 pm
Sanya,
First, you got it wrong: the solutions I gave are not related to POSIX; I just pointed out that POSIX allows you not to use $ in for (( )), which is just a little bit more readable, sort of.
Second, why do you see this less readable than your [zsh] [for loop]?
for (( x = start; x <= end; x += step)) do
echo "Loop number ${x}"
done

It is clear that it is a loop; the loop increments and limits are clear.
IMNSHO, if anyone cannot read this right, he should not be allowed to code. :-D
BFN
Anthony Thyssen May 8, 2011, 11:30 pm
If you are going to do $(eval echo {1..$max});
you may as well use "seq" or one of the many other forms.
See all the other comments on doing for loops.

Tom P, May 19, 2011, 12:16 pm
I am trying to use the variable I set in the for line to set another variable with a different extension. I couldn't get this to work and couldn't find it anywhere on the web. Can someone help?

Example:

FILE_TOKEN=`cat /tmp/All_Tokens.txt`
for token in $FILE_TOKEN
do
  A1_$token=`grep $A1_token /file/path/file.txt | cut -d ":" -f2`

My goal is to take the values from the All_Tokens file and set a new variable with A1_ in front of it. This tells me that A1_ is not a command.
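For what it's worth, bash will not evaluate a computed name on the left side of a plain assignment, which is exactly why it complains that A1_ is not a command; declare (or printf -v) will do it. A minimal sketch under the same assumptions (Tom's file paths kept as-is, the grep pattern guessed from context):

#!/bin/bash
for token in $(cat /tmp/All_Tokens.txt); do
    value=$(grep "$token" /file/path/file.txt | cut -d ':' -f2)
    # declare evaluates its argument, so a computed variable name works here
    declare "A1_${token}=${value}"
done
# On bash 4+, an associative array is usually the cleaner choice:
#   declare -A A1; A1[$token]=$value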
Many people hack together shell scripts quickly to do simple tasks, but these soon take on a life of their own. Unfortunately shell scripts are full of subtle effects which result in scripts failing in unusual ways. It's possible to write scripts which minimise these problems. In this article, I explain several techniques for writing robust bash scripts.
Use set -u
How often have you written a script that broke because a variable wasn't set? I know I have, many times.
chroot=$1
...
rm -rf $chroot/usr/share/doc

If you ran the script above and accidentally forgot to give a parameter, you would have just deleted all of your system documentation rather than making a smaller chroot. So what can you do about it? Fortunately bash provides you with set -u, which will exit your script if you try to use an uninitialised variable. You can also use the slightly more readable set -o nounset.
david% bash /tmp/shrink-chroot.sh
$chroot=
david% bash -u /tmp/shrink-chroot.sh
/tmp/shrink-chroot.sh: line 3: $1: unbound variable
david%
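A related belt-and-braces idiom, not from the article but a common complement to set -u, is the ${var:?message} expansion, which aborts with an explicit message when a parameter is unset or empty (the script name and message here are made up):

#!/bin/bash
# Abort early, with a usage message, if $1 was not supplied.
chroot="${1:?usage: shrink-chroot.sh <chroot-dir>}"
rm -rf "$chroot/usr/share/doc"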
Use set -e

Every script you write should include set -e at the top. This tells bash that it should exit the script if any statement returns a non-true return value. The benefit of using -e is that it prevents errors snowballing into serious issues when they could have been caught earlier. Again, for readability you may want to use set -o errexit.
Using -e gives you error checking for free. If you forget to check something, bash will do it for you. Unfortunately it means you can't check $?, as bash will never get to the checking code if it isn't zero. There are other constructs you could use:
command
if [ "$?" -ne 0 ]; then echo "command failed"; exit 1; fi

could be replaced with

command || { echo "command failed"; exit 1; }

or

if ! command; then echo "command failed"; exit 1; fi

What if you have a command that returns non-zero or you are not interested in its return value? You can use command || true, or if you have a longer section of code, you can turn off the error checking, but I recommend you use this sparingly.
set +e
command1
command2
set -e

On a slightly related note, by default bash takes the error status of the last item in a pipeline, which may not be what you want. For example, false | true will be considered to have succeeded. If you would like this to fail, then you can use set -o pipefail to make it fail.
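A two-pipeline sketch of the difference (false and true stand in for real commands):

#!/bin/bash
false | true
echo "without pipefail: $?"    # prints 0 - the pipeline "succeeded"

set -o pipefail
false | true
echo "with pipefail:    $?"    # prints 1 - the failure is propagated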
Program defensively - expect the unexpected
Your script should take into account the unexpected, like files missing or directories not being created. There are several things you can do to prevent errors in these situations. For example, when you create a directory, if the parent directory doesn't exist, mkdir will return an error. If you add a -p option then mkdir will create all the parent directories before creating the requested directory. Another example is rm. If you ask rm to delete a non-existent file, it will complain and your script will terminate. (You are using -e, right?) You can fix this by using -f, which will silently continue if the file didn't exist.
Be prepared for spaces in filenames
Someone will always use spaces in filenames or command line arguments and you should keep this in mind when writing shell scripts. In particular you should use quotes around variables.
if [ $filename = "foo" ];

will fail if $filename contains a space. This can be fixed by using:

if [ "$filename" = "foo" ];

When using the $@ variable, you should always quote it, or any arguments containing a space will be expanded into separate words.

david% foo() { for i in $@; do echo $i; done }; foo bar "baz quux"
bar
baz
quux
david% foo() { for i in "$@"; do echo $i; done }; foo bar "baz quux"
bar
baz quux

I cannot think of a single place where you shouldn't use "$@" over $@, so when in doubt, use quotes.
If you use find and xargs together, you should use -print0 to separate filenames with a null character rather than new lines. You then need to use -0 with xargs.
david% touch "foo bar"
david% find | xargs ls
ls: ./foo: No such file or directory
ls: bar: No such file or directory
david% find -print0 | xargs -0 ls
./foo bar

Setting traps
Often you write scripts which fail and leave the filesystem in an inconsistent state; things like lock files, temporary files or you've updated one file and there is an error updating the next file. It would be nice if you could fix these problems, either by deleting the lock files or by rolling back to a known good state when your script suffers a problem. Fortunately bash provides a way to run a command or function when it receives a unix signal using the trap command.
trap command signal [signal ...]

There are many signals you can trap (you can get a list of them by running kill -l), but for cleaning up after problems there are only 3 we are interested in: INT, TERM and EXIT. You can also reset traps back to their default by using - as the command.

Signal  Description
INT     Interrupt - this signal is sent when someone kills the script by pressing ctrl-c.
TERM    Terminate - this signal is sent when someone sends the TERM signal using the kill command.
EXIT    Exit - this is a pseudo-signal and is triggered when your script exits, either through reaching the end of the script, an exit command or by a command failing when using set -e.

Usually, when you write something using a lock file you would use something like:

if [ ! -e $lockfile ]; then
    touch $lockfile
    critical-section
    rm $lockfile
else
    echo "critical-section is already running"
fi

What happens if someone kills your script while critical-section is running? The lockfile will be left there and your script won't run again until it's been deleted. The fix is to use:

if [ ! -e $lockfile ]; then
    trap "rm -f $lockfile; exit" INT TERM EXIT
    touch $lockfile
    critical-section
    rm $lockfile
    trap - INT TERM EXIT
else
    echo "critical-section is already running"
fi

Now when you kill the script it will delete the lock file too. Notice that we explicitly exit from the script at the end of the trap command, otherwise the script will resume from the point that the signal was received.
Race conditions
It's worth pointing out that there is a slight race condition in the above lock example between the time we test for the lockfile and the time we create it. A possible solution to this is to use IO redirection and bash's noclobber mode, which won't redirect to an existing file. We can use something similar to:
if ( set -o noclobber; echo "$$" > "$lockfile") 2> /dev/null; then
    trap 'rm -f "$lockfile"; exit $?' INT TERM EXIT
    critical-section
    rm -f "$lockfile"
    trap - INT TERM EXIT
else
    echo "Failed to acquire lockfile: $lockfile."
    echo "Held by $(cat $lockfile)"
fi

A slightly more complicated problem is where you need to update a bunch of files and need the script to fail gracefully if there is a problem in the middle of the update. You want to be certain that something either happened correctly or that it appears as though it didn't happen at all. Say you had a script to add users:

add_to_passwd $user
cp -a /etc/skel /home/$user
chown $user /home/$user -R

There could be problems if you ran out of diskspace or someone killed the process. In this case you'd want the user to not exist and all their files to be removed.

rollback() {
    del_from_passwd $user
    if [ -e /home/$user ]; then
        rm -rf /home/$user
    fi
    exit
}

trap rollback INT TERM EXIT
add_to_passwd $user
cp -a /etc/skel /home/$user
chown $user /home/$user -R
trap - INT TERM EXIT

We needed to remove the trap at the end or the rollback function would have been called as we exited, undoing all the script's hard work.
Be atomic
Sometimes you need to update a bunch of files in a directory at once, say you need to rewrite urls from one host to another on your website. You might write:

for file in $(find /var/www -type f -name "*.html"); do
    perl -pi -e 's/www.example.net/www.example.com/' $file
done

Now if there is a problem with the script you could have half the site referring to www.example.com and the rest referring to www.example.net. You could fix this using a backup and a trap, but you also have the problem that the site will be inconsistent during the upgrade too.
The solution to this is to make the changes an (almost) atomic operation. To do this make a copy of the data, make the changes in the copy, move the original out of the way and then move the copy back into place. You need to make sure that both the old and the new directories are moved to locations that are on the same partition so you can take advantage of the property of most unix filesystems that moving directories is very fast, as they only have to update the inode for that directory.
cp -a /var/www /var/www-tmp
for file in $(find /var/www-tmp -type f -name "*.html"); do
    perl -pi -e 's/www.example.net/www.example.com/' $file
done
mv /var/www /var/www-old
mv /var/www-tmp /var/www

This means that if there is a problem with the update, the live system is not affected. Also the time where it is affected is reduced to the time between the two mvs, which should be very minimal, as the filesystem just has to change two entries in the inodes rather than copying all the data around.

The disadvantage of this technique is that you need to use twice as much disk space, and any process that keeps files open for a long time will still have the old files open and not the new ones, so you would have to restart those processes if this is the case. In our example this isn't a problem as apache opens the files on every request. You can check for processes with files still open by using lsof. An advantage is that you now have a backup from before you made your changes, in case you need to revert.
This is a terse description of the new features added to bash-4.1 since the release of bash-4.0. As always, the manual page (doc/bash.1) is the place to look for complete descriptions.

e. `printf -v' can now assign values to array indices.
f. New `complete -E' and `compopt -E' options that work on the "empty" completion: completion attempted on an empty command line.
g. New complete/compgen/compopt -D option to define a `default' completion: a completion to be invoked on a command for which no completion has been defined. If this function returns 124, programmable completion is attempted again, allowing a user to dynamically build a set of completions as completion is attempted by having the default completion function install individual completion functions each time it is invoked.
h. When displaying associative arrays, subscripts are now quoted.
i. Changes to dabbrev-expand to make it more `emacs-like': no space appended after matches, completions are not sorted, and most recent history entries are presented first.
j. The [[ and (( commands are now subject to the setting of `set -e' and the ERR trap.
...
q. The < and > operators to the [[ conditional command now do string comparison according to the current locale if the compatibility level is greater than 40.

- NEWS
This is a terse description of the new features added to bash-4.2 since the release of bash-4.1. As always, the manual page (doc/bash.1) is the place to look for complete descriptions.

1. New Features in Bash

m. The printf builtin has a new %(fmt)T specifier, which allows time values to use strftime-like formatting.
n. There is a new `compat41' shell option.
o. The cd builtin has a new Posix-mandated `-e' option.
p. Negative subscripts to indexed arrays, previously errors, are now treated as offsets from the maximum assigned index + 1.
q. Negative length specifications in the ${var:offset:length} expansion, previously errors, are now treated as offsets from the end of the variable.
...
t. There is a new `lastpipe' shell option that runs the last command of a pipeline in the current shell context. The lastpipe option has no effect if job control is enabled.
u. History expansion no longer expands the `$!' variable expansion.
v. Posix mode shells no longer exit if a variable assignment error occurs with an assignment preceding a command that is not a special builtin.
w. Non-interactive mode shells exit if -u is enabled and an attempt is made to use an unset variable with the % or # expansions, the `//', `^', or `,' expansions, or the parameter length expansion.
x. Posix-mode shells use the argument passed to `.' as-is if a $PATH search fails, effectively searching the current directory. Posix-2008 change.

This is from bash 4.1, but still important to know:

q. The < and > operators to the [[ conditional command now do string comparison according to the current locale if the compatibility level is greater than 40.
Bash Hackers Wiki
The new "coproc" keywordBash 4 introduces the concepts of coprocesses, a well known feature in other shells. The basic concept is simple: It will start any command in the background and set up an array that is populated with accessible files that represent the filedescriptors of the started process.
In other words: It lets you start a process in background and communicate with its input and output data streams.
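A minimal sketch of the mechanism (using bc as the coprocess is just an example):

#!/bin/bash
# Start bc as a coprocess; bash fills in COPROC[0] (read end of its
# stdout) and COPROC[1] (write end of its stdin).
coproc bc -l

echo "3.1 * 2" >&"${COPROC[1]}"   # send an expression to bc
read -r result <&"${COPROC[0]}"   # read one line of its output
echo "bc said: $result"           # bc said: 6.2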
The mapfile builtin is able to map the lines of a file directly into an array. This avoids filling an array yourself using a loop. It allows you to define the range of lines to read, and optionally to call a callback, for example to display a progress bar.

See: The mapfile builtin
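A one-call sketch (reading /etc/passwd is just an example):

#!/bin/bash
# Read the whole file into the array "lines", stripping newlines (-t).
mapfile -t lines < /etc/passwd
echo "read ${#lines[@]} lines; first is: ${lines[0]}"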
The case construct understands two new action list terminators:

The ;& terminator causes execution to continue with the next action list (rather than terminating the case construct).

The ;;& terminator causes the case construct to test the next given pattern instead of terminating the whole execution.

Changes to the "declare" builtin
The -p option now prints all attributes and values of declared variables (or functions, when used with -f). The output is fully re-usable as input.

The new option -l declares a variable in a way that the content is converted to lowercase on assignment. The same, but for uppercase, applies to -u. The option -c causes the content to be capitalized before assignment.

declare -A declares associative arrays (see below).

Changes to the "read" builtin

The read builtin command got some interesting new features.

The -t option to specify a timeout value has been slightly tuned. It now accepts fractional values and the special value 0 (zero). When -t 0 is specified, read immediately returns with an exit status indicating if there's data waiting or not. However, when a timeout is given and the read builtin times out, any partial data received up to the timeout is stored in the given variable, rather than lost. When a timeout is hit, read exits with a code greater than 128.

A new option, -i, was introduced to be able to preload the input buffer with some text (when Readline is used, with -e). The user is able to change the text or just press return to accept it.

See: The read builtin
Changes to the "help" builtin

The builtin itself didn't change much, but the data displayed is more structured now. The help texts are in a better format, much easier to read. There are two new options: -d displays the summary of a help text, and -m displays a manpage-like format.

Changes to the "ulimit" builtin

Besides the use of the 512-byte blocksize everywhere in POSIX mode, ulimit supports two new limits: -b for the max socket buffer size and -T for the max number of threads.

Brace expansion was tuned to provide expansion results with leading zeros when requesting a row of numbers.

See: Brace expansion
Methods to modify the case at expansion time have been added. At expansion time you can modify the syntax by adding operators to the parameter name.

See: Case modification on parameter expansion

When using substring expansion on the positional parameters, a starting index of 0 now causes $0 to be prefixed to the list (if the positional parameters are used at all). Before, this expansion started with $1:

# this should display $0 on Bash v4, $1 on Bash v3
echo ${@:0:1}

There's a new shell option globstar. When enabled, Bash will perform recursive globbing on ** - this means it matches all directories and files from the current position in the filesystem, rather than only the current level.

The new shell option dirspell enables spelling corrections on directory names during globbing.

See: Pathname expansion (globbing)
Besides the classic method of integer-indexed arrays, Bash 4 supports associative arrays. An associative array is an array indexed by an arbitrary string, something like:

declare -A ASSOC
ASSOC[First]="first element"
ASSOC[Hello]="second element"
ASSOC[Peter Pan]="A weird guy"

See: Arrays
There is a new &>> redirection operator, which appends the standard output and standard error to the named file. This is the same as the good old >>FILE 2>&1 notation.

The parser now understands |&, which is the same as 2>&1 |; it redirects the standard error of a command through a pipe.

See: Redirection
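A two-line sketch of both operators (make and the log file name are stand-ins):

#!/bin/bash
# Append stdout AND stderr to build.log (same as: >> build.log 2>&1)
make &>> build.log

# Pipe stderr along with stdout into grep (same as: 2>&1 | grep ...)
make |& grep -i error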
Interesting new shell variables

Variable        Description
BASHPID         contains the PID of the current shell (this is different from what $$ does!)
PROMPT_DIRTRIM  specifies the max level of unshortened pathname elements in the prompt

See: Special parameters and shell variables
The mentioned shell options are off by default unless otherwise mentioned.

Option     Description
checkjobs  check for and report any running jobs at shell exit
compat*    set compatibility modes for older shell versions (influences regular expression matching in [[ ... ]])
dirspell   enables spelling corrections on directory names during globbing
globstar   enables recursive globbing with **

- If a command is not found, the shell attempts to execute a shell function named command_not_found_handle, supplying the command words as the function arguments. This can be used to display user-friendly messages or perform different command searches (a small sketch follows this list).
- The behaviour of the set -e (errexit) mode was changed; it now acts more intuitively (and is better documented in the manpage).
- The output target for the xtrace (set -x / set +x) feature is configurable since Bash 4.1 (before, it was fixed to stderr): a variable named BASH_XTRACEFD can be set to the file descriptor that should get the output.
- Bash 4.1 is able to log the history to syslog.
Looks more like bash 3.3 than bash 4.0: a complete absence of interesting features, and a very poor understanding of the necessary path of shell development... (just look at William Park's BashDiff and other patches and you will understand this negative comment better).
a. A new variable, rl_sort_completion_matches; allows applications to inhibit match list sorting (but beware: some things don't work right if applications do this).
b. A new variable, rl_completion_invoking_key; allows applications to discover the key that invoked rl_complete or rl_menu_complete.
c. The functions rl_block_sigint and rl_release_sigint are now public and available to calling applications who want to protect critical sections (like redisplay).
d. The functions rl_save_state and rl_restore_state are now public and available to calling applications; documented rest of readline's state flag values.
e. A new user-settable variable, `history-size', allows setting the maximum number of entries in the history list.
f. There is a new implementation of menu completion, with several improvements over the old; the most notable improvement is a better `completions browsing' mode.
g. The menu completion code now uses the rl_menu_completion_entry_function variable, allowing applications to provide their own menu completion generators.
h. There is support for replacing a prefix of a pathname with a `...' when displaying possible completions. This is controllable by setting the `completion-prefix-display-length' variable. Matches with a common prefix longer than this value have the common prefix replaced with `...'.
i. There is a new `revert-all-at-newline' variable. If enabled, readline will undo all outstanding changes to all history lines when `accept-line' is executed.
j. If the kernel supports it, readline displays special characters corresponding to a keyboard-generated signal when the signal is received.
May 22, 2008 Linux Journal
In addition to the fairly common forms of input/output redirection the shell recognizes something called process substitution. Although not documented as a form of input/output redirection, its syntax and its effects are similar.
The syntax for process substitution is:
<(list)
or
>(list)

where each list is a command or a pipeline of commands. The effect of process substitution is to make each list act like a file. This is done by giving the list a name in the file system and then substituting that name in the command line. The list is given a name either by connecting the list to a named pipe or by using a file in /dev/fd (if supported by the O/S). By doing this, the command simply sees a file name and is unaware that it is reading from or writing to a command pipeline.

To substitute a command pipeline for an input file the syntax is:

command ... <(list) ...

To substitute a command pipeline for an output file the syntax is:

command ... >(list) ...

At first process substitution may seem rather pointless; for example you might imagine something simple like:

uniq <(sort a)

to sort a file and then find the unique lines in it, but this is more commonly (and more conveniently) written as:

sort a | uniq

The power of process substitution comes when you have multiple command pipelines that you want to connect to a single command. For example, given the two files:

# cat a
e
d
c
b
a
# cat b
g
f
e
d
c
b

To view the lines unique to each of these two unsorted files you might do something like this:

# sort a | uniq >tmp1
# sort b | uniq >tmp2
# comm -3 tmp1 tmp2
a
        f
        g
# rm tmp1 tmp2

With process substitution we can do all this with one line:

# comm -3 <(sort a | uniq) <(sort b | uniq)
a
        f
        g

Depending on your shell settings you may get an error message similar to:

syntax error near unexpected token `('

when you try to use process substitution, particularly if you try to use it within a shell script. Process substitution is not a POSIX compliant feature and so it may have to be enabled via:

set +o posix

Be careful not to try something like:

if [[ $use_process_substitution -eq 1 ]]; then
    set +o posix
    comm -3 <(sort a | uniq) <(sort b | uniq)
fi

The command set +o posix enables not only the execution of process substitution but the recognition of the syntax. So, in the example above the shell tries to parse the process substitution syntax before the "set" command is executed, and therefore still sees the process substitution syntax as illegal.

Of course, note that not all shells support process substitution; these examples will work with bash.
http://personal.riverusers.com/~thegrendel/abs-guide.pdf
This version is specially book-formatted for duplex printing and is usually more up-to-date than the version you can download from the LDP site. Note that it's a 2.6 MB download.
Ease your system administration tasks by taking advantage of key parts of the Bourne-again shell (bash) and its features. Bash is a popular alternative to the original Bourne and Korn shells. It provides an impressive range of additional functionality that includes improvements to the scripting environment, extensive aliasing techniques, and improved methods for automatically completing different commands, files, and paths.
Do you sometimes wonder how to use parameters with your scripts, and how to pass them to internal functions or other scripts? Do you need to do simple validity tests on parameters or options, or perform simple extraction and replacement operations on the parameter strings? This tip helps you with parameter use and the various parameter expansions available in the bash shell.
3.1-0.09 October 27, 2007
About: The Advanced Bash Scripting Guide is both a reference and a tutorial on shell scripting. This comprehensive book (the equivalent of 880+ print pages) covers almost every aspect of shell scripting. It contains 340 profusely commented illustrative examples, a number of tables, and a cross-linked index/glossary. Not just a shell scripting tutorial, this book also provides an introduction to basic programming techniques, such as sorting and recursion. It is well suited for either individual study or classroom use. It covers Bash, up to and including version 3.2x.
Changes: Many bugfixes and stylistic cleanups were done. Four new example scripts were added. A new subsection on version 3.2 Bash update was added. Explanations of certain difficult concepts were clarified. This is an important update.
BashStyle is a graphical tool for changing Bash's behaviour and look'n'feel. It was part of *NixStyle-NG but is now split off for easier maintenance.

It features some predefined themes, most of them re-colorable. The first 4 extra options are only for the Separator style (theme).

Themes are 100% compatible with the Linux console. When on the console, colors will be deactivated automatically (to avoid an ugly look'n'feel).
You can also enable colored manpages or get colored output from grep.
INSTALLATION:
run ./configure to check the dependencies
and run (sudo/su) make install to install BashStyle
then run bashstyle or go to Menu -> Accessories -> BashStyle
Have Fun!
Requirements
This application requires GTK+ version 2.10.x. Other dependencies include:
Required: python 2.4+, pygtk 2.4+, pyglade 2.4+, bash 3.0+, sed, coreutils, gconf
Optional: psyco, /dev/random
B1) What's new in version 3.2?

Bash-3.2 is the second maintenance release of the third major release of bash. It contains the following significant new features (see the manual page for complete descriptions and the CHANGES and NEWS files in the bash-3.2 distribution).

- Bash-3.2 now checks shell scripts for NUL characters rather than non-printing characters when deciding whether or not a script is a binary file.
- Quoting the string argument to the [[ command's =~ (regexp) operator now forces string matching, as with the other pattern-matching operators.

A short feature history dating from Bash-2.0:
Bash-3.1 contained the following new features:
- Bash-3.1 may now be configured and built in a mode that enforces strict POSIX compliance.
- The `+=' assignment operator, which appends to the value of a string or array variable, has been implemented.
- It is now possible to ignore case when matching in contexts other than filename generation using the new `nocasematch' shell option.

Bash-3.0 contained the following new features:

- Features to support the bash debugger have been implemented, and there is a new `extdebug' option to turn the non-default options on.
- HISTCONTROL is now a colon-separated list of options and has been extended with a new `erasedups' option that will result in only one copy of a command being kept in the history list.
- Brace expansion has been extended with a new {x..y} form, producing sequences of digits or characters.
- Timestamps are now kept with history entries, with an option to save and restore them from the history file; there is a new HISTTIMEFORMAT variable describing how to display the timestamps when listing history entries.
- The `[[' command can now perform extended regular expression (egrep-like) matching, with matched subexpressions placed in the BASH_REMATCH array variable.
- A new `pipefail' option causes a pipeline to return a failure status if any command in it fails.
- The `jobs', `kill', and `wait' builtins now accept job control notation in their arguments even if job control is not enabled.
- The `gettext' package and libintl have been integrated, and the shell messages may be translated into other languages.

Bash-2.05b introduced the following new features:

- Support for multibyte characters has been added to both bash and readline.
- The DEBUG trap is now run *before* simple commands, ((...)) commands, [[...]] conditional commands, and for ((...)) loops.
- The shell now performs arithmetic in the largest integer size the machine supports (intmax_t).
- There is a new \D{...} prompt expansion; it passes the `...' to strftime(3) and inserts the result into the expanded prompt.
- There is a new `here-string' redirection operator: <<< word.
- When displaying variables, function attributes and definitions are shown separately, allowing them to be re-used as input (attempting to re-use the old output would result in syntax errors).
- `read' has a new `-u fd' option to read from a specified file descriptor.
- The bash debugger in examples/bashdb has been modified to work with the new DEBUG trap semantics, the command set has been made more gdb-like, and the changes to $LINENO make debugging functions work better.
- The expansion of $LINENO inside a shell function is only relative to the function start if the shell is interactive - if the shell is running a script, $LINENO expands to the line number in the script. This is as POSIX-2001 requires.

Bash-2.05a introduced the following new features:
- The `printf' builtin has undergone major work.
- There is a new read-only `shopt' option: login_shell, which is set by login shells and unset otherwise.
- New `\A' prompt string escape sequence, expanding to the time in 24-hour HH:MM format.
- New `-A group/-g' option to complete and compgen; does group name completion.
- New [+-]O invocation option to set and unset `shopt' options at startup.
- ksh-like `ERR' trap.
- `for' loops now allow empty word lists after the `in' reserved word.
- New `hard' and `soft' arguments for the `ulimit' builtin.
- Readline can be configured to place the user at the same point on the line when retrieving commands from the history list.
- Readline can be configured to skip `hidden' files (filenames with a leading `.' on Unix) when performing completion.

Bash-2.05 introduced the following new features:

- This version has once again reverted to using locales and strcoll(3) when processing pattern matching bracket expressions, as POSIX requires.
- Added a new `--init-file' invocation argument as a synonym for `--rcfile', per the new GNU coding standards.
- The /dev/tcp and /dev/udp redirections now accept service names as well as port numbers.
- `complete' and `compgen' now take a `-o value' option, which controls some of the aspects of that compspec. Valid values are:
    default - perform bash default completion if programmable completion produces no matches
    dirnames - perform directory name completion if programmable completion produces no matches
    filenames - tell readline that the compspec produces filenames, so it can do things like append slashes to directory names and suppress trailing spaces
- A new loadable builtin, realpath, which canonicalizes and expands symlinks in pathname arguments.
- When `set' is called without options, it prints function definitions in a way that allows them to be reused as input. This affects `declare' and `declare -p' as well. This only happens when the shell is not in POSIX mode, since POSIX.2 forbids this behavior.

Bash-2.04 introduced the following new features:

- Programmable word completion with the new `complete' and `compgen' builtins; examples are provided in examples/complete/complete-examples.
- `history' has a new `-d' option to delete a history entry.
- `bind' has a new `-x' option to bind key sequences to shell commands.
- The prompt expansion code has new `\j' and `\l' escape sequences.
- The `no_empty_cmd_completion' shell option, if enabled, inhibits command completion when TAB is typed on an empty line.
- `help' has a new `-s' option to print a usage synopsis.
- New arithmetic operators: var++, var--, ++var, --var, expr1,expr2 (comma).
- New ksh93-style arithmetic for command: for ((expr1 ; expr2; expr3 )); do list; done
- `read' has new options: `-t', `-n', `-d', `-s'.
- The redirection code handles several filenames specially: /dev/fd/N, /dev/stdin, /dev/stdout, /dev/stderr.
- The redirection code now recognizes /dev/tcp/HOST/PORT and /dev/udp/HOST/PORT and tries to open a TCP or UDP socket, respectively, to the specified port on the specified host.
- The ${!prefix*} expansion has been implemented.
- A new FUNCNAME variable, which expands to the name of a currently-executing function.
- The GROUPS variable is no longer readonly.
- A new shopt `xpg_echo' variable, to control the behavior of echo with respect to backslash-escape sequences at runtime.
- The NON_INTERACTIVE_LOGIN_SHELLS #define has returned.

The version of Readline released with Bash-2.04, Readline-4.1, had several new features as well:

- Parentheses matching is always compiled into readline, and controllable with the new `blink-matching-paren' variable.
- The history-search-forward and history-search-backward functions now leave point at the end of the line when the search string is empty, like reverse-search-history and forward-search-history.
- A new function for applications: rl_on_new_line_with_prompt().
- New variables for applications: rl_already_prompted and rl_gnu_readline_p.

Bash-2.03 had very few new features, in keeping with the convention that odd-numbered releases provide mainly bug fixes. A number of new features were added to Readline, mostly at the request of the Cygnus folks.

- A new shopt option, `restricted_shell', so that startup files can test whether or not the shell was started in restricted mode.
- Filename generation is now performed on the words between ( and ) in compound array assignments (this is really a bug fix).
- OLDPWD is now auto-exported, as POSIX.2 requires.
- ENV and BASH_ENV are read-only variables in a restricted shell.
- Bash may now be linked against an already-installed Readline library, as long as the Readline library is version 4 or newer.
- All shells begun with the `--login' option will source the login shell startup files, even if the shell is not interactive.

There were lots of changes to the version of the Readline library released along with Bash-2.03. For a complete list of the changes, read the file CHANGES in the Bash-2.03 distribution.
Bash-2.02 contained the following new features:

- A new version of malloc (based on the old GNU malloc code in previous bash versions) that is more page-oriented, more conservative with memory usage, does not `orphan' large blocks when they are freed, is usable on 64-bit machines, and has allocation checking turned on unconditionally.
- POSIX.2-style globbing character classes ([:alpha:], [:alnum:], etc.).
- POSIX.2-style globbing equivalence classes.
- POSIX.2-style globbing collating symbols.
- The ksh [[...]] extended conditional command.
- The ksh egrep-style extended pattern matching operators.
- A new `printf' builtin.
- The ksh-like $(<filename) command substitution, which is equivalent to $(cat filename).
- New tilde prefixes that expand to directories from the directory stack.
- New `**' arithmetic operator to do exponentiation.
- Case-insensitive globbing (filename expansion).
- Menu completion a la tcsh.
- `magic-space' history expansion function like tcsh.
- The readline inputrc `language' has a new file inclusion directive ($include).

Bash-2.01 contained only a few new features:

- New `GROUPS' builtin array variable containing the user's group list.
- New bindable readline commands: history-and-alias-expand-line and alias-expand-line.

Bash-2.0 contained extensive changes and new features from bash-1.14.7. Here's a short list:

- New `time' reserved word to time pipelines, shell builtins, and shell functions.
- One-dimensional arrays with a new compound assignment statement, appropriate expansion constructs and modifications to some of the builtins (read, declare, etc.) to use them.
- New quoting syntaxes for ANSI-C string expansion and locale-specific string translation.
- New expansions to do substring extraction, pattern replacement, and indirect variable expansion.
- New builtins: `disown' and `shopt'.
- New variables: HISTIGNORE, SHELLOPTS, PIPESTATUS, DIRSTACK, GLOBIGNORE, MACHTYPE, BASH_VERSINFO.
- Special handling of many unused or redundant variables removed (e.g., $notify, $glob_dot_filenames, $no_exit_on_failed_exec).
- Dynamic loading of new builtin commands; many loadable examples provided.
- New prompt expansions: \a, \e, \n, \H, \T, \@, \v, \V.
- History and aliases available in shell scripts.
- New readline variables: enable-keypad, mark-directories, input-meta, visible-stats, disable-completion, comment-begin.
- New readline commands to manipulate the mark and operate on the region.
- New readline emacs mode commands and bindings for ksh-88 compatibility.
- Updated and extended builtins.
- New DEBUG trap.
- Expanded (and now documented) restricted shell mode.

Implementation stuff:

- Autoconf-based configuration.
- Nearly all of the bugs reported since version 1.14 have been fixed.
- Most builtins converted to use builtin `getopt' for consistency.
- Most builtins use the -p option to display output in a reusable form (for consistency).
- Grammar tighter and smaller (66 reduce-reduce conflicts gone).
- Lots of code now smaller and faster.
- Test suite greatly expanded.
B2) Are there any user-visible incompatibilities between bash-3.2 and bash-2.05b?
There are a few incompatibilities between version 2.05b and version 3.2. They are detailed in the file COMPAT in the bash distribution. That file is not meant to be all-encompassing; send mail to bash-maintain...@gnu.org if you find something that's not mentioned there.
$HISTIGNORE
- Set this to avoid having consecutive duplicate commands and other not-so-useful information appended to the history list. This will cut down on hitting the up arrow endlessly to get to the command before the one you just entered twenty times. It will also avoid filling a large percentage of your history list with useless commands.
Try this:
$ export HISTIGNORE="&:ls:ls *:mutt:[bf]g:exit"
Using this, consecutive duplicate commands, invocations of ls, executions of the mutt mail client without any additional parameters, plus calls to the bg, fg and exit built-ins will not be appended to the history list.
readline Tips and Tricks
The readline library is used by bash and many other programs to read a line from the terminal, allowing the user to edit the line with standard Emacs editing keys.
- set show-all-if-ambiguous on
If you have this in your /etc/inputrc or ~/.inputrc, you will no longer have to hit the <Tab> key twice to produce a list of all possible completions. A single <Tab> will suffice. This setting is highly recommended.
- set visible-stats on
Adding this to your /etc/inputrc or ~/.inputrc will result in a character being appended to any file-names returned by completion, in much the same way as ls -F works.
- If you're a fan of vi as opposed to Emacs, you might prefer to operate bash in vi editing mode. Being a GNU program, bash uses Emacs bindings unless you specify otherwise.
Set the following in your /etc/inputrc or ~/.inputrc:
set editing-mode vi set keymap vi
and this in your /etc/bashrc or ~/.bashrc:
set -o vi
- Some people prefer the non-incremental style of history completion, as opposed to the incremental style offered by C-r and C-s. This is the style of history completion offered by csh.
bash offers bindings for this, but they are unbound by default.
Set the following in your /etc/inputrc or ~/.inputrc:
"\ep": history-search-backward
"\en": history-search-forwardHenceforth, ESC-p and ESC-n (or M-p and M-n) will give you non-incremental history completion backwards and forwards, respectively.
BashDiff is a patch against Bash-3.0 shell, incorporating many useful features from Awk, Python, Zsh, Ksh, and others. It implements in the main core
- new brace expansion {a..b} --- integer/letter generation, positional parameters and array expansion
- new parameter expansion ${var|...} --- content filtering, list comprehension (like Python), regex/string splitting and joining, Python-like string methods, emulation of associative array lookup, etc.
- new command substitution $(=...) --- floating-point hook to Awk
- extended case statement --- regex, continuation, then/else sections
- extended for/while/until loops --- then/else sections, multiple for-loop variables
- try-block with string exception (like Python)
- new <<+ here-document --- relative indentation
- new <<<< here-file
and as dynamically loadable builtins
- extended read/echo builtins --- DOS lines, CSV format, and Awk emulation
- sscanf(3), <string.h> and <ctype.h> wrappers, ASCII/string conversion, and binary number conversion.
- new raise builtin for try-block
- array cut/splicing, array filter/map/zip/unzip (like Python)
- HTML template engine (like PHP, JSP, ASP)
- GDBM, SQLite, PostgreSQL, and MySQL database interface
- Expat XML parser interface
- stack/queue operations on arrays and positional parameters
- x-y character plot
- Libwebserver (embedded web server) interface
- GTK+2 interface for simple GUI dialog or layout
Release focus: Major feature enhancements
Changes:
This release adds a shell interface to the GTK+2 widget library, for building a simple GUI dialog or layout. It uses XML syntax for layout, and returns the user's selection in a shell variable or runs a shell command as callback. The name of the 'xml' builtin has been changed to 'expat'. The <<+ here document now removes space and tab indents.

Author:
William Park [contact developer]
The Advanced Bash Scripting Guide is both a reference and a tutorial on shell scripting. This comprehensive book (the equivalent of about 646 print pages) covers almost every aspect of shell scripting. It contains over 300 profusely commented illustrative examples, and a number of tables. Not just a shell scripting tutorial, this book also provides an introduction to basic programming techniques, such as sorting and recursion. It is well suited for either individual study or classroom use.
How Bash executes startup files.
For Login shells (subject to the -noprofile option):
On logging in:
If `/etc/profile' exists, then source it.
If `~/.bash_profile' exists, then source it,
else if `~/.bash_login' exists, then source it,
else if `~/.profile' exists, then source it.
On logging out:
If `~/.bash_logout' exists, source it.
For non-login interactive shells (subject to the -norc and -rcfile options):
On starting up:
If `~/.bashrc' exists, then source it.
For non-interactive shells:
On starting up:
If the environment variable `ENV' is non-null, expand the variable and source the file named by the value. If Bash is not started in POSIX mode, it looks for `BASH_ENV' before `ENV'.
So, typically, your `~/.bash_profile' contains the line
if [ -f ~/.bashrc ]; then source ~/.bashrc; fi
after (or before) any login-specific initializations.
If Bash is invoked as `sh', it tries to mimic the behavior of `sh' as closely as possible. For a login shell, it attempts to source only `/etc/profile' and `~/.profile', in that order. The `-noprofile' option may still be used to disable this behavior. A shell invoked as `sh' does not attempt to source any other startup files.
When Bash is started in POSIX mode, as with the `-posix' command line option, it follows the Posix 1003.2 standard for startup files. In this mode, the `ENV' variable is expanded and that file sourced; no other startup files are read.
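A quick way to observe the non-interactive case described above (a minimal sketch; the file path /tmp/bashenv-demo is arbitrary):

echo 'echo "BASH_ENV file was sourced"' > /tmp/bashenv-demo
BASH_ENV=/tmp/bashenv-demo bash -c 'true'   # prints the message before running true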
My .bashrc can be found here.
My .bash_profile can be found here.
.inputrc (readline)
Although the Readline library comes with a set of Emacs-like key bindings installed by default, it is possible that you would like to use a different set of keybindings. You can customize programs that use Readline by putting commands in an "init" file in your home directory. The name of this file is taken from the value of the shell variable `INPUTRC'. If that variable is unset, the default is `~/.inputrc'.
When a program which uses the Readline library starts up, the init file is read, and the key bindings are set.
In addition, the `C-x C-r' command re-reads this init file, thus incorporating any changes that you might have made to it.
Conditional Init Constructs within readline
Readline implements a facility similar in spirit to the conditional compilation features of the C preprocessor which allows key bindings and variable settings to be performed as the result of tests. There are three parser directives used.
`$if' The `$if' construct allows bindings to be made based on the editing mode, the terminal being used, or the application using Readline. The text of the test extends to the end of the line; no characters are required to isolate it.

`mode' The `mode=' form of the `$if' directive is used to test whether Readline is in `emacs' or `vi' mode. This may be used in conjunction with the `set keymap' command, for instance, to set bindings in the `emacs-standard' and `emacs-ctlx' keymaps only if Readline is starting out in `emacs' mode.

`term' The `term=' form may be used to include terminal-specific key bindings, perhaps to bind the key sequences output by the terminal's function keys. The word on the right side of the `=' is tested against the full name of the terminal and the portion of the terminal name before the first `-'. This allows SUN to match both SUN and SUN-CMD, for instance.

`application' The application construct is used to include application-specific settings. Each program using the Readline library sets the application name, and you can test for it. This could be used to bind key sequences to functions useful for a specific program.

`$endif' This command terminates an `$if' command.

`$else' Commands in this branch of the `$if' directive are executed if the test fails.

The following command adds a key sequence that quotes the current or previous word in Bash:
$if bash
# Quote the current or previous word
"\C-xq": "\eb\"\ef\""
$endif
My .inputrc file is here
Last update by Hermann Heimhardt on October 7, 2001
From: Eli
Subject: Patch against 2.05 Bash to make cd take 2 args like ksh
Date: Fri, 22 Jun 2001 14:38:17 -0400
*** origBash/bash-2.05/builtins/cd.def  Wed Oct 11 11:10:20 2000
--- bash-2.05/builtins/cd.def  Fri Jun 22 14:31:08 2001
***************
*** 187,192 ****
--- 187,225 ----
        }
        lflag = interactive ? LCD_PRINTPATH : 0;
      }
+   else if (list->next)
+     {
+       /* if next then 2 args, so replace in PWD arg1 with arg2 */
+       int beginLen, oldLen, newLen, endLen;
+       char *replace;
+       path = get_string_value ("PWD");
+       if ((replace = strstr (path, list->word->word)) == (char *)0)
+         {
+           builtin_error ("Couldn't find arg1 in PWD");
+           return (EXECUTION_FAILURE);
+         }
+       beginLen = replace - path;
+       oldLen = strlen (list->word->word);
+       newLen = strlen (list->next->word->word);
+       endLen = strlen (path + beginLen + oldLen) + 1;
+
+       dirname = xmalloc (beginLen + newLen + endLen);
+       /* copy path up to beginning of string to replace */
+       memcpy (dirname, path, beginLen);
+
+       /* then add new replacement string */
+       memcpy (dirname + beginLen, list->next->word->word, newLen);
+
+       /* finally add end of path after replacement */
+       memcpy (dirname + beginLen + newLen, path + beginLen + oldLen, endLen);
+
+       printf ("%s\n", dirname);
+       if (change_to_directory (dirname, no_symlinks))
+         {
+           free (dirname);
+           return (bindpwd (posixly_correct || no_symlinks));
+         }
+     }
    else if (absolute_pathname (list->word->word))
      dirname = list->word->word;
    else if (cdpath = get_string_value ("CDPATH"))
kcd is a directory change utility under Linux or any other Unix clones. It helps you navigate the directory tree. You can supply the desired directory name in the command line and let kcd find it for you or let kcd show the entire directory tree and use arrow keys to go to the destination directory.
Here is a list some features available in kcd:
- Fast directory rescanning. Directory timestamps are saved, so unchanged directories do not need rescanning.
- When you supply a directory on the command line and kcd finds too many matches, it shows all of them and lets you select one using the cursor keys.
- You can tell kcd to skip certain directories. You can also choose whether to scan the whole directory tree, only your home directory, etc. These options are set in the kcd configuration file.
- Supports bash, ash, pdksh, zsh and tcsh.
- Multiple configuration profiles.
- Priority directory matching via bookmark.
- Fuzzy directory searching (Contributed by Robert Sandilands).
- Supports UTF-8 Unicode encoding with combining characters.
- Supports localization.
- Default, vi, and emacs key binding modes.
- Partial directory tree display.
- Display directory tree without saved data.
kcd is available in stable and development versions, which you can distinguish by version number: beginning with version 5.0.0, any version x.y.z where y is even is a stable version, and any where y is odd is a development version. Features currently present in the development version will eventually appear in the future stable version 8.0.0.
kcd is distributed in source form under the General Public License (GPL).
The program and this web page are maintained by Kriang Lerdsuwanakij
Wcd is a program to change directory fast. It saves time typing at the keyboard. One needs to type only a part of a directory name and wcd will jump to it. By default wcd searches for a directory with a name that begins with what has been typed, but the use of wildcards is also fully supported.
For instance:
wcd Desk
will change to directory /home/waterlan/Desktop.
But also
wcd *top
will do that.
Wcd is free to use and you can get the source code too.
Some features of wcd:
- Full screen interactive directory browser with speed search.
- Presents the user a list in case of multiple matches.
- Wildcards *, ? and [SET] supported.
- Directory stack (push/pop).
- Subdir definition possible, e.g. wcd subdira/subdirb
- Long directory names supported in the Win95/98/NT DOS-box.
- Windows LAN UNC paths supported.
- Change drive and directory at once.
- Alias directories.
- Ban directories.
- 'cd' behaviour.
- Free portable source code, no special libraries required.
- Multi-platform: DOS 16 bit, DOS 32 bit, DOS bash, Windows 3.1/95/NT DOS-box, Cygwin bash, Unix ksh, csh, bash and zsh.

Wcd has been tested on: FreeDOS, MS-DOS 6.2, Win95, Win98, Windows NT 4.0, Linux, FreeBSD, HP-UX, SunOS, Solaris, SGI IRIX. Wcd works on any PC and can be ported to any Unix system.
WCD is free software, distributed under GNU General Public License.
cc. The [[ ... ]] command has a new binary `=~' operator that performs extended regular expression (egrep-like) matching.
l. New invocation option: --debugger. Enables debugging and turns on new `extdebug' shell option.
f. HISTCONTROL may now include the `erasedups' option, which causes all lines matching a line being added to be removed from the history list.
j. for, case, select, arithmetic commands now keep line number information for the debugger.
p. `declare -F' now prints out extra line number and source file information if the `extdebug' option is set.
r. New `caller' builtin to provide a call stack for the bash debugger.
t. `for', `select', and `case' command heads are printed when `set -x' is enabled.
u. There is a new {x..y} brace expansion, which is shorthand for {x, x+1, x+2, ..., y}. x and y can be integers or single characters; the sequence may ascend or descend; the increment is always 1.
v. New ksh93-like ${!array[@]} expansion, expands to all the keys (indices) of array.
z. New `-o plusdirs' option to complete and compgen; if set, causes directory name completion to be performed and the results added to the rest of the possible completions.
ee. Subexpressions matched by the =~ operator are placed in the new BASH_REMATCH array variable.
gg. New `set -o pipefail' option that causes a pipeline to return a failure status if any of the processes in the pipeline fail, not just the last one.
kk. The `\W' prompt expansion now abbreviates $HOME as `~', like `\w'.
ll. The error message printed when bash cannot open a shell script supplied as argument 1 now includes the name of the shell, to better identify the error as coming from bash.
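A few of the features above in action; a minimal sketch assuming bash 3.0 or later:

#!/bin/bash
set -o pipefail                     # pipeline now fails if any stage fails
re='([a-z]+)-([0-9.]+)'
if [[ "bash-3.0" =~ $re ]]; then    # =~ does egrep-style matching
    echo "name=${BASH_REMATCH[1]} version=${BASH_REMATCH[2]}"
fi
echo {1..5}                         # brace expansion: 1 2 3 4 5
echo {a..e}                         # a b c d e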
2. New Features in Readline
a. History expansion has a new `a' modifier equivalent to the `g' modifier for compatibility with the BSD csh.
b. History expansion has a new `G' modifier equivalent to the BSD csh `g'
modifier, which performs a substitution once per word.
c. All non-incremental search operations may now undo the operation of replacing the current line with the history line.
d. The text inserted by an `a' command in vi mode can be reinserted with `.'.
e. New bindable variable, `show-all-if-unmodified'. If set, the readline completer will list possible completions immediately if there is more than one completion and partial completion cannot be performed.
g. History list entries now contain timestamp information; the history file functions know how to read and write timestamp information associated with each entry.
n. When listing completions, directories have a `/' appended if the `mark-directories' option has been enabled.
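Items e and n above correspond to ordinary ~/.inputrc settings; a minimal sketch (both are real readline variables):

set show-all-if-unmodified on
set mark-directories on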
Not much changed (Score:5, Insightful)
by opk (149665) on Thursday July 29, @10:28AM (#9831965)
Doesn't seem to be much changed given the version number increase. [[ =~ ]] can match regexes and it can do zsh-style {1..3} expansions. Improved multibyte support too. There were bigger changes in some of the 2.0x updates.
- Re:First "zsh rules" post! by Anonymous Coward (Score:3) Thursday July 29, @10:30AM
Re:First "zsh rules" post! (Score:5, Informative)
by opk (149665) on Thursday July 29, @10:46AM (#9832230)
Globs are more powerful: **/*.c will recursively search for .c files: much quicker to type than find.
You can match file types: e.g. *(@) will get you symlinks. *(U) gets files owned by you.
Syntax for alternation is a lot easier. No @(this|that) or !(*.f). Instead, it is (this|that) and ^*.f
Next point is completion. It includes a vast range of definitions so completion works well for lots of commands. The completion system handles completing parts of words so it better handles user@host completion. You get descriptions with completion match listings. Completion also has a really powerful context sensitive configuration system so you can make it work the way you like.
It has modules. For running a simple shell script it will actually use less space than bash because it doesn't need to load the line editor and other interactive related code into memory.
There is much much more. It takes a while to learn everything but if you just enable the completion functions (autoload -U compinit; compinit) you'll find it better than bash or tcsh from day 1.
Re:Just wondering... (Score:5, Informative)
by opk (149665) on Thursday July 29, @11:05AM (#9832448)
Zsh is still the best. Bash developers have different priorities.
Bash became the default primarily because it is GNU.
Zsh has some ugly but powerful features like nested expansions. The two areas where bash is better than zsh are multibyte support and POSIX compliance. Much of that was contributed by IBM and Apple respectively. But if you use the shell a lot, you'll find zsh does a lot of things better. The completion is amazing. And when it isn't emulating sh/posix, it fixes some of the broken design decisions (like word splitting of variables) which saves you from doing stupid things.
The FSF actually does development in a very closed manner when it can (the gcc/egcs split was partly because of this). Bash is a good example of this. That is perhaps a good thing, because it is probably good that bash doesn't get some of zsh's nasty (but powerful) features. And if zsh didn't exist, bash might have been forked by now. If you care about your shell, you'll find much more of a community on the zsh lists than the spam-filled bug-bash list. You can't even get at alpha releases of bash without being one of the chosen few.
Can arrow key history be like Matlab's? (Score:3, Interesting)
by dara (119068) on Thursday July 29, @10:54AM (#9832323)
I read the announcement and it mentions "History traversal with arrow keys", but what I would really like doesn't seem to be mentioned (though perhaps it is possible with bash-2.05; I'm not much of a shell expert). In Matlab, the up-arrow key searches the history for commands that match all the characters on the line. With no characters it acts like a normal arrow key; if "figure, plot" is at the beginning of the line, it will quickly scroll through all plotting commands that have been entered at the shell. Any idea if this is possible?
Dara Parsavand
Re:Can arrow key history be like Matlab's? (Score:2)
by Atzanteol (99067) on Thursday July 29, @11:10AM (#9832500)
(http://www.edespot.com/~amackenz/)
Try 'ctrl+r'. Not *exactly* what you're looking for, but it lets you search through your history, i.e. on an empty line hit ctrl+r, then start typing.
- It's not necessary to start on an empty line [n/t] by piranha(jpl) (Score:1) Thursday July 29, @07:25PM
Re:Can arrow key history be like Matlab's? (Score:3, Informative)
by Anonymous Coward on Thursday July 29, @12:38PM (#9833770)
cat >> ~/.inputrc
"\e[A": history-search-backward
"\e[B": history-search-forward
^D
Re:Can arrow key history be like Matlab's? (Score:1, Informative)
by Anonymous Coward on Thursday July 29, @01:17PM (#9834391)
Here you are, put this in .inputrc:
"\e[6~": history-search-forward
"\e[5~": history-search-backward
and use page-up and page-down to search.
I can live without it since 4DOS.
Autocompletion like W2K? (Score:2)
by tstoneman (589372) on Thursday July 29, @01:12PM (#9834316)
I like bash, but the one thing that it doesn't support (out of the box, anyway) is auto-completion a la W2K. In NT, when you hit Tab, you can cycle through all the words that can complete the letters you typed; on bash, it shows you a list. Is there a way to make bash behave more like W2K in this sense?
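For what it's worth, readline's menu-complete function gives exactly this cmd.exe-style cycling; a minimal sketch, either as a line in ~/.inputrc:

TAB: menu-complete

or, equivalently, as a bind command in ~/.bashrc:

bind '"\t": menu-complete'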
bash completion getting better (Score:3, Informative)
by AT (21754) on Thursday July 29, @11:08AM (#9832478)
The completion ability of bash has been steadily improving. There is a nice script here [caliban.org] that sets up a lot of good completion rules for bash.
Re:First "zsh rules" post! (Score:2)
by Just Some Guy (3352) <`kirk+slashdot' `at' `strauser.com'> on Thursday July 29, @05:30PM (#9837797)
(http://subwiki.honeypot.net/ | Last Journal: Monday September 27, @09:09AM)
That's just... I mean... Wow. Really. I just patched my .zshenv with
- export SSH_AUTH_SOCK=$(find 2>/dev/null /tmp -maxdepth 2 -type s -user $USER -regex '/tmp/ssh-.*/agent\..*')
+ export SSH_AUTH_SOCK=$(echo /tmp/ssh-*/agent.*(=UN))
That's sweet. Thanks for the pointer! The more I learn about zsh, the more I love it.
Re:First "zsh rules" post! (Score:1)
by sorbits (516598) on Saturday July 31, @01:09PM (#9853305)
(http://macromates.com/)
I will certainly give it a try then! Until now I have stuck with tcsh for one single reason: history substitution [go.dlr.de]!
Basically it lets me insert text from my history (including the current line) using a few symbols (e.g. !$ is the last argument of the previous line) -- it's extremely powerful: it allows searching the history and can do substitutions on the results, or take the head/tail of paths, etc.
I use it a lot to keep down the number of characters I need to type, and I have even assigned hotkeys to some of the substitutions I use the most.
This is really the make-or-break feature for whether or not I want to use a shell, so I really hope zsh has something similar!?!
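bash inherits the same csh-style history expansion (see the `!' csh-style history expansion entry in the feature list below), so most of the common designators work there too; a few illustrative lines:

$ mkdir -p /tmp/project/src
$ cd !$             # !$ = last argument of the previous command
$ ls -l /etc/passwd
$ ^passwd^group^    # rerun the previous command with passwd replaced by group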
Re:First "zsh rules" post! (Score:5, Informative)
by Just Some Guy (3352) <`kirk+slashdot' `at' `strauser.com'> on Thursday July 29, @10:50AM (#9832279)
(http://subwiki.honeypot.net/ | Last Journal: Monday September 27, @09:09AM)
Big ones for me:
- A sane auto-completion system. That is, "cvs <tab>" gives a list of all of the commands that cvs understands. "cvs -<tab>" (same as above but tabbing after typing "-") gives a list of all of the options that cvs understands. These are good things. Now, in fairness, bash also has a command completion library. Unfortunately, it's implemented as a huge set of Bash functions. In zsh, "set|wc" returns 179 lines. In bash, "set|wc" returns 3,961 lines. The net effect is that zsh's system is noticeably faster and less polluting to the local environment.
- Modules. Wrappers for TCP connections, a built-in cron thingy, and PCRE are all loadable modules to do tricky things easily.
- Lots of pre-defined things. Load the "colors" and "zsh/terminfo" modules and you get defined associative arrays like $fg, which emits terminal-appropriate escape codes to set the foreground color of printed text. The command "echo ${fg[red]}red text${fg[default]}normal text" prints "red text" in red, and "normal text" in your default color.
Bash is a good shell, and I have nothing bad to say about it. However, zsh seems to have been designed from the ground up by power users and for power users. I absolutely love it, and everyone that I've given an example config file to (to get them running with little hassle) has permanently switched.
Re:First "zsh rules" post! (Score:5, Informative)
by Just Some Guy (3352) <`kirk+slashdot' `at' `strauser.com'> on Thursday July 29, @11:21AM (#9832638)
(http://subwiki.honeypot.net/ | Last Journal: Monday September 27, @09:09AM)
As the maintainer of FreeBSD's bash-completion [freshports.org] port, I'm reasonably familiar with it. Yes, it's approximately as powerful as zsh's completion module. Still, have you ever looked at it? It's a giant set of defined functions and glue. Seriously, get to a bash prompt and type "set" to see all of the things that've been stuffed into your shell's namespace. Now, try that with zsh and be pleasantly surprised. As I said in another post, a big side effect is that zsh's completions seem to be much faster than bash's. That alone is worth the price of admission for me.
Re:Dear Apple haters... (Score:5, Informative)
by Jahf (21968) on Thursday July 29, @10:58AM (#9832361)
(Last Journal: Thursday August 05, @03:55PM)
Believe it or not, -most- of the large companies that use GPL'ed tools give back to the community. Apple has done numerous fixes, not just on BASH.
Sun (disclaimer: for whom I work) has done -tons- of work on GNOME, Mozilla and don't forget Open Office (just to name a few).
IBM works on many projects and gives back
... plus contributing all new things like JFS. All the distro makers like Red Hat, Novell, etc give back tons.
Each of those companies pay engineers to fix pieces not done in Open Source projects as well as to extend them for their customers. The patches are covered under GPL just like the main code, and these companies know it and yet knowingly dedicate serious money and hours to these projects. And then they satisfy the GPL by putting them out on source CDs or submitting them back to the main projects.
The big problem for getting submitted code accepted is that these companies are usually fixing and developing on a codebase that is aging. For instance, Sun did numerous I18N fixes for GNOME 2.6, but by the time they were ready the main GNOME organization had moved on to 2.8. That means there is a disconnect between the two and the changes have to be ported forward before they will hit the main code branch. The same problem can happen with kernel patches and just about any other codebase that changes versions so quickly.
Sorry, you were doing the good thing and pointing out Apple's contributions. But so many people think these companies violate the GPL (in spirit if not in law) when they are very large contributors to open source. Sure, some do, and the community usually find out about it and shame them into minimal compliance (Linksys and Sveasoft come to mind after my delving into alternate WRT54G firmwares last night), but generally speaking the big companies have been a good part of the community.
Re:On the list of changes: (Score:5, Informative)
by Prowl (554277) on Thursday July 29, @11:12AM (#9832526)
GNU or Unix would seem to be the most appropriate; bash has been around since 1989 (according to the copyright on the man page). Linux 1.0 came around 5 years later.
The editors should know better, unless they're intentionally trying to piss off RMS
Looks great, but prefer Ash for scripts (Score:3, Interesting)
by Etcetera (14711) * <<cleaver> <at> <rohan.sdsu.edu>> on Thursday July 29, @10:41AM (#9832171)
(http://www-rohan.sdsu.edu/~cleaver/software/)
Looks like a nice Unicode-savvy release that should help with dealing with international languages at the command line. And yay to Apple for giving back (again). When will people finally accept that Apple is indeed helping out the OSS community through GCC, bash, and other tools...?
Kind of off-topic, but for speed purposes in scripts that have to run fast, I find nothing better or more convenient than Ash, especially on systems where /bin/sh is a symlink to /bin/bash. Does anyone know any history on this shell? Is it a clone of the original Bourne shell or of bash? I can't seem to find anything useful on Google...
Re:Looks great, but prefer Ash for scripts (Score:2)
by mihalis (28146) on Thursday July 29, @10:52AM (#9832301)
(http://www.mihalis.net)
As I understand it, ash was written by Kenneth Almquist. I used to see his name on some of the Ada-related mailing lists and newsgroups.
Re:Looks great, but prefer Ash for scripts (Score:2, Informative)
by Stephen Williams (23750) on Thursday July 29, @11:46AM (#9832931)
(http://nysa.cx/journal/ | Last Journal: Thursday December 05, @05:02AM)
Ash (or dash, as it's called nowadays) is a Linux port of NetBSD's /bin/sh. It's a POSIX shell with separate lineage from bash. http://gondor.apana.org.au/~herbert/dash/
It's /bin/sh on my system too. Faster and smaller than bash; watch those configure scripts fly!
-Stephen
Some people don't agree. (Score:2)
by emil (695) on Thursday July 29, @12:48PM (#9833940)
(http://rhadmin.org/)
Ash appears to consume large amounts of memory, and some people in BSD circles have serious objections to it. See the discussion here [undeadly.org] (scroll down a bit into the postings). I don't have an opinion on the issue one way or another.
And can I set up bash so I can, for instance, move from rc2.d to rc3.d by typing
$ cd 2 3
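Stock bash has no two-argument cd, but the ${var/pattern/string} expansion gets close; a minimal sketch (the function name cd2 is arbitrary, and only the first occurrence is replaced):

cd2() { cd "${PWD/$1/$2}"; }   # replace first occurrence of $1 in $PWD with $2
# From /etc/rc2.d, 'cd2 2 3' changes to /etc/rc3.d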
BASH Debugger provides a patched BASH that enables better debugging support as well as improved error reporting. It also contains the most comprehensive source code debugger for BASH that has been written, and can be used as a springboard for other experimental features (such as a timestamped history file).
Freeware for Solaris -- downloadable version of bash for Solaris on SPARC and Intel
- Solaris 8 Software companion disk - Download
- The Bourne-Again Shell -- bash homepage
- Bash Features -- explained from what shell each feature originated and how it was extended and integrated. Recommended.
[Jul 07, 2020] The Missing Readline Primer by Ian Miell Published on Jul 07, 2020 | zwischenzugs.com
[Jul 05, 2020] Learn Bash the Hard Way by Ian Miell [Leanpub PDF-iPad-Kindle] Published on Jul 05, 2020 | leanpub.com
[Jul 04, 2020] Eleven bash Tips You Might Want to Know by Ian Miell Published on Jul 04, 2020 | zwischenzugs.com
[Jul 04, 2020] Learn Bash Debugging Techniques the Hard Way by Ian Miell Published on Jul 04, 2020 | zwischenzugs.com
[Jul 02, 2020] 7 Bash history shortcuts you will actually use by Ian Miell Published on Oct 02, 2019 | opensource.com
[Aug 14, 2019] linux - How to get PID of background process - Stack Overflow Published on Aug 14, 2019 | stackoverflow.com
[Jan 26, 2019] Shell startup script order of execution Published on Jan 26, 2019 | flowblok.id.au
[Jan 26, 2019] Ten Things I Wish I'd Known About bash Published on Jan 06, 2018 | zwischenzugs.com
[Jul 25, 2017] Beginner Mistakes Published on Jul 25, 2017 | wiki.bash-hackers.org
Please visit Heiner Steven's SHELLdorado, the best shell scripting site on the Internet
Path | Description | X-ref |
---|---|---|
./bashdb | Deprecated sample implementation of a bash debugger. | |
./complete | Shell completion code. | |
./functions | Example functions. | |
./functions/array-stuff | Various array functions (ashift, array_sort, reverse). | |
./functions/array-to-string | Convert an array to a string. | |
./functions/autoload | An almost ksh-compatible 'autoload' (no lazy load). | ksh |
./functions/autoload.v2 | An almost ksh-compatible 'autoload' (no lazy load). | ksh |
./functions/autoload.v3 | A more ksh-compatible 'autoload' (with lazy load). | ksh |
./functions/basename | A replacement for basename(1). | basename |
./functions/basename2 | Fast basename(1) and dirname(1) functions for bash/sh. | basename, dirname |
./functions/coproc.bash | Start, control, and end co-processes. | |
./functions/coshell.bash | Control shell co-processes (see coprocess.bash). | |
./functions/coshell.README | README for coshell and coproc. | |
./functions/csh-compat | A C-shell compatibility package. | csh |
./functions/dirfuncs | Directory manipulation functions from the book The Korn Shell. | |
./functions/dirname | A replacement for dirname(1). | dirname |
./functions/emptydir | Find out if a directory is empty. | |
./functions/exitstat | Display the exit status of processes. | |
./functions/external | Like command, but forces the use of external command. | |
./functions/fact | Recursive factorial function. | |
./functions/fstty | Front-end to sync TERM changes to both stty(1) and readline 'bind'. | stty.bash |
./functions/func | Print out definitions for functions named by arguments. | |
./functions/gethtml | Get a web page from a remote server (wget(1) in bash). | |
./functions/getoptx.bash | getopt function that parses long-named options. | |
./functions/inetaddr | Internet address conversion (inet2hex and hex2inet). | |
./functions/inpath | Return zero if the argument is in the path and executable. | inpath |
./functions/isnum.bash | Test user input on numeric or character value. | |
./functions/isnum2 | Test user input on numeric values, with floating point. | |
./functions/isvalidip | Test user input for valid IP addresses. | |
./functions/jdate.bash | Julian date conversion. | |
./functions/jj.bash | Look for running jobs. | |
./functions/keep | Try to keep some programs in the foreground and running. | |
./functions/ksh-cd | ksh-like cd: cd [-LP] [dir[change]]. | ksh |
./functions/ksh-compat-test | ksh-like arithmetic test replacements. | ksh |
./functions/kshenv | Functions and aliases to provide the beginnings of a ksh environment for bash | ksh |
./functions/login | Replace the login and newgrp built-ins in old Bourne shells. | |
./functions/lowercase | Rename files to lowercase. | rename lower |
./functions/manpage | Find and print a manpage. | fman |
./functions/mhfold | Print MH folders, useful only because folders(1) doesn't print mod date/times. | |
./functions/notify.bash | Notify when jobs change status. | |
./functions/pathfuncs | Path related functions (no_path, add_path, pre-path, del_path). | path |
./functions/README | README | |
./functions/recurse | Recursive directory traverser. | |
./functions/repeat2 | A clone of the C shell built-in repeat. | repeat, csh |
./functions/repeat3 | A clone of the C shell built-in repeat. | repeat, csh |
./functions/seq | Generate a sequence from m to n;m defaults to 1. | |
./functions/seq2 | Generate a sequence from m to n;m defaults to 1. | |
./functions/shcat | Readline-based pager. | cat, readline pager |
./functions/shcat2 | Readline-based pagers. | cat, readline pager |
./functions/sort-pos-params | Sort the positional parameters. | |
./functions/substr | A function to emulate the ancient ksh built-in. | ksh |
./functions/substr2 | A function to emulate the ancient ksh built-in. | ksh |
./functions/term | A shell function to set the terminal type interactively or not. | |
./functions/whatis | An implementation of the 10th Edition Unix sh built-in whatis(1) command. | |
./functions/whence | An almost ksh-compatible whence(1) command. | |
./functions/which | An emulation of which(1) as it appears in FreeBSD. | |
./functions/xalias.bash | Convert csh alias commands to bash functions. | csh, aliasconv |
./functions/xfind.bash | A find(1) clone. | |
./loadables/ | Example loadable replacements. | |
./loadables/basename.c | Return nondirectory portion of pathname. | basename |
./loadables/cat.c | cat(1) replacement with no options-the way cat was intended. | cat, readline pager |
./loadables/cut.c | cut(1) replacement. | |
./loadables/dirname.c | Return directory portion of pathname. | dirname |
./loadables/finfo.c | Print file info. | |
./loadables/getconf.c | POSIX.2 getconf utility. | |
./loadables/getconf.h | Replacement definitions for ones the system doesn't provide. | |
./loadables/head.c | Copy first part of files. | |
./loadables/hello.c | Obligatory "Hello World" / sample loadable. | |
./loadables/id.c | POSIX.2 user identity. | |
./loadables/ln.c | Make links. | |
./loadables/logname.c | Print login name of current user. | |
./loadables/Makefile.in | Simple makefile for the sample loadable built-ins. | |
./loadables/mkdir.c | Make directories. | |
./loadables/necho.c | echo without options or argument interpretation. | |
./loadables/pathchk.c | Check pathnames for validity and portability. | |
./loadables/print.c | Loadable ksh-93 style print built-in. | |
./loadables/printenv.c | Minimal built-in clone of BSD printenv(1). | |
./loadables/push.c | Anyone remember TOPS-20? | |
./loadables/README | README | |
./loadables/realpath.c | Canonicalize pathnames, resolving symlinks. | |
./loadables/rmdir.c | Remove directory. | |
./loadables/sleep.c | Sleep for fractions of a second. | |
./loadables/strftime.c | Loadable built-in interface to strftime(3). | |
./loadables/sync.c | Sync the disks by forcing pending filesystem writes to complete. | |
./loadables/tee.c | Duplicate standard input. | |
./loadables/template.c | Example template for loadable built-in. | |
./loadables/truefalse.c | True and false built-ins. | |
./loadables/tty.c | Return terminal name. | |
./loadables/uname.c | Print system information. | |
./loadables/unlink.c | Remove a directory entry. | |
./loadables/whoami.c | Print out username of current user. | |
./loadables/perl/ | Illustrates how to build a Perl interpreter into bash. | |
./misc | Miscellaneous | |
./misc/aliasconv.bash | Convert csh aliases to bash aliases and functions. | csh, xalias |
./misc/aliasconv.sh | Convert csh aliases to bash aliases and functions. | csh, xalias |
./misc/cshtobash | Convert csh aliases, environment variables, and variables to bash equivalents. | csh, xalias |
./misc/README | README | |
./misc/suncmd.termcap | SunView TERMCAP string. | |
./obashdb | Modified version of the Korn Shell debugger from Bill Rosenblatt's Learning the Korn Shell. | |
./scripts.noah | Noah Friedman's collection of scripts (updated to bash v2 syntax by Chet Ramey). | |
./scripts.noah/aref.bash | Pseudo-arrays and substring indexing examples. | |
./scripts.noah/bash.sub.bash | Library functions used by require.bash. | |
./scripts.noah/bash_version.bash | A function to slice up $BASH_VERSION. | |
./scripts.noah/meta.bash | Enable and disable eight-bit readline input. | |
./scripts.noah/mktmp.bash | Make a temporary file with a unique name. | |
./scripts.noah/number.bash | A fun hack to translate numerals into English. | |
./scripts.noah/PERMISSION | Permissions to use the scripts in this directory. | |
./scripts.noah/prompt.bash | A way to set PS1 to some predefined strings. | |
./scripts.noah/README | README | |
./scripts.noah/remap_keys.bash | A front end to bind to redo readline bindings. | readline |
./scripts.noah/require.bash | Lisp-like require/provide library functions for bash. | |
./scripts.noah/send_mail. | Replacement SMTP client written in bash. | |
./scripts.noah/shcat.bash | bash replacement for cat(1). | cat |
./scripts.noah/source.bash | Replacement for source that uses current directory. | |
./scripts.noah/string.bash | The string(3) functions at the shell level. | |
./scripts.noah/stty.bash | Front-end to stty(1) that changes readline bindings too. | fstty |
./scripts.noah/y_or_n_p.bash | Prompt for a yes/no/quit answer. | ask |
./scripts.v2 | John DuBois' ksh script collection (converted to bash v2 syntax by Chet Ramey). | |
./scripts.v2/arc2tarz | Convert an arc archive to a compressed tar archive. | |
./scripts.v2/bashrand | Random number generator with upper and lower bounds and optional seed. | random |
./scripts.v2/cal2day.bash | Convert a day number to a name. | |
./scripts.v2/cdhist.bash | cd replacement with a directory stack added. | |
./scripts.v2/corename | Tell what produced a core file. | |
./scripts.v2/fman | Fast man(1) replacement. | manpage |
./scripts.v2/frcp | Copy files using ftp(1) but with rcp-type command-line syntax. | |
./scripts.v2/lowercase | Change filenames to lowercase. | rename lower |
./scripts.v2/ncp | A nicer front end for cp(1) (has -i, etc.). | |
./scripts.v2/newext | Change the extension of a group of files. | rename |
./scripts.v2/nmv | A nicer front end for mv(1) (has -i, etc.). | rename |
./scripts.v2/pages | Print specified pages from files. | |
./scripts.v2/PERMISSION | Permissions to use the scripts in this directory. | |
./scripts.v2/pf | A pager front end that handles compressed files. | |
./scripts.v2/pmtop | Poor man's top(1) for SunOS 4.x and BSD/OS. | |
./scripts.v2/README | README | |
./scripts.v2/ren | Rename files by changing parts of filenames that match a pattern. | rename |
./scripts.v2/rename | Change the names of files that match a pattern. | rename |
./scripts.v2/repeat | Execute a command multiple times. | repeat |
./scripts.v2/shprof | Line profiler for bash scripts. | |
./scripts.v2/untar | Unarchive a (possibly compressed) tarfile into a directory. | |
./scripts.v2/uudec | Carefully uudecode(1) multiple files. | |
./scripts.v2/uuenc | uuencode(1) multiple files. | |
./scripts.v2/vtree | Print a visual display of a directory tree. | tree |
./scripts.v2/where | Show where commands that match a pattern are. | |
./scripts | Example scripts. | |
./scripts/adventure.sh | Text adventure game in bash! | |
./scripts/bcsh.sh | Bourne shell's C shell emulator. | csh |
./scripts/cat.sh | Readline-based pager. | cat, readline pager |
./scripts/center | Center a group of lines. | |
./scripts/dd-ex.sh | Line editor using only /bin/sh, /bin/dd, and /bin/rm. | |
./scripts/fixfiles.bash | Recurse a tree and fix files containing various bad characters. | |
./scripts/hanoi.bash | The inevitable Towers of Hanoi in bash. | |
./scripts/inpath | Search $PATH for a file the same name as $1; return TRUE if found. | inpath |
./scripts/krand.bash | Produces a random number within integer limits. | random |
./scripts/line-input.bash | Line input routine for GNU Bourne Again Shell plus terminal-control primitives. | |
./scripts/nohup.bash | bash version of nohup command. | |
./scripts/precedence | Test relative precedences for && and || operators. | |
./scripts/randomcard.bash | Print a random card from a card deck. | random |
./scripts/README | README | |
./scripts/scrollbar | Display scrolling text. | |
./scripts/scrollbar2 | Display scrolling text. | |
./scripts/self-repro | A self-reproducing script (careful!). | |
./scripts/showperm.bash | Convert ls(1) symbolic permissions into octal mode. | |
./scripts/shprompt | Display a prompt and get an answer satisfying certain criteria. | ask |
./scripts/spin.bash | Display a spinning wheel to show progress. | |
./scripts/timeout | Give rsh(1) a shorter timeout. | |
./scripts/vtree2 | Display a tree printout of the directory with disk use in 1k blocks. | tree |
./scripts/vtree3 | Display a graphical tree printout of dir. | tree |
./scripts/vtree3a | Display a graphical tree printout of dir. | tree |
./scripts/websrv.sh | A web server in bash! | |
./scripts/xterm_title | Print the contents of the xterm title bar. | |
./scripts/zprintf | Emulate printf (obsolete since printf is now a bash built-in). | |
./startup-files | Example startup files. | |
./startup-files/Bash_aliases | Some useful aliases (written by Fox). | |
./startup-files/Bash_profile | Sample startup file for bash login shells (written by Fox). | |
./startup-files/bash-profile | Sample startup file for bash login shells (written by Ramey). | |
./startup-files/bashrc | Sample Bourne Again Shell init file (written by Ramey). | |
./startup-files/Bashrc.bfox | Sample Bourne Again Shell init file (written by Fox). | |
./startup-files/README | README | |
./startup-files/apple | Example startup files for Mac OS X. | |
./startup-files/apple/aliases | Sample aliases for Mac OS X. | |
./startup-files/apple/bash.defaults | Sample User preferences file. | |
./startup-files/apple/environment | Sample Bourne Again Shell environment file. | |
./startup-files/apple/login | Sample login wrapper. | |
./startup-files/apple/logout | Sample logout wrapper. | |
./startup-files/apple/rc | Sample Bourne Again Shell config file. | |
./startup-files/apple/README | README |
Things bash has or uses that ksh88 does not:
long invocation options
[-+]O invocation option
`!' reserved word
arithmetic for command: for ((expr1 ; expr2; expr3 )); do list; done
posix mode and posix conformance
command hashing
tilde expansion for assignment statements that look like $PATH
process substitution with named pipes if /dev/fd is not available
the ${!param} indirect parameter expansion operator
the ${!param*} prefix expansion operator
the ${param:offset[:length]} parameter substring operator
the ${param/pat[/string]} parameter pattern substitution operator
variables: BASH, BASH_VERSION, BASH_VERSINFO, UID, EUID, SHLVL,
TIMEFORMAT, HISTCMD, HOSTTYPE, OSTYPE, MACHTYPE,
HISTFILESIZE, HISTIGNORE, HISTCONTROL, PROMPT_COMMAND,
IGNOREEOF, FIGNORE, INPUTRC, HOSTFILE, DIRSTACK,
PIPESTATUS, HOSTNAME, OPTERR, SHELLOPTS, GLOBIGNORE,
GROUPS, FUNCNAME, histchars, auto_resume
prompt expansion with backslash escapes and command substitution
redirection: &> (stdout and stderr)
more extensive and extensible editing and programmable completion
builtins: bind, builtin, command, declare, dirs, echo -e/-E, enable,
exec -l/-c/-a, fc -s, export -n/-f/-p, hash, help, history,
jobs -x/-r/-s, kill -s/-n/-l, local, logout, popd, pushd,
read -e/-p/-a/-t/-n/-d/-s, readonly -a/-n/-f/-p,
set -o braceexpand/-o histexpand/-o interactive-comments/
-o notify/-o physical/-o posix/-o hashall/-o onecmd/
-h/-B/-C/-b/-H/-P, set +o, suspend, trap -l, type,
typeset -a/-F/-p, ulimit -u, umask -S, alias -p, shopt,
disown, printf, complete, compgen
`!' csh-style history expansion
POSIX.2-style globbing character classes
POSIX.2-style globbing equivalence classes
POSIX.2-style globbing collating symbols
egrep-like extended pattern matching operators
case-insensitive pattern matching and globbing
`**' arithmetic operator to do exponentiation
redirection to /dev/fd/N, /dev/stdin, /dev/stdout, /dev/stderr
arrays of unlimited size
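Several of the expansion operators listed above in action; a minimal sketch (the variable names are illustrative):

#!/bin/bash
var=PATH
echo "${!var}"            # indirect expansion: the value of $PATH
HOME_DIR=/root
HOME_TOWN=Boston
echo "${!HOME_*}"         # prefix expansion: HOME_DIR HOME_TOWN (names only)
s=softpanorama
echo "${s:4:4}"           # substring: offset 4, length 4 -> pano
echo "${s/soft/hard}"     # pattern substitution -> hardpanorama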
Things ksh88 has or uses that bash does not:
tracked aliases (alias -t)
variables: ERRNO, FPATH, EDITOR, VISUAL
co-processes (|&, >&p, <&p)
weirdly-scoped functions
typeset +f to list all function names without definitions
text of command history kept in a file, not memory
builtins: alias -x, cd old new, fc -e -, newgrp, print,
read -p/-s/-u/var?prompt, set -A/-o gmacs/
-o bgnice/-o markdirs/-o nolog/-o trackall/-o viraw/-s,
typeset -H/-L/-R/-Z/-A/-ft/-fu/-fx/-l/-u/-t, whence
using environment to pass attributes of exported variables
arithmetic evaluation done on arguments to some builtins
reads .profile from $PWD when invoked as login shell
Implementation differences:
function | korn | bash |
---|---|---|
simple output | print | echo |
discipline functions | yes | no |
POSIX character classes | yes | no |
help | no | yes |
'cd' spelling correction | no | yes |
arithmetic (C-style) for | yes | no |
arithmetic bases | 2-36 | 2-64 |
array initialization | set -A USERVAR value1 .... | USERVAR=(value1 ....) |
array size | limited | unlimited |
associative arrays | yes | no |
compound arrays | yes | no |
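The array-initialization row is the one that most often bites when porting scripts; a minimal side-by-side sketch:

# ksh88:
set -A fruits apple pear plum
# bash (and ksh93):
fruits=(apple pear plum)
echo "${fruits[1]}"        # pear -- indexing is 0-based in both shells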
[gnu.bash.bug] BASH Frequently-Asked Questions (FAQ version 3.29)
This is the Bash FAQ, version 3.29, for Bash version 3.0.
Re: Suggestions for corrections to executable.el - use of PATHEXT
From: | Lennart Borgman |
Subject: | Re: Suggestions for corrections to executable.el - use of PATHEXT |
Date: | Sun, 12 Sep 2004 12:56:08 +0200 |
From: "Eli Zaretskii" <[email protected]>
> First, I'm not sure we should look at PATHEXT. That variable is AFAIK
> looked at by the shell, so if we want Emacs behave _exactly_ like the
> shell does, we should at least look at the value of SHELL and/or
> ComSpec (and COMSPEC for older systems). I mean, what if the user's
> shell is Bash, which AFAIK doesn't look at PATHEXT at all? And if the
> shell is COMMAND.COM, then ".cmd" should not be in the list. Etc.,
> etc.
PATHEXT is looked at by cmd.exe (the default shell on the NT hereditary line). I do not know if it is used by command.com (the default shell on the 95 line), but I doubt it. When I tested just now, I found that the Run entry in the Windows Start menu honors the default extensions in PATHEXT (.com, .exe, .bat, .cmd). It does not, however, recognize .pl, which I have in my PATHEXT (cmd.exe recognizes it). I am using NT4 when testing this.
So perhaps not even MS Windows is consistent here. What seems clear, however, is that the main purpose of PATHEXT is, as far as I can see, to make it easier for the user when entering a command interactively. The user may for example type "notepad" instead of "notepad.exe".
PATHEXT is set by the user and expresses the user's wish to type less. It seems reasonable to use PATHEXT for this purpose in Emacs too. The variable executable-binary-suffixes is (if I understand this correctly) used for this purpose by executable-find. This is, however, not clearly expressed in the documentation.
A note: w32-shell-execute does something quite different. It calls the MS Windows API ShellExecute to perform the action associated with a certain "verb" on a file type (on Windows this means file extension). Typical verbs are "open" and "print". Windows Explorer uses this.
Having said all this I just want to say that I regret that I took this issue
up without looking closer at the problem.
- Lennart
Last modified: June 08, 2021