Aliases will not provide information on how to use commands, but can be a great boon to remembering them – especially those that
are complex or require a string of options to do what you want. Here are some examples that I use to avoid command complexity:
alias dirsBySize='du -kx | egrep -v "\./.+/" | sort -n'
alias myip='hostname -I | awk '\''{print $1}'\'''
alias oct2dec='f(){ echo "obase=10; ibase=8; $1" | bc; unset -f f; }; f'
alias recent='ls -ltr | tail -5'
alias rel='lsb_release -r'
alias side-by-side='pr -mt '
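For instance, the oct2dec alias above wraps a one-shot shell function around bc; a quick illustrative use (the value is arbitrary):
$ oct2dec 755
493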
cheat
There's a very useful snap called "cheat" that can be used to print a cheat sheet for a particular command. It will contain a lot
of useful examples of how to use the command. You do, however, have to be using a system that supports snaps (distribution-neutral
packages) and install cheat.
Here's a truncated example of what you might see:
shs@firefly:~$ cheat grep
# To search a file for a pattern:
grep <pattern> <file>
# To perform a case-insensitive search (with line numbers):
grep -in <pattern> <file>
# To recursively grep for string <pattern> in <dir>:
grep -R <pattern> <dir>
# Read search patterns from a file (one per line):
grep -f <pattern-file> <file>
# Find lines NOT containing pattern:
grep -v <pattern> <file>
# To grep with regular expressions:
grep "^00" <file> # Match lines starting with 00
grep -E "[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}" <file> # Find IP add
…
cheat sheets
You can also locate and use a prepared Linux cheat sheet, whether you print it and keep it on your desktop or download a PDF that
you can open when needed. It's hard to know all of the commands available on Linux or all of the options available with any
particular command. Good cheat sheets can save you a lot of trouble by providing common usage examples.
Replace man pages with Tealdeer on Linux
Tealdeer is a Rust implementation of tldr, which provides easy-to-understand information about common commands. 21 Jun 2021, Sudeshna Sur (Red Hat, Correspondent)
Man pages were my go-to resource when I started exploring Linux. Certainly,
man is the most frequently used command when a beginner starts getting familiar
with the world of the command line. But man pages, with their extensive lists of options and arguments, can be hard to decipher,
which makes it difficult to find the information you actually wanted. If you want an easier solution with example-based output, I think tldr is the best option.
What's Tealdeer?
Tealdeer is a wonderful
implementation of tldr in Rust. It's a community-driven man page that gives very simple
examples of how commands work. The best part about Tealdeer is that it has virtually every
command you would normally use.
Install Tealdeer
On Linux, you can install Tealdeer from your software repository. For example, on Fedora:
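A likely install command, assuming the package is simply named tealdeer in the Fedora repositories:
$ sudo dnf install tealdeer
Once installed, request a page, for example tldr tar, and you get concise, example-based output such as: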
[c]reate a compressed archive and write it to a [f]ile, using the [a]rchive suffix to determine the compression program:
tar caf target.tar.xz file1 file2 file3
To control the cache:
$ tldr --update
$ tldr --clear-cache
You can give Tealdeer output some color with the --color option, setting it to always, auto, or never. The default is
auto, but I like the added context color provides, so I make mine permanent with
this addition to my ~/.bashrc file:
alias tldr='tldr --color always'
Conclusion
The beauty of Tealdeer is you don't need a network connection to use it, except when you're
updating the cache. So, even if you are offline, you can still search for and learn about your
new favorite command. For more information, consult the tool's documentation .
Would you use Tealdeer? Or are you already using it? Let us know what you think in the
comments below.
Method 1:
Step 1: Open the file using the vim editor with the command:
$ vim ostechnix.txt
Step 2: Highlight the lines that you want to comment out. To do so, go to the line you want to comment and move the cursor to the beginning of the line. Press SHIFT+V to highlight the whole line after the cursor. After highlighting the first line, press the UP or DOWN arrow keys or k or j to highlight the other lines one by one.
Here is how the lines will look after highlighting them.
Step 3: After highlighting the lines that you want to comment out, type the following and hit the ENTER key:
:s/^/# /
Please mind the space between # and the last forward slash (/).
Now you will see that the selected lines are commented out, i.e. a # symbol is added at the beginning of all lines.
Here, s stands for "substitution". In our case, we substitute the caret symbol ^ (the beginning of the line) with # (hash). As we all know, we put # in front of a line to comment it out.
Step 4: After commenting the lines, you can type :w to save the changes or :wq to save the file and exit.
Let us move on to the next method.
Method 2:
Step 1: Open the file in the vim editor.
$ vim ostechnix.txt
Step 2: Set line numbers by typing the following in the vim editor and hitting ENTER.
:set number
Step 3: Then enter the following command:
:1,4s/^/#
In this case, we are commenting out the lines from 1 to 4. Check the following screenshot. The lines from 1 to 4 have been commented out.
Step 4: Finally, unset the line numbers.
:set nonumber
Step 5: To save the changes, type :w, or type :wq to save the file and exit.
The same procedure can be used for uncommenting the lines in a file. Open the file and set the line numbers as shown in Step 2. Finally, type the following command and hit ENTER at Step 3:
:1,3s/^#/
After uncommenting the lines, simply remove the line numbers by entering the following command:
:set nonumber
Let us go ahead and see the third method.
Method 3:
This one is similar to Method 2 but slightly different.
Step 1: Open the file in the vim editor.
$ vim ostechnix.txt
Step 2: Set line numbers by typing:
:set number
Step 3: Type the following to comment out the lines.
:1,4s/^/# /
The above command will comment out lines from 1 to 4.
Step 4: Finally, unset the line numbers by typing the following.
:set nonumber
Method 4:
This method was suggested by one of our readers, Mr. Anand Nande, in the comment section below.
Step 1: Open the file in the vim editor:
$ vim ostechnix.txt
Step 2: Go to the line you want to comment. Press Ctrl+V to enter 'Visual block' mode.
Step 3: Press the UP or DOWN arrow or the letter k or j on your keyboard to select all the lines that you want to comment out in your file.
Step 4: Press Shift+i to enter INSERT mode. This will place your cursor on the first line.
Step 5: Then insert # (press Shift+3) before your first line.
Step 6: Finally, press the ESC key. This will insert # on all the other selected lines.
As you can see in the above screenshot, all the selected lines, including the first line, are commented out.
Method 5:
This method was suggested by one of our Twitter followers and friends, Mr. Tim Chase. We can even target lines to comment out by regex. In other words, we can comment out all the lines that contain a specific word.
Step 1: Open the file in the vim editor.
$ vim ostechnix.txt
Step 2: Type the following and press the ENTER key:
:g/Linux/s/^/# /
The above command will comment out all lines that contain the word "Linux". Replace "Linux" with a word of your choice.
As you can see in the above output, all the lines have the word "Linux", hence all of them are commented out.
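To reverse this, a similar substitution (illustrative, using the same word) removes the marker from every line that contains it:
:g/Linux/s/^# //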
And that's all for now. I hope this was useful. If you know any method other than the ones given here, please let me know in the
comment section below. I will check and add them to the guide.
What if you need to execute a specific command again, one which you used a while back? And
you can't remember the first character, but you can remember you used the word "serve".
You can press the up arrow and keep pressing it until you find your command. (That could take some time.)
Or, you can press CTRL + R and type a few keywords you used in your last command. Linux will
help locate your command; press Enter once you have found it. The example below shows how you can press CTRL + R and then type "ser" to find the previously run "php artisan serve" command. For sure, this tip will help speed up your command-line experience.
You can also use the history command to output all the previously stored commands. The
history command will give a list ordered in ascending order relative to execution time.
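If you prefer scanning the list, an illustrative combination (the history entry number is made up) is:
$ history | grep serve
$ !1042        # re-run entry number 1042 from the history list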
In Bash scripting, $? holds the exit status of the last command. If it is zero, there was no error. If it is non-zero,
then you can conclude the earlier task had some issue.
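A minimal illustration at the prompt (the directory name is arbitrary):
$ mkdir /tmp/somedir; echo $?    # 0 on success
$ mkdir /tmp/somedir; echo $?    # non-zero: the directory already exists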
If you run the above script once, it will print 0 because the directory does not exist yet, so the script creates it. Naturally, you will get a non-zero value if you run the script a second time, as seen below:
$ ./debug.sh
Testing Debudding
+ a=2
+ b=3
+ c=5
+ DEBUG set +x
+ '[' on == on ']'
+ set +x
2 + 3 = 5
Standard error redirection
You can redirect all the errors to a custom file using standard error, which is denoted by the file descriptor number 2. Use it in normal Bash commands, as demonstrated below:
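A minimal sketch, assuming the debug.sh script from the earlier example:
$ ./debug.sh 2> /tmp/debug-errors.log    # stderr (descriptor 2) goes to the log, stdout stays on the terminal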
Most of the time, it is difficult to find the exact line number in scripts. To print the line number with the error, use the PS4
variable (supported with Bash 4.1 or later). Example below:
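A sketch of what that could look like (the script name is the one used earlier):
$ export PS4='+ ${BASH_SOURCE}:${LINENO}: '
$ bash -x ./debug.sh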
Even small to medium-sized companies have some sort of governance surrounding server
decommissioning. They might not call it decommissioning but the process usually goes something
like the following:
Send out a notification or multiple notifications of system end-of-life to
stakeholders
Make complete backups of the entire system and its data
Unplug the system from the network but leave the system running (2-week Scream test)
Shut down and unplug from power but leave the system racked (2-week incubation period)
Unrack and palletize or, in some cases, recommission
We can display a formatted date from the date string provided by the user using the -d or --date option to the command. It will
not affect the system date; it only parses the requested date from the string. For example,
$ date -d "Feb 14 1999"
Parsing string to date.
$ date --date="09/10/1960"
Parsing string to date.
Displaying Upcoming Date & Time With -d Option
Aside from parsing the date, we can also display an upcoming date using the -d option with
the command. The date command is compatible with words that refer to time or date values such
as next Sun, last Friday, tomorrow, yesterday, etc. For example,
Displaying Next Monday
Date
$ date -d "next Mon"
Displaying upcoming date.
Displaying Past Date & Time With -d Option
Using the -d option with the command, we can also view a past date. For example,
Displaying Last Friday Date
$ date -d "last Fri"
Displaying past date
Parse Date From File
If you have a record of static date strings in a file, we can parse them in the preferred date format using the -f option with
the date command. In this way, you can format multiple dates using the command. In the following example, I have created a file
that contains a list of date strings and parsed it with the command.
$ date -f datefile.txt
Parse date from the file.
Setting Date & Time on Linux
We can not only view the date but also set the system date according to your preference. For
this, you need a user with sudo access, and you can execute the command in the following way.
$ sudo date -s "Sun 30 May 2021 07:35:06 PM PDT"
Display File Last Modification Time
We can check a file's last modification time using the date command; for this we need to add the -r option to the command. It
helps in tracking when files were last modified. For example,
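For instance, to print the last modification time of /etc/hosts (any file path works):
$ date -r /etc/hosts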
We had a client that had an OLD fileserver box, a Thecus N4100PRO. It was completely dust-ridden and the power supply had burned
out.
Since these drives were in a RAID configuration, you could not hook any one of them up to a Windows box or a Linux box to see
the data. You have to hook them all up to a box and reassemble the RAID.
We took out the drives (3 of them) and then used an external SATA to USB box to connect them to a Linux server running CentOS.
You can use parted to see what drives are now being seen by your linux system:
parted -l | grep 'raid\|sd'
Then using that output, we assembled the drives into a software array:
mdadm -A /dev/md0 /dev/sdb2 /dev/sdc2 /dev/sdd2
If we tried to only use two of those drives, it would give an error, since these were all in a linear RAID in the Thecus box.
If the last command went well, you can see the built array like so:
root% cat /proc/mdstat
Personalities : [linear]
md0 : active linear sdd2[0] sdb2[2] sdc2[1]
1459012480 blocks super 1.0 128k rounding
Note that the personality shows the RAID type; in our case it was linear, which is probably the worst RAID type, since if any one
drive fails, your data is lost. So it's a good thing these drives outlasted the power supply! Now we find the physical volume:
pvdisplay /dev/md0
Gives us:
-- Physical volume --
PV Name /dev/md0
VG Name vg0
PV Size 1.36 TB / not usable 704.00 KB
Allocatable yes
PE Size (KByte) 2048
Total PE 712408
Free PE 236760
Allocated PE 475648
PV UUID iqwRGX-zJ23-LX7q-hIZR-hO2y-oyZE-tD38A3
Then we find the logical volume:
lvdisplay /dev/vg0
Gives us:
-- Logical volume --
LV Name /dev/vg0/syslv
VG Name vg0
LV UUID UtrwkM-z0lw-6fb3-TlW4-IpkT-YcdN-NY1orZ
LV Write Access read/write
LV Status NOT available
LV Size 1.00 GB
Current LE 512
Segments 1
Allocation inherit
Read ahead sectors 16384
-- Logical volume --
LV Name /dev/vg0/lv0
VG Name vg0
LV UUID 0qsIdY-i2cA-SAHs-O1qt-FFSr-VuWO-xuh41q
LV Write Access read/write
LV Status NOT available
LV Size 928.00 GB
Current LE 475136
Segments 1
Allocation inherit
Read ahead sectors 16384
We want to focus on the lv0 volume. You cannot mount it yet; lvscan first has to show the volumes as active.
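In our case the logical volumes were reported above as NOT available, so they first had to be activated; a typical way to do that (assuming the volume group name vg0 from the output) is:
# vgchange -ay vg0
After that, lvscan reports them as ACTIVE: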
ACTIVE '/dev/vg0/syslv' [1.00 GB] inherit
ACTIVE '/dev/vg0/lv0' [928.00 GB] inherit
Now we can mount with:
mount /dev/vg0/lv0 /mnt
And voilà! We have our data up and accessible in /mnt to recover! Of course, your setup will most likely look different
from what I have shown you above, but hopefully this gives you some helpful information to recover your own data.
Those shortcuts belong to the class of commands known as bang commands. An Internet
search for this term provides a wealth of additional information (which you probably do not
need ;-), so I will concentrate on just the bang commands that are most common and potentially useful in the current command-line
environment. Of them, !$ is probably the most useful and definitely
the most widely used. For many sysadmins it is the only bang command that is regularly
used.
!! is the bang command that re-executes the last command. This command is used
mainly as the shortcut sudo !! -- elevation of privileges after your command failed
on your user account. For example:
fgrep 'kernel' /var/log/messages # this will fail due to insufficient privileges, as the /var/log directory is not readable by an ordinary user
sudo !! # now we re-execute the command with elevated privileges
!$ puts into the current command line the last argument from previous command . For
example:
mkdir -p /tmp/Bezroun/Workdir
cd !$
In this example the last command is equivalent to the command cd /tmp/Bezroun/Workdir. Please
try this example. It is a pretty neat trick.
NOTE: You can also work with individual arguments using numbers:
!:0 is the previous command itself
!:1 is the first argument of the previous command
!:2 is the second
And so on
For example:
cp !:1 !:2 # picks up the first and the second argument from the previous command
For this and other bang command capabilities, copying fragments of the previous command line
using the mouse is often more convenient, and you do not need to remember extra stuff. After all, bang
commands were created before the mouse was available, and most of them reflect the realities and needs
of that bygone era. Still, I have met sysadmins who use this and some additional capabilities, like
!!:s^<old>^<new> (which replaces the string 'old' with the string 'new' and
re-executes the previous command), even now.
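A quick illustration of that substitution form (the misspelled path is made up):
fgrep 'kernel' /var/log/mesages     # oops, typo in the file name
!!:s^mesages^messages^              # re-run the previous command with 'mesages' replaced by 'messages'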
The same is true for !* -- all arguments of the last command. I do not use them and
had trouble writing this part of the post, correcting it several times to get it right.
Nowadays CTRL+R activates reverse search, which provides an easier way to
navigate through your history than the capabilities bang commands provided in the past.
To list all open files, run the lsof command without any arguments:
lsof
For example, here is a screengrab of part of the output the above command produced on my system:
The first column represents the process while the last column contains the file name. For details on all the columns, head to
the command's man page .
2. How to list files opened by processes belonging to a specific user
The tool also allows you to list files opened by processes belonging to a specific user. This feature can be accessed by using
the -u command-line option.
lsof -u [user-name]
For example:
lsof -u administrator
3. How to list files based on their Internet address
The tool lets you list files based on their Internet address. This can be done using the -i command-line option. For example,
if you want, you can have IPv4 and IPv6 files displayed separately. For IPv4, run the following command:
lsof -i 4
...
4. How to list all files by application name
The -c command-line option allows you to get all files opened by program name.
$ lsof -c apache
You do not have to use the full program name, as all programs that start with the word 'apache' are shown. So in our case, it will
list all processes of the 'apache2' application.
The -c option is basically just a shortcut for piping lsof through grep:
$ lsof | grep apache
5. How to list files specific to a process
The tool also lets you display opened files based on process identification (PID) numbers. This can be done by using the -p
command-line option.
lsof -p [PID]
For example:
lsof -p 856
Moving on, you can also exclude specific PIDs in the output by adding the ^ symbol before them. To exclude a specific PID, you
can run the following command:
lsof -p ^[PID]
For example:
lsof -p ^1
As you can see in the above screenshot, the process with id 1 is excluded from the list.
6. How to list IDs of processes that have opened a particular file
The tool allows you to list IDs of processes that have opened a particular file. This can be done by using the -t command
line option.
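For example, to print only the PIDs of processes that have a particular log file open (the path is illustrative):
$ lsof -t /var/log/syslog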
If you want, you can also make lsof search for all open instances of a directory (including all the files and directories it contains).
This feature can be accessed using the +D command-line option.
$ lsof +D [directory-path]
For example:
$ lsof +D /usr/lib/locale
8. How to list all Internet and x.25 (HP-UX) network files
This is possible by using the -i command-line option we described earlier; just use it without any arguments.
$ lsof -i
9. Find out which program is using a port
The -i switch of the command allows you to find a process or application which listens to a specific port number. In the example
below, I checked which program is using port 80.
$ lsof -i :80
Instead of the port number, you can use the service name as listed in the /etc/services file. Example to check which app
listens on the HTTPS (443) port:
$ lsof -i :https
... ... ...
The above examples will check both TCP and UDP. If you like to check for TCP or UDP only, prepend the word 'tcp' or 'udp'. For
example, which application is using port 25 TCP:
$ lsof -i tcp:25
or which app uses UDP port 53:
$ lsof -i udp:53
10. How to list open files based on port range
The utility also allows you to list open files based on a specific port or port range. For example, to display open files for
port 1-1024, use the following command:
$ lsof -i :1-1024
11. How to list open files based on the type of connection (TCP or UDP)
The tool allows you to list files based on the type of connection. For example, for UDP specific files, use the following command:
$ lsof -i udp
Similarly, you can make lsof display TCP-specific files.
12. How to make lsof list Parent PID of processes
There's also an option that forces lsof to list the Parent Process IDentification (PPID) number in the output. The option in question
is -R .
$ lsof -R
To get PPID info for a specific PID, you can run the following command:
$ lsof -p [PID] -R
For example:
$ lsof -p 3 -R
13. How to find network activity by user
By using a combination of the -i and -u command-line options, we can search for all network connections of a Linux user. This
can be helpful if you inspect a system that might have been hacked. In this example, we check all network activity of the user www-data:
$ lsof -a -i -u www-data
14. List all memory-mapped files
This command lists all memory-mapped files on Linux.
$ lsof -d mem
15. List all NFS files
The -N option shows you a list of all NFS (Network File System) files.
$ lsof -N
Conclusion
Although lsof offers a plethora of options, the ones we've discussed here should be enough to get you started. Once you're done
practicing with these, head to the tool's man page to learn more about
it. Oh, and in case you have any doubts and queries, drop in a comment below.
Himanshu Arora has been working on Linux since 2007. He carries professional experience in system-level programming, networking
protocols, and the command line. In addition to HowtoForge, Himanshu's work has also been featured in some of the world's other leading publications,
including Computerworld, IBM DeveloperWorks, and Linux Journal.
Great article! Another useful one is "lsof -i tcp:PORT_NUMBER" to list processes using a specific port, useful for node.js
when you need to kill a process.
Ex: lsof -i tcp:3000
Then, say you want to kill the process 5393 (PID) running on port 3000, you would run "kill -9 5393".
Most (if not all) Linux distributions come with an editor that allows you to perform hexadecimal and binary manipulation. One
of those tools is the command-line tool xxd, which is most commonly used to make a hex dump of a given file or standard input.
It can also convert a hex dump back to its original binary form.
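A brief illustration (the file names are made up):
$ xxd firmware.bin | head -2       # hex dump of the first couple of lines
$ xxd firmware.bin > dump.hex      # write the full hex dump to a text file
$ xxd -r dump.hex > restored.bin   # convert the hex dump back to binary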
Hexedit Hex Editor
Hexedit is another hexadecimal command-line editor that might already be preinstalled on your OS.
Tilde is a text editor for the console/terminal, which provides an intuitive interface for
people accustomed to GUI environments such as Gnome, KDE and Windows. For example, the
short-cut to copy the current selection is Control-C, and to paste the previously copied text
the short-cut Control-V can be used. As another example, the File menu can be accessed by
pressing Meta-F.
However, being a terminal-based program there are limitations. Not all terminals provide
sufficient information to the client programs to make Tilde behave in the most intuitive way.
When this is the case, Tilde provides work-arounds which should be easy to work with.
The main audience for Tilde is users who normally work in GUI environments, but sometimes
require an editor for a console/terminal environment. This may be because the computer in
question is a server which does not provide a GUI, or is accessed remotely over SSH. Tilde
allows these users to edit files without having to learn a completely new interface, such as vi
or Emacs do. A result of this choice is that Tilde will not provide all the fancy features that
Vim or Emacs provide, but only the most used features.
News
Tilde version 1.1.2 released
This release fixes a bug where Tilde would discard read lines before an invalid character when requested to continue reading.
23-May-2020
Tilde version 1.1.1 released
This release fixes a build failure on C++14 and later compilers.
IBM is notorious for destroying useful information. This article is no longer available from IBM.
Jul 20, 2008
Originally from: IBM DeveloperWorks
How to be a more productive Linux systems administrator
Learn these 10 tricks and you'll be the most powerful Linux® systems administrator
in the universe...well, maybe not the universe, but you will need these tips
to play in the big leagues. Learn about SSH tunnels, VNC, password recovery,
console spying, and more. Examples accompany each trick, so you can duplicate
them on your own systems.
The best systems administrators are set apart by their efficiency. And if an
efficient systems administrator can do a task in 10 minutes that would take another
mortal two hours to complete, then the efficient systems administrator should be
rewarded (paid more) because the company is saving time, and time is money, right?
The trick is to prove your efficiency to management. While I won't attempt to
cover that trick in this article, I will give you 10 essential gems from
the lazy admin's bag of tricks. These tips will save you time-and even if you don't
get paid more money to be more efficient, you'll at least have more time to play
Halo.
The newbie states that when he pushes the Eject button on the DVD drive of a
server running a certain Redmond-based operating system, it will eject immediately.
He then complains that, in most enterprise Linux servers, if a process is running
in that directory, then the ejection won't happen. For too long as a Linux administrator,
I would reboot the machine and get my disk on the bounce if I couldn't figure out
what was running and why it wouldn't release the DVD drive. But this is ineffective.
Here's how you find the process that holds your DVD drive and eject it to your
heart's content: First, simulate it. Stick a disk in your DVD drive, open up a terminal,
and mount the DVD drive:
# mount /media/cdrom
# cd /media/cdrom
# while [ 1 ]; do echo "All your drives are belong to us!"; sleep 30; done
Now open up a second terminal and try to eject the DVD drive:
# eject
You'll get a message like:
umount: /media/cdrom: device is busy
Before you free it, let's find out who is using it.
# fuser /media/cdrom
You see the process that was running and, indeed, it is our fault we cannot eject the disk.
Now, if you are root, you can exercise your godlike powers and kill processes:
# fuser -k /media/cdrom
Boom! Just like that, freedom. Now solemnly unmount the drive:
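The command itself would simply be:
# umount /media/cdrom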
Behold! Your terminal looks like garbage. Everything you type looks like you're
looking into the Matrix. What do you do?
You type reset. But wait you say, typing reset is too
close to typing reboot or shutdown. Your palms start to
sweat-especially if you are doing this on a production machine.
Rest assured: You can do it with the confidence that no machine will be rebooted.
Go ahead, do it:
# reset
Now your screen is back to normal. This is much better than closing the window
and then logging in again, especially if you just went through five machines to
SSH to this machine.
David, the high-maintenance user from product engineering, calls: "I need you
to help me understand why I can't compile supercode.c on these new machines you
deployed."
"Fine," you say. "What machine are you on?"
David responds: " Posh." (Yes, this fictional company has named its five production
servers in honor of the Spice Girls.) OK, you say. You exercise your godlike root
powers and on another machine become David:
# su - david
Then you go over to posh:
# ssh posh
Once you are there, you run:
# screen -S foo
Then you holler at David:
"Hey David, run the following command on your terminal: # screen -x foo."
This will cause your and David's sessions to be joined together in the holy Linux
shell. You can type or he can type, but you'll both see what the other is doing.
This saves you from walking to the other floor and lets you both have equal control.
The benefit is that David can watch your troubleshooting skills and see exactly
how you solve problems.
At last you both see what the problem is: David's compile script hard-coded an
old directory that does not exist on this new server. You mount it, recompile, solve
the problem, and David goes back to work. You then go back to whatever lazy activity
you were doing before.
The one caveat to this trick is that you both need to be logged in as the same
user. Other cool things you can do with the screen command include
having multiple windows and split screens. Read the man pages for more on that.
But I'll give you one last tip while you're in your screen session.
To detach from it and leave it open, type: Ctrl-A D . (I mean, hold
down the Ctrl key and strike the A key. Then push the D key.)
You can then reattach by running the screen -x foo command again.
You forgot your root password. Nice work. Now you'll just have to reinstall the
entire machine. Sadly enough, I've seen more than a few people do this. But it's
surprisingly easy to get on the machine and change the password. This doesn't work
in all cases (like if you made a GRUB password and forgot that too), but here's
how you do it in a normal case with a CentOS Linux example.
First reboot the system. When it reboots you'll come to the GRUB screen as shown
in Figure 1. Move the arrow key so that you stay on this screen instead of proceeding
all the way to a normal boot.
Use the arrow key again to highlight the line that begins with
kernel,
and press E to edit the kernel parameters. When you get to the screen shown in Figure 3, simply append the number 1 to the arguments:
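Appending 1 boots the machine into single-user mode, which on a stock CentOS install of that era gives you a root shell without a password; from there the reset is just (sketch):
# passwd root
Then reboot normally and log in with the new password.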
Many times I'll be at a site where I need remote support from someone who is
blocked on the outside by a company firewall. Few people realize that if you can
get out to the world through a firewall, then it is relatively easy to open a hole
so that the world can come into you.
In its crudest form, this is called "poking a hole in the firewall." I'll call
it an SSH back door. To use it, you'll need a machine on the Internet that
you can use as an intermediary.
In our example, we'll call our machine blackbox.example.com. The machine behind
the company firewall is called ginger. Finally, the machine that technical support
is on will be called tech. Figure 4 explains how this is set up.
Check that what you're doing is allowed, but make sure you ask the right
people. Most people will cringe that you're opening the firewall, but what they
don't understand is that it is completely encrypted. Furthermore, someone would
need to hack your outside machine before getting into your company. Instead,
you may belong to the school of "ask-for-forgiveness-instead-of-permission."
Either way, use your judgment and don't blame me if this doesn't go your way.
SSH from ginger to blackbox.example.com with the -R flag. I'll
assume that you're the root user on ginger and that tech will need the root
user ID to help you with the system. With the -R flag, you'll forward
connections to port 2222 on blackbox to port 22 on ginger. This is how you
set up an SSH tunnel. Note that only SSH traffic can come into ginger: You're
not putting ginger out on the Internet naked.
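A sketch of what that looks like on both ends (thedude is the account on blackbox used later in this article; adjust names to taste):
root@ginger:~# ssh -R 2222:localhost:22 thedude@blackbox.example.com    # run on ginger, leave the session open
thedude@blackbox:~$ ssh -p 2222 root@localhost                          # run on blackbox by tech to reach ginger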
VNC or virtual network computing has been around a long time. I typically find
myself needing to use it when the remote server has some type of graphical program
that is only available on that server.
For example, suppose in Trick 5, ginger
is a storage server. Many storage devices come with a GUI program to manage the
storage controllers. Often these GUI management tools need a direct connection to
the storage through a network that is at times kept in a private subnet. Therefore,
the only way to access this GUI is to do it from ginger.
You can try SSH'ing to ginger with the -X option and launch it that
way, but many times the bandwidth required is too much and you'll get frustrated
waiting. VNC is a much more network-friendly tool and is readily available for nearly
all operating systems.
Let's assume that the setup is the same as in Trick 5, but you want tech to be
able to get VNC access instead of SSH. In this case, you'll do something similar
but forward VNC ports instead. Here's what you do:
Start a VNC server session on ginger. This is done by running something
like:
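Based on the options described in the next paragraph, the command would be something like:
root@ginger:~# vncserver -geometry 1024x768 -depth 24 :99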
The options tell the VNC server to start up with a resolution of 1024x768
and a pixel depth of 24 bits per pixel. If you are using a really slow connection,
setting 8 may be a better option. Using :99 specifies the display the
VNC server will run on. The VNC protocol starts at 5900, so specifying
:99 means the server is accessible on port 5999.
When you start the session, you'll be asked to specify a password. The user
ID will be the same user that you launched the VNC server from. (In our case,
this is root.)
SSH from ginger to blackbox.example.com forwarding the port 5999 on blackbox
to ginger. This is done from ginger by running the command:
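A sketch of that command (again assuming the thedude account on blackbox):
root@ginger:~# ssh -R 5999:localhost:5999 thedude@blackbox.example.com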
Once you run this command, you'll need to keep this SSH session open in order
to keep the port forwarded to ginger. At this point if you were on blackbox,
you could now access the VNC session on ginger by just running:
thedude@blackbox:~$ vncviewer localhost:99
That would forward the port through SSH to ginger. But we're interested in
letting tech get VNC access to ginger. To accomplish this, you'll need another
tunnel.
From tech, you open a tunnel via SSH to forward your port 5999 to port 5999
on blackbox. This would be done by running:
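A sketch of that command, run from tech:
root@tech:~# ssh -L 5999:localhost:5999 thedude@blackbox.example.com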
This time the SSH flag we used was -L, which instead of pushing
5999 to blackbox, pulled from it. Once you are in on blackbox, you'll need to
leave this session open. Now you're ready to VNC from tech!
From tech, VNC to ginger by running the command:
root@tech:~# vncviewer localhost:99
Tech will now have a VNC session directly to ginger.
While the effort might seem like a bit much to set up, it beats flying across
the country to fix the storage arrays. Also, if you practice this a few times, it
becomes quite easy.
Let me add a trick to this trick: If tech was running the Windows® operating
system and didn't have a command-line SSH client, then tech can run Putty. Putty
can be set to forward SSH ports by looking in the options in the sidebar. If the
port were 5902 instead of our example of 5999, then you would enter something like
in Figure 5.
Imagine this: Company A has a storage server named ginger and it is being NFS-mounted
by a client node named beckham. Company A has decided they really want to get more
bandwidth out of ginger because they have lots of nodes they want to have NFS mount
ginger's shared filesystem.
The most common and cheapest way to do this is to bond two Gigabit ethernet NICs
together. This is cheapest because usually you have an extra on-board NIC and an
extra port on your switch somewhere.
So they do this. But now the question is: How much bandwidth do they really have?
Gigabit Ethernet has a theoretical limit of 128MBps. Where does that number come
from? Well, a gigabit is 1,024 megabits, and there are 8 bits in a byte, so 1,024 / 8 = 128 megabytes per second.
You'll need to install it on a shared filesystem that both ginger and beckham
can see, or compile and install it on both nodes. I'll compile it in the home directory
of the bob user, which is visible on both nodes:
tar zxvf iperf*gz
cd iperf-2.0.2
./configure --prefix=/home/bob/perf
make
make install
On ginger, run:
# /home/bob/perf/bin/iperf -s -f M
This machine will act as the server and print out performance speeds in MBps.
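The matching client command, run on beckham and pointed at the server just started on ginger, would be something like:
# /home/bob/perf/bin/iperf -c ginger -f M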
You'll see output in both screens telling you what the speed is. On a normal
server with a Gigabit Ethernet adapter, you will probably see about 112MBps. This
is normal as bandwidth is lost in the TCP stack and physical cables. By connecting
two servers back-to-back, each with two bonded Ethernet cards, I got about 220MBps.
In reality, what you see with NFS on bonded networks is around 150-160MBps. Still,
this gives you a good indication that your bandwidth is going to be about what you'd
expect. If you see something much less, then you should check for a problem.
I recently ran into a case in which the bonding driver was used to bond two NICs
that used different drivers. The performance was extremely poor, leading to about
20MBps in bandwidth, less than they would have gotten had they not bonded the Ethernet
cards together!
A Linux systems administrator becomes more efficient by using command-line scripting
with authority. This includes crafting loops and knowing how to parse data using
utilities like awk, grep, and sed. There
are many cases where doing so takes fewer keystrokes and lessens the likelihood
of user errors.
For example, suppose you need to generate a new /etc/hosts file for a Linux cluster
that you are about to install. The long way would be to add IP addresses in vi or
your favorite text editor. However, it can be done by taking the already existing
/etc/hosts file and appending the following to it by running this on the command
line:
# P=1; for i in $(seq -w 200); do echo "192.168.99.$P n$i"; P=$(expr $P + 1); done >>/etc/hosts
Two hundred host names, n001 through n200, will then be created with IP addresses
192.168.99.1 through 192.168.99.200. Populating a file like this by hand runs the
risk of inadvertently creating duplicate IP addresses or host names, so this is
a good example of using the built-in command line to eliminate user errors. Please
note that this is done in the bash shell, the default in most Linux distributions.
As another example, let's suppose you want to check that the memory size is the
same in each of the compute nodes in the Linux cluster. In most cases of this sort,
having a distributed or parallel shell would be the best practice, but for the sake
of illustration, here's a way to do this using SSH.
Assume the SSH is set up to authenticate without a password. Then run:
# for num in $(seq -w 200); do ssh n$num free -tm | grep Mem | awk '{print $2}'; done | sort | uniq
A command line like this looks pretty terse. (It can be worse if you put regular
expressions in it.) Let's pick it apart and uncover the mystery.
First you're doing a loop through 001-200. This padding with 0s in the front
is done with the -w option to the seq command. Then you
substitute the num variable to create the host you're going to SSH
to. Once you have the target host, give the command to it. In this case, it's:
free -m | grep Mem | awk '{print $2}'
That command says to:
Use the free command to get the memory size in megabytes.
Take the output of that command and use grep to get the line
that has the string Mem in it.
Take that line and use awk to print the second field, which
is the total memory in the node.
This operation is performed on every node.
Once you have performed the command on every node, the entire output of all 200
nodes is piped (|'d) to the sort command so that all the
memory values are sorted.
Finally, you eliminate duplicates with the uniq command. This command
will result in one of the following cases:
If all the nodes, n001-n200, have the same memory size, then only one number
will be displayed. This is the size of memory as seen by each operating system.
If node memory size is different, you will see several memory size values.
Finally, if the SSH failed on a certain node, then you may see some error
messages.
This command isn't perfect. If you find that a value of memory is different than
what you expect, you won't know on which node it was or how many nodes there were.
Another command may need to be issued for that.
What this trick does give you, though, is a fast way to check for something and
quickly learn if something is wrong. This is its real value: speed to do a quick-and-dirty
check.
Some software prints error messages to the console that may not necessarily show
up on your SSH session. Using the vcs devices can let you examine these. From within
an SSH session, run the following command on a remote server: # cat /dev/vcs1.
This will show you what is on the first console. You can also look at the other
virtual terminals using 2, 3, etc. If a user is typing on the remote system, you'll
be able to see what he typed.
In most data farms, using a remote terminal server, KVM, or even Serial Over
LAN is the best way to view this information; it also provides the additional benefit
of out-of-band viewing capabilities. Using the vcs device provides a fast in-band
method that may be able to save you some time from going to the machine room and
looking at the console.
In Trick 8, you saw an example of using
the command line to get information about the total memory in the system. In this
trick, I'll offer up a few other methods to collect important information from the
system you may need to verify, troubleshoot, or give to remote support.
First, let's gather information about the processor. This is easily done as follows:
# cat /proc/cpuinfo .
This command gives you information on the processor speed, quantity, and model.
Using grep in many cases can give you the desired value.
A check that I do quite often is to ascertain the quantity of processors on the
system. So, if I have purchased a dual processor quad-core server, I can run:
# cat /proc/cpuinfo | grep processor | wc -l .
I would then expect to see 8 as the value. If I don't, I call up the vendor and
tell them to send me another processor.
Another piece of information I may require is disk information. This can be gotten
with the df command. I usually add the -h flag so that
I can see the output in gigabytes or megabytes. # df -h also shows
how the disk was partitioned.
And to end the list, here's a way to look at the firmware of your system-a method
to get the BIOS level and the firmware on the NIC.
To check the BIOS version, you can run the dmidecode command. Unfortunately,
you can't easily grep for the information, so piping it through less is a more efficient
way to do this. On my Lenovo T61 laptop, the output looks like this:
# dmidecode | less
...
BIOS Information
Vendor: LENOVO
Version: 7LET52WW (1.22 )
Release Date: 08/27/2007
...
This is much more efficient than rebooting your machine and looking at the POST
output.
To examine the driver and firmware versions of your Ethernet adapter, run
ethtool:
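The typical invocation (the interface name is just an example) is:
# ethtool -i eth0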
There are thousands of tricks you can learn from someone who's an expert at
the command line. The best ways to learn are to:
Work with others. Share screen sessions and watch how others work-you'll
see new approaches to doing things. You may need to swallow your pride and let
other people drive, but often you can learn a lot.
Read the man pages. Seriously; reading man pages, even on commands you know
like the back of your hand, can provide amazing insights. For example, did you
know you can do network programming with awk?
Solve problems. As the system administrator, you are always solving problems
whether they are created by you or by others. This is called experience, and
experience makes you better and more efficient.
I hope at least one of these tricks helped you learn something you didn't know.
Essential tricks like these make you more efficient and add to your experience,
but most importantly, tricks give you more free time to do more interesting things,
like playing video games. And the best administrators are lazy because they don't
like to work. They find the fastest way to do a task and finish it quickly so they
can continue in their lazy pursuits.
Vallard Benincosa is a lazy Linux Certified IT professional
working for the IBM Linux Clusters team. He lives in Portland, OR, with
his wife and two kids.
The slogan of the Bropages utility is just get to the point. It is true! The bropages are just like man pages, but they display
examples only. As the slogan says, it skips all the text parts and gives you concise examples for command-line programs. The
bropages can be easily installed using gem, so you need Ruby 1.8.7+ installed on your machine for this to work. To install Ruby
on Rails in CentOS and Ubuntu, refer to the following guide. After installing gem, all you have to do to install bro pages is:
$ gem install bropages
... The usage is incredibly easy! ...just type:
$ bro find
... The good thing is you can upvote or downvote the examples.
As you see in the above screenshot, we can upvote the first command by entering the following command:
$ bro thanks
You will be asked to enter your email ID. Enter a valid email to receive the verification code. Then copy/paste the verification
code at the prompt and hit ENTER to submit your upvote. The highest-upvoted examples will be shown at the top.
Bropages.org requires an email address verification to do this
What's your email address?
[email protected]
Great! We're sending an email to [email protected]
Please enter the verification code: apHelH13ocC7OxTyB7Mo9p
Great! You're verified! FYI, your email and code are stored locally in ~/.bro
You just gave thanks to an entry for find!
You rock!
Cheat is another useful alternative to man pages for learning Unix commands. It
allows you to create and view interactive Linux/Unix command cheatsheets on the command line.
The recommended way to install Cheat is using the Pip package manager.
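That would presumably be:
$ pip install cheat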
... ... ...
Cheat usage is trivial.
$ cheat find
You will be presented with the list of available examples of find command:
... ... ...
To view the help section, run:
$ cheat -h
For more details, see the project's GitHub repository:
TLDR is a collection of simplified and community-driven man pages.
Unlike man pages, TLDR pages focus only on practical examples. TLDR can be installed using npm,
so you need NodeJS installed on your machine for this to work.
To install NodeJS in Linux, refer to the following guide.
After installing npm, run the following command to install tldr.
$ npm install -g tldr
TLDR clients are also available for Android. Install any one of the apps below from the Google Play
Store and access the TLDR pages from your Android devices.
There are many TLDR clients available. You can view them all
here
3.1. Usage
To display the documentation of any command, for example find, run:
$ tldr find
You will see the list of available examples of find command.
...To view the list of all commands in the cache, run:
$ tldr --list-all
...To update the local cache, run:
$ tldr -u
Or,
$ tldr --update
To display the help section, run:
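Presumably:
$ tldr --help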
Tldr++ is yet another client to access the TLDR pages. Unlike the other
Tldr clients, it is fully interactive .
5. Tealdeer
Tealdeer is a fast, unofficial tldr client that allows you to access and
display Linux command cheatsheets in your Terminal. The developer of Tealdeer claims it is very
fast compared to the official tldr client and other community-supported tldr clients.
6. tldr.jsx web client
tldr.jsx is a reactive web client for tldr-pages. If you don't want to install anything on your system, you can try this client
online from any Internet-enabled device such as a desktop, laptop, tablet, or smart phone. All you need is a web browser. Open
one and navigate to the https://tldr.ostera.io/ page.
7. Navi interactive commandline cheatsheet tool
Navi is an interactive commandline cheatsheet tool written in Rust. Just like the Bro pages, Cheat, and Tldr tools, Navi also
provides a list of examples for a given command, skipping all other comprehensive text parts. For more details, check the
following link.
I came across this utility recently and I thought that it would be a worthy
addition to this list. Say hello to Manly, a complement to man pages. Manly is written in Python,
so you can install it using the Pip package manager.
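Presumably something like:
$ pip install manly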
Manly is slightly different from the above utilities. It will not display any examples, and you also need to mention the flags
or options along with the command. For example, the following won't work:
$ manly dpkg
But, if you mention any flag/option of a command, you will get a small description of the given command and its
options.
$ manly dpkg -i -R
$ manly --help
Also take a look at the project's GitHub page.
The GitHub page of TLDR pages for Linux/Unix describes it as a collection of simplified and
community-driven man pages. It's an effort to make the experience of using man pages simpler
with the help of practical examples. For those who don't know, TLDR is taken from common
internet slang Too Long Didn't Read .
In case you wish to compare, let's take the example of tar command. The usual man page
extends over 1,000 lines. It's an archiving utility that's often combined with a compression
method like bzip or gzip. Take a look at its man page:
On the other hand, TLDR pages lets you simply take a glance at the
command and see how it works. Tar's TLDR page simply looks like this and comes with some handy
examples of the most common tasks you can complete with this utility:
Let's take another example and show you what TLDR pages has to
offer when it comes to apt:
Having shown you how TLDR works and makes your life easier, let's
tell you how to install it on your Linux-based operating system.
How to install and use
TLDR pages on Linux?
The most mature TLDR client is based on Node.js and you can install it easily using NPM
package manager. In case Node and NPM are not available on your system, run the following
commands:
sudo apt-get install nodejs
sudo apt-get install npm
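Then install the tldr client itself; the npm package is named tldr:
sudo npm install -g tldr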
In case you're using an OS other than Debian, Ubuntu, or Ubuntu's derivatives, you can use
yum, dnf, or pacman package manager as per your convenience.
When we need help on the Linux command line, man is usually the first friend we
check for more information. But it became my second-line support after I met other
alternatives, e.g. tldr, cheat and eg.
tldr
tldr stands for too long; didn't read. It is a simplified and community-driven set of man pages. Maybe we forget the arguments
to a command, or are just not patient enough to read the long man document; here tldr comes in: it provides concise information
with examples. I even contributed a couple of lines of code myself to help a little bit with the project on GitHub. It
is very easy to install: npm install -g tldr, and there are many clients available to pick from to access the tldr pages, e.g.
install the Python client with pip install tldr.
To display help information, run tldr -h or tldr tldr.
Take curl as an example.
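The corresponding invocation is simply:
$ tldr curl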
tldr++
tldr++ is an interactive tldr client written in Go; the demo gif is borrowed from its official site.
cheat
Similarly, cheat allows you to
create and view interactive cheatsheets on the command-line. It was designed to help remind
*nix system administrators of options for commands that they use frequently, but not frequently
enough to remember. It is written in Golang, so just download the binary and add it to your PATH.
eg
eg provides useful examples with
explanations on the command line.
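eg can presumably be installed and queried much like the others (a sketch):
$ pip install eg
$ eg tar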
So I consult tldr , cheat or eg before I ask
man and Google.
In our daily use of Linux/Unix systems, we use many command-line tools to complete our work
and to understand and manage our systems -- tools like du to monitor disk
utilization and top to show system resources. Some of these tools have existed for
a long time. For example, top was first released in 1984, while du 's
first release dates to 1971.
Over the years, these tools have been modernized and ported to different systems, but, in
general, they still follow their original idea, look, and feel.
These are great tools and essential to many system administrators' workflows. However, in
recent years, the open source community has developed alternative tools that offer additional
benefits. Some are just eye candy, but others greatly improve usability, making them a great
choice to use on modern systems. These include the following five alternatives to the standard
Linux command-line tools.
1. ncdu as a replacement for du
The NCurses Disk Usage ( ncdu ) tool provides similar results to
du but in a curses-based, interactive interface that focuses on the directories
that consume most of your disk space. ncdu spends some time analyzing the disk,
then displays the results sorted by your most used directories or files, like this:
ncdu 1.14.2 ~ Use the arrow keys to navigate, press ? for help
--- /home/rgerardi ------------------------------------------------------------
96.7 GiB [##########] /libvirt
33.9 GiB [### ] /.crc
...
Total disk usage: 159.4 GiB Apparent size: 280.8 GiB Items: 561540
Navigate to each entry by using the arrow keys. If you press Enter on a directory entry,
ncdu displays the contents of that directory:
You can use that to drill down into the directories and find which files are consuming the
most disk space. Return to the previous directory by using the Left arrow key. By default, you
can delete files with ncdu by pressing the d key, and it asks for confirmation
before deleting a file. If you want to disable this behavior to prevent accidents, use the
-r option for read-only access: ncdu -r .
ncdu is available for many platforms and Linux distributions. For example, you
can use dnf to install it on Fedora directly from the official repositories:
$ sudo dnf install ncdu
You can find more information about this tool on the ncdu web page .
2. htop as a replacement
for top
htop is an interactive process viewer similar to top but that
provides a nicer user experience out of the box. By default, htop displays the
same metrics as top in a pleasant and colorful display.
In addition, htop provides system overview information at the top and a command
bar at the bottom to trigger commands using the function keys, and you can customize it by
pressing F2 to enter the setup screen. In setup, you can change its colors, add or remove
metrics, or change display options for the overview bar.
While you can configure recent versions of top to achieve similar results,
htop provides saner default configurations, which makes it a nice and easy to use
process viewer.
To learn more about this project, check the htop home page .
3. tldr as a replacement for
man
The tldr command-line tool displays simplified command utilization information,
mostly including examples. It works as a client for the community tldr pages project .
This tool is not a replacement for man . The man pages are still the canonical
and complete source of information for many tools. However, in some cases, man is
too much. Sometimes you don't need all that information about a command; you're just trying to
remember the basic options. For example, the man page for the curl command has
almost 3,000 lines. In contrast, the tldr for curl is 40 lines long
and looks like this:
$ tldr curl
# curl
  Transfers data from or to a server.
  Supports most protocols, including HTTP, FTP, and POP3.
  More information: <https://curl.haxx.se>.
- Download the contents of an URL to a file:
  curl http://example.com -o filename
- Download a file, saving the output under the filename indicated by the URL:
  curl -O http://example.com/filename
- Download a file, following [L]ocation redirects, and automatically [C]ontinuing (resuming) a previous file transfer:
  curl -O -L -C - http://example.com/filename
- Send form-encoded data (POST request of type `application/x-www-form-urlencoded`):
  curl -d 'name=bob' http://example.com/form
- Send a request with an extra header, using a custom HTTP method:
  curl -H 'X-My-Header: 123' -X PUT http://example.com
- Send data in JSON format, specifying the appropriate content-type header:
TLDR stands for "too long; didn't read," which is internet slang for a summary of long text.
The name is appropriate for this tool because man pages, while useful, are sometimes just too
long.
In Fedora, the tldr client was written in Python. You can install it using
dnf . For other client options, consult the tldr pages project .
In general, the tldr tool requires access to the internet to consult the tldr
pages. The Python client in Fedora allows you to download and cache these pages for offline
access.
For more information on tldr , you can use tldr tldr .
4. jq as a replacement for sed/grep for JSON
jq is a command-line JSON processor. It's like sed or
grep but specifically designed to deal with JSON data. If you're a developer or
system administrator who uses JSON in your daily tasks, this is an essential tool in your
toolbox.
The main benefit of jq over generic text-processing tools like
grep and sed is that it understands the JSON data structure, allowing
you to create complex queries with a single expression.
To illustrate, imagine you're trying to find the name of the containers in this JSON
file:
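The JSON from the original article isn't reproduced here; as an illustration, assume a hypothetical Kubernetes-style pod spec saved as pod.json (the file name and container names are made up for this example):
{
  "kind": "Pod",
  "metadata": { "name": "test-pod" },
  "spec": {
    "containers": [
      { "name": "nginx", "image": "nginx:1.19" },
      { "name": "sidecar", "image": "busybox" }
    ]
  }
}
A naive text search over that file would be something like:
$ grep name pod.json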
grep returned all lines that contain the word name . You can add a
few more options to grep to restrict it and, with some regular-expression
manipulation, you can find the names of the containers. To obtain the result you want with
jq , use an expression that simulates navigating down the data structure, like
this:
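With the hypothetical pod.json above, the expression would look something like this (the .spec.containers path is an assumption based on that sample file):
$ jq '.spec.containers[].name' pod.json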
This command gives you the name of both containers. If you're looking for only the name of
the second container, add the array element index to the expression:
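Continuing the same hypothetical example, indexing the containers array selects just one entry:
$ jq '.spec.containers[1].name' pod.json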
Because jq is aware of the data structure, it provides the same results even if
the file format changes slightly. grep and sed may provide different
results with small changes to the format.
jq has many features, and covering them all would require another article. For
more information, consult the jq project page , the man pages, or
tldr jq .
5. fd as a replacement for find
fd is a simple and fast alternative to the find command. It does
not aim to replace the complete functionality find provides; instead, it provides
some sane defaults that help a lot in certain scenarios.
For example, when searching for source-code files in a directory that contains a Git
repository, fd automatically excludes hidden files and directories, including the
.git directory, as well as ignoring patterns from the .gitignore
file. In general, it provides faster searches with more relevant results on the first try.
By default, fd runs a case-insensitive pattern search in the current directory
with colored output. The same search using find requires you to provide additional
command-line parameters. For example, to search all markdown files ( .md or
.MD ) in the current directory, the find command is this:
$ find . -iname "*.md"
Here is the same search with fd :
$ fd .md
In some cases, fd requires additional options; for example, if you want to
include hidden files and directories, you must use the option -H , while this is
not required in find .
fd is available for many Linux distributions. Install it in Fedora using the
standard repositories:
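A hedged example; on recent Fedora releases the package is named fd-find, though the package name may vary by release:
$ sudo dnf install fd-find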
Another (fancy looking) alternative for ls is lsd.
Miguel Perez on 25 Jun 2020
Bat instead of cat, ripgrep instead of grep, httpie instead of curl, bashtop instead of htop, autojump instead of cd...
Drto on 25 Jun 2020
ack instead of grep for files. Million times faster.
Gordon Harris on 25 Jun 2020
The yq command line utility is useful too. It's just like jq, except for yaml files and has the ability to convert yaml into json.
Matt Howard on 26 Jun 2020
Glances is a great top replacement too.
Paul M on 26 Jun 2020
Try "mtr" instead of traceroute
Try "hping2" instead of ping
Try "pigz" instead of gzip
jmtd on 28 Jun 2020
You run a separate "duc index" command to capture disk space usage in a database file and
then can explore the data very quickly with "duc ui" ncurses ui. There's also GUI and web
front-ends that give you a nice graphical pie chart interface.
In my experience the index stage is faster than plain du. You can choose to re-index only
certain folders if you want to update some data quickly without rescanning everything.
wurn on 29 Jun 2020
Imho, jq uses a syntax that's ok for simple queries but quickly becomes horrible when you
need more complex queries. Pjy is a sensible replacement for jq, having an (improved) python
syntax which is familiar to many people and much more readable: https://github.com/hydrargyrum/pjy
Jack Orenstein on 29 Jun 2020
Also along the lines of command-line alternatives, take a look at marcel, which is a modern shell: https://marceltheshell.org. The basic idea is to pipe Python values instead of strings between commands. It integrates smoothly with host commands (and, presumably, the alternatives discussed here), and also integrates remote access and database access.
Ricardo Fraile on 05 Jul 2020
"tuptime" instead of "uptime".
It tracks the history of the system, not only the current one. The Cube on 07 Jul 2020
One downside of all of this is that there are even more things to remember. I learned find,
diff, cat, vi (and ed), grep and a few others starting in 1976 on 6th edition. They have been
enhanced some, over the years (for which I use man when I need to remember), and learned top
and other things as I needed them, but things I did back then still work great now. KISS is
still a "thing". Especially in scripts one is going to use on a wide variety of distributions
or for a long time. These kind of tweaks are fun and all, but add complexity and reduce one's
inter-system mobility. (And don't get me started on systemd 8P).
The replace utility program changes strings in place in files or
on the standard input.
Invoke replace in one of the following ways:
shell> replace from to [from to] ... -- file_name [file_name] ...
shell> replace from to [from to] ... < file_name
from represents a string to look for and to represents its
replacement. There can be one or more pairs of strings.
Use the -- option to indicate where the string-replacement list
ends and the file names begin. In this case, any file named on
the command line is modified in place, so you may want to make a
copy of the original before converting it. replace prints a
message indicating which of the input files it actually modifies.
If the -- option is not given, replace reads the standard input
and writes to the standard output.
replace uses a finite state machine to match longer strings
first. It can be used to swap strings. For example, the following
command swaps a and b in the given files, file1 and file2:
shell> replace a b b a -- file1 file2 ...
The replace program is used by msql2mysql. See msql2mysql(1).
replace supports the following options.
• -?, -I
Display a help message and exit.
• -#debug_options
Enable debugging.
• -s
Silent mode. Print less information about what the program does.
• -v
Verbose mode. Print more information about what the program
does.
• -V
Display version information and exit.
Eg is a free, open source program written in Python, and its code is freely available on GitHub. For those wondering, the name comes from the Latin phrase "exempli gratia", which literally means "for the sake of example" and is commonly known in English-speaking countries by its abbreviation, e.g.
Install Eg in Linux
Eg can be installed using the Pip package manager. If Pip is not available on your system, install it first.
After installing Pip, run the following command to install eg on your Linux system:
$ pip install eg
Display Linux commands cheatsheets using Eg
Let us start by displaying the help section of eg program. To do so, run eg without any options:
$ eg
Sample output:
usage: eg [-h] [-v] [-f CONFIG_FILE] [-e] [--examples-dir EXAMPLES_DIR]
[-c CUSTOM_DIR] [-p PAGER_CMD] [-l] [--color] [-s] [--no-color]
[program]
eg provides examples of common command usage.
positional arguments:
program The program for which to display examples.
optional arguments:
-h, --help show this help message and exit
-v, --version Display version information about eg
-f CONFIG_FILE, --config-file CONFIG_FILE
Path to the .egrc file, if it is not in the default
location.
-e, --edit Edit the custom examples for the given command. If
editor-cmd is not set in your .egrc and $VISUAL and
$EDITOR are not set, prints a message and does
nothing.
--examples-dir EXAMPLES_DIR
The location to the examples/ dir that ships with eg
-c CUSTOM_DIR, --custom-dir CUSTOM_DIR
Path to a directory containing user-defined examples.
-p PAGER_CMD, --pager-cmd PAGER_CMD
String literal that will be invoked to page output.
-l, --list Show all the programs with eg entries.
--color Colorize output.
-s, --squeeze Show fewer blank lines in output.
--no-color Do not colorize output.
You can also bring up the help section with this command:
$ eg --help
Now let us see how to view example command usage.
To display the cheatsheet of a Linux command, for example grep, run:
$ eg grep
Sample output:
grep
print all lines containing foo in input.txt
grep "foo" input.txt
print all lines matching the regex "^start" in input.txt
grep -e "^start" input.txt
print all lines containing bar by recursively searching a directory
grep -r "bar" directory
print all lines containing bar ignoring case
grep -i "bAr" input.txt
print 3 lines of context before and after each line matching "foo"
grep -C 3 "foo" input.txt
Basic Usage
Search each line in input_file for a match against pattern and print
matching lines:
grep "<pattern>" <input_file>
[...]
Before using the locate command, you should check if it is installed on your machine. The locate command comes with the GNU findutils or GNU mlocate packages. You can simply run the following command to check whether locate is installed or not.
$ which locate
If locate is not installed by default, you can run the following commands to install it.
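A hedged example; the package is usually named mlocate, but this can vary by distribution and release:
$ sudo apt install mlocate    # Debian/Ubuntu
$ sudo dnf install mlocate    # Fedora/RHEL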
Once the installation is complete, you need to run the following command to update the locate database so that file locations can be found quickly. That's why your results are faster when you use the locate command to find files in Linux.
$ sudo updatedb
The mlocate db file is located at /var/lib/mlocate/mlocate.db.
$ ls -l /var/lib/mlocate/mlocate.db
A good place to start and get to know the locate command is its man page.
$ man locate
How to Use locate Command to Find Files Faster in Linux
To search for a file, simply pass the file name as an argument to the locate command.
$ locate .bashrc
If you wish to see how many items matched instead of printing their locations, pass the -c flag.
$ sudo locate -c .bashrc
By default, the locate command performs case-sensitive searches. You can make the search case-insensitive by using the -i flag.
$ sudo locate -i file1.sh
You can limit the number of search results by using the -n flag.
$ sudo locate -n 3 .bashrc
When you delete a file and do not update the mlocate database, the deleted file will still appear in the output. You now have two options: either update the mlocate db periodically, or use the -e flag, which will skip deleted files.
$ locate -i -e file1.sh
You can check the statistics of the mlocate database by running the following command.
$ locate -S
If your db file is in a different location, you may want to use the -d flag followed by the mlocate db path and the filename to be searched for.
$ locate -d [ DB PATH ] [ FILENAME ]
Sometimes you may encounter errors; you can suppress the error messages by running the command with the -q flag.
$ locate -q [ FILENAME ]
That's it for this article. We have shown you all the basic operations you can do with the locate command. It will be a handy tool when working on the command line.
We'll be installing TigerVNC. It is an actively maintained high-performance VNC server. Type the following command to install the
package:
sudo apt install tigervnc-standalone-server
Configuring VNC Access
Once the VNC server is installed, the next step is to create the initial user configuration and set up the password.
Set the user password using the vncpasswd command. Do not use sudo when running the command below:
vncpasswd
You will be prompted to enter and confirm the password and whether to set it as a view-only password. If you choose to set up a
view-only password, the user will not be able to interact with the VNC instance with the mouse and the keyboard.
Password:
Verify:
Would you like to enter a view-only password (y/n)? n
The password file is stored in the ~/.vnc directory, which is created if not present.
Next, we need to configure TigerVNC to use Xfce. To do so, create the following file:
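The file contents aren't shown above; a minimal sketch of a ~/.vnc/xstartup that launches Xfce (the file name and session command are assumptions) would be:
#!/bin/sh
unset SESSION_MANAGER
unset DBUS_SESSION_BUS_ADDRESS
exec startxfce4
Make the file executable and start the server (again, a hedged sketch):
chmod u+x ~/.vnc/xstartup
vncserver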
New 'server2.linuxize.com:1 (linuxize)' desktop at :1 on machine server2.linuxize.com
Starting applications specified in /home/linuxize/.vnc/xstartup
Log file is /home/linuxize/.vnc/server2.linuxize.com:1.log
Use xtigervncviewer -SecurityTypes VncAuth -passwd /home/linuxize/.vnc/passwd :1 to connect to the VNC server.
Note the :1 after the hostname in the output above. This indicates the number of the display port on which the VNC server is running. In this example, the server is running on TCP port 5901 (5900+1).
If you create a second instance with vncserver, it will run on the next free port, i.e. :2, which means that the server is running on port 5902 (5900+2).
What is important to remember is that when working with VNC servers, :X is a display port that refers to 5900+X.
You can get a list of all the currently running VNC sessions by typing:
vncserver -list
TigerVNC server sessions:
X DISPLAY # RFB PORT # PROCESS ID
:1 5901 5710
Before continuing with the next step, stop the VNC instance using the vncserver command with the -kill option and the server number as an argument. In this example, the server is running on port 5901 (:1), so we'll stop it with:
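A hedged example, matching the session shown above:
vncserver -kill :1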
The number 1 after the @ sign defines the display port on which the VNC service will run. This means that the VNC server will listen on port 5901, as we discussed in the previous section.
● vncserver@1.service - Remote desktop service (VNC)
Loaded: loaded (/etc/systemd/system/vncserver@.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2021-03-26 20:00:59 UTC; 3s ago
...
Connecting to VNC server
VNC is not an encrypted protocol and can be subject to packet sniffing. The recommended approach is to create an SSH tunnel and securely forward traffic from your local machine on port 5901 to the server on the same port.
Set Up SSH Tunneling on Linux and macOS
If you run Linux, macOS, or any other Unix-based operating system on your machine, you can easily create an SSH tunnel with the
following command:
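A hedged example; username and server_ip are placeholders for your own values:
ssh -L 5901:127.0.0.1:5901 -N -f -l username server_ip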
...Now, let us edit these two files at a time using Vim editor. To do so, run:
$ vim file1.txt file2.txt
Vim will display the contents of the files in order: the first file's contents are shown first, then the second file, and so on.
Edit Multiple Files Using Vim Editor
Switch between files
To move to the next file, type:
:n
Switch between files in Vim editor
To go back to previous file, type:
:N
Here, N is capital (Type SHIFT+n).
Start editing the files the way you normally do with Vim. Press 'i' to switch to insert mode and modify the contents as you like. Once done, press ESC to go back to normal mode.
Vim won't allow you to move to the next file if there are any unsaved changes. To save the
changes in the current file, type:
ZZ
Please note that it is double capital letters ZZ (SHIFT+zz).
To abandon the changes and move to the previous file, type:
:N!
To view the files which are being currently edited, type:
:buffers
View files in buffer in VIm
You will see the list of loaded files at the bottom.
List of files in buffer in Vim
To switch to a particular file, type :buffer followed by the buffer number. For example, to switch to the first file, type:
:buffer 1
Or, just do:
:b 1
Switch to next file in Vim
Just remember these commands to easily switch between buffers:
:bf # Go to first file.
:bl # Go to last file
:bn # Go to next file.
:bp # Go to previous file.
:b number # Go to n'th file (E.g :b 2)
:bw # Close current file.
Opening additional files for editing
We are currently editing two files namely file1.txt, file2.txt. You might want to open
another file named file3.txt for editing. What will you do? It's easy! Just type :e followed by
the file name like below.
:e file3.txt
Open additional files for editing in Vim
Now you can edit file3.txt.
To view how many files are being edited currently, type:
:buffers
View all files in buffers in Vim
Please note that you cannot use :n or :N to switch between files opened with :e. To switch to another file, type :buffer followed by the file's buffer number.
Copying contents of one file into another
You know how to open and edit multiple files at the same time. Sometimes, you might want to
copy the contents of one file into another. It is possible too. Switch to a file of your
choice. For example, let us say you want to copy the contents of file1.txt into file2.txt.
To do so, first switch to file1.txt:
:buffer 1
Move the cursor to the line you want to copy and type yy to yank (copy) it. Then, move to file2.txt:
:buffer 2
Place the cursor where you want to paste the copied line from file1.txt and type p. For example, to paste the copied line between line2 and line3, put the cursor on line2 and type p.
Sample output:
line1
line2
ostechnix
line3
line4
line5
Copying contents of one file into another file using Vim
To save the changes made in the current file, type:
ZZ
Again, please note that this is double capital ZZ (SHIFT+zz).
To save the changes in all files and exit the Vim editor, type:
:wq
Similarly, you can copy any line from any file to other files.
Copying entire file
contents into another
We know how to copy a single line. What about the entire file contents? That's also
possible. Let us say, you want to copy the entire contents of file1.txt into file2.txt.
To do so, open the file2.txt first:
$ vim file2.txt
If the files are already loaded, you can switch to file2.txt by typing:
:buffer 2
Move the cursor to the place where you wanted to copy the contents of file1.txt. I want to
copy the contents of file1.txt after line5 in file2.txt, so I moved the cursor to line 5. Then,
type the following command and hit ENTER key:
:r file1.txt
Copying entire contents of a file into another file
Here, r means read .
Now you will see the contents of file1.txt is pasted after line5 in file2.txt.
line1
line2
line3
line4
line5
ostechnix
open source
technology
linux
unix
Copying entire file contents into another file using Vim
To save the changes in the current file, type:
ZZ
To save all changes in all loaded files and exit vim editor, type:
:wq
Method 2
Another method to open multiple files at once is to use either the -o or -O flag.
To open multiple files in horizontal windows, run:
$ vim -o file1.txt file2.txt
Open multiple files at once in Vim
To switch between windows, press CTRL-w w (i.e Press CTRL+w and again press w ). Or, use the
following shortcuts to move between windows.
CTRL-w k - top window
CTRL-w j - bottom window
To open multiple files in vertical windows, run:
$ vim -O file1.txt file2.txt file3.txt
Open multiple files in vertical windows in Vim
To switch between windows, press CTRL-w w (i.e Press CTRL+w and again press w ). Or, use the
following shortcuts to move between windows.
CTRL-w h - left window
CTRL-w l - right window
Everything else is same as described in method 1.
For example, to list currently loaded files, run:
:buffers
To switch between files:
:buffer 1
To open an additional file, type:
:e file3.txt
To copy entire contents of a file into another:
:r file1.txt
The only difference in Method 2 is that once you save the changes in the current file using ZZ, the file automatically closes itself. Also, you need to close the files one by one by typing :wq. But if you followed Method 1, typing :wq saves the changes in all files and closes them all at once.
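The command that produces the result described next isn't shown above; a hedged sketch, in the same style as Method 3 below, would be:
:1,3s/^/# /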
In this case, we are commenting out the lines from 1 to 3. Check the following screenshot.
The lines from 1 to 3 have been commented out.
Comment out multiple lines at once in vim
To uncomment those lines, run:
:1,3s/^#/
Once you're done, unset the line numbers.
:set nonumber
Let us go ahead and see third method.
Method 3:
This method is similar to the one above, with a slight difference.
Open the file in vim editor.
$ vim ostechnix.txt
Set line numbers:
:set number
Then, type the following command to comment out the lines.
:1,4s/^/# /
The above command will comment out lines from 1 to 4.
Comment out multiple lines in vim
Finally, unset the line numbers by typing the following.
:set nonumber
Method 4:
This method was suggested by one of our readers, Mr. Anand Nande, in the comment section below.
Open file in vim editor:
$ vim ostechnix.txt
Press Ctrl+V to enter 'Visual Block' mode and press the DOWN arrow key to select all the lines in your file.
Select lines in Vim
Then, press Shift+i to enter INSERT mode (this will place your cursor on the first line).
Press Shift+3 which will insert '#' before your first line.
Insert '#' before the first line in Vim
Finally, press ESC key, and you can now see all lines are commented out.
Comment out multiple lines using vim
Method 5:
This method was suggested by one of our Twitter followers and friend, Mr. Tim Chase.
We can even target lines to comment out by regex. Open the file in vim editor.
$ vim ostechnix.txt
And type the following:
:g/Linux/s/^/# /
The above command will comment out all lines that contain the word "Linux".
Comment out all lines that contains a specific word in Vim
And, that's all for now. I hope this helps. If you know of any easier method than the ones given here, please let me know in the comment section below. I will check and add them to the guide. Also, have a look at the comment section below; one of our visitors has shared a good guide about Vim usage.
NUNY3 November 23, 2017 - 8:46 pm
If you want to be productive in Vim you need to talk to Vim in the *language* Vim uses. Every solution that leaves "normal mode" is most probably not the most effective.
METHOD 1
Using "normal mode". For example comment first three lines with: I#j.j.
This is strange isn't it, but:
I –> capital I jumps to the beginning of row and gets into insert mode
# –> type actual comment character
<ESC> –> exit insert mode and gets back to normal mode
j –> move down a line
. –> repeat last command. Last command was: I#
j –> move down a line
. –> repeat last command. Last command was: I#
You get it: after you execute the command once, you just repeat the j. combination for the lines you would like to comment out.
METHOD 2
There is "command line mode" command to execute "normal mode" command.
Example: :%norm I#
Explanation:
% –> whole file (you can also use range if you like: 1,3 to do only for first three
lines).
norm –> (short for normal)
I –> is normal command I that is, jump to the first character in line and execute
insert
# –> insert actual character
You get it: for each line in the range you select, the normal mode command is executed.
METHOD 3
This is the method I love the most, because it follows the "I am talking to Vim in Vim's language" principle.
This is by using extension (plug-in, add-in): https://github.com/tomtom/tcomment_vim
extension.
How to use it? In NORMAL MODE of course to be efficient. Use: gc+action.
Examples:
gcap –> comment a paragraph
gcj –> comment current line and line below
gc3j –> comment current line and 3 lines below
gcgg –> comment current line and all the lines including first line in file
gcG –> comment current line and all the lines including last line in file
gcc –> shortcut for comment a current line
You name it, it has all sorts of combinations. Remember, you have to talk to Vim to use it properly and efficiently.
Yes sure it also works with "visual mode", so you use it like: V select the lines you would
like to mark and execute: gc
You see, if I want to impress a friend I use the gc+action combination, because I always get: "What? How did you do it?" My answer: it is Vim, you need to talk with the text editor, not use a dummy mouse and repeat actions.
NOTE: Please stop telling people to use the DOWN arrow key. Start using the h, j, k and l keys to move around. These keys are on the typist's home row. The DOWN, UP, LEFT and RIGHT keys are a bad habit used by beginners. It is very inefficient; you have to move your hand from the home row to the arrow keys.
VERY IMPORTANT: Do you want a one million dollar tip for using Vim? Start using Vim the way it was designed to be used: normal mode. Use its language: verbs, nouns, adverbs and adjectives. Interested in what I am talking about? You should be, if you are serious about using
Vim. Read this one million dollar answer on forum:
https://stackoverflow.com/questions/1218390/what-is-your-most-productive-shortcut-with-vim/1220118#1220118
MDEBUSK November 26, 2019 - 7:07 am
I've tried the "boxes" utility with vim and it can be a lot of fun.
The idea was that sharing this would inspire others to improve their bashrc savviness. Take
a look at what our Sudoers group shared and, please, borrow anything you like to make your
sysadmin life easier.
# Require confirmation before overwriting target files. This setting keeps me from deleting things I didn't expect to, etc
alias cp='cp -i'
alias mv='mv -i'
alias rm='rm -i'
# Add color, formatting, etc to ls without re-typing a bunch of options every time
alias ll='ls -alhF'
alias ls="ls --color"
# So I don't need to remember the options to tar every time
alias untar='tar xzvf'
alias tarup='tar czvf'
# Changing the default editor, I'm sure a bunch of people have this so they don't get dropped into vi instead of vim, etc. A lot of distributions have system default overrides for these, but I don't like relying on that being around
alias vim='nvim'
alias vi='nvim'
# Easy copy the content of a file without using cat / selecting it etc. It requires xclip to be installed
# Example: _cp /etc/dnsmasq.conf
_cp()
{
local file="$1"
local st=1
if [[ -f $file ]]; then
cat "$file" | xclip -selection clipboard
st=$?
else
printf '%s\n' "Make sure you are copying the content of a file" >&2
fi
return $st
}
# This is the function to paste the content. The content is now in your buffer.
# Example: _paste
_paste()
{
xclip -selection clipboard -o
}
# Generate a random password without installing any external tooling
genpw()
{
alphanum=( {a..z} {A..Z} {0..9} ); for ((i=0; i<${#alphanum[@]}; i++)); do printf '%s' "${alphanum[@]:$((RANDOM % ${#alphanum[@]})):1}"; done; echo
}
# See what command you are using the most (this parses the history command)
cm() {
history | awk ' { a[$4]++ } END { for ( i in a ) print a[i], i | "sort -rn | head -n10"}' | awk '$1 > max{ max=$1} { bar=""; i=s=10*$1/max;while(i-->0)bar=bar"#"; printf "%25s %15d %s %s", $2, $1,bar, "\n"; }'
}
alias vim='nvim'
alias l='ls -CF --color=always'
alias cd='cd -P' # follow symlinks
alias gits='git status'
alias gitu='git remote update'
alias gitum='git reset --hard upstream/master'
I don't know who I need to thank for this, some awesome woman on Twitter whose name I no
longer remember, but it's changed the organization of my bash aliases and commands
completely.
I have Ansible drop individual <something>.bashrc files into ~/.bashrc.d/
with any alias or command or shortcut I want, related to any particular technology or Ansible
role, and can manage them all separately per host. It's been the best single trick I've learned
for .bashrc files ever.
Git stuff gets a ~/.bashrc.d/git.bashrc , Kubernetes goes in
~/.bashrc.d/kube.bashrc .
if [ -d ${HOME}/.bashrc.d ]
then
for file in ~/.bashrc.d/*.bashrc
do
source "${file}"
done
fi
These aren't bashrc aliases, but I use them all the time. I wrote a little script named
clean for getting rid of excess lines in files. For example, here's
nsswitch.conf with lots of comments and blank lines:
[pgervase@pgervase etc]$ head authselect/nsswitch.conf
# Generated by authselect on Sun Dec 6 22:12:26 2020
# Do not modify this file manually.
# If you want to make changes to nsswitch.conf please modify
# /etc/authselect/user-nsswitch.conf and run 'authselect apply-changes'.
#
# Note that your changes may not be applied as they may be
# overwritten by selected profile. Maps set in the authselect
# profile always take precedence and overwrites the same maps
# set in the user file. Only maps that are not set by the profile
[pgervase@pgervase etc]$ wc -l authselect/nsswitch.conf
80 authselect/nsswitch.conf
[pgervase@pgervase etc]$ clean authselect/nsswitch.conf
passwd: sss files systemd
group: sss files systemd
netgroup: sss files
automount: sss files
services: sss files
shadow: files sss
hosts: files dns myhostname
bootparams: files
ethers: files
netmasks: files
networks: files
protocols: files
rpc: files
publickey: files
aliases: files
[pgervase@pgervase etc]$ cat `which clean`
#! /bin/bash
#
/bin/cat $1 | /bin/sed 's/^[ \t]*//' | /bin/grep -v -e "^#" -e "^;" -e "^[[:space:]]*$" -e "^[ \t]+"
If navigating a network through IP addresses and hostnames is confusing, or if you don't
like the idea of opening a folder for sharing and forgetting that it's open for perusal, then
you might prefer Snapdrop
. This is an open source project that you can run yourself or use the demonstration instance on
the internet to connect computers through WebRTC. WebRTC enables peer-to-peer connections
through a web browser, meaning that two users on the same network can find each other by
navigating to Snapdrop and then communicate with each other directly, without going through an
external server.
Once two or more clients have contacted a Snapdrop service, users can trade files and chat
messages back and forth, right over the local network. The transfer is fast, and your data
stays local.
When you call date with the +%s option, it shows the current system clock in seconds since 1970-01-01 00:00:00 UTC. Thus, with this option, you can easily calculate the time difference in seconds between two clock measurements.
start_time=$(date +%s)
# perform a task
end_time=$(date +%s)
# elapsed time with second resolution
elapsed=$(( end_time - start_time ))
Another (preferred) way to measure elapsed time in seconds in bash is to use a built-in bash
variable called SECONDS . When you access SECONDS variable in a bash
shell, it returns the number of seconds that have passed so far since the current shell was
launched. Since this method does not require running the external date command in
a subshell, it is a more elegant solution.
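A minimal sketch of the SECONDS approach (the echo at the end is just for illustration):
SECONDS=0
# perform a task
elapsed=$SECONDS
echo "Elapsed time: ${elapsed} seconds"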
This will display elapsed time in terms of the number of seconds. If you want a more
human-readable format, you can convert $elapsed output as follows.
eval "echo Elapsed time: $(date -ud "@$elapsed" +'$((%s/3600/24)) days %H hr %M min %S sec')"
The /var directory has filled up and you are left with no free disk space available. This is a typical scenario which can be easily fixed by mounting your /var directory on a different partition. Let's get started by attaching new storage, partitioning, and creating the desired file system. The exact steps may vary and are not part of this config article. Once ready, obtain the partition UUID of your new var partition, e.g. /dev/sdc1:
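A hedged example; the device name is illustrative:
$ sudo blkid /dev/sdc1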
Reboot your system and you are done. Confirm that everything is working correctly and optionally remove the old var directory by booting into a live Linux system.
I have two drives on my computer that have the following configuration:
Drive 1: 160GB, /home
Drive 2: 40GB, /boot and /
Unfortunately, drive 2 seems to be dying, because trying to write to it is giving me
errors, and checking out the SMART settings shows a sad state of affairs.
I have plenty of space on Drive 1, so what I'd like to do is move the / and /boot
partitions to it, remove Drive 2 from the system, replace Drive 2 with a new drive, then
reverse the process.
I imagine I need to do some updating to grub, and I need to move some things around, but
I'm pretty baffled how to exactly go about this. Since this is my main computer, I want to be
careful not to mess things up so I can't boot.
Asked by mlissner, Sep 1 '10 (tags: partitioning, fstab)
You'll need to boot from a live cd. Add partitions for them to disk 1, copy all the
contents over, and then use sudo blkid to get the UUID of each partition. On
disk 1's new /, edit the /etc/fstab to use the new UUIDs you just looked up.
Updating GRUB depends on whether it's GRUB1 or GRUB2. If GRUB1, you need to edit
/boot/grub/device.map
If GRUB2, I think you need to mount your partitions as they would be in a real situation.
For example:
sudo mkdir /media/root
sudo mount /dev/sda1 /media/root
sudo mount /dev/sda2 /media/root/boot
sudo mount /dev/sda3 /media/root/home
(Filling in whatever the actual partitions are that you copied things to, of course)
Then bind mount /proc and /dev in the /media/root:
sudo mount -B /proc /media/root/proc
sudo mount -B /dev /media/root/dev
sudo mount -B /sys /media/root/sys
Now chroot into the drive so you can force GRUB to update itself according to the new
layout:
sudo chroot /media/root
sudo update-grub
The second command will make one complaint (I forget what it is though...), but that's ok
to ignore.
Test it by removing the bad drive. If it doesn't work, the bad drive should still be able
to boot the system, but I believe these are all the necessary steps.
Answered by maco, Sep 1 '10 (edited by Matthew Buckett, Jun 15 '14)
FYI to anyone viewing this these days, this does not apply to EFI setups. You need to mount
/media/root/boot/efi , among other things. – wjandrea Sep 10 '16 at 7:54
sBlatt answered:
If you replace the drive right away you can use dd (tried it on my server
some months ago, and it worked like a charm).
You'll need a boot-CD for this as well.
Start boot-CD
Only mount Drive 1
Run dd if=/dev/sdb1 of=/media/drive1/backuproot.img - sdb1 being your root
( / ) partition. This will save the whole partition in a file.
same for /boot
Power off, replace disk, power on
Run dd if=/media/drive1/backuproot.img of=/dev/sdb1 - write it back.
same for /boot
The above will create 2 partitions with the exact same size as they had before. You might need to adjust grub (check maco's post).
If you want to resize your partitions (as i did):
Create 2 Partitions on the new drive (for / and /boot ; size
whatever you want)
Mount the backup-image: mount /media/drive1/backuproot.img
/media/backuproot/
Mount the empty / partition: mount /dev/sdb1
/media/sdb1/
Copy its contents to the new partition (I'm unsure about this command; it's really important to preserve ownership, and cp -R won't do it!): cp -R --preserve=all /media/backuproot/* /media/sdb1
It turns out that the new "40GB" drive I'm trying to install is smaller than my current
"40GB" drive. I have both of them connected, and I'm booted into a liveCD. Is there an easy
way to just dd from the old one to the new one, and call it a done deal? – mlissner Sep 4 '10 at 3:02
mlissner answered:
My final solution to this was a combination of a number of techniques:
I connected the dying drive and its replacement to the computer simultaneously.
The new drive was smaller than the old, so I shrank the partitions on the old using
GParted.
After doing that, I copied the partitions on the old drive, and pasted them on the new
(also using GParted).
Next, I added the boot flag to the correct partition on the new drive, so it was
effectively a mirror of the old drive.
This all worked well, but I needed to update grub2 per the instructions here .
Finally, this solved it for me. I had a Virtualbox disk (vdi file) that I needed to move to a
smaller disk. However Virtualbox does not support shrinking a vdi file, so I had to create a
new virtual disk and copy over the linux installation onto this new disk. I've spent two days
trying to get it to boot. – j.karlsson Dec 19 '19 at 9:48
This document (7018639) is provided subject to the disclaimer at the end of
this document.
Environment: SLE 11, SLE 12
Situation: The root filesystem needs to be moved to a new disk or partition.
Resolution:
1. Use the media to go into rescue mode on the system. This is the safest way
to copy data from the root disk so that it's not changing while we are copying from it. Make
sure the new disk is available.
2. Copy data at the block(a) or filesystem(b) level depending on preference from the old
disk to the new disk. NOTE: If the dd command is not being used to copy data from an entire disk to an entire disk, the partition(s) will need to be created prior to this step on the new disk so that the data can be copied from partition to partition.
a. Here is a dd command for copying at the block level (the disks do not need to be
mounted):
# dd if=/dev/<old root disk> of=/dev/<new root disk> bs=64k conv=noerror,sync
The dd command is not verbose and depending on the size of the disk could take some time to
complete. While it is running the command will look like it is just hanging. If needed, to
verify it is still running, use the ps command on another terminal window to find the dd
command's process ID and use strace to follow that PID and make sure there is activity.
# ps aux | grep dd
# strace -p<process id>
After confirming activity, hit CTRL + c to end the strace command. Once the dd command is
complete the terminal prompt will return allowing for new commands to be run.
b. Alternatively to dd, mount the disks and then use an rsync command for copying at the
filesystem level:
# mount /dev/<old root disk> /mnt
# mkdir /mnt2
(If the new disk's root partition doesn't have a filesystem yet, create it now.)
# mount /dev/<new root disk> /mnt2
# rsync -zahP /mnt/ /mnt2/
This command is much more verbose than dd and there shouldn't be any issues telling that it
is working. This does generally take longer than the dd command.
3. Setting up the partition boot label with either fdisk(a) or parted(b) NOTE: This step can be skipped if the boot partition is separate from the root partition
and has not changed. Also, if dd was used on an entire disk to an entire disk in section
"a" of step 2 you can still skip this step since the partition table will have been copied to
the new disk (If the partitions are not showing as available yet on the new disk run
"partprobe" or enter fdisk and save no changes. ). This exception does not include using dd on
only a partition.
a. Using fdisk to label the new root partition (which contains boot) as bootable.
# fdisk /dev/<new root disk>
From the fdisk shell type 'p' to list and verify the root partition is there.
Command (m for help): p
If the "Boot" column of the root partition does not have an "*" symbol then it needs to be
activated. Type 'a' to toggle the bootable partition flag: Command (m for help): a Partition
number (1-4): <number from output p for root partition>
After that use the 'p' command to verify the bootable flag is now enabled. Finally, save
changes: Command (m for help): w
b. Alternatively to fdisk, use parted to label the new root partition (which contains boot)
as bootable.
# parted /dev/sda
From the parted shell type "print" to list and verify the root partition is there.
(parted) print If the "Flags" column of the root partition doesn't include "boot" then it will
need to be enabled. (parted) set <root partition number> boot on
After that use the "print" command again to verify the flag is now listed for the root
partition. then exit parted to save the changes: (parted) quit
4. Updating Legacy GRUB(a) on SLE11 or GRUB2(b) on SLE12. NOTE: Steps 4 through 6 will need to be done in a chroot environment on the new
root disk. TID7018126 covers how to chroot in rescue mode:
https://www.suse.com/support/kb/doc?id=7018126
a. Updating Legacy GRUB on SLE11
# vim /boot/grub/menu.lst
There are two changes that may need to occur in the menu.lst file. 1. If the contents of
/boot are in the root partition which is being changed, we'll need to update the line "root
(hd#,#)" which points to the disk with the contents of /boot.
Since the sd[a-z] device names are not persistent it's recommended to find the equivalent
/dev/disk/by-id/ or /dev/disk/by-path/ disk name and to use that instead. Also, the device name
might be different in chroot than it was before chroot. Run this command to verify the disk
name in chroot: # mount
For this line Grub uses "hd[0-9]" rather than "sd[a-z]" so sda would be hd0 and sdb would be
hd1, and so on. Match to the disk as shown in the mount command within chroot. The partition
number in Legacy Grub also starts at 0. So if it were sda1 it would be hd0,0 and if it were
sdb2 it would be hd1,1. Update that line accordingly.
2. in the line starting with the word "kernel" (generally just below the root line we just
went over) there should be a root=/dev/<old root disk> parameter. That will need to be
updated to match the path and device name of the new root partition.
root=/dev/disk/by-id/<new root partition> Also, if the swap partition was changed to the
new disk you'll need to reflect that with the resume= parameter.
Save and exit after making the above changes as needed.
Next, run this command:
# yast2 bootloader
(You may get a warning message about the boot loader. This can be ignored.)
Go to the "Boot Loader Installation" tab with ALT + a. Verify it is set to boot from the
correct partition. For example, if the content of /boot is in the root partition then make sure
it is set to boot from the root partition. Lastly hit ALT + o so that it will save the
configuration. While the YaST2 module is exiting, it should also install the boot loader.
b. Updating GRUB2 on SLE12
# vim /etc/default/grub
The parameter to update is the GRUB_CMDLINE_LINUX_DEFAULT. If there is a "root=/dev/<old
root disk>" parameter update it so that it is "root=/dev/<new root disk>". If there is
no root= parameter in there add it. Each parameter is space separated so make sure there is a
space separating it from the other parameters. Also, if the swap partition was changed to the
new disk you'll need to reflect that with the resume= parameter.
Since the sd[a-z] device names are not persistent it's recommended to find the equivalent
/dev/disk/by-id/ or /dev/disk/by-path/ disk name and to use that instead. Also, the device name
might be different in chroot than it was before chroot. Run this command to verify the disk
name in chroot before comparing with by-id or by-path: # mount
It might look something like this afterward:
GRUB_CMDLINE_LINUX_DEFAULT="root=/dev/disk/by-id/<partition/disk name>
resume=/dev/disk/by-id/<partition/disk name> splash=silent quiet showopts"
After saving changes to that file, run this command to save them to the GRUB2 configuration:
# grub2-mkconfig -o /boot/grub2/grub.cfg
(You can ignore any errors about lvmetad during the output of the above command.)
After that run this command on the disk with the root partition. For example, if the root
partition is sda2 run this command on sda:
# grub2-install /dev/<disk of root partition>
5. Correct the fstab file to match new partition name(s)
# vim /etc/fstab
Correct the root (/) partition mount row in the file so that it points to the new
disk/partition name. If any other partitions were changed they will need to be updated as well.
For example, changed from: /dev/<old root disk> / ext3 defaults 1 1 to:
/dev/disk/by-id/<new root disk> / ext3 defaults 1 1
The 3rd through 6th column may vary from the example. The important aspect is to change the
row that is root (/) on the second column and adjust in particular the first column to reflect
the new root disk/partition. Save and exit after making needed changes.
6. Lastly, run the following command to rebuild the ramdisk to match the updated information:
# mkinitrd
7. Exit chroot and reboot the system to test if it will boot using the new disk. Make sure
to adjust the BIOS boot order so that the new disk is prioritized first.
Additional Information
The range of environments that can impact the necessary steps to migrate a root filesystem makes it nearly impossible to cover every case. Some environments could require tweaks in the steps needed to make this migration a success. As always in administration, have backups ready and proceed with caution.
Disclaimer
This Support Knowledgebase provides a valuable tool for SUSE customers and parties
interested in our products and solutions to acquire information, ideas and learn from one
another. Materials are provided for informational, personal or non-commercial use within your
organization and are presented "AS IS" WITHOUT WARRANTY OF ANY KIND.
How to move Linux root partition to another drive quickly
Dominik Gacek
Jun 21, 2019 · 4 min read
There's a bunch of information around the internet on how to clone Linux drives or partitions between other drives and partitions using solutions like partclone, clonezilla, partimage, dd or similar, and while most of them work just fine, they're not always the fastest possible way to achieve the result.
Today I want to show you another approach that combines most of them, and I find it the easiest and fastest of all.
Assumptions:
You are using GRUB 2 as a boot loader
You have two disks/partitions where the destination one is at least the same size as or larger than the original one.
Let's dive in into action.
Just "dd" it
First thing that we h ave to do, is to create a direct copy of our current root partition
from our source disk into our target one.
Before you start, you have to know the device names of your drives; to check them, type in:
sudo fdisk -l
You should see a list of all the disks and partitions in your system, along with the corresponding device names, most probably something like /dev/sdx, where x is replaced with the proper device letter. In addition, you'll see all of the partitions for that device suffixed with a partition number, so something like /dev/sdx1. Based on the partition size, device identifier and file system, you can tell which partition you'll switch your installation from and which one will be the target.
I am assuming here, that you already have the proper destination partition created, but if
you do not, you can utilize one of the tools like GParted or similar to create it.
Once you have those identifiers, let's use dd to create a clone, with a command similar to the one below.
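A hedged sketch; the device names are placeholders, so double-check them before running anything:
sudo dd if=/dev/sdx1 of=/dev/sdy1 bs=64K conv=noerror,sync status=progress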
Where /dev/sdx1 is your source partition, and /dev/sdy1 is your
destination one.
It's really important to provide the proper devices to the if and of arguments, because otherwise you can overwrite your source disk instead!
The above process will take a while and once it's finished you should already be able to
mount your new partition into the system by using two commands:
sudo mkdir /mnt/new
sudo mount /dev/sdy1 /mnt/new
There's also a chance that your device will be mounted automatically, but that depends on the Linux distro of choice.
Once you execute it, if everything went smoothly you should be able to run
ls -l /mnt/new
And as the outcome you should see all the files from the core partition, being stored in the
new location.
It finishes the first and most important part of the operation.
Now the tricky part
We do have our new partition moved onto the shiny new drive, but the problem is that, since they're direct clones, both of the devices will have the same UUIDs. If we want to load the installation from the new device properly, we'll have to adjust that as well.
First, execute the following command to see the current disk UUIDs:
blkid
You'll see all of the partitions with the corresponding UUID.
Now, if we want to change it we have to first generate a new one using:
uuidgen
which will generate a brand new UUID for us. Then let's copy the result and execute a command similar to the one below:
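A hedged sketch, assuming an ext-family filesystem on the target partition; substitute the UUID you just generated:
sudo tune2fs -U <uuid-from-uuidgen> /dev/sdy1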
where in place of /dev/sdy1 you should provide your target partition device
identifier, and in place of -U flag value, you should paste the value generated
from uuidgen command.
Now the last thing to do is to update the fstab file on the new partition so that it contains the proper UUID. To do this, let's edit it with:
sudo vim /etc/fstab
# or nano or whatever editor of choice
you'll see something similar to the code below inside:
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sdc1 during installation
UUID=cd6ecfb1-05e0-4dd7-89e7-8e78dad1fa0e / ext4 errors=remount-ro 0 1
# /home was on /dev/sdc2 during installation
UUID=667f98f4-9db1-415b-b326-65d16c528e29 /home ext4 defaults 0 2
/swapfile none swap sw 0 0
UUID=7AA7–10F1 /boot/efi vfat defaults 0 1
The UUID on the / line is the important part for us; what we want to do is paste our new UUID, replacing the current one specified for the / path.
And that's almost it
The last thing you have to do is simply update GRUB.
There are a number of options here, for the brave ones you can edit the
/boot/grub/grub.cfg
Another option is to simply reinstall grub into our new drive with command:
sudo grub-install /dev/sdx
And if you do not want to bother with editing or reinstalling grub manually, you can simply
use the tool called grub-customizer to have a simple and easy GUI for all of those
operations.
No doubt the old spinning hard drives are the main bottleneck of any Linux PC. Overall system responsiveness is highly dependent on storage drive performance. So, here's how you can clone an HDD to an SSD without re-installing the existing Linux distro. Now let's be clear about a few things.
As you're planning to move your existing Linux installation to an SSD, there's a good chance that the SSD has a smaller storage capacity (in GB) than the existing hard drive.
You don't need to worry about that, but you should clear up the existing hard drive as much as possible, I mean delete the junk. You should at least know which junk to exclude while copying the files; taking a backup of important files is always good.
Here we'll assume that it's not a dual boot system, and only Linux is installed on the hard drive. Read this tutorial carefully before actually cloning to the SSD; anyway, there's almost no risk of messing things up.
Of course this is not the only way to clone Linux from HDD to SSD, rather it's exactly what I did after buying an SSD for my laptop. This tutorial should work on every Linux distro with a little modification, depending on which distro you're using; I was using Ubuntu.
Hardware setup
As you're going to copy files from the hard drive to the SSD, you need to attach both disks to your PC/laptop at the same time.
For desktops it's easier, as there are always at least 2 SATA ports on the motherboard. You just have to connect the SSD to any of the free SATA ports and you're done.
On laptops it's a bit tricky, as there's no free SATA port. If the laptop has a DVD drive, you could remove it and use a "2nd hard drive caddy". It could be either 9.5 mm or 12.7 mm; open up your laptop's DVD drive and get a rough measurement.
But if you don't want to play around with your DVD drive, or there's no DVD drive at all, use a USB to SATA adapter, preferably a USB 3 adapter for better speed. However, the "caddy" is the best you can do with your laptop.
You'll need a bootable USB drive for later steps, booting any live Linux distro of your choice; I used Ubuntu. You could use any method to create it; the dd approach will be the simplest. There are detailed tutorials on creating one with MultiBootUSB, and on creating a bootable USB with GRUB.
Create Partitions on the SSD
After successfully attaching the SSD, you need to partition it according to its capacity and your choice. My SSD, a SAMSUNG 850 EVO, was absolutely blank, and yours might be too, so I had to create the partition table before creating disk partitions.
Now many questions arise, like: What kind of partition table? How many partitions? Is there any need for a swap partition?
Well, if your laptop/PC has a UEFI-based BIOS and you want to use the UEFI functionalities, you should use the GPT partition table.
For regular desktop use, 2 separate partitions are enough: a root partition and a home partition. But if you want to boot through UEFI, then you also need to create a FAT32 partition of 100 MB or more.
I think a 32 GB root partition is just enough, but you have to decide yours depending on future plans. However, you can go with a root partition as small as 8 GB if you know what you're doing.
Of course you don't need a dedicated swap partition, at least in my opinion. If there's any need for swap in the future, you can just create a swap file.
So, here's how I partitioned the disk. It's formatted with the MBR partition table: a 32 GB root partition, and the rest of the 256 GB (232.89 GiB) is home. These SSD partitions were created with GParted on the existing Linux system on the HDD. The SSD was connected to the DVD drive slot with a "caddy", showing as /dev/sdb here.
Mount the HDD and SSD partitions
At the beginning of this step, you need to shut down your PC and boot into any live Linux distro of your choice from a bootable USB drive.
The purpose of booting into a live Linux session is to copy everything from the old root partition in a cleaner way. I mean, why copy unnecessary files or directories under /dev, /proc, /sys, /var, /tmp?
And of course you know how to boot from a USB drive, so I'm not going to repeat the same thing. After booting into the live session, you have to mount both the HDD and the SSD.
As I used an Ubuntu live session, I just opened up the file manager to mount the volumes. At this point you have to be absolutely sure about which are the old and new root and home partitions.
And if you didn't have a separate /home partition on the HDD previously, you have to be careful while copying files, as there could be lots of content that won't fit inside the tiny root volume of the SSD in this case.
Finally, if you don't want to use a graphical tool like a file manager to mount the disk partitions, that's even better. An example is below; only commands, not much explanation.
sudo -i # after booting to the live session
mkdir -p /mnt/{root1,root2,home1,home2} # Create the directories
mount /dev/sdb1 /mnt/root1/ # mount the root partitions
mount /dev/sdc1 /mnt/root2/
mount /dev/sdb2 /mnt/home1/ # mount the home partitions
mount /dev/sdc2 /mnt/home2/
Copy contents from the HDD to the SSD
In this step, we'll be using the rsync command to clone the HDD to the SSD while preserving proper file permissions. And we'll assume that all the partitions are mounted as below:
Old root partition of the hard drive mounted on /media/ubuntu/root/
Old home partition of the hard drive on /media/ubuntu/home/
New root partition of the SSD on /media/ubuntu/root1/
New home partition of the SSD mounted on /media/ubuntu/home1/
Actually in my case, both the root and home partitions were labelled as root and home, so udisk2 created the mount directories
like above.
Note: Most probably your mount points are different. Don't just copy-paste the commands below; modify them according to your
system and requirements.
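A representative command for copying the root partition (the exact rsync flags here are my sketch, not necessarily the author's; -a preserves permissions and ownership, -x keeps rsync on one filesystem, and --info=progress2 needs rsync 3.1 or newer; the trailing slashes matter):
rsync -avxHAX --info=progress2 /media/ubuntu/root/ /media/ubuntu/root1/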
You can also see the transfer progress, which is helpful. The copying process will take about 10 minutes or so to complete,
depending on the size of its contents.
Note: If there was no separate home partition on your previous installation and there's not enough space in the SSD's root
partition, exclude the /home directory.
Now copy the contents of one home partition to the other; this is a bit tricky if your SSD is smaller than the HDD. You have
to use the --exclude flag with rsync to leave out certain large files or folders.
So here, as an example, I wanted to exclude a few excessively large folders.
Excluding files and folders with rsync is a bit confusing at first: every exclude path is interpreted relative to the source
folder, so make sure each exclude path is written from that starting point.
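A sketch of such a command (the excluded folder names are placeholders for whatever is too large to fit; note they are relative to /media/ubuntu/home/):
rsync -avxHAX --info=progress2 --exclude='someuser/Videos' --exclude='someuser/.cache' /media/ubuntu/home/ /media/ubuntu/home1/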
Hope you've got the point: for a proper HDD to SSD cloning in Linux, copy the contents of the HDD's root partition to the new
SSD's root partition, and do the same thing for the home partition too.
Install GRUB
bootloader on the SSD
The SSD won't boot until there's a properly configured bootloader, and there's a very good chance you were using GRUB as
the bootloader.
So, to install GRUB, we have to chroot into the root partition of the SSD and install it from there. Before that, be sure
which device under the /dev directory is your SSD. In my case, it was /dev/sdb.
Note: You could just copy the first 512 bytes from the HDD and dump them to the SSD, but I'm not going that way this time.
So, the first step is chrooting; here are all the commands below, all of them run as the superuser.
sudo -i # login as super user
mount -o bind /dev/ /media/ubuntu/root1/dev/
mount -o bind /dev/pts/ /media/ubuntu/root1/dev/pts/
mount -o bind /sys/ /media/ubuntu/root1/sys/
mount -o bind /proc/ /media/ubuntu/root1/proc/
chroot /media/ubuntu/root1/
After successfully chrooting to the SSD's root partition, install GRUB. There's also a catch: if you want a UEFI-compatible
GRUB, that's another, longer path. We'll be installing the legacy BIOS version of GRUB here.
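Inside the chroot, for a legacy BIOS setup, that boils down to something like this (with /dev/sdb being the SSD, as noted earlier; double-check the device name on your system):
grub-install /dev/sdb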
If
GRUB is installed without any problem, then update the configuration file.
update-grub
These two commands above are to be run inside the chroot, and don't exit from the chroot yet. Here's the detailed GRUB rescue
tutorial, for both legacy BIOS and UEFI systems.
Update the fstab entry
You have to update the fstab entries properly so that the filesystems are mounted correctly while booting.
Use the blkid command to find the proper UUIDs of the partitions.
Now open up the /etc/fstab file with your favorite text editor and add the proper root and home UUIDs in the proper places.
nano /etc/fstab
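A rough sketch of what the entries end up looking like (the UUIDs below are placeholders; use the values blkid reported for the SSD's partitions):
UUID=1111aaaa-2222-bbbb-3333-ccccdddd4444  /      ext4  errors=remount-ro  0  1
UUID=5555eeee-6666-ffff-7777-88889999aaaa  /home  ext4  defaults           0  2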
The final fstab on my laptop's Ubuntu installation had entries of exactly this shape, just with its own UUIDs.
Shutdown and boot from
the SSD
If you were using a USB-to-SATA converter to do all the above steps, then it's time to connect the SSD to a SATA port.
For desktops this is not a problem; just connect the SSD to any available SATA port. But many laptops refuse to boot if
the DVD drive is replaced with an SSD or HDD, so in that case remove the hard drive and slip the SSD into its place.
After doing all the hardware work, it's better to check whether the SSD is recognized by the BIOS/UEFI at all. Hit the BIOS setup
key while powering the machine up, and check all the disks.
If the SSD is detected, set it as the default boot device. Save all the changes to the BIOS/UEFI and hit the power button again.
Now it's the moment of truth: if the HDD to SSD cloning was done right, Linux should boot. It will boot much faster than
before, which you can check with the systemd-analyze command.
Conclusion
As said before, this is neither the only way nor a perfect one, but it was pretty simple for me. I got the idea from the
OpenWrt extroot setup, though there I previously used the squashfs tools instead of rsync.
It took around 20 minutes to clone my HDD to the SSD. Writing this tutorial, though, took around 15 times longer than that.
Hope I'll be able to add the GRUB installation process for UEFI-based systems to this tutorial soon, so stay tuned!
Also, please don't forget to share your thoughts and suggestions in the comment section.
Your comments
Sh3l
says
December 21, 2020
Hello,
It seems you haven't gotten around writing that UEFI based article yet. But right now I really need the steps necessary
to clone hdd to ssd in UEFI based system. Can you please let me know how to do it?
Reply
Create an extra UEFI partition, along with root and home partitions, FAT32, 100 to 200 MB, install GRUB in UEFI mode,
it should boot.
Commands should be like this -
mount
/dev/sda2 /boot/efi
grub-install /dev/sda --target=x86_64-efi
Then edit the grub.cfg file under /boot/grub/ , you're good to go.
If it's not booting try GRUB rescue, boot and install grub from there.
Reply
Pronay
Guha
says
November 9, 2020
I'm already using Kubuntu 20.04, and now I'm trying to add an SSD to my laptop. It is running windows alongside. I want
the data to be there but instead of using HDD, the Kubuntu OS should use SSD. How to do it?
Reply
none
says
May 23, 2020
Can you explain what to do if the original HDD has Swap and you don't want it on the SSD?
Thanks.
Reply
You can ignore the Swap partition, as it's not essential for booting.
Edit the /etc/fstab file, and use a swap file instead.
Reply
none
says
May 21, 2020
A couple of problems:
In one section you mount homeS and rootS as root1 root2 home1 home2 but in the next sectionS you call them root root1
home home1
In the blkid image sda is SSD and sdb is HDD but you said in the previous paragraph that sdb is your SSD
Thanks for the guide
Reply
The first portion is just an example, not the actual commands.
There's some confusing paragraphs and formatting error, I agree.
Reply
oybek
says
April 21, 2020
Thank you very much for the article
Yesterday moved linux from hdd to ssd without any problem
Brilliant article
Reply
Pronay
Guha
says
November 9, 2020
hey, I'm trying to move Linux from HDD to SSD with windows as a dual boot option.
What changes should I do?
Reply
Passingby
says
March 25, 2020
Thank you for your article. It was very helpful. But i see one disadvantage. When you copy like cp -a
/media/ubuntu/root/ /media/ubuntu/root1/ In root1 will be created root folder, but not all its content separately
without folder. To avoid this you must add (*) after /
It should be looked like cp -a /media/ubuntu/root/* /media/ubuntu/root1/ For my opinion rsync command is much more
better. You see like files copping. And when i used cp, i did not understand the process hanged up or not.
Reply
Thanks for pointing out the typo.
Yeas, rsync is better.
Reply
David
Keith
says
December 8, 2018
Just a quick note: rsync, scp, cp etc. all seem to have a file size limitation of approximately 100GB. So this tutorial
will work well with the average filesystem, but will bomb repeatedly if the file size is extremely large.
Reply
oldunixguy
says
June 23, 2018
Question: If one doesn't need to exclude anything why not use "cp -a" instead of rsync?
Question: You say "use a UEFI compatible GRUB, then it's another long path" but you don't tell us how to do this for
UEFI. How do we do it?
Reply
You're most welcome, truly I don't know how to respond such a praise. Thanks!
Reply
Emmanuel
says
February 3, 2018
Far the best tutorial I've found "quickly" searching DuckDuckGo. Planning to migrate my system on early 2018. Thank you!
I now visualize quite clearly the different steps I'll have to adapt and pass through. it also stick to the KISS* thank
you again, the time you invested is very useful, at least for me!
Author: Vivek Gite. Last updated: March 14, 2006. 58 comments.
/dev/shm is nothing but an implementation of the traditional shared memory concept. It is an
efficient means of passing data between programs. One program will create a memory portion, which other processes (if
permitted) can access. This results in speeding things up on Linux.
shm / shmfs is also known as tmpfs, which is a common name for a temporary file storage facility on many Unix-like operating
systems. It is intended to appear as a mounted file system, but one which uses virtual memory instead of a persistent storage
device.
If you type the mount command you will see /dev/shm listed as a tmpfs file system. Therefore, it is a file system which keeps all
files in virtual memory. Everything in tmpfs is temporary in the sense that no files will be created on your hard drive. If
you unmount a tmpfs instance, everything stored therein is lost. By default almost all Linux distros are configured to use /dev/shm:
$ df -h
Sample outputs:
You can use /dev/shm to improve the performance of application software such as Oracle, or overall Linux system performance. On
a heavily loaded system, it can make tons of difference. For example, VMware Workstation/Server can be tuned to improve your
Linux host's performance (i.e. improve the performance of your virtual machines).
In this example, remount /dev/shm with 8G size as follows:
# mount -o remount,size=8G /dev/shm
To be frank, if you have more than 2GB RAM plus multiple virtual machines, this hack always improves performance. In this
example, you will get a tmpfs instance on /disk2/tmpfs which can allocate 5GB of RAM/swap with 5K inodes and is only
accessible by root:
# mount -t tmpfs -o size=5G,nr_inodes=5k,mode=700 tmpfs /disk2/tmpfs
Where,
-o opt1,opt2
: Pass various options with a -o
flag followed by a comma separated string of options. In this examples, I used the following options:
remount
: Attempt to remount an
already-mounted filesystem. In this example, remount the system and increase its size.
size=8G or size=5G
: Override default
maximum size of the /dev/shm filesystem. The size is given in bytes, and rounded up to entire pages. The default is half
of the memory. The size parameter also accepts a suffix % to limit this tmpfs instance to that percentage of your
physical RAM: the default, when neither size nor nr_blocks is specified, is size=50%. In this example it is set to 8GiB
or 5GiB. The tmpfs mount options for sizing ( size, nr_blocks, and nr_inodes) accept a suffix k, m or g for Ki, Mi, Gi
(binary kilo, mega and giga) and can be changed on remount.
nr_inodes=5k
: The maximum number of
inodes for this instance. The default is half of the number of your physical RAM pages, or (on a machine with highmem)
the number of lowmem RAM pages, whichever is the lower.
mode=700
: Set initial permissions of the
root directory.
tmpfs
: Tmpfs is a file system which keeps
all files in virtual memory.
How do I restrict or modify size of /dev/shm permanently?
You need to add or modify the /dev/shm entry in the /etc/fstab file so that the system reads it after a reboot. Edit /etc/fstab as the root
user, enter:
# vi /etc/fstab
Append or modify /dev/shm entry as follows to set size to 8G
none /dev/shm tmpfs defaults,size=8G 0 0
Save and close the file. For the changes to take effect immediately remount /dev/shm:
# mount -o remount /dev/shm
Verify the same:
# df -h
The root user's home directory is /root. I would like to relocate this, and any other user's
home directories to a new location, perhaps on sda9. How do I go about this?
– asked by nicholas.alipaz, Nov 30 '10
Do you need to have /root on a separate partition, or would it be enough to simply copy
the contents somewhere else and set up a symbolic link? (Disclaimer: I've never tried this,
but it should work.) – SmallClanger Nov 30 '10 at 17:31
You should avoid symlinks, it can make nasty bugs to appear... one day. And very hard to
debug.
Use mount --bind :
# as root
cp -a /root /home/
echo "" >> /etc/fstab
echo "/home/root /root none defaults,bind 0 0" >> /etc/fstab
# do it now
cd / ; mv /root /root.old; mkdir /root; mount -a
it will be made at every reboots which you should do now if you want to catch errors soon
– answered by shellholic, Nov 30 '10
You're welcome. But remember moving /root is a bad practice. Perhaps you
could change a bit and make /home/bigrootfiles and mount/link it to some
directory inside /root . If your "big files" are for some service. The best
practice on Debian is to put them in /var/lib/somename – shellholic Nov 30 '10 at
18:40
I see. Ultimately root login should not be used IMO. I guess I still might forgo moving
/root entirely since it is not really very good to do. I just need to setup some new sudoer
users with directories on the right partition and setup keyed authentication for better
security. That would be the best solution I think. – nicholas.alipaz Nov 30 '10 at
18:42
Perhaps make a new question describing the purpose of your case and you could come with
great answers. – shellholic Nov 30 '10 at 18:45
Never tried it, but you shouldn't have a problem with:
cd /                    # make sure you're not in the directory to be moved
mv /root /home/root
ln -s /home/root /root  # symlink it back to the original location
– answered by James L, Nov 30 '10
booting from a live cd is unfortunately not an option for a remote server, which is the case here. – nicholas.alipaz Nov 30 '10 at
17:54
I think that worked in the past - if you do update-grub and grub-install at the end.
However, with debian 10 grub sends me back to have my old partition as the root. –
user855443 Jun 11
'20 at 22:10
The dmesg command is used to print the kernel's message buffer. This is another
important command that you cannot work without. It is much easier to troubleshoot a system when
you can see what is going on, and what happened behind the scenes.
Another example from real life: You are troubleshooting an issue and find out that one file
system is at 100 percent of its capacity.
There may be many subdirectories and files in production, so you may have to come up with
some way to classify the "worst directories" because the problem (or solution) could be in one
or more.
In the next example, I will show a very simple scenario to illustrate the point.
We go to the file system where the disk space is low (I used my home directory as an
example).
Then, we use the command du -sk * to show the sizes of directories in
kilobytes.
That requires some classification for us to find the big ones, but just sort
is not enough because, by default, this command will not treat the numbers as values but just
characters.
We add -n to the sort command, which now shows us the biggest
directories.
In case we have to navigate to many other directories, creating an alias
might be useful.
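A sketch of the whole idea (the alias name is mine, not from the original article):
$ cd ~                                          # go to the filesystem that is filling up
$ du -sk * | sort -n                            # directory sizes in KB, biggest last
$ alias ducks='du -sk * | sort -n | tail -10'   # reusable shortcut for the worst offenders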
$ colordiff attendance-2020 attendance-2021
10,12c10
< Monroe Landry
< Jonathan Moody
< Donnell Moore
---
> Sandra Henry-Stocker
If you add a -u option, those lines that are included in both files will appear in your
normal font color.
wdiff
The wdiff command uses a different strategy. It highlights the lines that are only in the
first or second files using special characters. Those surrounded by square brackets are only in
the first file. Those surrounded by braces are only in the second file.
$ wdiff attendance-2020 attendance-2021
Alfreda Branch
Hans Burris
Felix Burt
Ray Campos
Juliet Chan
Denver Cunningham
Tristan Day
Kent Farmer
Terrie Harrington
[-Monroe Landry <== lines in file 1 start
Jonathon Moody
Donnell Moore-] <== lines only in file 1 stop
{+Sandra Henry-Stocker+} <== line only in file 2
Leanne Park
Alfredo Potter
Felipe Rush
vimdiff
The vimdiff command takes an entirely different approach. It uses the vim editor to open the
files in a side-by-side fashion. It then highlights the lines that are different using
background colors and allows you to edit the two files and save each of them separately.
Unlike the commands described above, it runs on the desktop, not in a terminal
window.
On Debian systems, you can install vimdiff with this command:
$ sudo apt install vim
kompare
The kompare command, like vimdiff , runs on your desktop. It displays differences between
files to be viewed and merged and is often used by programmers to see and manage differences in
their code. It can compare files or folders. It's also quite customizable.
The kdiff3 tool allows you to compare up to three files and not only see the differences
highlighted, but merge the files as you see fit. This tool is often used to manage changes and
updates in program code.
Like vimdiff and kompare , kdiff3 runs on the desktop.
You can find more information on kdiff3 at sourceforge .
Tags provide an easy way to associate strings that look like hash tags (e.g., #HOME ) with
commands that you run on the command line. Once a tag is established, you can rerun the
associated command without having to retype it. Instead, you simply type the tag. The idea is
to use tags that are easy to remember for commands that are complex or bothersome to
retype.
Unlike setting up an alias, tags are associated with your command history. For this reason,
they only remain available if you keep using them. Once you stop using a tag, it will slowly
disappear from your command history file. Of course, for most of us, that means we can type 500
or 1,000 commands before this happens. So, tags are a good way to rerun commands that are going
to be useful for some period of time, but not for those that you want to have available
permanently.
To set up a tag, type a command and then add your tag at the end of it. The tag must start
with a # sign and should be followed immediately by a string of letters. This keeps the tag
from being treated as part of the command itself. Instead, it's handled as a comment but is
still included in your command history file. Here's a very simple and not particularly useful
example:
$ history | grep TAG
998 08/11/20 08:28:29 echo "I like tags" #TAG <==
999 08/11/20 08:28:34 history | grep TAG
Afterwards, you can rerun the echo command shown by entering !? followed by the tag.
$ !? #TAG
echo "I like tags" #TAG
"I like tags"
The point is that you will likely only want to do this when the command you want to run
repeatedly is so complex that it's hard to remember or just annoying to type repeatedly. To
list your most recently updated files, for example, you might use a tag #REC (for "recent") and
associate it with the appropriate ls command. The command below lists files in your home
directory regardless of where you are currently positioned in the file system, lists them in
reverse date order, and displays only the five most recently created or changed files.
$ ls -ltr ~ | tail -5 #REC <== Associate the tag with a command
drwxrwxr-x 2 shs shs 4096 Oct 26 06:13 PNGs
-rw-rw-r-- 1 shs shs 21 Oct 27 16:26 answers
-rwx------ 1 shs shs 644 Oct 29 17:29 update_user
-rw-rw-r-- 1 shs shs 242528 Nov 1 15:54 my.log
-rw-rw-r-- 1 shs shs 266296 Nov 5 18:39 political_map.jpg
$ !? #REC <== Run the command that the tag is associated with
ls -ltr ~ | tail -5 #REC
drwxrwxr-x 2 shs shs 4096 Oct 26 06:13 PNGs
-rw-rw-r-- 1 shs shs 21 Oct 27 16:26 answers
-rwx------ 1 shs shs 644 Oct 29 17:29 update_user
-rw-rw-r-- 1 shs shs 242528 Nov 1 15:54 my.log
-rw-rw-r-- 1 shs shs 266296 Nov 5 18:39 political_map.jpg
You can also rerun tagged commands using Ctrl-r (hold Ctrl key and press the "r" key) and
then typing your tag (e.g., #REC). In fact, if you are only using one tag, just typing # after
Ctrl-r should bring it up for you. The Ctrl-r sequence, like !? , searches through your command
history for the string that you enter.
Tagging locations
Some people use tags to remember particular file system locations, making it easier to
return to directories they're working in without having to type complete directory
paths.
$ cd /apps/data/stats/2020/11 #NOV
$ cat stats
$ cd
!? #NOV <== takes you back to /apps/data/stats/2020/11
After using the #NOV tag as shown, whenever you need to move into the directory associated
with #NOV , you have a quick way to do so – and one that doesn't require that you think
too much about where the data files are stored.
NOTE: Tags don't need to be in all uppercase letters, though this makes them easier to
recognize and unlikely to conflict with any commands or file names that are also in your
command history.
Alternatives to tags
While tags can be very useful, there are other ways to do the same things that you can do
with them.
To make commands easily repeatable, assign them to aliases.
$ alias recent="ls -ltr ~ | tail -5"
To make multiple commands easily repeatable, turn them into a script.
To make file system locations easier to navigate to, create symbolic links.
$ ln -s /apps/data/stats/2020/11 NOV
To rerun recently used commands, use the up arrow key to back up through your command
history until you reach the command you want to reuse and then press the enter key.
You can also rerun recent commands by typing something like "history | tail -20" and then
type "!" following by the number to the left of the command you want to rerun (e.g.,
!999).
Wrap-up
Tags are most useful when you need to run complex commands again and again in a limited
timeframe. They're easy to set up and they fade away when you stop using them.
One easy way to reuse a previously entered command (one that's still on your command
history) is to type the beginning of the command. If the bottom of your history buffers looks
like this, you could rerun the ps command that's used to count system processes simply by
typing just !p .
$ history | tail -7
1002 21/02/21 18:24:25 alias
1003 21/02/21 18:25:37 history | more
1004 21/02/21 18:33:45 ps -ef | grep systemd | wc -l
1005 21/02/21 18:33:54 ls
1006 21/02/21 18:34:16 echo "What's next?"
You can also rerun a command by entering a string that was included anywhere within it. For
example, you could rerun the ps command shown in the listing above by typing !?sys? The
question marks act as string delimiters.
$ !?sys?
ps -ef | grep systemd | wc -l
5
You could rerun the command shown in the listing above by typing !1004 but this would be
more trouble if you're not looking at a listing of recent commands.
Run previous commands
with changes
After the ps command shown above, you could count kworker processes instead of systemd
processes by typing ^systemd^kworker^ . This replaces one process name with the other and runs
the altered command. As you can see in the commands below, this string substitution allows you
to reuse commands when they differ only a little.
$ sudo ls -l /var/log/samba/corse
ls: cannot access '/var/log/samba/corse': No such file or directory
$ ^se^es^
sudo ls -l /var/log/samba/cores
total 8
drwx -- -- -- . 2 root root 4096 Feb 16 10:50 nmbd
drwx -- -- -- . 2 root root 4096 Feb 16 10:50 smbd
Reach back into history
You can also reuse commands with a character string that asks, for example, to rerun the
command you entered some number of commands earlier. Entering !-11 would rerun the command you
typed 11 commands earlier. In the output below, the !-3 reruns the first of the three earlier
commands displayed.
$ ps -ef | wc -l
132
$ who
shs pts/0 2021-02-21 18:19 (192.168.0.2)
$ date
Sun 21 Feb 2021 06:59:09 PM EST
$ !-3
ps -ef | wc -l
133
Reuse command arguments
Another thing you can do with your command history is reuse arguments that you provided to
various commands. For example, the character sequence !:1 represents the first argument
provided to the most recently run command, !:2 the second, !:3 the third and so on. !:$
represents the final argument. In this example, the arguments are reversed in the second echo
command.
$ echo be the light
be the light
$ echo !:3 !:2 !:1
echo light the be
light the be
$ echo !:3 !:$
echo light light
light light
If you want to run a series of commands using the same argument, you could do something like
this:
$ echo nemo
nemo
$ id !:1
id nemo
uid=1001(nemo) gid=1001(nemo) groups=1001(nemo),16(fish),27(sudo)
$ df -k /home/!:$
df -k /home/nemo
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sdb1 446885824 83472864 340642736 20% /home
Of course, if the argument was a long and complicated string, it might actually save you
some time and trouble to use this technique. Please remember this is just an
example!
Wrap-Up
Simple history command tricks can often save you a lot of trouble by allowing you to reuse
rather than retype previously entered commands. Remember, however, that using strings to
identify commands will recall only the most recent use of that string and that you can only
rerun commands in this way if they are being saved in your history buffer.
The cluster comes with a simple parallel shell named pdsh. The pdsh shell is handy for
running commands across the cluster. There is a man page that describes the capabilities of pdsh
in detail. One of the useful features is the capability of specifying all or a subset of the
cluster. For example: pdsh -a targets the command to all nodes of the cluster, including the master.
pdsh -a -x node00 targets the command to all nodes of the cluster except the master. pdsh node[01-08]
targets the command to the 8 nodes of the cluster named node01, node02, . . ., node08.
Another utility that is useful for formatting the output of pdsh is dshbak. Here we will
show some handy uses of pdsh.
Show the current date and time on all nodes of the cluster. pdsh -a date
Show the current load and system uptime for all nodes of the cluster. pdsh -a
uptime
Show the version of the Operating System on all nodes.
pdsh -a cat /etc/redhat-release
Check who is logged in the MetaGeek lab!
pdsh -w node[01-32] who
Show all processes that have the substring pbs on the cluster. These will be the PBS
servers running on each node.
pdsh -a ps augx | grep pbs | grep -v grep
The utility dshbak formats the output from pdsh by consolidating the output from
each node. The option -c shows identical output from different nodes just once. Try
the following commands.
pdsh -w node[01-32] who | dshbak
pdsh -w node[01-32] who | dshbak -c
pdsh -a date | dshbak -c
Administrators can build wrapper commands around pdsh for commands that are
frequently used across multiple systems and Serviceguard clusters. Several such wrapper
commands are provided with DSAU. These wrappers are Serviceguard cluster-aware and default to
fanning out cluster-wide when used in a Serviceguard environment. These wrappers support most
standard pdsh command line options and also support long options (--option syntax).
cexec is a general purpose pdsh wrapper. In addition to the standard
pdsh features, cexec includes a reporting feature. Use the
--report_loc option to have cexec display the report location for a command.
The command report records the command issued in addition to the nodes where the command
succeeded, failed, or the nodes that were unreachable. The report can be used with the
--retry option to replay the command against nodes that failed, succeeded, were
unreachable, or all nodes.
ccp
ccp is a wrapper for pdcp and copies files cluster-wide or to the
specified set of systems.
cps
cps fans out a ps command across a set of systems or cluster.
ckill
ckill allows the administrator to signal a process by name since the pid of a
specific process will vary across a set of systems or the members of a cluster.
cuptime
cuptime displays the uptime statistics for a set of systems or a cluster.
cwall
cwall displays a wall(1M) broadcast message on multiple hosts.
All the wrappers support the CFANOUT_HOSTS environment variable when not executing in a
Serviceguard cluster. The environment variable specifies a file containing the list of hosts to
target, one hostname per line. This will be used if no other host specifications are present on
the command line. When no target nodelist command line options are used and CFANOUT_HOSTS is
undefined, the command will be executed on the local host.
For more information on these commands, refer to their reference manpages.
Hm, this seems like a good idea, but I'm not sure dshbak is the right
place for this. (That script is meant to simply reformat output which
is prefixed by "node: ")
If you'd like to track up/down nodes, you should check out Al Chu's
Cerebro and whatsup/libnodeupdown:
http://www.llnl.gov/linux/cerebro/cerebro.html
http://www.llnl.gov/linux/whatsup/
But I do realize that reporting nodes that did not respond to pdsh
would also be a good feature. However, it seems to me that pdsh itself
would have to do this work, because only it knows the list of hosts originally
targeted. (How would dshbak know this?)
As an alternative I sometimes use something like this:
# pdsh -a true 2>&1 | sed 's/^[^:]*: //' | dshbak -c
----------------
emcr[73,138,165,293,313,331,357,386,389,481,493,499,519,522,526,536,548,553,560,564,574,601,604,612,618,636,646,655,665,676,678,693,700-701,703,706,711,713,715,717-718,724,733,737,740,759,767,779,817,840,851,890]
----------------
mcmd: connect failed: No route to host
----------------
emcrj
----------------
mcmd: xpoll: protocol failure in circuit setup
i.e. strip off the leading pdsh@...: and send all errors to stdout. Then
collect errors with dshbak to see which hosts are not reachable.
Maybe we should add an option to pdsh to issue a report of failed hosts
at the end of execution?
mark
NOTE: if you don't want to enter passwords for each server, then you need to have an
authorized_key installed on the remote servers. If necessary, you can use the environment
variable PDSH_SSH_ARGS to specify ssh options, including which identity file to
use ( -i ).
The commands will be run in parallel on all servers, and output from them will be
intermingled (with the hostname pre-pended to each output line). You can view the output nicely
formatted and separated by host using pdsh 's dshbak utility:
dshbak logfile.txt | less
Alternatively, you can pipe through dshbak before redirecting to a logfile:
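For example (logfile.txt is just the name carried over from above):
pdsh -a uptime | dshbak > logfile.txt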
IMO it's better to save the raw log file and use dshbak when required, but
that's just my subjective preference. For remote commands that produce only a single line of
output (e.g. uname or uptime ), dshbak can even be overly verbose, since the raw
hostname-prefixed output is already nicely concise.
You can define hosts and groups of hosts in a file called /etc/genders and then
specify the host group with pdsh -g instead of pdsh -w . e.g. with an
/etc/genders file like this:
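A sketch of such a file, matching the groups used in the examples below (the hostnames and groupings are placeholders):
server1  all,web
server2  all,web
server3  all
server4  all
server5  all,mysql
server6  all,mysql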
pdsh -g all uname -a will run uname -a on all servers. pdsh
-g web uptime will run uptime only on server1 and server 2. pdsh -g
web,mysql df -h / will run df on servers 1, 2, 5, and 6. and so on.
BTW, one odd thing about pdsh is that it is configured to use rsh
by default instead of ssh . You need to either:
use -R ssh on the pdsh command line (e.g. pdsh -R ssh -w server[0-9] ... ),
export PDSH_RCMD_TYPE=ssh before running pdsh , or
run echo ssh > /etc/pdsh/rcmd_default to set ssh as the
permanent default.
There are several other tools that do the same basic job as pdsh . I've tried
several of them and found that they're generally more hassle to set up and use.
pdsh pretty much just works with zero or minimal configuration.
dsh -q displays the values of the dsh variables (DSH_NODE_LIST, DCP_NODE_RCP...)
dsh <command> runs the command on each server in DSH_NODE_LIST
dsh <command> | dshbak same as above, but formats the output to separate each host
dsh -w aix1,aix2 <command> executes the command on the given servers (dsh -w aix1,aix2 "oslevel -s")
dsh -e <script> runs the given script on each server (for me it was faster to dcp the script and then run it with dsh on the remote server)
dcp <file> <location> copies a file to the given location (without a location, the home dir will be used)
dping -n aix1,aix2 pings the listed servers
dping -f <filename> pings all the servers given in the file (-f)
AutoKey is an open source
Linux desktop automation tool that, once it's part of your workflow, you'll wonder how you ever
managed without. It can be a transformative tool to improve your productivity or simply a way
to reduce the physical stress associated with typing.
This article will look at how to install and start using AutoKey, cover some simple recipes
you can immediately use in your workflow, and explore some of the advanced features that
AutoKey power users may find attractive.
Install and set up AutoKey
AutoKey is available as a software package on many Linux distributions. The project's
installation
guide contains directions for many platforms, including building from source. This article
uses Fedora as the operating platform.
AutoKey comes in two variants: autokey-gtk, designed for GTK -based environments such as GNOME, and autokey-qt, which is
QT -based.
You can install either variant from the command line:
sudo dnf install autokey-gtk
Once it's installed, run it by using autokey-gtk (or autokey-qt
).
Explore the interface
Before you set AutoKey to run in the background and automatically perform actions, you will
first want to configure it. Bring up the configuration user interface (UI):
autokey-gtk -c
AutoKey comes preconfigured with some examples. You may wish to leave them while you're
getting familiar with the UI, but you can delete them if you wish.
The left pane contains a folder-based hierarchy of phrases and scripts. Phrases are
text that you want AutoKey to enter on your behalf. Scripts are dynamic, programmatic
equivalents that can be written using Python and achieve basically the same result of making
the keyboard send keystrokes to an active window.
The right pane is where the phrases and scripts are built and configured.
Once you're happy with your configuration, you'll probably want to run AutoKey automatically
when you log in so that you don't have to start it up every time. You can configure this in the
Preferences menu ( Edit -> Preferences ) by selecting Automatically start AutoKey at login
.
Correcting common typos is an easy problem for AutoKey to solve. For example, I consistently
type "gerp" instead of "grep." Here's how to configure AutoKey to fix these types of problems
for you.
Create a new subfolder where you can group all your "typo correction" configurations. Select
My Phrases in the left pane, then File -> New -> Subfolder . Name the subfolder Typos
.
Create a new phrase in File -> New -> Phrase , and call it "grep."
Configure AutoKey to insert the correct word by highlighting the phrase "grep" then entering
"grep" in the Enter phrase contents section (replacing the default "Enter phrase contents"
text).
Next, set up how AutoKey triggers this phrase by defining an Abbreviation. Click the Set
button next to Abbreviations at the bottom of the UI.
In the dialog box that pops up, click the Add button and add "gerp" as a new abbreviation.
Leave Remove typed abbreviation checked; this is what instructs AutoKey to replace any typed
occurrence of the word "gerp" with "grep." Leave Trigger when typed as part of a word unchecked
so that if you type a word containing "gerp" (such as "fingerprint"), it won't attempt
to turn that into "fingreprint." It will work only when "gerp" is typed as an isolated
word.
"... Manipulating history is usually less dangerous than it sounds, especially when you're curating it with a purpose in mind. For instance, if you're documenting a complex problem, it's often best to use your session history to record your commands because, by slotting them into your history, you're running them and thereby testing the process. Very often, documenting without doing leads to overlooking small steps or writing minor details wrong. ..."
To block adding a command to the history entries, you can place a
space before the command, as long as you have ignorespace in your
HISTCONTROL environment variable:
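For instance, assuming HISTCONTROL already contains ignorespace (a minimal sketch; the echoed text is just a placeholder):
$ export HISTCONTROL=ignorespace
$  echo "do not record this"     # the leading space keeps this command out of history
You can also delete a specific entry by number with history -d, as the session below shows: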
$ history | tail
535 echo "foo"
536 echo "bar"
$ history -d 536
$ history | tail
535 echo "foo"
You can clear your entire session history with the -c option:
$ history -c
$ history
$
History lessons
Manipulating history is usually less dangerous than it sounds, especially when you're
curating it with a purpose in mind. For instance, if you're documenting a complex problem, it's
often best to use your session history to record your commands because, by slotting them into
your history, you're running them and thereby testing the process. Very often, documenting
without doing leads to overlooking small steps or writing minor details wrong.
Use your history sessions as needed, and exercise your power over history wisely. Happy
history hacking!
As soon as I log into a server, the first thing I do is check whether it has the operating
system, kernel, and hardware architecture needed for the tests I will be running. I often check
how long a server has been up and running. While this does not matter very much for a test
system because it will be rebooted multiple times, I still find this information helpful.
Use the following commands to get this information. I mostly use Red Hat Linux for testing,
so if you are using another Linux distro, use *-release in the filename instead of
redhat-release :
cat /etc/redhat-release
uname -a
hostnamectl
uptime
2. Is anyone else on board?
Once I know that the machine meets my test needs, I need to ensure no one else is logged
into the system at the same time running their own tests. Although it is highly unlikely, given
that the provisioning system takes care of this for me, it's still good to check once in a
while -- especially if it's my first time logging into a server. I also check whether there are
other users (other than root) who can access the system.
Use the following commands to find this information. The last command looks for users in the
/etc/passwd file who have shell access; it skips other services in the file that
do not have shell access or have a shell set to nologin :
who
who -Hu
grep 'sh$' /etc/passwd
3. Physical or virtual machine
Now that I know I have the machine to myself, I need to identify whether it's a physical
machine or a virtual machine (VM). If I provisioned the machine myself, I could be sure that I
have what I asked for. However, if you are using a machine that you did not provision, you
should check whether the machine is physical or virtual.
Use the following commands to identify this information. If it's a physical system, you will
see the vendor's name (e.g., HP, IBM, etc.) and the make and model of the server; whereas, in a
virtual machine, you should see KVM, VirtualBox, etc., depending on what virtualization
software was used to create the VM:
dmidecode -s system-manufacturer
dmidecode -s system-product-name
lshw -c system | grep product | head -1
cat /sys/class/dmi/id/product_name
cat /sys/class/dmi/id/sys_vendor
4. Hardware
Because I often test hardware connected to the Linux machine, I usually work with physical
servers, not VMs. On a physical machine, my next step is to identify the server's hardware
capabilities -- for example, what kind of CPU is running, how many cores does it have, which
flags are enabled, and how much memory is available for running tests. If I am running network
tests, I check the type and capacity of the Ethernet or other network devices connected to the
server.
Use the following commands to display the hardware connected to a Linux server. Some of the
commands might be deprecated in newer operating system versions, but you can still install them
from yum repos or switch to their equivalent new commands:
lscpu or cat /proc/cpuinfo
lsmem or cat /proc/meminfo
ifconfig -a
ethtool <devname>
lshw
lspci
dmidecode
5. Installed software
Testing software always requires installing additional dependent packages, libraries, etc.
However, before I install anything, I check what is already installed (including what version
it is), as well as which repos are configured, so I know where the software comes from, and I
can debug any package installation issues.
Use the following commands to identify what software is installed:
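On a Red Hat-style system, commands along these lines cover this check (a sketch, not necessarily the author's exact list; <pkgname> is a placeholder):
rpm -qa                        # every installed package
rpm -qa | grep <pkgname>       # check for one specific package and its version
yum repolist                   # which repos are configured
yum list installed | less      # installed packages, with versions and source repo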
Once I check the installed software, it's natural to check what processes are running on the
system. This is crucial when running a performance test on a system -- if a running process,
daemon, test software, etc. is eating up most of the CPU/RAM, it makes sense to stop that
process before running the tests. This also checks that the processes or daemons the test
requires are up and running. For example, if the tests require httpd to be running, the service
to start the daemon might not have run even if the package is installed.
Use the following commands to identify running processes and enabled services on your
system:
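A sketch of the usual checks (service names such as httpd are only examples):
ps -ef                                      # all running processes
top -b -n 1 | head -20                      # what is eating CPU/RAM right now
systemctl list-unit-files --state=enabled   # services enabled at boot
systemctl status httpd                      # is a daemon the test needs actually running?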
Today's machines are heavily networked, and they need to communicate with other machines or
services on the network. I identify which ports are open on the server, if there are any
connections from the network to the test machine, if a firewall is enabled, and if so, is it
blocking any ports, and which DNS servers the machine talks to.
Use the following commands to identify network services-related information. If a deprecated
command is not available, install it from a yum repo or use the equivalent newer
command:
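Representative commands for these checks (a sketch; use whichever of the older or newer tools your distro provides):
ss -tulpn                  # listening ports and the processes behind them (or: netstat -tulpn)
firewall-cmd --list-all    # whether a firewall is enabled, and which ports/services it allows
cat /etc/resolv.conf       # which DNS servers the machine talks to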
When doing systems testing, I find it helpful to know kernel-related information, such as
the kernel version and which kernel modules are loaded. I also list any tunable
kernel parameters and what they are set to and check the options used when booting the
running kernel.
Use the following commands to identify this information:
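A sketch of the kernel-related checks described above:
uname -r             # running kernel version
lsmod                # loaded kernel modules
sysctl -a | less     # tunable kernel parameters and their current values
cat /proc/cmdline    # options used when booting the running kernel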
If you've ever typed a command at the Linux shell prompt, you've probably already used bash -- after all, it's the default command
shell on most modern GNU/Linux distributions.
The bash shell is the primary interface to the Linux operating system -- it accepts, interprets and executes your commands, and
provides you with the building blocks for shell scripting and automated task execution.
Bash's unassuming exterior hides some very powerful tools and shortcuts. If you're a heavy user of the command line, these can
save you a fair bit of typing. This document outlines 10 of the most useful tools:
Easily recall previous commands
Bash keeps track of the commands you execute in a history buffer, and allows you
to recall previous commands by cycling through them with the Up and Down cursor keys. For even faster recall, "speed search" previously-executed
commands by typing the first few letters of the command followed by the key combination Ctrl-R; bash will then scan the command
history for matching commands and display them on the console. Type Ctrl-R repeatedly to cycle through the entire list of matching
commands.
Use command aliases
If you always run a command with the same set of options, you can have bash create an alias for it. This alias will incorporate
the required options, so that you don't need to remember them or manually type them every time. For example, if you always run
ls with the -l option to obtain a detailed directory listing, you can use this command:
bash> alias ls='ls -l'
to create an alias that automatically includes the -l option. Once this alias has been created, typing ls at the bash prompt
will invoke the alias and produce the ls -l output.
You can obtain a list of available aliases by invoking alias without any arguments, and you can delete an alias with unalias.
Use filename auto-completion
Bash supports filename auto-completion at the command prompt. To use this feature, type
the first few letters of the file name, followed by Tab. bash will scan the current directory, as well as all other directories
in the search path, for matches to that name. If a single match is found, bash will automatically complete the filename for you.
If multiple matches are found, you will be prompted to choose one.
Use key shortcuts to efficiently edit the command line
Bash supports a number of keyboard shortcuts for command-line
navigation and editing. The Ctrl-A key shortcut moves the cursor to the beginning of the command line, while the Ctrl-E shortcut
moves the cursor to the end of the command line. The Ctrl-W shortcut deletes the word immediately before the cursor, while the
Ctrl-K shortcut deletes everything immediately after the cursor. You can undo a deletion with Ctrl-Y.
Get automatic notification of new mail
You can configure bash to automatically notify you of new mail, by setting
the $MAILPATH variable to point to your local mail spool. For example, the command:
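Something along these lines, assuming john's mail spool lives at /var/spool/mail/john (a sketch, not necessarily the article's original example):
bash> export MAILPATH=/var/spool/mail/john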
Causes bash to print a notification on john's console every time a new message is appended to John's mail spool.
Run tasks in the background
Bash lets you run one or more tasks in the background, and selectively suspend or resume
any of the current tasks (or "jobs"). To run a task in the background, add an ampersand (&) to the end of its command line. Here's
an example:
bash> tail -f /var/log/messages &
[1] 614
Each task backgrounded in this manner is assigned a job ID, which is printed to the console. A task can be brought back to
the foreground with the command fg jobnumber, where jobnumber is the job ID of the task you wish to bring to the
foreground. Here's an example:
bash> fg 1
A list of active jobs can be obtained at any time by typing jobs at the bash prompt.
Quickly jump to frequently-used directories
You probably already know that the $PATH variable lists bash's "search
path" -- the directories it will search when it can't find the requested file in the current directory. However, bash also supports
the $CDPATH variable, which lists the directories the cd command will look in when attempting to change directories. To use this
feature, assign a directory list to the $CDPATH variable, as shown in the example below:
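For example (the directory list here is only an illustration):
bash> export CDPATH='.:~:/usr/local:/var/www'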
Now, whenever you use the cd command, bash will check all the directories in the $CDPATH list for matches to the directory
name.
Perform calculations
Bash can perform simple arithmetic operations at the command prompt. To use this feature, simply
type in the arithmetic expression you wish to evaluate at the prompt within double parentheses, as illustrated below. Bash will
attempt to perform the calculation and return the answer.
bash> echo $((16/2))
8
Customise the shell prompt
You can customise the bash shell prompt to display -- among other things -- the current
username and host name, the current time, the load average and/or the current working directory. To do this, alter the $PS1 variable,
as below:
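One possible prompt string (the exact escapes in the original example may have differed; \u is the user, \h the host, \w the working directory, and \@ the current time):
bash> export PS1='\u@\h:\w \@> '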
This will display the name of the currently logged-in user, the host name, the current working directory and the current time
at the shell prompt. You can obtain a list of symbols understood by bash from its manual page.
Get context-specific help
Bash comes with help for all built-in commands. To see a list of all built-in commands,
type help. To obtain help on a specific command, type help command, where command is the command you need help on.
Here's an example:
bash> help alias
...some help text...
Obviously, you can obtain detailed help on the bash shell by typing man bash at your command prompt at any time.
How to use
the Linux mtr command - The mtr (My Traceroute) command is a major
improvement over the old traceroute and is one of my first go-to tools when
troubleshooting network problems.
Linux for
beginners: 10 commands to get you started at the terminal - Everyone who works on the
Linux CLI needs to know some basic commands for moving around the directory structure and
exploring files and directories. This article covers those commands in a simple way that
places them into a usable context for those of us new to the command line.
Getting started with
systemctl - Do you need to enable, disable, start, and stop systemd services? Learn the
basics of systemctl – a powerful tool for managing systemd services and
more.
A
beginner's guide to gawk - gawk is a command line tool that can be used for
simple text processing in Bash and other scripts. It is also a powerful language in its own
right.
The original link to the article by Vallard Benincosa, published on 20 Jul 2008 in IBM
developerWorks, disappeared due to yet another reorganization of the IBM website that killed old
content. Money-greedy incompetents are what the current upper IBM managers really are...
How to be a more productive Linux systems administrator
Learn these 10 tricks and you'll be the most powerful Linux® systems administrator in the
universe...well, maybe not the universe, but you will need these tips to play in the big
leagues. Learn about SSH tunnels, VNC, password recovery, console spying, and more. Examples
accompany each trick, so you can duplicate them on your own systems.
The best systems administrators are set apart by their efficiency. And if an efficient
systems administrator can do a task in 10 minutes that would take another mortal two hours to
complete, then the efficient systems administrator should be rewarded (paid more) because the
company is saving time, and time is money, right?
The trick is to prove your efficiency to management. While I won't attempt to cover
that trick in this article, I will give you 10 essential gems from the lazy admin's bag
of tricks. These tips will save you time -- and even if you don't get paid more money to be
more efficient, you'll at least have more time to play Halo.
The newbie states that when he pushes the Eject button on the DVD drive of a server running
a certain Redmond-based operating system, it will eject immediately. He then complains that, in
most enterprise Linux servers, if a process is running in that directory, then the ejection
won't happen. For too long as a Linux administrator, I would reboot the machine and get my disk
on the bounce if I couldn't figure out what was running and why it wouldn't release the DVD
drive. But this is ineffective.
Here's how you find the process that holds your DVD drive and eject it to your heart's
content: First, simulate it. Stick a disk in your DVD drive, open up a terminal, and mount the
DVD drive:
# mount /media/cdrom
# cd /media/cdrom
# while [ 1 ]; do echo "All your drives are belong to us!"; sleep 30; done
Now open up a second terminal and try to eject the DVD drive:
# eject
You'll get a message like:
umount: /media/cdrom: device is busy
Before you free it, let's find out who is using it.
# fuser /media/cdrom
You see the process was running and, indeed, it is our fault we can not eject the disk.
Now, if you are root, you can exercise your godlike powers and kill processes:
# fuser -k /media/cdrom
Boom! Just like that, freedom. Now solemnly unmount the drive:
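Given the mount point used above, that's simply:
# umount /media/cdrom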
Behold! Your terminal looks like garbage. Everything you type looks like you're looking into
the Matrix. What do you do?
You type reset . But wait you say, typing reset is too close to
typing reboot or shutdown . Your palms start to sweat -- especially
if you are doing this on a production machine.
Rest assured: You can do it with the confidence that no machine will be rebooted. Go ahead,
do it:
# reset
Now your screen is back to normal. This is much better than closing the window and then
logging in again, especially if you just went through five machines to SSH to this
machine.
David, the high-maintenance user from product engineering, calls: "I need you to help me
understand why I can't compile supercode.c on these new machines you deployed."
"Fine," you say. "What machine are you on?"
David responds: " Posh." (Yes, this fictional company has named its five production servers
in honor of the Spice Girls.) OK, you say. You exercise your godlike root powers and on another
machine become David:
# su - david
Then you go over to posh:
# ssh posh
Once you are there, you run:
# screen -S foo
Then you holler at David:
"Hey David, run the following command on your terminal: # screen -x foo ."
This will cause your and David's sessions to be joined together in the holy Linux shell. You
can type or he can type, but you'll both see what the other is doing. This saves you from
walking to the other floor and lets you both have equal control. The benefit is that David can
watch your troubleshooting skills and see exactly how you solve problems.
At last you both see what the problem is: David's compile script hard-coded an old directory
that does not exist on this new server. You mount it, recompile, solve the problem, and David
goes back to work. You then go back to whatever lazy activity you were doing before.
The one caveat to this trick is that you both need to be logged in as the same user. Other
cool things you can do with the screen command include having multiple windows and
split screens. Read the man pages for more on that.
But I'll give you one last tip while you're in your screen session. To detach
from it and leave it open, type: Ctrl-A D . (I mean, hold down the Ctrl key
and strike the A key. Then push the D key.)
You can then reattach by running the screen -x foo command
again.
You forgot your root password. Nice work. Now you'll just have to reinstall the entire
machine. Sadly enough, I've seen more than a few people do this. But it's surprisingly easy to
get on the machine and change the password. This doesn't work in all cases (like if you made a
GRUB password and forgot that too), but here's how you do it in a normal case with a CentOS
Linux example.
First reboot the system. When it reboots you'll come to the GRUB screen as shown in Figure
1. Move the arrow key so that you stay on this screen instead of proceeding all the way to a
normal boot.
Use the arrow key again to highlight the line that begins with kernel , and
press E to edit the kernel parameters. When you get to the screen shown in Figure 3,
simply append the number 1 to the arguments as shown in Figure 3:
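The kernel line below is only illustrative; yours will name a different kernel and root device. Appending 1 tells the kernel to boot into single-user mode:
kernel /vmlinuz-2.6.18-8.el5 ro root=/dev/VolGroup00/LogVol00 rhgb quiet 1
Boot the edited entry, and you should land in a single-user root shell, where running passwd lets you set a new root password before rebooting normally.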
Many times I'll be at a site where I need remote support from someone who is blocked on the
outside by a company firewall. Few people realize that if you can get out to the world through
a firewall, then it is relatively easy to open a hole so that the world can come into you.
In its crudest form, this is called "poking a hole in the firewall." I'll call it an SSH
back door . To use it, you'll need a machine on the Internet that you can use as an
intermediary.
In our example, we'll call our machine blackbox.example.com. The machine behind the company
firewall is called ginger. Finally, the machine that technical support is on will be called
tech. Figure 4 explains how this is set up.
Check that what you're doing is allowed, but make sure you ask the right people. Most
people will cringe that you're opening the firewall, but what they don't understand is that
it is completely encrypted. Furthermore, someone would need to hack your outside machine
before getting into your company. Instead, you may belong to the school of
"ask-for-forgiveness-instead-of-permission." Either way, use your judgment and don't blame me
if this doesn't go your way.
SSH from ginger to blackbox.example.com with the -R flag. I'll assume that
you're the root user on ginger and that tech will need the root user ID to help you with the
system. With the -R flag, you'll forward instructions of port 2222 on blackbox
to port 22 on ginger. This is how you set up an SSH tunnel. Note that only SSH traffic can
come into ginger: You're not putting ginger out on the Internet naked.
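In concrete terms, the command run from ginger would look something like this (the account name on blackbox is only an example):
root@ginger:~# ssh -R 2222:localhost:22 thedude@blackbox.example.com
While that session stays open, tech can SSH to blackbox and then reach ginger with ssh -p 2222 root@localhost.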
VNC or virtual network computing has been around a long time. I typically find myself
needing to use it when the remote server has some type of graphical program that is only
available on that server.
For example, suppose in Trick 5 , ginger is a storage
server. Many storage devices come with a GUI program to manage the storage controllers. Often
these GUI management tools need a direct connection to the storage through a network that is at
times kept in a private subnet. Therefore, the only way to access this GUI is to do it from
ginger.
You can try SSH'ing to ginger with the -X option and launch it that way, but
many times the bandwidth required is too much and you'll get frustrated waiting. VNC is a much
more network-friendly tool and is readily available for nearly all operating systems.
Let's assume that the setup is the same as in Trick 5, but you want tech to be able to get
VNC access instead of SSH. In this case, you'll do something similar but forward VNC ports
instead. Here's what you do:
Start a VNC server session on ginger. This is done by running something like:
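The command is missing from the text; a typical invocation matching the options described next, assuming the standard vncserver wrapper script, would be:
root@ginger:~# vncserver :99 -geometry 1024x768 -depth 24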
The options tell the VNC server to start up with a resolution of 1024x768 and a pixel
depth of 24 bits per pixel. If you are using a really slow connection, a depth of 8 may be a
better option. Using :99 specifies the display the VNC server will be accessible
from. The VNC protocol starts at 5900 so specifying :99 means the server is
accessible from port 5999.
When you start the session, you'll be asked to specify a password. The user ID will be
the same user that you launched the VNC server from. (In our case, this is root.)
SSH from ginger to blackbox.example.com forwarding the port 5999 on blackbox to ginger.
This is done from ginger by running the command:
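The command is omitted here as well; a sketch, under the same assumptions as before, would be:
root@ginger:~# ssh -R 5999:localhost:5999 thedude@blackbox.example.com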
Once you run this command, you'll need to keep this SSH session open in order to keep
the port forwarded to ginger. At this point if you were on blackbox, you could now access
the VNC session on ginger by just running:
thedude@blackbox:~$ vncviewer localhost:99
That would forward the port through SSH to ginger. But we're interested in letting tech
get VNC access to ginger. To accomplish this, you'll need another tunnel.
From tech, you open a tunnel via SSH to forward your port 5999 to port 5999 on blackbox.
This would be done by running:
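A sketch of that command, again assuming the thedude account on blackbox:
root@tech:~# ssh -L 5999:localhost:5999 thedude@blackbox.example.com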
This time the SSH flag we used was -L , which instead of pushing 5999 to
blackbox, pulled from it. Once you are in on blackbox, you'll need to leave this session
open. Now you're ready to VNC from tech!
From tech, VNC to ginger by running the command:
root@tech:~# vncviewer localhost:99
Tech will now have a VNC session directly to ginger.
While the effort might seem like a bit much to set up, it beats flying across the country to
fix the storage arrays. Also, if you practice this a few times, it becomes quite easy.
Let me add a trick to this trick: If tech were running the Windows® operating system and
didn't have a command-line SSH client, then tech could run PuTTY. PuTTY can be set to forward SSH
ports by looking in the options in the sidebar. If the port were 5902 instead of our example of
5999, then you would enter something like in Figure 5.
Imagine this: Company A has a storage server named ginger and it is being NFS-mounted by a
client node named beckham. Company A has decided they really want to get more bandwidth out of
ginger because they have lots of nodes they want to have NFS mount ginger's shared
filesystem.
The most common and cheapest way to do this is to bond two Gigabit ethernet NICs together.
This is cheapest because usually you have an extra on-board NIC and an extra port on your
switch somewhere.
So they do this. But now the question is: How much bandwidth do they really have?
Gigabit Ethernet has a theoretical limit of 128MBps. Where does that number come from?
Well, 1Gb equals 1024Mb, and 1024Mb divided by 8 bits per byte is 128MB, so 128MBps is the theoretical ceiling.
A good way to measure what you actually get is with the iperf tool. You'll need to install it on a shared
filesystem that both ginger and beckham can see, or compile and install it on both nodes. I'll compile it
in the home directory of the bob user that is viewable on both nodes:
tar zxvf iperf*gz
cd iperf-2.0.2
./configure --prefix=/home/bob/perf
make
make install
On ginger, run:
# /home/bob/perf/bin/iperf -s -f M
This machine will act as the server and print out performance speeds in MBps.
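The client side isn't shown in the text; on beckham you would then run something like the following to point iperf at the server:
# /home/bob/perf/bin/iperf -c ginger -f M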
You'll see output in both screens telling you what the speed is. On a normal server with a
Gigabit Ethernet adapter, you will probably see about 112MBps. This is normal as bandwidth is
lost in the TCP stack and physical cables. By connecting two servers back-to-back, each with
two bonded Ethernet cards, I got about 220MBps.
In reality, what you see with NFS on bonded networks is around 150-160MBps. Still, this
gives you a good indication that your bandwidth is going to be about what you'd expect. If you
see something much less, then you should check for a problem.
I recently ran into a case in which the bonding driver was used to bond two NICs that used
different drivers. The performance was extremely poor, leading to about 20MBps in bandwidth,
less than they would have gotten had they not bonded the Ethernet cards
together!
A Linux systems administrator becomes more efficient by using command-line scripting with
authority. This includes crafting loops and knowing how to parse data using utilities like
awk , grep , and sed . There are many cases where doing
so takes fewer keystrokes and lessens the likelihood of user errors.
For example, suppose you need to generate a new /etc/hosts file for a Linux cluster that you
are about to install. The long way would be to add IP addresses in vi or your favorite text
editor. However, it can be done by taking the already existing /etc/hosts file and appending
the following to it by running this on the command line:
# P=1; for i in $(seq -w 200); do echo "192.168.99.$P n$i"; P=$(expr $P + 1);
done >>/etc/hosts
Two hundred host names, n001 through n200, will then be created with IP addresses
192.168.99.1 through 192.168.99.200. Populating a file like this by hand runs the risk of
inadvertently creating duplicate IP addresses or host names, so this is a good example of using
the built-in command line to eliminate user errors. Please note that this is done in the bash
shell, the default in most Linux distributions.
As another example, let's suppose you want to check that the memory size is the same in each
of the compute nodes in the Linux cluster. In most cases of this sort, having a distributed or
parallel shell would be the best practice, but for the sake of illustration, here's a way to do
this using SSH.
Assume the SSH is set up to authenticate without a password. Then run:
# for num in $(seq -w 200); do ssh n$num free -tm | grep Mem | awk '{print $2}';
done | sort | uniq
A command line like this looks pretty terse. (It can be worse if you put regular expressions
in it.) Let's pick it apart and uncover the mystery.
First you're doing a loop through 001-200. This padding with 0s in the front is done with
the -w option to the seq command. Then you substitute the
num variable to create the host you're going to SSH to. Once you have the target
host, give the command to it. In this case, it's:
free -m | grep Mem | awk '{print $2}'
That command says to:
Use the free command to get the memory size in megabytes.
Take the output of that command and use grep to get the line that has the
string Mem in it.
Take that line and use awk to print the second field, which is the total
memory in the node.
This operation is performed on every node.
Once you have performed the command on every node, the entire output of all 200 nodes is
piped ( | ) to the sort command so that all the memory values are
sorted.
Finally, you eliminate duplicates with the uniq command. This command will
result in one of the following cases:
If all the nodes, n001-n200, have the same memory size, then only one number will be
displayed. This is the size of memory as seen by each operating system.
If node memory size is different, you will see several memory size values.
Finally, if the SSH failed on a certain node, then you may see some error messages.
This command isn't perfect. If you find that a value of memory is different than what you
expect, you won't know on which node it was or how many nodes there were. Another command may
need to be issued for that.
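If you do need to track down the odd node, a small variation of the same loop (a quick sketch, not a polished tool) prints each hostname next to its value:
# for num in $(seq -w 200); do echo -n "n$num: "; ssh n$num free -m | grep Mem | awk '{print $2}'; done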
What this trick does give you, though, is a fast way to check for something and quickly
learn if something is wrong. This is its real value: speed to do a quick-and-dirty
check.
Some software prints error messages to the console that may not necessarily show up on your
SSH session. Using the vcs devices can let you examine these. From within an SSH session, run
the following command on a remote server: # cat /dev/vcs1 . This will show you
what is on the first console. You can also look at the other virtual terminals using 2, 3, etc.
If a user is typing on the remote system, you'll be able to see what he typed.
In most data farms, using a remote terminal server, KVM, or even Serial Over LAN is the best
way to view this information; it also provides the additional benefit of out-of-band viewing
capabilities. Using the vcs device provides a fast in-band method that may be able to save you
some time from going to the machine room and looking at the console.
In Trick 8 , you saw an example of using the command
line to get information about the total memory in the system. In this trick, I'll offer up a
few other methods to collect important information from the system you may need to verify,
troubleshoot, or give to remote support.
First, let's gather information about the processor. This is easily done as follows:
# cat /proc/cpuinfo .
This command gives you information on the processor speed, quantity, and model. Using
grep in many cases can give you the desired value.
A check that I do quite often is to ascertain the quantity of processors on the system. So,
if I have purchased a dual processor quad-core server, I can run:
# cat /proc/cpuinfo | grep processor | wc -l .
I would then expect to see 8 as the value. If I don't, I call up the vendor and tell them to
send me another processor.
Another piece of information I may require is disk information. This can be gotten with the
df command. I usually add the -h flag so that I can see the output in
gigabytes or megabytes. # df -h also shows how the disk was partitioned.
And to end the list, here's a way to look at the firmware of your system -- a method to get
the BIOS level and the firmware on the NIC.
To check the BIOS version, you can run the dmidecode command. Unfortunately,
you can't easily grep for the information, so piping it is a less efficient way to
do this. On my Lenovo T61 laptop, the output looks like this:
# dmidecode | less
...
BIOS Information
Vendor: LENOVO
Version: 7LET52WW (1.22 )
Release Date: 08/27/2007
...
This is much more efficient than rebooting your machine and looking at the POST output.
To examine the driver and firmware versions of your Ethernet adapter, run
ethtool :
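The invocation is cut off above; to see the driver and firmware details you would run something like this, where eth0 is just an assumed interface name:
# ethtool -i eth0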
There are thousands of tricks you can learn from someone who's an expert at the command
line. The best ways to learn are to:
Work with others. Share screen sessions and watch how others work -- you'll see
new approaches to doing things. You may need to swallow your pride and let other people
drive, but often you can learn a lot.
Read the man pages. Seriously; reading man pages, even on commands you know like
the back of your hand, can provide amazing insights. For example, did you know you can do
network programming with awk ?
Solve problems. As the system administrator, you are always solving problems
whether they are created by you or by others. This is called experience, and experience makes
you better and more efficient.
I hope at least one of these tricks helped you learn something you didn't know. Essential
tricks like these make you more efficient and add to your experience, but most importantly,
tricks give you more free time to do more interesting things, like playing video games. And the
best administrators are lazy because they don't like to work. They find the fastest way to do a
task and finish it quickly so they can continue in their lazy pursuits.
Vallard Benincosa is a lazy Linux Certified IT professional working for
the IBM Linux Clusters team. He lives in Portland, OR, with his wife and two kids.
Christian Severin , 2017-09-29 09:47:52
You can use e.g. date --set='-2 years' to set the clock back two years, leaving
all other elements identical. You can change month and day of month the same way. I haven't
checked what happens if that calculation results in a datetime that doesn't actually exist,
e.g. during a DST switchover, but the behaviour ought to be identical to the usual "set both
date and time to concrete values" behaviour. – Christian Severin Sep 29 '17
at 9:47
Run that as root or under sudo . Changing only one of the year/month/day is
more of a challenge and will involve repeating bits of the current date. There are also GUI
date tools built in to the major desktop environments, usually accessed through the
clock.
To change only part of the time, you can use command substitution in the date string:
date -s "2014-12-25 $(date +%H:%M:%S)"
will change the date, but keep the time. See man date for formatting details to
construct other combinations: the individual components are %Y , %m
, %d , %H , %M , and %S .
There's no option to do that. You can use date -s "2014-12-25 $(date +%H:%M:%S)"
to change the date and reuse the current time, though. – Michael Homer Aug 22 '14 at
9:55
chaos , 2014-08-22 09:59:58
System time
You can use date to set the system date. The GNU implementation of
date (as found on most non-embedded Linux-based systems) accepts many different
formats to set the time, here a few examples:
set only the year:
date -s 'next year'
date -s 'last year'
set only the month:
date -s 'last month'
date -s 'next month'
set only the day:
date -s 'next day'
date -s 'tomorrow'
date -s 'last day'
date -s 'yesterday'
date -s 'friday'
set all together:
date -s '2009-02-13 11:31:30' #that's a magical timestamp
Hardware time
Now the system time is set, but you may want to sync it with the hardware clock:
Use --show to print the hardware time:
hwclock --show
You can set the hardware clock to the current system time:
hwclock --systohc
Or the system time to the hardware clock
hwclock --hctosys
garethTheRed , 2014-08-22 09:57:11
You change the date with the date command. However, the command expects a full
date as the argument:
# date -s "20141022 09:45"
Wed Oct 22 09:45:00 BST 2014
To change part of the date, output the current date with the date part that you want to
change as a string and all others as date formatting variables. Then pass that to the
date -s command to set it:
# date -s "$(date +'%Y12%d %H:%M')"
Mon Dec 22 10:55:03 GMT 2014
changes the month to the 12th month - December.
The date formats are:
%Y - Year
%m - Month
%d - Day
%H - Hour
%M - Minute
Balmipour , 2016-03-23 09:10:21
For those like me running ESXi 5.1, here's what the system answered me:
~ # date -s "2016-03-23 09:56:00"
date: invalid date '2016-03-23 09:56:00'
I had to use a specific ESXi command instead:
esxcli system time set -y 2016 -M 03 -d 23 -H 10 -m 05 -s 00
Hope it helps!
Brook Oldre , 2017-09-26 20:03:34
I used the date command and time format listed below to successfully set the date from the
terminal shell command performed on Android Things, which uses the Linux kernel.
Here, for example, is a fragment from an old collection of hardening scripts called Titan,
written for Solaris by Brad M. Powell. The example below uses vi, which is the simplest, but
probably not optimal choice, unless your primary editor is VIM.
FixHostsEquiv() {
if [ -f /etc/hosts.equiv -a -s /etc/hosts.equiv ]; then
    t_echo 2 " /etc/hosts.equiv exists and is not empty. Saving a copy..."
    /bin/cp /etc/hosts.equiv /etc/hosts.equiv.ORIG
    if grep -s "^+$" /etc/hosts.equiv
    then
        ed - /etc/hosts.equiv <<- !
g/^+$/d
w
q
!
    fi
else
    t_echo 2 " No /etc/hosts.equiv - PASSES CHECK"
    exit 1
fi
}
For VIM/Emacs users the main benefit here is that you will know your editor better,
instead of inventing/learning "yet another tool." That actually also is an argument against
Ansible and friends: unless you operate a cluster or other sizable set of servers, why try to
kill a bird with a cannon. Positive return on investment probably starts if you manage over 8
or even 16 boxes.
Perl can also be used. But I would recommend slurping the file into an array and operating
on lines as you would in an editor; a regex over the whole text is more difficult to write correctly
than a regex for a single line, although experts have no difficulty using just that. But we seldom
acquire skills we can do without :-)
On the other hand, that gives you a chance to learn the splice function ;-)
If the files are basically identical and need only slight customization, you can use the
patch utility with pdsh, but you need to learn the ropes. Like Perl, the patch
utility was also written by Larry Wall and is a very flexible tool for such tasks. You first need
to collect files from your servers into a central directory with pdsh/pdcp (which I
think is a standard RPM on RHEL and other Linux distributions) or another tool, then create diffs against one
server to which you have already applied the change (diff is your command language at this point),
verify on another server that the diff produces the right results, apply it, and then distribute
the resulting files back to each server, again using pdsh/pdcp. If you have a common
NFS/GPFS/Lustre filesystem for all servers, this is even simpler, as you can store both the
tree and the diffs on the common filesystem.
The same central repository of config files can be used with vi and other approaches,
creating a "poor man's Ansible" for you.
Modular Perl in Red Hat Enterprise Linux 8 By Petr Pisar May 16, 2019
Red Hat Enterprise Linux 8 comes with modules as a packaging concept that allows system administrators to select the desired
software version from multiple packaged versions. This article will show you how to manage Perl
as a module.
Installing from a default stream
Let's install Perl:
# yum --allowerasing install perl
Last metadata expiration check: 1:37:36 ago on Tue 07 May 2019 04:18:01 PM CEST.
Dependencies resolved.
==========================================================================================
Package Arch Version Repository Size
==========================================================================================
Installing:
perl x86_64 4:5.26.3-416.el8 rhel-8.0.z-appstream 72 k
Installing dependencies:
[ ]
Transaction Summary
==========================================================================================
Install 147 Packages
Total download size: 21 M
Installed size: 59 M
Is this ok [y/N]: y
[ ]
perl-threads-shared-1.58-2.el8.x86_64
Complete!
Next, check which Perl you have:
$ perl -V:version
version='5.26.3';
You have 5.26.3 Perl version. This is the default version supported for the next 10 years
and, if you are fine with it, you don't have to know anything about modules. But what if you
want to try a different version?
Let's find out what Perl modules are available using the yum module list
command:
# yum module list
Last metadata expiration check: 1:45:10 ago on Tue 07 May 2019 04:18:01 PM CEST.
[ ]
Name Stream Profiles Summary
[ ]
parfait 0.5 common Parfait Module
perl 5.24 common [d], Practical Extraction and Report Languag
minimal e
perl 5.26 [d] common [d], Practical Extraction and Report Languag
minimal e
perl-App-cpanminus 1.7044 [d] common [d] Get, unpack, build and install CPAN mod
ules
perl-DBD-MySQL 4.046 [d] common [d] A MySQL interface for Perl
perl-DBD-Pg 3.7 [d] common [d] A PostgreSQL interface for Perl
perl-DBD-SQLite 1.58 [d] common [d] SQLite DBI driver
perl-DBI 1.641 [d] common [d] A database access API for Perl
perl-FCGI 0.78 [d] common [d] FastCGI Perl bindings
perl-YAML 1.24 [d] common [d] Perl parser for YAML
php 7.2 [d] common [d], PHP scripting language
devel, minim
al
[ ]
Here you can see a Perl module is available in versions 5.24 and 5.26. Those are called
streams in the modularity world, and they denote an independent variant, usually a
different version, of the same software stack. The [d] flag marks a default stream.
That means if you do not explicitly enable a different stream, the default one will be used.
That explains why yum installed Perl 5.26.3 and not some of the 5.24 micro versions.
Now suppose you have an old application that you are migrating from Red Hat Enterprise Linux
7, which was running in the rh-perl524 software collection
environment, and you want to give it a try on Red Hat Enterprise Linux 8. Let's try Perl 5.24
on Red Hat Enterprise Linux 8.
Enabling a Stream
First, switch the Perl module to the 5.24 stream:
# yum module enable perl:5.24
Last metadata expiration check: 2:03:16 ago on Tue 07 May 2019 04:18:01 PM CEST.
Problems in request:
Modular dependency problems with Defaults:
Problem 1: conflicting requests
- module freeradius:3.0:8000020190425181943:75ec4169-0.x86_64 requires module(perl:5.26), but none of the providers can be installed
- module perl:5.26:820181219174508:9edba152-0.x86_64 conflicts with module(perl:5.24) provided by perl:5.24:820190207164249:ee766497-0.x86_64
- module perl:5.24:820190207164249:ee766497-0.x86_64 conflicts with module(perl:5.26) provided by perl:5.26:820181219174508:9edba152-0.x86_64
Problem 2: conflicting requests
- module freeradius:3.0:820190131191847:fbe42456-0.x86_64 requires module(perl:5.26), but none of the providers can be installed
- module perl:5.26:820181219174508:9edba152-0.x86_64 conflicts with module(perl:5.24) provided by perl:5.24:820190207164249:ee766497-0.x86_64
- module perl:5.24:820190207164249:ee766497-0.x86_64 conflicts with module(perl:5.26) provided by perl:5.26:820181219174508:9edba152-0.x86_64
Dependencies resolved.
==========================================================================================
Package Arch Version Repository Size
==========================================================================================
Enabling module streams:
perl 5.24
Transaction Summary
==========================================================================================
Is this ok [y/N]: y
Complete!
Switching module streams does not alter installed packages (see 'module enable' in dnf(8)
for details)
Here you can see a warning that the freeradius:3.0 stream is not compatible with
perl:5.24 . That's because FreeRADIUS was built for Perl 5.26 only. Not all modules
are compatible with all other modules.
Next, you can see a confirmation for enabling the Perl 5.24 stream. And, finally, there is
another warning about installed packages. The last warning means that the system still can have
installed RPM packages from the 5.26 stream, and you need to explicitly sort it out.
Changing modules and changing packages are two separate phases. You can fix it by
synchronizing a distribution content like this:
# yum --allowerasing distrosync
Last metadata expiration check: 0:00:56 ago on Tue 07 May 2019 06:33:36 PM CEST.
Modular dependency problems:
Problem 1: module freeradius:3.0:8000020190425181943:75ec4169-0.x86_64 requires module(perl:5.26), but none of the providers can be installed
- module perl:5.26:820181219174508:9edba152-0.x86_64 conflicts with module(perl:5.24) provided by perl:5.24:820190207164249:ee766497-0.x86_64
- module perl:5.24:820190207164249:ee766497-0.x86_64 conflicts with module(perl:5.26) provided by perl:5.26:820181219174508:9edba152-0.x86_64
- conflicting requests
Problem 2: module freeradius:3.0:820190131191847:fbe42456-0.x86_64 requires module(perl:5.26), but none of the providers can be installed
- module perl:5.26:820181219174508:9edba152-0.x86_64 conflicts with module(perl:5.24) provided by perl:5.24:820190207164249:ee766497-0.x86_64
- module perl:5.24:820190207164249:ee766497-0.x86_64 conflicts with module(perl:5.26) provided by perl:5.26:820181219174508:9edba152-0.x86_64
- conflicting requests
Dependencies resolved.
==========================================================================================
Package Arch Version Repository Size
==========================================================================================
[ ]
Downgrading:
perl x86_64 4:5.24.4-403.module+el8+2770+c759b41a
rhel-8.0.z-appstream 6.1 M
[ ]
Transaction Summary
==========================================================================================
Upgrade 69 Packages
Downgrade 66 Packages
Total download size: 20 M
Is this ok [y/N]: y
[ ]
Complete!
And try the perl command again:
$ perl -V:version
version='5.24.4';
Great! It works. We switched to a different Perl version, and the different Perl is still
invoked with the perl command and is installed to a standard path (
/usr/bin/perl ). No scl enable incantation is needed, in contrast to the
software collections.
You could notice the repeated warning about FreeRADIUS. A future YUM update is going to
clean up the unnecessary warning. Despite that, I can show you that other Perl-ish modules are
compatible with any Perl stream.
Dependent modules
Let's say the old application mentioned before is using DBD::SQLite Perl module.
(This nomenclature is a little ambiguous: Red Hat Enterprise Linux has modules; Perl has
modules. If I want to emphasize the difference, I will say the Modularity modules or the CPAN
modules.) So, let's install CPAN's DBD::SQLite module. Yum can search for a packaged CPAN
module, so give it a try:
# yum --allowerasing install 'perl(DBD::SQLite)'
[ ]
Dependencies resolved.
==========================================================================================
Package Arch Version Repository Size
==========================================================================================
Installing:
perl-DBD-SQLite x86_64 1.58-1.module+el8+2519+e351b2a7 rhel-8.0.z-appstream 186 k
Installing dependencies:
perl-DBI x86_64 1.641-2.module+el8+2701+78cee6b5 rhel-8.0.z-appstream 739 k
Enabling module streams:
perl-DBD-SQLite 1.58
perl-DBI 1.641
Transaction Summary
==========================================================================================
Install 2 Packages
Total download size: 924 k
Installed size: 2.3 M
Is this ok [y/N]: y
[ ]
Installed:
perl-DBD-SQLite-1.58-1.module+el8+2519+e351b2a7.x86_64
perl-DBI-1.641-2.module+el8+2701+78cee6b5.x86_64
Complete!
Here you can see DBD::SQLite CPAN module was found in the perl-DBD-SQLite RPM
package that's part of perl-DBD-SQLite:1.58 module, and apparently it requires some
dependencies from the perl-DBI:1.641 module, too. Thus, yum asked for enabling the
streams and installing the packages.
Before playing with DBD::SQLite under Perl 5.24, take a look at the listing of the
Modularity modules and compare it with what you saw the first time:
# yum module list
[ ]
parfait 0.5 common Parfait Module
perl 5.24 [e] common [d], Practical Extraction and Report Languag
minimal e
perl 5.26 [d] common [d], Practical Extraction and Report Languag
minimal e
perl-App-cpanminus 1.7044 [d] common [d] Get, unpack, build and install CPAN mod
ules
perl-DBD-MySQL 4.046 [d] common [d] A MySQL interface for Perl
perl-DBD-Pg 3.7 [d] common [d] A PostgreSQL interface for Perl
perl-DBD-SQLite 1.58 [d][e] common [d] SQLite DBI driver
perl-DBI 1.641 [d][e] common [d] A database access API for Perl
perl-FCGI 0.78 [d] common [d] FastCGI Perl bindings
perl-YAML 1.24 [d] common [d] Perl parser for YAML
php 7.2 [d] common [d], PHP scripting language
devel, minim
al
[ ]
Notice that perl:5.24 is enabled ( [e] ) and thus takes precedence over perl:5.26,
which would otherwise be a default one ( [d] ). Other enabled Modularity modules are
perl-DBD-SQLite:1.58 and perl-DBI:1.641. Those were enabled when you installed DBD::SQLite.
These two modules have no other streams.
In general, any module can have multiple streams. At most, one stream of a module can be the
default one. And, at most, one stream of a module can be enabled. An enabled stream takes
precedence over a default one. If there is no enabled or a default stream, content of the
module is unavailable.
If, for some reason, you need to disable a stream, even a default one, you do that with
yum module disable MODULE:STREAM command.
Enough theory, back to some productive work. You are ready to test the DBD::SQLite CPAN
module now. Let's create a test database, a foo table inside with one textual
column called bar , and let's store a row with Hello text there:
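The one-liner that creates the table is missing from the text; judging from the verification command that follows, it presumably looked something like this:
$ perl -MDBI -e '$dbh=DBI->connect(q{dbi:SQLite:dbname=test}); $dbh->do(q{CREATE TABLE foo (bar text)}); $dbh->do(q{INSERT INTO foo (bar) VALUES (?)}, undef, q{Hello})'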
Next, verify the Hello string was indeed stored by querying the database:
$ perl -MDBI -e '$dbh=DBI->connect(q{dbi:SQLite:dbname=test}); print $dbh->selectrow_array(q{SELECT bar FROM foo}), qq{\n}'
Hello
It seems DBD::SQLite works.
Non-modular packages may not work with non-default
streams
So far, everything is great and working. Now I will show what happens if you try to install
an RPM package that has not been modularized and is thus compatible only with the default Perl,
perl:5.26:
# yum --allowerasing install 'perl(LWP)'
[ ]
Error:
Problem: package perl-libwww-perl-6.34-1.el8.noarch requires perl(:MODULE_COMPAT_5.26.2), but none of the providers can be installed
- cannot install the best candidate for the job
- package perl-libs-4:5.26.3-416.el8.i686 is excluded
- package perl-libs-4:5.26.3-416.el8.x86_64 is excluded
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
Yum will report an error about perl-libwww-perl RPM package being incompatible. The
LWP CPAN module that is packaged as perl-libwww-perl is built only for Perl 5.26, and
therefore RPM dependencies cannot be satisfied. When a perl:5.24 stream is enabled, the
packages from perl:5.26 stream are masked and become unavailable. However, this masking does
not apply to non-modular packages, like perl-libwww-perl. There are plenty of packages that
were not modularized yet. If you need some of them to be available and compatible with a
non-default stream (e.g., not only with perl:5.26 but also with perl:5.24) do not hesitate to
contact Red Hat support team
with your request.
Resetting a module
Let's say you tested your old application and now you want to find out if it works with the
new Perl 5.26.
To do that, you need to switch back to the perl:5.26 stream. Unfortunately, switching from
an enabled stream back to a default or to a yet another non-default stream is not
straightforward. You'll need to perform a module reset:
# yum module reset perl
[ ]
Dependencies resolved.
==========================================================================================
Package Arch Version Repository Size
==========================================================================================
Resetting module streams:
perl 5.24
Transaction Summary
==========================================================================================
Is this ok [y/N]: y
Complete!
Well, that did not hurt. Now you can synchronize the distribution again to replace the 5.24
RPM packages with 5.26 ones:
# yum --allowerasing distrosync
[ ]
Transaction Summary
==========================================================================================
Upgrade 65 Packages
Downgrade 71 Packages
Total download size: 22 M
Is this ok [y/N]: y
[ ]
After that, you can check the Perl version:
$ perl -V:version
version='5.26.3';
And, check the enabled modules:
# yum module list
[ ]
parfait 0.5 common Parfait Module
perl 5.24 common [d], Practical Extraction and Report Languag
minimal e
perl 5.26 [d] common [d], Practical Extraction and Report Languag
minimal e
perl-App-cpanminus 1.7044 [d] common [d] Get, unpack, build and install CPAN mod
ules
perl-DBD-MySQL 4.046 [d] common [d] A MySQL interface for Perl
perl-DBD-Pg 3.7 [d] common [d] A PostgreSQL interface for Perl
perl-DBD-SQLite 1.58 [d][e] common [d] SQLite DBI driver
perl-DBI 1.641 [d][e] common [d] A database access API for Perl
perl-FCGI 0.78 [d] common [d] FastCGI Perl bindings
perl-YAML 1.24 [d] common [d] Perl parser for YAML
php 7.2 [d] common [d], PHP scripting language
devel, minim
al
[ ]
As you can see, we are back at square one. The perl:5.24 stream is not enabled, and
perl:5.26 is the default and therefore preferred. Only perl-DBD-SQLite:1.58 and perl-DBI:1.641
streams remained enabled. It does not matter much because those are the only streams.
Nonetheless, you can reset them back using yum module reset perl-DBI
perl-DBD-SQLite if you like.
Multi-context streams
What happened with the DBD::SQLite? It's still there and working:
$ perl -MDBI -e '$dbh=DBI->connect(q{dbi:SQLite:dbname=test}); print $dbh->selectrow_array(q{SELECT bar FROM foo}), qq{\n}'
Hello
That is possible because the perl-DBD-SQLite module is built for both 5.24 and 5.26 Perls.
We call these modules multi-contextual . That's the case for perl-DBD-SQLite or
perl-DBI, but not the case for FreeRADIUS, which explains the warning you saw earlier. If you
want to see these low-level details, such as which contexts are available, which dependencies are
required, or which packages are contained in a module, you can use the yum module info
MODULE:STREAM command.
Afterword
I hope this tutorial shed some light on modules -- the fresh feature of Red Hat Enterprise
Linux 8 that enables us to provide you with multiple versions of software on top of one Linux
platform. If you need more details, please read documentation accompanying the product (namely,
user-space component management document and yum(8) manual page ) or ask the support
team for help.
The /proc files I find most valuable, especially for inherited system
discovery, are:
cmdline
cpuinfo
meminfo
version
And the most valuable of those are cpuinfo and meminfo .
Again, I'm not stating that other files don't have value, but these are the ones I've found
that have the most value to me. For example, the /proc/uptime file gives you the
system's uptime in seconds. For me, that's not particularly valuable. However, if I want that
information, I use the uptime command that also gives me a more readable version
of /proc/loadavg as well.
/proc/cmdline
The value of this information is in how the kernel was booted, because any switches or
special parameters will be listed here, too. And like all information under /proc
, it can be found elsewhere and usually with better formatting, but /proc files
are very handy when you can't remember the command or don't want to grep for
something.
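For example, a glance at the kernel command line looks like this (the exact contents will of course differ from system to system):
$ cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-3.10.0-1062.el7.x86_64 root=/dev/mapper/centos-root ro rhgb quiet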
/proc/cpuinfo
The /proc/cpuinfo file is the first file I check when connecting to a new
system. I want to know the CPU make-up of a system and this file tells me everything I need to
know.
This is a virtual machine and only has one vCPU. If your system contains more than one CPU,
the CPU numbering begins at 0 for the first CPU.
/proc/meminfo
The /proc/meminfo file is the second file I check on a new system. It gives me
a general and a specific look at a system's memory allocation and usage.
I think most sysadmins either use the free or the top command to
pull some of the data contained here. The /proc/meminfo file gives me a quick
memory overview that I like and can redirect to another file as a
snapshot.
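One way to take such a snapshot (just a sketch; use whatever naming scheme you like):
$ cat /proc/meminfo > meminfo.$(hostname).$(date +%F)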
/proc/version
The /proc/version file provides more information than the related
uname -a command does. Here are the two compared:
$ cat /proc/version
Linux version 3.10.0-1062.el7.x86_64 ([email protected]) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-36) (GCC) ) #1 SMP Wed Aug 7 18:08:02 UTC 2019
$ uname -a
Linux centos7 3.10.0-1062.el7.x86_64 #1 SMP Wed Aug 7 18:08:02 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Usually, the uname -a command is sufficient to give you kernel version info but
for those of you who are developers or who are ultra-concerned with details, the
/proc/version file is there for you.
Wrapping up
The /proc filesystem has a ton of valuable information available to system
administrators who want a convenient, non-command way of getting at raw system info. As I
stated earlier, there are other ways to display the information in /proc .
Additionally, some of the /proc info isn't what you'd want to use for system
assessment. For example, use commands such as vmstat 5 5 or iostat 5
5 to get a better picture of system performance rather than reading one of the available
/proc files.
6 handy Bash scripts for Git
These six Bash scripts will make your life easier
when you're working with Git repositories. 15 Jan 2020 Bob Peterson (Red Hat)
I wrote a bunch of Bash scripts that make my life easier when I'm working with Git
repositories. Many of my colleagues say there's no need; that everything I need to do can be
done with Git commands. While that may be true, I find the scripts infinitely more convenient
than trying to figure out the appropriate Git command to do what I want.
1. gitlog
gitlog prints an abbreviated list of current patches against the master version. It prints
them from oldest to newest and shows the author and description, with H for HEAD , ^ for HEAD^
, 2 for HEAD~2, and so forth. For example:
$ gitlog
-----------------------[ recovery25 ]-----------------------
(snip)
11 340d27a33895 Bob Peterson gfs2: drain the ail2 list after io errors
10 9b3c4e6efb10 Bob Peterson gfs2: clean up iopen glock mess in gfs2_create_inode
9 d2e8c22be39b Bob Peterson gfs2: Do proper error checking for go_sync family of glops
8 9563e31f8bfd Christoph Hellwig gfs2: use page_offset in gfs2_page_mkwrite
7 ebac7a38036c Christoph Hellwig gfs2: don't use buffer_heads in gfs2_allocate_page_backing
6 f703a3c27874 Andreas Gruenbacher gfs2: Improve mmap write vs. punch_hole consistency
5 a3e86d2ef30e Andreas Gruenbacher gfs2: Multi-block allocations in gfs2_page_mkwrite
4 da3c604755b0 Andreas Gruenbacher gfs2: Fix end-of-file handling in gfs2_page_mkwrite
3 4525c2f5b46f Bob Peterson Rafael Aquini's slab instrumentation
2 a06a5b7dea02 Bob Peterson GFS2: Add go_get_holdtime to gl_ops
^ 8ba93c796d5c Bob Peterson gfs2: introduce new function remaining_hold_time and use it in
dq
H e8b5ff851bb9 Bob Peterson gfs2: Allow rgrps to have a minimum hold time
If I want to see what patches are on a different branch, I can specify an alternate
branch:
2. gitlog.id
gitlog.id prints only the commit IDs of those patches rather than the full descriptions.
Again, it assumes the current branch, but I can specify a different branch if I
want.
3. gitlog.id2
gitlog.id2 is the same as gitlog.id but without the branch line at the top. This is handy
for cherry-picking all patches from one branch to the current branch:
$ # create a new branch
$ git branch --track recovery26 origin/master
$ # check out the new branch I just created
$ git checkout recovery26
$ # cherry-pick all patches from the old branch to the new one
$ for i in `gitlog.id2 recovery25` ; do git cherry-pick $i ; done
4. gitlog.grep
gitlog.grep greps for a string within that collection of patches. For example, if I find a
bug and want to fix the patch that has a reference to function inode_go_sync , I simply
do:
$ gitlog.grep inode_go_sync
-----------------------[ recovery25 - 50 patches ]-----------------------
(snip)
11 340d27a33895 Bob Peterson gfs2: drain the ail2 list after io errors
10 9b3c4e6efb10 Bob Peterson gfs2: clean up iopen glock mess in gfs2_create_inode
9 d2e8c22be39b Bob Peterson gfs2: Do proper error checking for go_sync family of glops
152:-static void inode_go_sync(struct gfs2_glock *gl)
153:+static int inode_go_sync(struct gfs2_glock *gl)
163:@@ -296,6 +302,7 @@ static void inode_go_sync(struct gfs2_glock *gl)
8 9563e31f8bfd Christoph Hellwig gfs2: use page_offset in gfs2_page_mkwrite
7 ebac7a38036c Christoph Hellwig gfs2: don't use buffer_heads in gfs2_allocate_page_backing
6 f703a3c27874 Andreas Gruenbacher gfs2: Improve mmap write vs. punch_hole consistency
5 a3e86d2ef30e Andreas Gruenbacher gfs2: Multi-block allocations in gfs2_page_mkwrite
4 da3c604755b0 Andreas Gruenbacher gfs2: Fix end-of-file handling in gfs2_page_mkwrite
3 4525c2f5b46f Bob Peterson Rafael Aquini's slab instrumentation
2 a06a5b7dea02 Bob Peterson GFS2: Add go_get_holdtime to gl_ops
^ 8ba93c796d5c Bob Peterson gfs2: introduce new function remaining_hold_time and use it in
dq
H e8b5ff851bb9 Bob Peterson gfs2: Allow rgrps to have a minimum hold time
So, now I know that patch HEAD~9 is the one that needs fixing. I use git rebase -i HEAD~10
to edit patch 9, git commit -a --amend , then git rebase --continue to make the necessary
adjustments.
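Spelled out as a sequence, with the patch numbering from the gitlog.grep output above:
$ git rebase -i HEAD~10        # mark the HEAD~9 patch as "edit"
$ # ...make the code fix...
$ git commit -a --amend
$ git rebase --continue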
5. gitbranchcmp3
gitbranchcmp3 lets me compare my current branch to another branch, so I can compare older
versions of patches to my newer versions and quickly see what's changed and what hasn't. It
generates a compare script (that uses the KDE tool Kompare , which works on GNOME3,
as well) to compare the patches that aren't quite the same. If there are no differences other
than line numbers, it prints [SAME] . If there are only comment differences, it prints [same]
(in lower case). For example:
$ gitbranchcmp3 recovery24
Branch recovery24 has 47 patches
Branch recovery25 has 50 patches
(snip)
38 87eb6901607a 340d27a33895 [same] gfs2: drain the ail2 list after io errors
39 90fefb577a26 9b3c4e6efb10 [same] gfs2: clean up iopen glock mess in gfs2_create_inode
40 ba3ae06b8b0e d2e8c22be39b [same] gfs2: Do proper error checking for go_sync family of
glops
41 2ab662294329 9563e31f8bfd [SAME] gfs2: use page_offset in gfs2_page_mkwrite
42 0adc6d817b7a ebac7a38036c [SAME] gfs2: don't use buffer_heads in
gfs2_allocate_page_backing
43 55ef1f8d0be8 f703a3c27874 [SAME] gfs2: Improve mmap write vs. punch_hole consistency
44 de57c2f72570 a3e86d2ef30e [SAME] gfs2: Multi-block allocations in gfs2_page_mkwrite
45 7c5305fbd68a da3c604755b0 [SAME] gfs2: Fix end-of-file handling in gfs2_page_mkwrite
46 162524005151 4525c2f5b46f [SAME] Rafael Aquini's slab instrumentation
47 a06a5b7dea02 [ ] GFS2: Add go_get_holdtime to gl_ops
48 8ba93c796d5c [ ] gfs2: introduce new function remaining_hold_time and use it in dq
49 e8b5ff851bb9 [ ] gfs2: Allow rgrps to have a minimum hold time
Missing from recovery25:
The missing:
Compare script generated at: /tmp/compare_mismatches.sh
6. gitlog.find
Finally, I have gitlog.find , a script to help me identify where the upstream versions of my
patches are and each patch's current status. It does this by matching the patch description. It
also generates a compare script (again, using Kompare) to compare the current patch to the
upstream counterpart:
$ gitlog.find
-----------------------[ recovery25 - 50 patches ]-----------------------
(snip)
11 340d27a33895 Bob Peterson gfs2: drain the ail2 list after io errors
lo 5bcb9be74b2a Bob Peterson gfs2: drain the ail2 list after io errors
10 9b3c4e6efb10 Bob Peterson gfs2: clean up iopen glock mess in gfs2_create_inode
fn 2c47c1be51fb Bob Peterson gfs2: clean up iopen glock mess in gfs2_create_inode
9 d2e8c22be39b Bob Peterson gfs2: Do proper error checking for go_sync family of glops
lo feb7ea639472 Bob Peterson gfs2: Do proper error checking for go_sync family of glops
8 9563e31f8bfd Christoph Hellwig gfs2: use page_offset in gfs2_page_mkwrite
ms f3915f83e84c Christoph Hellwig gfs2: use page_offset in gfs2_page_mkwrite
7 ebac7a38036c Christoph Hellwig gfs2: don't use buffer_heads in gfs2_allocate_page_backing
ms 35af80aef99b Christoph Hellwig gfs2: don't use buffer_heads in
gfs2_allocate_page_backing
6 f703a3c27874 Andreas Gruenbacher gfs2: Improve mmap write vs. punch_hole consistency
fn 39c3a948ecf6 Andreas Gruenbacher gfs2: Improve mmap write vs. punch_hole consistency
5 a3e86d2ef30e Andreas Gruenbacher gfs2: Multi-block allocations in gfs2_page_mkwrite
fn f53056c43063 Andreas Gruenbacher gfs2: Multi-block allocations in gfs2_page_mkwrite
4 da3c604755b0 Andreas Gruenbacher gfs2: Fix end-of-file handling in gfs2_page_mkwrite
fn 184b4e60853d Andreas Gruenbacher gfs2: Fix end-of-file handling in gfs2_page_mkwrite
3 4525c2f5b46f Bob Peterson Rafael Aquini's slab instrumentation
Not found upstream
2 a06a5b7dea02 Bob Peterson GFS2: Add go_get_holdtime to gl_ops
Not found upstream
^ 8ba93c796d5c Bob Peterson gfs2: introduce new function remaining_hold_time and use it in
dq
Not found upstream
H e8b5ff851bb9 Bob Peterson gfs2: Allow rgrps to have a minimum hold time
Not found upstream
Compare script generated: /tmp/compare_upstream.sh
The patches are shown on two lines, the first of which is your current patch, followed by
the corresponding upstream patch, and a 2-character abbreviation to indicate its upstream
status:
lo means the patch is in the local upstream Git repo only (i.e., not pushed upstream
yet).
ms means the patch is in Linus Torvald's master branch.
fn means the patch is pushed to my "for-next" development branch, intended for the next
upstream merge window.
Some of my scripts make assumptions based on how I normally work with Git. For example, when
searching for upstream patches, it uses my well-known Git tree's location. So, you will need to
adjust or improve them to suit your conditions. The gitlog.find script is designed to locate
GFS2 and DLM patches only, so unless
you're a GFS2 developer, you will want to customize it to the components that interest
you.
Source code
Here is the source for these scripts.
1. gitlog
#!/bin/bash
branch=$1
if test "x$branch" = x; then
    branch=`git branch -a | grep "*" | cut -d ' ' -f2`
fi
#echo "old: " $oldsha1s
oldcount = ${#oldsha1s[@]}
echo "Branch $oldbranch has $oldcount patches"
oldcount =$ ( echo $oldcount - 1 | bc )
#for o in `seq 0 ${#oldsha1s[@]}`; do
# echo -n ${oldsha1s[$o]} " "
# desc=`git show $i | head -5 | tail -1|cut -b5-`
#done
#echo "new: " $newsha1s
newcount = ${#newsha1s[@]}
echo "Branch $newbranch has $newcount patches"
newcount =$ ( echo $newcount - 1 | bc )
#for o in `seq 0 ${#newsha1s[@]}`; do
# echo -n ${newsha1s[$o]} " "
# desc=`git show $i | head -5 | tail -1|cut -b5-`
#done
echo
for new in `seq 0 $newcount`; do
    newsha=${newsha1s[$new]}
    newdesc=`git show $newsha | head -5 | tail -1 | cut -b5-`
    oldsha=" "
    same="[ ]"
    for old in `seq 0 $oldcount`; do
        if test "${oldsha1s[$old]}" = "match"; then
            continue;
        fi
        olddesc=`git show ${oldsha1s[$old]} | head -5 | tail -1 | cut -b5-`
        if test "$olddesc" = "$newdesc"; then
            oldsha=${oldsha1s[$old]}
            #echo $oldsha
            git show $oldsha | tail -n +2 | grep -v "index.*\.\." | grep -v "@@" > /tmp/gronk1
            git show $newsha | tail -n +2 | grep -v "index.*\.\." | grep -v "@@" > /tmp/gronk2
            diff /tmp/gronk1 /tmp/gronk2 &> /dev/null
            if [ $? -eq 0 ]; then
                # No differences
                same="[SAME]"
                oldsha1s[$old]="match"
                break
            fi
            git show $oldsha | sed -n '/diff/,$p' | grep -v "index.*\.\." | grep -v "@@" > /tmp/gronk1
            git show $newsha | sed -n '/diff/,$p' | grep -v "index.*\.\." | grep -v "@@" > /tmp/gronk2
            diff /tmp/gronk1 /tmp/gronk2 &> /dev/null
            if [ $? -eq 0 ]; then
                # Differences in comments only
                same="[same]"
                oldsha1s[$old]="match"
                break
            fi
            oldsha1s[$old]="match"
            echo "compare_them $oldsha $newsha" >> $script
        fi
    done
    echo "$new $oldsha $newsha $same $newdesc"
done
echo
echo "Missing from $newbranch:"
the_missing=""
# Now run through the olds we haven't matched up
for old in `seq 0 $oldcount`; do
    if test ${oldsha1s[$old]} != "match"; then
        olddesc=`git show ${oldsha1s[$old]} | head -5 | tail -1 | cut -b5-`
        echo "${oldsha1s[$old]} $olddesc"
        the_missing=`echo "$the_missing ${oldsha1s[$old]}"`
    fi
done
Reading Hacker News this morning, I came across an article on how the old
Internet has died because we trusted all our content to Facebook and Google. While hyperbole
abounds in the headline, and there are plenty of internet things out there that aren't owned by
Google or Facebook (including this AWS-free blog), it is true that much of the information and
content is in the hands of a giant ad-serving service and a social echo chamber. (Well, that is
probably too harsh.)
I heard this advice many years ago that you should own your own content. While there isn't
much value in my trivial or obscure blog that nobody reads, it matters to me and is the reason
I've run it on my own software, my own servers, for 10+ years. This blog, for example, runs on
open source WordPress, a Linux server hosted by a friend, and managed by me as I login and make
changes.
But of course, that is silly! Why not publish on Medium like everyone else? Or publish on
someone else's service? Isn't that the point of the internet? Maybe. But in another sense, to
me, the point is freedom. Freedom to express, do what I want, say what I will with no
restrictions. The ability to own what I say and freedom from others monetizing me directly.
There's no walled garden and anyone can access the content I write in my own little
funzone.
While that may seem like ridiculousness, to me it's part of my hobby, and something I enjoy.
In the next decade, whether this blog remains up or is shut down, is not dependent upon the
fates of Google, Facebook, Amazon, nor Apple. It's dependent upon me, whether I want it up or
not. If I change my views, I can delete it. It won't just sit on the Internet because someone
else's terms of service agreement changed. I am in control, I am in charge. That to me is
important and the reason I run this blog, don't use other people's services, and why I advocate
for owning your own content.
I'm often asked in my technical troubleshooting job to solve problems that development teams can't solve. Usually these do not
involve knowledge of API calls or syntax, rather some kind of insight into what the right tool to use is, and why and how to use
it. Probably because they're not taught in college, developers are often unaware that these tools exist, which is a shame, as playing
with them can give a much deeper understanding of what's going on and ultimately lead to better code.
My favourite secret weapon in this path to understanding is strace.
strace (or its Solaris equivalents, truss and dtruss) is a tool that tells you which operating system (OS)
calls your program is making.
An OS call (or just "system call") is your program asking the OS to provide some service for it. Since this covers a lot of the
things that cause problems not directly to do with the domain of your application development (I/O, finding files, permissions etc)
its use has a very high hit rate in resolving problems out of developers' normal problem space.
Usage Patterns
strace is useful in all sorts of contexts. Here's a couple of examples garnered from my experience.
My Netcat Server Won't Start!
Imagine you're trying to start an executable, but it's failing silently (no log file, no output at all). You don't have the source,
and even if you did, the source code is neither readily available, nor ready to compile, nor readily comprehensible.
Simply running through strace will likely give you clues as to what's gone on.
$ nc -l localhost 80
nc: Permission denied
Let's say someone's trying to run this and doesn't understand why it's not working (let's assume manuals are unavailable).
Simply put strace at the front of your command. Note that the following output has been heavily edited for space
reasons (deep breath):
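In other words, something like this (the trace output itself is not reproduced here):
$ strace nc -l localhost 80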
To most people that see this flying up their terminal this initially looks like gobbledygook, but it's really quite easy to parse
when a few things are explained.
For each line:
the first entry on the left is the system call being performed
the bit in the parentheses are the arguments to the system call
the right side of the equals sign is the return value of the system call
open("/etc/gai.conf", O_RDONLY) = 3
Therefore for this particular line, the system call is open , the arguments are the string /etc/gai.conf
and the constant O_RDONLY , and the return value was 3 .
How to make sense of this?
Some of these system calls can be guessed or enough can be inferred from context. Most readers will figure out that the above
line is the attempt to open a file with read-only permission.
In the case of the above failure, we can see that before the program calls exit_group, there are a couple of calls to bind that
return "Permission denied":
We might therefore want to understand what "bind" is and why it might be failing.
You need to get a copy of the system call's documentation. On Ubuntu and related distributions of Linux, the documentation is
in the manpages-dev package, and can be invoked by e.g. man 2 bind (I just used strace to
determine which file man 2 bind opened and then did a dpkg -S to determine from which package it came!).
You can also look up online if you have access, but if you can auto-install via a package manager you're more likely to get docs
that match your installation.
Right there in my man 2 bind page it says:
ERRORS
EACCES The address is protected, and the user is not the superuser.
So there is the answer – we're trying to bind to a port that can only be bound to if you are the super-user.
My Library Is Not Loading!
Imagine a situation where developer A's perl script is working fine, but developer B's identical one is not (again, the
output has been edited).
In this case, we strace the script on developer A's computer to see how it's working:
We observe that the file is found in what looks like an unusual place.
open("/space/myperllib/blahlib.pm", O_RDONLY) = 4
Inspecting the environment, we see that:
$ env | grep myperl
PERL5LIB=/space/myperllib
So the solution is to set the same env variable before running:
export PERL5LIB=/space/myperllib
Get to know the internals bit by bit
If you do this a lot, or idly run strace on various commands and peruse the output, you can learn all sorts of things
about the internals of your OS. If you're like me, this is a great way to learn how things work. For example, just now I've had a
look at the file /etc/gai.conf , which I'd never come across before writing this.
Once your interest has been piqued, I recommend getting a copy of "Advanced Programming in the Unix Environment" by Stevens &
Rago, and reading it cover to cover. Not all of it will go in, but as you use strace more and more, and (hopefully)
browse C code more and more understanding will grow.
Gotchas
If you're running a program that calls other programs, it's important to run with the -f flag, which "follows" child processes
and straces them. -ff creates a separate file with the pid suffixed to the name.
If you're on Solaris, this program doesn't exist – you need to use truss instead.
Many production environments will not have this program installed for security reasons. strace doesn't have many library dependencies
(on my machine it has the same dependencies as 'echo'), so if you have permission, (or are feeling sneaky) you can just copy the
executable up.
Other useful tidbits
You can attach to running processes (can be handy if your program appears to hang or the issue is not readily reproducible) with
-p .
If you're looking at performance issues, then the time flags ( -t , -tt , -ttt , and
-T ) can help significantly.
A failed access or open system call is not usually an error in the context of launching a program. Generally it is merely checking
if a config file exists.
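Putting a few of those together, attaching to an already-running process with timestamps, following children, and logging to a file might look like this (12345 is a made-up PID):
$ strace -f -tt -p 12345 -o /tmp/trace.out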
From bash manual: The exit status of an executed command is the value returned by the waitpid system
call or equivalent function. Exit statuses fall between 0 and 255, though, as explained below, the shell may use values above 125
specially. Exit statuses from shell builtins and compound commands are also limited to this range. Under certain circumstances,
the shell will use special values to indicate specific failure modes.
For the shell’s purposes, a command which exits with a zero exit status has succeeded. A non-zero exit status indicates failure.
This seemingly counter-intuitive scheme is used so there is one well-defined way to indicate success and a variety of ways to
indicate various failure modes. When a command terminates on a fatal signal whose number is N,
Bash uses the value 128+N as the exit status.
If a command is not found, the child process created to execute it returns a status of 127. If a command is found but is not
executable, the return status is 126.
If a command fails because of an error during expansion or redirection, the exit status is greater than zero.
The exit status is used by the Bash conditional commands (see Conditional
Constructs) and some of the list constructs (see Lists).
All of the Bash builtins return an exit status of zero if they succeed and a non-zero status on failure, so they may be used by
the conditional and list constructs. All builtins return an exit status of 2 to indicate incorrect usage, generally invalid
options or missing arguments.
Not everyone knows that every time you run a shell command in bash, an 'exit code' is
returned to bash.
Generally, if a command 'succeeds' you get an error code of 0 . If it doesn't
succeed, you get a non-zero code.
1 is a 'general error', and others can give you more information (e.g. which
signal killed it, for example). 255 is the upper limit and indicates an "internal error".
grep joeuser /etc/passwd # in case of success returns 0, otherwise 1
or
grep not_there /dev/null
echo $?
$? is a special bash variable that's set to the exit code of each command after
it runs.
Grep uses exit codes to indicate whether it matched or not. I have to look up every time
which way round it goes: does finding a match or not return 0 ?
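For the record, grep exits with 0 when it finds a match, 1 when it does not, and 2 on an error:
$ grep root /etc/passwd > /dev/null; echo $?
0
$ grep not_there /etc/passwd > /dev/null; echo $?
1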
Readline is one of those technologies that is so commonly used many users don't realise it's there.
I went looking for a good primer on it so I could understand it better, but failed to find one. This is an attempt to write a
primer that may help users get to grips with it, based on what I've managed to glean as I've tried to research and experiment with
it over the years.
Bash Without Readline
First you're going to see what bash looks like without readline.
In your 'normal' bash shell, hit the TAB key twice. You should see something like this:
Display all 2335 possibilities? (y or n)
That's because bash normally has an 'autocomplete' function that allows you to see what commands are available to you if you tap
tab twice.
Hit n to get out of that autocomplete.
Another useful function that's commonly used is that if you hit the up arrow key a few times, then the previously-run commands
should be brought back to the command line.
Now type:
$ bash --noediting
The --noediting flag starts up bash without the readline library enabled.
If you hit TAB twice now you will see something different: the shell no longer 'sees' your tab and just sends a tab
direct to the screen, moving your cursor along. Autocomplete has gone.
Autocomplete is just one of the things that the readline library gives you in the terminal. You might want to try hitting the
up or down arrows as you did above to see that that no longer works as well.
Hit return to get a fresh command line, and exit your non-readline-enabled bash shell:
$ exit
Other Shortcuts
There are a great many shortcuts like autocomplete available to you if readline is enabled. I'll quickly outline four of the most
commonly-used of these before explaining how you can find out more.
$ echo 'some command'
There should not be many surprises there. Now if you hit the 'up' arrow, you will see you can get the last command back on your
line. If you like, you can re-run the command, but there are other things you can do with readline before you hit return.
If you hold down the ctrl key and then hit a at the same time your cursor will return to the start of
the line. Another way of representing this 'multi-key' way of inputting is to write it like this: \C-a . This is one
conventional way to represent this kind of input. The \C represents the control key, and the -a represents
that the a key is depressed at the same time.
Now if you hit \C-e ( ctrl and e ) then your cursor has moved to the end of the line. I
use these two dozens of times a day.
Another frequently useful one is \C-l , which clears the screen, but leaves your command line intact.
The last one I'll show you allows you to search your history to find matching commands while you type. Hit \C-r ,
and then type ec . You should see the echo command you just ran like this:
(reverse-i-search)`ec': echo echo
Then do it again, but keep hitting \C-r over and over. You should see all the commands that have `ec` in them that
you've input before (if you've only got one echo command in your history then you will only see one). As you see them
you are placed at that point in your history and you can move up and down from there or just hit return to re-run if you want.
There are many more shortcuts that you can use that readline gives you. Next I'll show you how to view these. Using `bind`
to Show Readline Shortcuts
If you type:
$ bind -p
You will see a list of bindings that readline is capable of. There's a lot of them!
Have a read through if you're interested, but don't worry about understanding them all yet.
If you type:
$ bind -p | grep C-a
you'll pick out the 'beginning-of-line' binding you used before, and see the \C-a notation I showed you before.
As an exercise at this point, you might want to look for the \C-e and \C-r bindings we used previously.
If you want to look through the entirety of the bind -p output, then you will want to know that \M refers
to the Meta key (which you might also know as the Alt key), and \e refers to the Esc
key on your keyboard. The 'escape' key bindings are different in that you don't hit it and another key at the same time, rather you
hit it, and then hit another key afterwards. So, for example, typing the Esc key, and then the ? key also
tries to auto-complete the command you are typing. This is documented as:
"\e?": possible-completions
in the bind -p output.
Readline and Terminal Options
If you've looked over the possibilities that readline offers you, you might have seen the \C-r binding we looked
at earlier:
"\C-r": reverse-search-history
You might also have seen that there is another binding that allows you to search forward through your history too:
"\C-s": forward-search-history
What often happens to me is that I hit \C-r over and over again, and then go too fast through the history and fly
past the command I was looking for. In these cases I might try to hit \C-s to search forward and get to the one I missed.
Watch out though! Hitting \C-s to search forward through the history might well not work for you.
Why is this, if the binding is there and readline is switched on?
It's because something picked up the \C-s before it got to the readline library: the terminal settings.
The terminal program you are running in may have standard settings that do other things on hitting some of these shortcuts before
readline gets to see it.
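The output being referred to below comes from running stty -e (stty -a on some systems); the tail of that output typically looks something like this before any changes are made (exact values vary by terminal):
$ stty -e
[...]
discard dsusp   eof     eol     eol2    erase   intr    kill    lnext
^O      ^Y      ^D      <undef> <undef> ^?      ^C      ^U      ^V
min     quit    reprint start   status  stop    susp    time    werase
1       ^\      ^R      ^Q      ^T      ^S      ^Z      0       ^W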
You can see on the last four lines ( discard dsusp [...] ) there is a table of key bindings that your terminal will
pick up before readline sees them. The ^ character (known as the 'caret') here represents the ctrl key
that we previously represented with a \C .
If you think this is confusing, I won't disagree. Unfortunately, over the history of Unix and Linux, documenters did not stick to one
way of describing these key combinations.
If you encounter a problem where the terminal options seem to catch a shortcut key binding before it gets to readline, then you
can use the stty program to unset that binding. In this case, we want to unset the 'stop' binding.
If you are in the same situation, type:
$ stty stop undef
Now, if you re-run stty -e , the last two lines might look like this:
[...]
min quit reprint start status stop susp time werase
1 ^\ ^R ^Q ^T <undef> ^Z 0 ^W
where the stop entry now has <undef> underneath it.
Strangely, for me C-r is also bound to 'reprint' above ( ^R ).
But (on my terminals at least) that gets to readline without issue as I search up the history. Why this is the case I haven't
been able to figure out. I suspect that reprint is ignored by modern terminals that don't need to 'reprint' the current line.
\C-d sends an 'end of file' character. It's often used to indicate to a program that input is over. If you type it
on a bash shell, the bash shell you are in will close.
Finally, \C-w deletes the word before the cursor.
These are the most commonly-used shortcuts that are picked up by the terminal before they get to the readline library.
You might want to check out the 'rlwrap' program. It allows you to have readline behavior on programs that don't natively support
readline, but which have a 'type in a command' type interface. For instance, we use Oracle here (alas :-) ) and the 'sqlplus'
program, that lets you type SQL commands to an Oracle instance does not have anything like readline built into it, so you can't
go back to edit previous commands. But running 'rlwrap sqlplus' gives me readline behavior in sqlplus! It's fantastic to have.
I was told to use this in a class, and I didn't understand what I did. One rabbit hole later, I was shocked and amazed at how
advanced the readline library is. One thing I'd like to add is that you can write a '~/.inputrc' file and have those readline
commands sourced at startup!
I do not know exactly when or how the inputrc is read.
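As a rough sketch, a ~/.inputrc might contain lines like these (readline reads it when an interactive shell starts, and you can reload it in a running bash with bind -f ~/.inputrc):
# make tab-completion case-insensitive
set completion-ignore-case on
# make Ctrl-p / Ctrl-n search history for what is already typed
"\C-p": history-search-backward
"\C-n": history-search-forward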
We saw set before, but shopt looks very similar. Just inputting shopt shows a bunch of options:
$ shopt
cdable_vars off
cdspell on
checkhash off
checkwinsize on
cmdhist on
compat31 off
dotglob off
I found a set of answers here. Essentially, it looks like it's a consequence of bash (and other shells) being built on sh, with shopt added later as
another way to set extra shell options. But I'm still unsure – if you know the answer, let me know.
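The basic usage follows the same set/unset/query pattern, e.g.:
$ shopt -s dotglob    # enable an option (globs now match dotfiles)
$ shopt -u dotglob    # disable it again
$ shopt dotglob       # query its current state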
4) Here Docs and Here Strings
'Here docs' are files created inline in the shell.
The 'trick' is simple. Define a closing word, and the lines between that word and when it appears alone on a line become a
file.
Type this:
$ cat > afile << SOMEENDSTRING
> here is a doc
> it has three lines
> SOMEENDSTRING alone on a line will save the doc
> SOMEENDSTRING
$ cat afile
here is a doc
it has three lines
SOMEENDSTRING alone on a line will save the doc
Notice that:
the string could be included in the file if it was not 'alone' on the line
the string SOMEENDSTRING is more normally END , but that is just convention
Lesser known is the 'here string':
$ cat > asd <<< 'This file has one line'
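Here strings are also handy for feeding a short string straight into a command's standard input without creating a file at all, for example:
$ wc -w <<< 'This file has one line'
5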
5) String Variable Manipulation
You may have written code like this before, where you use tools like sed to manipulate strings:
$ VAR='HEADERMy voice is my passwordFOOTER'
$ PASS="$(echo $VAR | sed 's/^HEADER(.*)FOOTER/1/')"
$ echo $PASS
But you may not be aware that this is possible natively in bash .
This means that you can dispense with lots of sed and awk shenanigans.
One way to rewrite the above is:
$ VAR='HEADERMy voice is my passwordFOOTER'
$ PASS="${VAR#HEADER}"
$ PASS="${PASS%FOOTER}"
$ echo $PASS
The # means 'match and remove the following pattern from the start of the string'
The % means 'match and remove the following pattern from the end of the string'.
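The next example deals with default values for variables. The default.sh script it refers to isn't reproduced here; a minimal sketch of what it might look like is:
#!/bin/bash
# default.sh - assign defaults to the three positional parameters (a sketch)
FIRST=${1:-firstdefault}
SECOND=${2:-seconddefault}
THIRD=${3:-thirddefault}
echo "${FIRST} ${SECOND} ${THIRD}"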
Now run chmod +x default.sh and run the script with ./default.sh first second .
Observe how the third argument's default has been assigned, but not the first two.
You can also assign directly with ${VAR:=defaultval} (equals sign, not dash), but note that this won't work with
positional variables in scripts or functions. Try changing the above script to see how it fails.
7) Traps
The trap built-in can be used to 'catch' when a
signal is sent to your script.
Note that there are two 'lines' above, even though you used ; to separate the commands.
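A minimal sketch of catching a signal with trap (an illustration only):
#!/bin/bash
# clean up and exit when the script receives SIGINT (Ctrl-C)
trap 'echo "caught SIGINT - cleaning up"; exit 1' INT
echo "Press Ctrl-C within the next 10 seconds..."
sleep 10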
TMOUT
You can timeout reads, which can be really handy in some scripts
#!/bin/bash
TMOUT=5
echo You have 5 seconds to respond...
read
echo ${REPLY:-noreply}
... ... ...
10) Associative Arrays
Talking of moving to other languages, a rule of thumb I use is that if I need arrays then I drop bash to go to python (I even
created a Docker container for a tool to help with this here
).
What I didn't know until I read up on it was that you can have associative arrays in bash.
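A quick sketch of the syntax (this needs bash 4 or later):
declare -A capitals            # declare an associative array
capitals[France]=Paris
capitals[Japan]=Tokyo
echo "${capitals[Japan]}"      # Tokyo
echo "${!capitals[@]}"         # list the keys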
Variables are a core part of most serious bash scripts (and even one-liners!), so managing
them is another important way to reduce the possibility of your script breaking.
Change your script to add the 'set' line immediately after the first line and see what
happens:
#!/bin/bash
set -o nounset
A="some value"
echo "${A}"
echo "${B}"
...I always set nounset on my scripts as a habit. It can catch many problems
before they become serious.
Tracing Variables
If you are working with a particularly complex script, then you can get to the point where
you are unsure what happened to a variable.
Try running this script and see what happens:
#!/bin/bash
set -o nounset
declare A="some value"
function a {
echo "${BASH_SOURCE}>A A=${A} LINENO:${1}"
}
trap "a $LINENO" DEBUG
B=value
echo "${A}"
A="another value"
echo "${A}"
echo "${B}"
There's a problem with this code. The output is slightly wrong. Can you work out what is
going on? If so, try and fix it.
You may need to refer to the bash man page, and make sure you understand quoting in bash
properly.
It's quite a tricky one to fix 'properly', so if you can't fix it, or work out what's wrong
with it, then ask me directly and I will help.
Profiling Bash Scripts
Returning to the xtrace (or set -x flag), we can exploit its use
of a PS variable to implement the profiling of a script:
#!/bin/bash
set -o nounset
set -o xtrace
declare A="some value"
PS4='$(date "+%s%N => ")'
B=
echo "${A}"
A="another value"
echo "${A}"
echo "${B}"
ls
pwd
curl -q bbc.co.uk
From this you should be able to tell what PS4 does. Have a play with it, and
read up and experiment with the other PS variables to get familiar with what they
do.
NOTE: If you are on a Mac, then you might only get second-level granularity on the
date!
Linting with Shellcheck
Finally, here is a very useful tip for understanding bash more deeply and improving any bash
scripts you come across.
Shellcheck is a website and a
package available on most platforms that gives you advice to help fix and improve your shell
scripts. Very often, its advice has prompted me to research more deeply and understand bash
better.
Here is some example output from a script I found on my laptop:
$ shellcheck shrinkpdf.sh
In shrinkpdf.sh line 44:
-dColorImageResolution=$3 \
^-- SC2086: Double quote to prevent globbing and word splitting.
In shrinkpdf.sh line 46:
-dGrayImageResolution=$3 \
^-- SC2086: Double quote to prevent globbing and word splitting.
In shrinkpdf.sh line 48:
-dMonoImageResolution=$3 \
^-- SC2086: Double quote to prevent globbing and word splitting.
In shrinkpdf.sh line 57:
if [ ! -f "$1" -o ! -f "$2" ]; then
^-- SC2166: Prefer [ p ] || [ q ] as [ p -o q ] is not well defined.
In shrinkpdf.sh line 60:
ISIZE="$(echo $(wc -c "$1") | cut -f1 -d\ )"
^-- SC2046: Quote this to prevent word splitting.
^-- SC2005: Useless echo? Instead of 'echo $(cmd)', just use 'cmd'.
In shrinkpdf.sh line 61:
OSIZE="$(echo $(wc -c "$2") | cut -f1 -d\ )"
^-- SC2046: Quote this to prevent word splitting.
^-- SC2005: Useless echo? Instead of 'echo $(cmd)', just use 'cmd'.
The most common reminders are regarding potential quoting issues, but you can see other
useful tips in the above output, such as preferred arguments to the test
construct, and advice on "useless" echo s.
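For instance, the SC2086 warnings above would normally be addressed by quoting the expansion, along these lines:
-dColorImageResolution=$3 \      # before: unquoted, subject to word splitting
-dColorImageResolution="$3" \    # after: quoted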
Exercise
1) Find a large bash script on a social coding site such as GitHub, and run
shellcheck over it. Contribute back any improvements you find.
"... I use "!*" for "all arguments". It doesn't have the flexibility of your approach but it's faster for my most common need. ..."
"... Provided that your shell is readline-enabled, I find it much easier to use the arrow keys and modifiers to navigate through history than type !:1 (or having to remeber what it means). ..."
7 Bash history shortcuts you will actually use
Save time on the command line with these essential Bash shortcuts.
02 Oct 2019 Ian
Most guides to Bash history shortcuts exhaustively list every single one available. The problem with that is I would use a shortcut
once, then glaze over as I tried out all the possibilities. Then I'd move onto my working day and completely forget them, retaining
only the well-known !! trick I learned when I first
started using Bash.
This article outlines the shortcuts I actually use every day. It is based on some of the contents of my book,
Learn Bash the hard way ; (you can read a
preview of it to learn more).
When people see me use these shortcuts, they often ask me, "What did you do there!?" There's minimal effort or intelligence required,
but to really learn them, I recommend using one each day for a week, then moving to the next one. It's worth taking your time to
get them under your fingers, as the time you save will be significant in the long run.
1. The "last argument" one: !$
If you only take one shortcut from this article, make it this one. It substitutes in the last argument of the last command
into your line.
Consider this scenario:
$ mv /path/to/wrongfile /some/other/place
mv: cannot stat '/path/to/wrongfile': No such file or directory
Ach, I put the wrongfile filename in my command. I should have put rightfile instead.
You might decide to retype the last command and replace wrongfile with rightfile completely. Instead, you can type:
$ mv /path/to/rightfile !$
mv /path/to/rightfile /some/other/place
and the command will work.
There are other ways to achieve the same thing in Bash with shortcuts, but this trick of reusing the last argument of the last
command is one I use the most.
2. The " n th argument" one: !:2
Ever done anything like this?
$ tar -cvf afolder afolder.tar
tar: failed to open
Like many others, I get the arguments to tar (and ln ) wrong more often than I would like to admit.
The last command's items are zero-indexed and can be substituted in with the number after the !: .
Obviously, you can also use this to reuse specific arguments from the last command rather than all of them.
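For example, re-running the mistyped tar command above with its last two arguments swapped looks like this (word 0 is the command itself):
$ !:0 !:1 !:3 !:2
tar -cvf afolder.tar afolder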
3. The "all the arguments": !*
Imagine I run a command like:
$ grep '(ping|pong)' afile
The arguments are correct; however, I want to match ping or pong in a file, but I used grep rather than egrep .
I start typing egrep , but I don't want to retype the other arguments. So I can use the !:1-$ shortcut to ask for all the arguments
to the previous command from the second one (remember they're zero-indexed) to the last one (represented by the $ sign).
$ egrep !:1-$
egrep '(ping|pong)' afile
ping
You don't need to pick 1-$ ; you can pick a subset like 1-2 or 3-9 (if you had that many arguments in the previous command).
4. The "last but n " : !-2:$
The shortcuts above are great when I know immediately how to correct my last command, but often I run commands after the
original one, which means that the last command is no longer the one I want to reference.
For example, using the mv example from before, if I follow up my mistake with an ls check of the folder's contents:
$ mv /path/to/wrongfile /some/other/place
mv: cannot stat '/path/to/wrongfile': No such file or directory
$ ls /path/to/
rightfile
I can no longer use the !$ shortcut.
In these cases, I can insert a -n: (where n is the number of commands to go back in the history) after the ! to
grab the last argument from an older command:
$ mv /path/to/rightfile !-2:$
mv /path/to/rightfile /some/other/place
Again, once you learn it, you may be surprised at how often you need it.
5. The "get me the folder" one: !$:h
This one looks less promising on the face of it, but I use it dozens of times daily.
Imagine I run a command like this:
$ tar -cvf system.tar /etc/system
tar: /etc/system: Cannot stat: No such file or directory
tar: Error exit delayed from previous errors.
The first thing I might want to do is go to the /etc folder to see what's in there and work out what I've done wrong.
I can do this at a stroke with:
$ cd !$:h
cd /etc
This one says: "Get the last argument to the last command ( /etc/system ) and take off its last filename component, leaving only
the /etc ."
6. The "the current line" one: !#:1
For years, I occasionally wondered if I could reference an argument on the current line before finally looking it up and learning
it. I wish I'd done so a long time ago. I most commonly use it to make backup files:
$ cp /path/to/some/file !#:1.bak
cp /path/to/some/file /path/to/some/file.bak
but once it's under the fingers, it can be a very quick alternative.
7. The "search and replace" one: !!:gs
This one searches across the referenced command and replaces what's between the first two / characters with what's between the second two.
Say I want to tell the world that my s key does not work and outputs f instead:
$ echo my f key doef not work
my f key doef not work
Then I realize that I was just hitting the f key by accident. To replace all the f s with s es, I can type:
$ !!:gs/f/s/
echo my s key does not work
my s key does not work
It doesn't work only on single characters; I can replace words or sentences, too:
$ !!:gs/does/did/
echo my s key did not work
my s key did not work
Test them out
Just to show you how these shortcuts can be combined, can you work out what these toenail clippings will output?
Bash can be an elegant source of shortcuts for the day-to-day command-line user. While there are thousands of tips and tricks
to learn, these are my favorites that I frequently put to use.
This article was originally posted on Ian's blog,
Zwischenzugs.com
, and is reused with permission.
Orr, August 25, 2019 at 10:39 pm
BTW – you inspired me to try and understand how to repeat the nth command entered on command line. For example I type 'ls'
and then accidentally type 'clear'. !! will retype clear again but I wanted to retype ls instead using a shortcut.
Bash doesn't accept ':' so !:2 didn't work. !-2 did however, thank you!
Dima August 26, 2019 at 7:40 am
Nice article! Just another cool and often-used one: !vi opens the last vi command with its arguments.
cbarrick on 03 Oct 2019
Your "current line" example is too contrived. Your example is copying to a backup like this:
$ cp /path/to/some/file !#:1.bak
But a better way to write that is with filename generation:
$ cp /path/to/some/file{,.bak}
That's not a history expansion though... I'm not sure I can come up with a good reason to use `!#:1`.
Darryl Martin August 26, 2019 at 4:41 pm
I seldom get anything out of these "bash commands you didn't know" articles, but you've got some great tips here. I'm writing
several down and sticking them on my terminal for reference.
A couple additions I'm sure you know.
I use "!*" for "all arguments". It doesn't have the flexibility of your approach but it's faster for my most common need.
I recently started using Alt-. as a substitute for "!$" to get the last argument. It expands the argument on the line, allowing
me to modify it if necessary.
The problem with bash's history shortcuts for me is... that I never had the need to learn them.
Provided that your shell is readline-enabled, I find it much easier to use the arrow keys and modifiers to navigate through
history than to type !:1 (or having to remember what it means).
Examples:
Ctrl+R for a Reverse search
Ctrl+A to move to the beginning of the line (Home key also)
Ctrl+E to move to the End of the line (End key also)
Ctrl+K to Kill (delete) text from the cursor to the end of the line
Ctrl+U to kill text from the cursor to the beginning of the line
Alt+F to move Forward one word (Ctrl+Right arrow also)
Alt+B to move Backward one word (Ctrl+Left arrow also)
etc.
You may already be familiar with 2>&1 , which redirects standard error
to standard output, but until I stumbled on it in the manual, I had no idea that you can pipe
both standard output and standard error into the next stage of the pipeline like this:
if doesnotexist |& grep 'command not found' >/dev/null
then
echo oops
fi
3) $''
This construct allows you to specify specific bytes in scripts without fear of triggering
some kind of encoding problem. Here's a command that will grep through files
looking for UK currency ('£') signs in hexadecimal recursively:
grep -r $'\xc2\xa3' *
You can also use octal:
grep -r $'\302\243' *
4) HISTIGNORE
If you are concerned about security, and ever type in commands that might have sensitive
data in them, then this one may be of use.
This environment variable keeps the commands specified out of your history file
when you type them in. The patterns are separated by colons:
HISTIGNORE="ls *:man *:history:clear:AWS_KEY*"
You have to specify the whole line, so a glob character may be needed if you want
to exclude commands and their arguments or flags.
5) fc
If readline key bindings
aren't under your fingers, then this one may come in handy.
It calls up the last command you ran, and places it into your preferred editor (specified by
the EDITOR variable). Once edited, it re-runs the command.
6) ((i++))
If you can't be bothered with faffing around with variables in bash with the
$[] construct, you can use the C-style compound command.
So, instead of:
A=1
A=$[$A+1]
echo $A
you can do:
A=1
((A++))
echo $A
which, especially with more complex calculations, might be easier on the eye.
7) caller
Another builtin bash command, caller gives context about your shell's call stack.
SHLVL is a related shell variable which gives the level of depth of the calling
stack.
This can be used to create stack traces for more complex bash scripts.
Here's a die function, adapted from the bash hackers' wiki that gives a stack
trace up through the calling frames:
#!/bin/bash
die() {
local frame=0
((FRAMELEVEL=SHLVL - frame))
echo -n "${FRAMELEVEL}: "
while caller $frame; do
((frame++));
((FRAMELEVEL=SHLVL - frame))
if [[ ${FRAMELEVEL} -gt -1 ]]
then
echo -n "${FRAMELEVEL}: "
fi
done
echo "$*"
exit 1
}
which outputs:
3: 17 f1 ./caller.sh
2: 18 f2 ./caller.sh
1: 19 f3 ./caller.sh
0: 20 main ./caller.sh
*** an error occured ***
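The trace above assumes the rest of caller.sh sets up a chain of calls ending in die, something like this sketch (in the style of the same bash hackers' wiki example):
f1() { die "*** an error occured ***"; }
f2() { f1; }
f3() { f2; }
f3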
8) /dev/tcp/host/port
This one can be particularly handy if you find yourself on a container running within a
Kubernetes cluster service
mesh without any network tools (a frustratingly common experience).
Bash provides you with some virtual files which, when referenced, can create socket
connections to other servers.
This snippet, for example, makes a web request to a site and returns the output.
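A sketch of that snippet, reconstructed from the description below:
exec 9<>/dev/tcp/brvtsdflnxhkzcmw.neverssl.com/80
printf 'GET / HTTP/1.1\r\nHost: brvtsdflnxhkzcmw.neverssl.com\r\nConnection: close\r\n\r\n' >&9
cat <&9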
The first line opens up file descriptor 9 to the host brvtsdflnxhkzcmw.neverssl.com on port
80 for reading and writing. Line two sends the raw HTTP request to that socket
connection's file descriptor. The final line retrieves the response.
Obviously, this doesn't handle SSL for you, so its use is limited now that pretty much
everyone is running on https, but when running from application containers within a service
mesh it can still prove invaluable, as requests there are initiated using HTTP.
9)
Co-processes
Since version 4 of bash it has offered the capability to run named
coprocesses.
It seems to be particularly well-suited to managing the inputs and outputs to other
processes in a fine-grained way. Here's an annotated and trivial example:
coproc testproc (
i=1
while true
do
echo "iteration:${i}"
((i++))
read -r aline
echo "${aline}"
done
)
This sets up the coprocess as a subshell with the name testproc .
Within the subshell, there's a never-ending while loop that counts its own iterations with
the i variable. It outputs two lines: the iteration number, and a line read in
from standard input.
After creating the coprocess, bash sets up an array with that name with the file descriptor
numbers for the standard input and standard output. So this:
echo "${testproc[@]}"
in my terminal outputs:
63 60
Bash also sets up a variable with the process identifier for the coprocess, which you can
see by echoing it:
echo "${testproc_PID}"
You can now input data to the standard input of this coprocess at will like this:
echo input1 >&"${testproc[1]}"
In this case, the command resolves to: echo input1 >&60 , and the
>&[INTEGER] construct ensures the redirection goes to the coprocess's
standard input.
Now you can read the output of the coprocess's two lines in a similar way, like this:
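A sketch of that read (the exact command isn't shown above):
read -r line <&"${testproc[0]}"; echo "${line}"    # iteration:1
read -r line <&"${testproc[0]}"; echo "${line}"    # input1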
You might use this to create an expect -like script if you were so inclined, but it
could be generally useful if you want to manage inputs and outputs. Named pipes are another
way to achieve a similar result.
Here's a complete listing for those who want to cut and paste:
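Reassembled from the pieces above (a sketch that may differ in detail from the original listing):
#!/bin/bash
coproc testproc (
  i=1
  while true
  do
    echo "iteration:${i}"
    ((i++))
    read -r aline
    echo "${aline}"
  done
)
echo "${testproc[@]}"            # the coprocess's output/input file descriptors
echo "${testproc_PID}"           # its process id
echo input1 >&"${testproc[1]}"   # write to its standard input
read -r line <&"${testproc[0]}"; echo "${line}"
read -r line <&"${testproc[0]}"; echo "${line}"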
"... The -I option shows the header information and the -s option silences the response body. Checking the endpoint of your database from your local desktop: ..."
curl transfers a URL. Use this command to test an application's endpoint or
connectivity to an upstream service endpoint. curl can be useful for determining if
your application can reach another service, such as a database, or checking if your service is
healthy.
As an example, imagine your application throws an HTTP 500 error indicating it can't reach a
MongoDB database:
The -I option shows the header information and the -s option silences the
response body. Checking the endpoint of your database from your local desktop:
$ curl -I -s database:27017
HTTP/1.0 200 OK
So what could be the problem? Check if your application can get to other places besides the
database from the application host:
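The original output isn't shown; the check would be something along these lines, using a site that is known to resolve:
$ curl -I -s https://opensource.com
HTTP/1.1 200 OK
If that succeeds while the database hostname does not resolve, the problem is name resolution rather than general connectivity.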
This indicates that your application cannot resolve the database because the URL of the
database is unavailable or the host (container or VM) does not have a nameserver it can use to
resolve the hostname.
Bash tells me the sshd service is not running, so the next thing I want to do is start the service. I had checked its status
with my previous command. That command was saved in history , so I can reference it. I simply run:
$> !!:s/status/start/
sudo systemctl start sshd
The above expression has the following content:
!! - repeat the last command from history
:s/status/start/ - substitute status with start
The result is that the sshd service is started.
Next, I increase the default HISTSIZE value from 500 to 5000 by using the following command:
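The command itself isn't shown above, but it is presumably of this form:
$> export HISTSIZE=5000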
What if I want to display the last three commands in my history? I enter:
$> history 3
1002 ls
1003 tail audit.log
1004 history 3
I run tail on audit.log by referring to the history line number. In this case, I use line 1003:
$> !1003
tail audit.log
Reference the last argument of the previous command
When I want to list directory contents for different directories, I may change between directories quite often. There is a
nice trick you can use to refer to the last argument of the previous command. For example:
$> pwd
/home/username/
$> ls some/very/long/path/to/some/directory
foo-file bar-file baz-file
In the above example, some/very/long/path/to/some/directory is the last argument of the previous command.
If I want to cd (change directory) to that location, I enter something like this:
$> cd $_
$> pwd
/home/username/some/very/long/path/to/some/directory
Now simply use a dash character to go back to where I was:
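That looks like this:
$> cd -
/home/username/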
"... Apart from regular absolute line numbers, Vim supports relative and hybrid line numbers too to help navigate around text files. The 'relativenumber' vim option displays the line number relative to the line with the cursor in front of each line. Relative line numbers help you use the count you can precede some vertical motion commands with, without having to calculate it yourself. ..."
"... We can enable both absolute and relative line numbers at the same time to get "Hybrid" line numbers. ..."
How do I show line numbers in Vim by default on Linux? Vim (Vi IMproved) is not just a free text editor; it is the number one editor for Linux sysadmin and software development work.
By default, Vim doesn't show line numbers on Linux and Unix-like systems; however, we can turn it on using the following instructions. My experience shows that line numbers are useful for debugging shell scripts, program code, and configuration files. Let us see how to display the line number in vim permanently.
Vim show line numbers by default
Turn on absolute line numbering by default in vim:
Open vim configuration file ~/.vimrc by typing the following command: vim ~/.vimrc
Append set number
Press the Esc key
To save the config file, type :w and hit Enter key
You can temporarily disable the absolute line numbers within a vim session by typing: :set nonumber
Want to re-enable the absolute line numbers within a vim session? Try: :set number
We can see vim line numbers on the left side.
Relative line numbers
Apart from regular absolute line numbers, Vim supports relative and hybrid line numbers too
to help navigate around text files. The 'relativenumber' vim option displays the line number
relative to the line with the cursor in front of each line. Relative line numbers help you use
the count you can precede some vertical motion commands with, without having to calculate it
yourself. Once again edit the ~/.vimrc, run: vim ~/.vimrc
Finally, turn relative line numbers on: set relativenumber
Save and close the file in the vim text editor.
How to show "Hybrid" line numbers in Vim by default
What happens when you put the following two config directives in ~/.vimrc?
set number
set relativenumber
That is right. We can enable both absolute and relative line numbers at the same time to get
"Hybrid" line numbers.
Conclusion
Today we learned about permanent line number settings for the vim text editor. By adding the
"set number" config directive in Vim configuration file named ~/.vimrc, we forced vim to show
line numbers each time vim started. See vim docs here for more info and following tutorials too:
Mktemp is part of GNU coreutils package. So don't bother with installation. We will see some practical examples now.
To create a new temporary file, simply run:
$ mktemp
You will see an output like below:
/tmp/tmp.U0C3cgGFpk
As you see in the output, a new temporary file with random name "tmp.U0C3cgGFpk" is created in /tmp directory. This file is just
an empty file.
You can also create a temporary file with a specified suffix. The following command will create a temporary file with ".txt" extension:
$ mktemp --suffix ".txt"
/tmp/tmp.sux7uKNgIA.txt
How about a temporary directory? Yes, it is also possible! To create a temporary directory, use -d option.
$ mktemp -d
This will create a random empty directory in /tmp folder.
Sample output:
/tmp/tmp.PE7tDnm4uN
All files will be created with u+rw permission, and directories with u+rwx , minus umask restrictions. In other words, the resulting
file will have read and write permissions for the current user, but no permissions for the group or others. And the resulting directory
will have read, write and executable permissions for the current user, but no permissions for groups or others.
You can verify the file permissions using "ls" command:
$ ls -al /tmp/tmp.U0C3cgGFpk
-rw------- 1 sk sk 0 May 14 13:20 /tmp/tmp.U0C3cgGFpk
Verify the directory permissions using "ls" command:
$ ls -ld /tmp/tmp.PE7tDnm4uN
drwx------ 2 sk sk 4096 May 14 13:25 /tmp/tmp.PE7tDnm4uN
Create temporary files or directories with custom names using mktemp command
As I already said, all files and directories are created with random file names. We can also create a temporary file or directory
with a custom name. To do so, simply add at least 3 consecutive 'X's at the end of the file name, like below.
$ mktemp ostechnixXXX
ostechnixq70
Similarly, to create directory, just run:
$ mktemp -d ostechnixXXX
ostechnixcBO
Please note that if you choose a custom name, the files/directories will be created in the current working directory, not in the /tmp
location. In this case, you need to clean them up manually.
Also, as you may have noticed, the X's in the file name are replaced with random characters. You can, however, add any suffix of your
choice.
For instance, I want to add "blog" at the end of the filename. Hence, my command would be:
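The command isn't shown above; with GNU mktemp it would be something like this (the random characters will differ):
$ mktemp ostechnixXXX --suffix=blog
ostechnixq7Mblog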
Now we do have the suffix "blog" at the end of the filename.
If you don't want to create any file or directory, you can simply perform a dry run like below.
$ mktemp -u
/tmp/tmp.oK4N4U6rDG
For help, run:
$ mktemp --help
Why do we actually need mktemp?
You might wonder why we need "mktemp" when we can easily create empty files using the "touch filename" command. The mktemp command
is mainly used for creating temporary files/directories with random names, so we don't need to bother figuring out the names ourselves. Since
mktemp randomizes the names, there won't be any name collisions. Also, mktemp creates files safely with permission 600 (rw) and directories
with permission 700 (rwx), so other users can't access them. For more details, check the man pages.
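A typical way this gets used in a script is to create the file, arrange for it to be removed on exit, and then work with it, roughly like this sketch:
#!/bin/bash
TMPFILE=$(mktemp) || exit 1
trap 'rm -f "$TMPFILE"' EXIT    # clean up even if the script fails
ps aux > "$TMPFILE"
grep ssh "$TMPFILE"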
The first thing that you want to do anytime that you need to make changes to your disk is to
find out what partitions you already have. Displaying existing partitions allows you to make
informed decisions moving forward and helps you nail down the partition names you will need for
future commands. Run the parted command to start parted in
interactive mode and list partitions. It will default to your first listed drive. You will then
use the print command to display disk information.
[root@rhel ~]# parted /dev/sdc
GNU Parted 3.2
Using /dev/sdc
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Error: /dev/sdc: unrecognised disk label
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdc: 1074MB
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags:
(parted)
Creating new partitions with parted
Now that you can see what partitions are active on the system, you are going to add a new
partition to /dev/sdc . You can see in the output above that there is no partition
table for this partition, so add one by using the mklabel command. Then use
mkpart to add the new partition. You are creating a new primary partition with the ext4 filesystem type. For demonstration purposes, I chose to create a 50 MB partition.
(parted) mklabel msdos
(parted) mkpart
Partition type? primary/extended? primary
File system type? [ext2]? ext4
Start? 1
End? 50
(parted)
(parted) print
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdc: 1074MB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
1 1049kB 50.3MB 49.3MB primary ext4 lba
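Incidentally, the same steps can be scripted rather than typed interactively by passing commands to parted with the -s (--script) flag, roughly like this:
[root@rhel ~]# parted -s /dev/sdc mklabel msdos
[root@rhel ~]# parted -s /dev/sdc mkpart primary ext4 1MiB 50MiB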
Modifying existing partitions with parted
Now that you have created the new partition at 50 MB, you can resize it to 100 MB, and then
shrink it back to the original 50 MB. First, note the partition number. You can find this
information by using the print command. You are then going to use the
resizepart command to make the modifications.
(parted) resizepart
Partition number? 1
End? [50.3MB]? 100
(parted) print
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdc: 1074MB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
1 1049kB 100MB 99.0MB primary
You can see in the above output that I resized partition number one from 50 MB to 100 MB.
You can then verify the changes with the print command. You can now resize it back
down to 50 MB. Keep in mind that shrinking a partition can cause data loss.
(parted) resizepart
Partition number? 1
End? [100MB]? 50
Warning: Shrinking a partition can cause data loss, are you sure you want to
continue?
Yes/No? yes
(parted) print
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdc: 1074MB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
1 1049kB 50.0MB 49.0MB primary
Removing partitions with parted
Now, let's look at how to remove the partition you created at /dev/sdc1 by
using the rm command inside of the parted suite. Again, you will need
the partition number, which is found in the print output.
NOTE: Be sure that you have all of the information correct here; there are no safeguards or
"are you sure?" questions asked. When you run the rm command, it will
delete the partition number you give it.
(parted) rm 1
(parted) print
Model: ATA VBOX HARDDISK (scsi)
Disk /dev/sdc: 1074MB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
Alt + , - switch mc 's layout from left-right to top-bottom.
Mind = blown. Useful for operating on files with long names.
Alt + t - switch the panel's listing mode in a loop: default, brief, long,
user-defined. "long" is especially useful, because it maximises one panel so that it takes
full width of the window and longer filenames fit on screen.
Alt + i - synchronize the active panel with the other panel. That is, show
the current directory in the other panel.
Ctrl + u - swap panels.
Alt + o - if the currently selected file is a directory, load that directory
on the other panel and move the selection to the next file. If the currently selected file is
not a directory, load the parent directory on the other panel and moves the selection to the
next file. This is useful for quick checking the contents of a list of directories.
Ctrl + PgUp (or just left arrow, if you've enabled Lynx-like
motion , see later) - move to the parent directory.
Alt + Shift + h - show the directory history. Might be easier to navigate
than going back one entry at a time.
Alt + y - move to the previous directory in history.
Alt + u - move to the next directory in history.
Common actions
Ctrl + Space - calculate the size of the selected directories. Press this
shortcut when the selection is on .. to calculate the size of all the
directories in the current directory.
Ctrl + x s (that is press Ctrl + x , let it go and then press s ) -
create a symbolic link (change s to l for a hardlink). I find it
very useful and intuitive - the link will, of course, be created in the other panel. You can
change its destination and name, like with any other file operation.
Ctrl + x c - open the chmod dialog.
Ctrl + x o - open the chown dialog.
Panel options
Show backup files and Show hidden files - I keep both
enabled, as I often work with configuration files, etc.
Lynx-like motion - mentioned above, makes left arrow go to parent
directory, while the right arrow enters the directory under selection. Faster than
Home , Enter , Home , Enter , etc.
This option is quite smart: if the shell command line is not empty, the arrows
work as usual and allow moving the cursor in the command line.
Bonus assignments
Define your own listing mode ( Right/Left -> Listing mode...
-> User defined ). Hit F1 to see available columns and
options.
Play around in tree mode: Right/Left -> Tree or
Command -> Directory tree .
It turns out that the Midnight Commander's built-in editor is surprisingly capable. Below is one of the features of
mc 4.7, namely the use of the ctags / etags utilities together with mcedit to navigate through
the code.
Code Navigation Training
Support for this functionality appeared in mcedit from version 4.7.0-pre1.
To use it, you need to index the directory with the project using the ctags or etags utility,
for this you need to run the following commands:
$ cd /home/user/projects/myproj
$ find . -type f -name "*.[ch]" | etags -lc --declarations -
or
$ find . -type f -name "*.[ch]" | ctags --c-kinds=+p --fields=+iaS --extra=+q -e -L -
After the utility completes, a TAGS file will appear in the root directory of our project,
which mcedit will use.
That is practically all that needs to be done in order for mcedit to be able to find the definitions of
functions, variables, or properties of the object under study.
Using
Imagine that we need to determine the place where the definition of the locked property
of an edit object is located in some source code of a rather large project.
/* Succesful, so unlock both files */
if (different_filename) {
if (save_lock)
edit_unlock_file (exp);
if (edit->locked)
edit->locked = edit_unlock_file (edit->filename);
} else {
if (edit->locked || save_lock)
edit->locked = edit_unlock_file (edit->filename);
}
Using Ubuntu 10.10, the editor in mc (Midnight Commander) is nano. How can I switch to the
internal mc editor (mcedit)?
Isaiah ,
Press the following keys in order, one at a time:
F9 Activates the top menu.
o Selects the Option menu.
c Opens the configuration dialog.
i Toggles the use internal edit option.
s Saves your preferences.
Hurnst , 2014-06-21 02:34:51
Run MC as usual. On the command line right above the bottom row of menu selections type
select-editor . This should open a menu with a list of all of your installed
editors. This is working for me on all my current linux machines.
, 2010-12-09 18:07:18
You can also change the standard editor. Open a terminal and type this command:
sudo update-alternatives --config editor
You will get a list of the installed editors on your system, and you can choose your
favorite.
AntonioK , 2015-01-27 07:06:33
If you want to leave mc and system settings as they are now, you may just run it like this:
$ EDITOR=mcedit mc
> ,
Open Midnight Commander, go to Options -> Configuration and check "use internal editor"
Hit save and you are done.
Your hostname is a vital piece of system information that you need to keep track of as a system administrator. Hostnames are the designations by which we separate systems into easily recognizable assets. This information is especially important to make a note of when working on a remotely managed system. I have experienced multiple instances of companies changing the hostnames or IPs of storage servers and then wondering why their data replication broke. There are many ways to change your hostname in Linux; however, in this article, I'll focus on changing your name as viewed by the network (specifically in Red Hat Enterprise Linux and Fedora).
Background
A quick bit of background. Before the invention of DNS, your computer's hostname was managed through the HOSTS file located at /etc/hosts. Anytime that a new computer was connected to your local network, all other computers on the network needed to add the new machine into the /etc/hosts file in order to communicate over the network. As this method did not scale with the transition into the world wide web era, DNS was a clear way forward. With DNS configured, your systems are smart enough to translate unique IPs into hostnames and back again, ensuring that there is little confusion in web communications.
Modern Linux systems have three different types of hostnames configured. To minimize confusion, I list them here and provide basic information on each as well as a personal best practice:
Transient hostname: How the network views your system.
Static hostname: Set by the kernel.
Pretty hostname: The user-defined hostname.
It is recommended to pick a pretty hostname that is unique and not easily confused with other systems. Allow the transient and static names to be variations on the pretty, and you will be good to go in most circumstances.
Working with hostnames
Now, let's look at how to view your current hostname. The most basic command used to see this information is hostname -f. This command displays the system's fully qualified domain name (FQDN). To relate back to the three types of hostnames, this is your transient hostname. A better way, at least in terms of the information provided, is to use the systemd command hostnamectl to view your transient hostname and other system information:
Image
Before moving on from the hostname command, I'll show you how to use it to change your transient hostname. Using hostname <x> (where x is the new hostname), you can change your network name quickly, but be careful. I once changed the hostname of a customer's server by accident while trying to view it. That was a small but painful error that I overlooked for several hours. You can see that process below:
Image
It is also possible to use the hostnamectl command to change your hostname. This command, in conjunction with the right flags, can be used to alter all three types of hostnames. As stated previously, for the purposes of this article, our focus is on the transient hostname. The command and its output look something like this:
Image
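In text form, the command is of this general form (the name here is just an example):
$ sudo hostnamectl set-hostname --transient new-transient-name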
The final method to look at is the sysctl command. This command allows you to change the kernel parameter for your transient name without having to reboot the system. That method looks something like this:
Image
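In text form, that is something like (again, the name is an example):
$ sudo sysctl kernel.hostname=new-transient-name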
GNOME tip
Using GNOME, you can go to Settings -> Details to view and change the static and pretty hostnames. See below:
Image
Wrapping up
I hope that you found this information useful as a quick and easy way to manipulate your machine's network-visible hostname. Remember to always be careful when changing system hostnames, especially in enterprise environments, and to document changes as they are made.
The
Bash Debugger Project (bashdb)
lets you set breakpoints, inspect variables, perform a backtrace, and step through a bash
script line by line. In other words, it provides the features you expect in a C/C++ debugger to
anyone programming a bash script.
To see if your standard bash executable has bashdb support, execute the command shown below;
if you are not taken to a bashdb prompt then you'll have to install bashdb yourself.
$ bash --debugger -c "set|grep -i dbg"
...
bashdb
The Ubuntu Intrepid repository contains a package for bashdb, but there is no special
bashdb package in the openSUSE 11 or Fedora 9 repositories. I built from source using version
4.0-0.1 of bashdb on a 64-bit Fedora 9 machine, using the normal ./configure; make; sudo
make install commands.
You can start the Bash Debugger using the bash --debugger foo.sh syntax or the
bashdb foo.sh command. The former method is recommended except in cases where I/O
redirection might cause issues, and it's what I used. You can also use bashdb through
ddd or from an
Emacs buffer.
The syntax for many of the commands in bashdb mimics that of gdb, the GNU debugger. You can
step into functions, use next to execute the next line without
stepping into any functions, generate a backtrace with bt , exit bashdb with
quit or Ctrl-D, and examine a variable with print $foo . Aside from
the prefixing of the variable with $ at the end of the last sentence, there are
some other minor differences that you'll notice. For instance, pressing Enter on a blank line
in bashdb executes the previous step or next command instead of whatever the previous command
was.
The print command forces you to prefix shell variables with the dollar sign (
$foo ). A slightly shorter way of inspecting variables and functions is to use the
x
foo command, which uses declare to print variables and functions.
Both bashdb and your script run inside the same bash shell. Because bash lacks some
namespace properties, bashdb will include some functions and symbols into the global namespace
which your script can get at. bashdb prefixes its symbols with _Dbg_ , so you
should avoid that prefix in your scripts to avoid potential clashes. bashdb also uses some
environment variables; it uses the DBG_ prefix for its own, and relies on some
standard bash ones that begin with BASH_ .
To illustrate the use of bashdb, I'll work on the small bash script below, which expects a
numeric argument n and calculates the nth Fibonacci number .
#!/bin/bash

version="0.01";

fibonacci() {
 n=${1:?If you want the nth fibonacci number, you must supply n as the first parameter.}
 if [ $n -le 1 ]; then
  echo $n
 else
  l=`fibonacci $((n-1))`
  r=`fibonacci $((n-2))`
  echo $((l + r))
 fi
}

for i in `seq 1 10`
do
 result=$(fibonacci $i)
 echo "i=$i result=$result"
done
The below session shows bashdb in action, stepping over and then into the fibonacci function
and inspecting variables. I've made my input text bold for ease of reading. An initial
backtrace ( bt ) shows that the script begins at line 3, which is where the
version variable is written. The next and list commands then progress
to the next line of the script a few times and show the context of the current execution line.
After one of the next commands I press Enter to execute next again. I
invoke the examine command through the single letter shortcut x .
Notice that the variables are printed out using declare as opposed to their
display on the next line using print . Finally I set a breakpoint at the start of
the fibonacci function and continue the execution of the shell
script. The fibonacci function is called and I move to the next line
a few times and inspect a variable.
$ bash --debugger ./fibonacci.sh
...
(/home/ben/testing/bashdb/fibonacci.sh:3):
3:      version="0.01";
bashdb bt
->0 in file `./fibonacci.sh' at line 3
##1 main() called from file `./fibonacci.sh' at line 0
bashdb next
(/home/ben/testing/bashdb/fibonacci.sh:16):
16:     for i in `seq 1 10`
bashdb list
16:==>for i in `seq 1 10`
17:   do
18:     result=$(fibonacci $i)
19:     echo "i=$i result=$result"
20:   done
bashdb next
(/home/ben/testing/bashdb/fibonacci.sh:18):
18:     result=$(fibonacci $i)
bashdb
(/home/ben/testing/bashdb/fibonacci.sh:19):
19:     echo "i=$i result=$result"
bashdb x i result
declare -- i="1"
declare -- result=""
bashdb print $i $result
1
bashdb break fibonacci
Breakpoint 1 set in file /home/ben/testing/bashdb/fibonacci.sh, line 5.
bashdb continue
Breakpoint 1 hit (1 times).
(/home/ben/testing/bashdb/fibonacci.sh:5):
5:      fibonacci() {
bashdb next
(/home/ben/testing/bashdb/fibonacci.sh:6):
6:      n=${1:?If you want the nth fibonacci number, you must supply n as the first parameter.}
bashdb next
(/home/ben/testing/bashdb/fibonacci.sh:7):
7:      if [ $n -le 1 ]; then
bashdb x n
declare -- n="2"
bashdb quit
Notice that the number in the bashdb prompt toward the end of the above example is enclosed
in parentheses. Each set of parentheses indicates that you have entered a subshell. In this
example this is due to being inside a shell function.
In the below example I use a watchpoint to see if and where the result variable
changes. Notice the initial next command. I found that if I didn't issue that next
then my watch would fail to work. As you can see, after I issue c to continue
execution, execution is stopped whenever the result variable is about to change, and the new
and old value are displayed.
(/home/ben/testing/bashdb/fibonacci.sh:3):
3:      version="0.01";
bashdb<0> next
(/home/ben/testing/bashdb/fibonacci.sh:16):
16:     for i in `seq 1 10`
bashdb<1> watch result
 0: ($result)==0 arith: 0
bashdb<2> c
Watchpoint 0: $result changed:
  old value: ''
  new value: '1'
(/home/ben/testing/bashdb/fibonacci.sh:19):
19:     echo "i=$i result=$result"
bashdb<3> c
i=1 result=1
i=2 result=1
Watchpoint 0: $result changed:
  old value: '1'
  new value: '2'
(/home/ben/testing/bashdb/fibonacci.sh:19):
19:     echo "i=$i result=$result"
To get around the strange initial next requirement I used the
watche command in the below session, which lets you stop whenever an expression
becomes true. In this case I'm not overly interested in the first few Fibonacci numbers so I
set a watch to have execution stop when the result is greater than 4. You can also use a
watche command without a condition; for example, watche result would
stop execution whenever the result variable changed.
When a shell script goes wrong, many folks use the time-tested method of incrementally
adding in echo or printf statements to look for invalid values or
code paths that are never reached. With bashdb, you can save yourself time by just adding a few
watches on variables or setting a few breakpoints.
The granddaddy of HTML tools, with support for modern standards.
There used to be a fork called tidy-html5 which since became the official thing. Here is
its GitHub repository .
Tidy is a console application for Mac OS X, Linux, Windows, UNIX, and more. It corrects
and cleans up HTML and XML documents by fixing markup errors and upgrading legacy code to
modern standards.
For your needs, here is the command line to call Tidy:
tidy inputfile.html
Paul Brit ,
Update 2018: The homebrew/dupes tap is now deprecated; tidy-html5 may be installed
directly.
brew install tidy-html5
Original reply:
Tidy from OS X doesn't support HTML5 . But there is experimental
branch on Github which does.
To get it:
brew tap homebrew/dupes
brew install tidy --HEAD
brew untap homebrew/dupes
That's it! Have fun!
Boris , 2019-11-16 01:27:35
Error: No available formula with the name "tidy" . brew install
tidy-html5 works. – Pysis Apr 4 '17 at 13:34
I tried to rm -rf a folder, and got "device or resource busy".
In Windows, I would have used LockHunter to resolve this. What's the linux equivalent? (Please give as answer a simple "unlock
this" method, and not complete articles like this one .
Although they're useful, I'm currently interested in just ASimpleMethodThatWorks™)
camh , 2011-04-13 09:22:46
The tool you want is lsof , which stands for list open files .
It has a lot of options, so check the man page, but if you want to see all open files under a directory:
lsof +D /path
That will recurse through the filesystem under /path , so beware doing it on large directory trees.
Once you know which processes have files open, you can exit those apps, or kill them with the kill(1) command.
kip2 , 2014-04-03 01:24:22
sometimes it's the result of mounting issues, so I'd unmount the filesystem or directory you're trying to remove:
umount /path
BillThor ,
I use fuser for this kind of thing. It will list which process is using a file or files within a mount.
user73011 ,
Here is the solution:
Go into the directory and type ls -a
You will find a .xyz file
vi .xyz and look into what is the content of the file
ps -ef | grep username
You will see the .xyz content in the 8th column (last row)
kill -9 job_ids - where job_ids are the values in the 2nd column (the PIDs) of the rows whose 8th
column matches the content you found in the file
Now try to delete the folder or file.
Choylton B. Higginbottom ,
I had this same issue, built a one-liner starting with @camh recommendation:
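The one-liner itself didn't survive here; a plausible reconstruction based on the description below (treat the path as a placeholder) is:
lsof +D /path | awk '{print $2}' | tail -n +2 | xargs kill -9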
The awk command grabs the PIDs. The tail command gets rid of the pesky first entry: "PID". I used
-9 on kill, others might have safer options.
user5359531 ,
I experience this frequently on servers that have NFS network file systems. I am assuming it has something to do with the filesystem,
since the files are typically named like .nfs000000123089abcxyz .
My typical solution is to rename or move the parent directory of the file, then come back later in a day or two and the file
will have been removed automatically, at which point I am free to delete the directory.
This typically happens in directories where I am installing or compiling software libraries.
gloriphobia , 2017-03-23 12:56:22
I had this problem when an automated test created a ramdisk. The commands suggested in the other answers, lsof and
fuser , were of no help. After the tests I tried to unmount it and then delete the folder. I was really confused
for ages because I couldn't get rid of it -- I kept getting "Device or resource busy" !
By accident I found out how to get rid of a ramdisk. I had to unmount it the same number of times that I had run the
mount command, i.e. sudo umount path
Due to the fact that it was created using automated testing, it got mounted many times, hence why I couldn't get rid of it
by simply unmounting it once after the tests. So, after I manually unmounted it lots of times it finally became a regular folder
again and I could delete it.
Hopefully this can help someone else who comes across this problem!
bil , 2018-04-04 14:10:20
Riffing off of Prabhat's question above, I had this issue in macOS High Sierra when I stranded an encfs process; rebooting solved
it, but this
ps -ef | grep name-of-busy-dir
Showed me the process and the PID (column two).
sudo kill -15 pid-here
fixed it.
Prabhat Kumar Singh , 2017-08-01 08:07:36
If you have the server accessible, Try
Deleting that dir from the server
Or, do umount and mount again; try umount -l (lazy umount) if you face any issue with a normal umount.
Example of my seconds to day-hour:min:sec converter:
# convert seconds to day-hour:min:sec
convertsecs2dhms() {
((d=${1}/(60*60*24)))
((h=(${1}%(60*60*24))/(60*60)))
((m=(${1}%(60*60))/60))
((s=${1}%60))
printf "%02d-%02d:%02d:%02d\n" $d $h $m $s
# PRETTY OUTPUT: uncomment below printf and comment out above printf if you want prettier output
# printf "%02dd %02dh %02dm %02ds\n" $d $h $m $s
}
# setting test variables: testing some constant variables & evaluated variables
TIME1="36"
TIME2="1036"
TIME3="91925"
# one way to output results
((TIME4=$TIME3*2)) # 183850
((TIME5=$TIME3*$TIME1)) # 3309300
((TIME6=100*86400+3*3600+40*60+31)) # 8653231 s = 100 days + 3 hours + 40 min + 31 sec
# outputting results: another way to show results (via echo & command substitution with backticks)
echo $TIME1 - `convertsecs2dhms $TIME1`
echo $TIME2 - `convertsecs2dhms $TIME2`
echo $TIME3 - `convertsecs2dhms $TIME3`
echo $TIME4 - `convertsecs2dhms $TIME4`
echo $TIME5 - `convertsecs2dhms $TIME5`
echo $TIME6 - `convertsecs2dhms $TIME6`
# OUTPUT WOULD BE LIKE THIS (if the pretty printf is not used):
# 36 - 00-00:00:36
# 1036 - 00-00:17:16
# 91925 - 01-01:32:05
# 183850 - 02-03:04:10
# 3309300 - 38-07:15:00
# 8653231 - 100-03:40:31
# OUTPUT WOULD BE LIKE THIS (if the pretty printf is used):
# 36 - 00d 00h 00m 36s
# 1036 - 00d 00h 17m 16s
# 91925 - 01d 01h 32m 05s
# 183850 - 02d 03h 04m 10s
# 3309300 - 38d 07h 15m 00s
# 8653231 - 100d 03h 40m 31s
Basile Starynkevitch ,
If $i represents some date in seconds since the Epoch, you could display it with
date -u -d @$i +%H:%M:%S
but you seem to assume that $i is an interval (e.g. some duration), not a
date, in which case I don't understand what you want.
Shilv , 2016-11-24 09:18:57
I use C shell, like this:
#! /bin/csh -f
set begDate_r = `date +%s`
set endDate_r = `date +%s`
set secs = `echo "$endDate_r - $begDate_r" | bc`
set h = `echo $secs/3600 | bc`
set m = `echo "$secs/60 - 60*$h" | bc`
set s = `echo $secs%60 | bc`
echo "Formatted Time: $h HOUR(s) - $m MIN(s) - $s SEC(s)"
Continuing @Daren's answer, just to be clear: If you want to use the conversion to your time
zone , don't use the "u" switch , as in: date -d @$i +%T or in some cases
date -d @"$i" +%T
Step 2: Run TestDisk and create a new testdisk.log file
Use the following command in order to run the testdisk command line utility:
$ sudo testdisk
The output will give you a description of the utility. It will also let you create a testdisk.log file. This
file will later include useful information about how and where your lost file was found, listed and resumed.
The above output gives you three options about what to do with this file:
Create: (recommended)- This option lets you create a new log file.
Append: This option lets you append new information to already listed information in this file from any
previous session.
No Log: Choose this option if you do not want to record anything about the session for later use.
Important:
TestDisk is a pretty intelligent tool. It does know that many beginners will
also be using the utility for recovering lost files. Therefore, it predicts and suggests the option you should
be ideally selecting on a particular screen. You can see the suggested options in a highlighted form. You can
select an option through the up and down arrow keys and then press Enter to make your choice.
In the above output, I would opt for creating a new log file. The system might ask you the password for sudo
at this point.
Step 3: Select your recovery drive
The utility will now display a list of drives attached to your system. In my case, it is showing my hard
drive as it is the only storage device on my system.
Select Proceed, through the right and left arrow keys and hit Enter. As mentioned in the note in the above
screenshot, correct disk capacity must be detected in order for a successful file recovery to be performed.
Step 4: Select Partition Table Type of your Selected Drive
Now that you have selected a drive, you need to specify its partition table type on the following
screen:
Recovering lost files is only one of the features of testdisk, the utility offers much more than that.
Through the options displayed in the above screenshot, you can select any of those features. But here we are
interested only in recovering our accidentally deleted file. For this, select the Advanced option and hit
enter.
In this utility if you reach a point you did not intend to, you can go back by using the q key.
Step 6: Select the drive partition where you lost the file
If your selected drive has multiple partitions, the following screen lets you choose the relevant one from
them.
<img src="https://vitux.com/wp-content/uploads/2019/10/word-image-69.png" alt="Choose partition from where the file shall be recovered" width="736" height="499" srcset="https://vitux.com/wp-content/uploads/2019/10/word-image-69.png 736w, https://vitux.com/wp-content/uploads/2019/10/word-image-69-300x203.png 300w" sizes="(max-width: 736px) 100vw, 736px" />
I lost my file while I was using Linux, Debian. Make your choice and then choose the List option from the
options shown at the bottom of the screen.
This will list all the directories on your partition.
Step 7: Browse to the directory from where you lost the file
When the testdisk utility displays all the directories of your operating system, browse to the directory
from where you deleted/lost the file. I remember that I lost the file from the Downloads folder in my home
directory. So I will browse to home:
Tip: You can use the left arrow to go back to the previous directory.
When you have reached your required directory, you will see the deleted files in colored or highlighted
form.
And, here I see my lost file "accidently_removed.docx" in the list. Of course, I intentionally named it this
as I had to illustrate the whole process to you.
By now, you must have found your lost file in the list. Use the C option to copy the selected file. This
file will later be restored to the location you will specify in the next step:
Step 9: Specify the location where the found file will be restored
Now that we have copied the lost file that we have now found, the testdisk utility will display the
following screen so that we can specify where to restore it.
You can specify any accessible location as it is only a simple UI thing to copy and paste the file to your
desired location.
I am specifically selecting the location from where I lost the file, my Downloads folder:
See the text in green in the above screenshot? This is actually great news. Now my file is restored on the
specified location.
This might seem to be a slightly long process but it is definitely worth getting your lost file back. The
restored file will most probably be in a locked state. This means that only an authorized user can access and
open it.
We all need this tool time and again, but if you want to remove it until you need it again, you can do so
through the following command:
$ sudo apt-get remove testdisk
You can also delete the testdisk.log file if you want. It is such a relief to get your lost file back!
You have probably heard about cheat.sh . I use this service every day! It is one of
the most useful services for Linux users. It displays concise Linux command examples.
For instance, to view the curl command cheatsheet , simply run the following command from your console:
$ curl cheat.sh/curl
It is that simple! You don't need to go through man pages or use any online resources to learn about commands. It can get you
the cheatsheets of most Linux and Unix commands in a couple of seconds.
Want to know the meaning of an English word? Here is how you can get the meaning of a word – gustatory
$ curl 'dict://dict.org/d:gustatory'
220 pan.alephnull.com dictd 1.12.1/rf on Linux 4.4.0-1-amd64 <auth.mime> <[email protected]>
250 ok
150 1 definitions retrieved
151 "Gustatory" gcide "The Collaborative International Dictionary of English v.0.48"
Gustatory \Gust"a*to*ry\, a.
Pertaining to, or subservient to, the sense of taste; as, the
gustatory nerve which supplies the front of the tongue.
[1913 Webster]
.
250 ok [d/m/c = 1/0/16; 0.000r 0.000u 0.000s]
221 bye [d/m/c = 0/0/0; 0.000r 0.000u 0.000s]
Text sharing
You can share texts via some console services. These text sharing services are often useful for sharing code.
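The command the next sentence refers to isn't shown above; posting to ix.io with curl is usually done along these lines (treat this as a sketch of the standard form):
$ echo "Welcome To OSTechNix" | curl -F 'f:1=<-' ix.io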
The above command will share the text "Welcome To OSTechNix" via the ix.io site. Anyone can access this text from a web browser
by navigating to the URL – http://ix.io/2bCA
Not just text, we can even share files to anyone using a console service called filepush .
$ curl --upload-file ostechnix.txt filepush.co/upload/ostechnix.txt
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 72 0 0 100 72 0 54 0:00:01 0:00:01 --:--:-- 54http://filepush.co/8x6h/ostechnix.txt
100 110 100 38 100 72 27 53 0:00:01 0:00:01 --:--:-- 81
The above command will upload the ostechnix.txt file to the filepush.co site. You can access this file from anywhere by navigating
to the link – http://filepush.co/8x6h/ostechnix.txt
Another text sharing console service is termbin :
$ echo "Welcome To OSTechNix!" | nc termbin.com 9999
There is also another console service named
transfer.sh . But it doesn't
work at the time of writing this guide.
Browser
There are many text browsers available for Linux. Browsh is one of them, and you can access it right from your terminal using the
command:
$ ssh brow.sh
Browsh is a modern text browser that supports graphics, including video. Technically speaking, it is not so much a browser as
a terminal front-end for a browser. It uses headless Firefox to render the web page and then converts it to ASCII art. Refer
to the following guide for more details.
timeout is a command-line utility that runs a specified command and terminates it if it is still running after a given
period of time. In other words, timeout allows you to run a command with a time limit. The timeout command
is a part of the GNU core utilities package which is installed on almost any Linux distribution.
It is handy when you want to run a command that doesn't have a built-in timeout option.
In this article, we will explain how to use the Linux timeout command.
If no signal is given, timeout sends the SIGTERM signal to the managed command when the time limit is
reached. You can specify which signal to send using the -s ( --signal ) option.
For example, to send SIGKILL to the ping command after one minute you would use:
sudo timeout -s SIGKILL 1m ping 8.8.8.8
The signal can be specified by its name like SIGKILL or its number like 9 . The following command is
identical to the previous one:
sudo timeout -s 9 1m ping 8.8.8.8
To get a list of all available signals, use the kill -l command:
SIGTERM , the default signal that is sent when the time limit is exceeded, can be caught or ignored by some processes.
In those situations, the process continues to run after the termination signal is sent.
To make sure the monitored command is killed, use the -k ( --kill-after ) option followed by a time
period. When this option is used, once the given time limit is reached the timeout command sends the SIGKILL
signal to the managed program, which cannot be caught or ignored.
In the following example, timeout runs the command for one minute, and if it is not terminated, it will kill it after
ten seconds:
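The example itself isn't reproduced above; such an invocation looks like this (the pinged host is just an example):
timeout -k 10 1m ping 8.8.8.8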
The timeout command is used to run a given command with a time limit.
timeout is a simple command that doesn't have a lot of options. Typically you will invoke timeout
with only two arguments: the duration and the managed command.
If you have any questions or feedback, feel free to leave a comment.
Watch is a great utility that automatically refreshes data. Some of the more common uses for this command involve
monitoring system processes or logs, but it can be used in combination with pipes for more versatility.
Using the watch command without any options will use the default 2.0-second refresh interval.
As I mentioned before, one of the more common uses is monitoring system processes. Let's use it with the
free command. This will give you up-to-date information about your system's memory usage.
watch free
Yes, it is that simple my friends.
Every 2.0s: free pop-os: Wed Dec 25 13:47:59 2019
total used free shared buff/cache available
Mem: 32596848 3846372 25571572 676612 3178904 27702636
Swap: 0 0 0
Adjust refresh rate of watch command
You can easily change how quickly the output is updated using the -n flag.
watch -n 10 free
Every 10.0s: free pop-os: Wed Dec 25 13:58:32 2019
total used free shared buff/cache available
Mem: 32596848 4522508 24864196 715600 3210144 26988920
Swap: 0 0 0
This changes from the default 2.0 second refresh to 10.0 seconds as you can see in the top left corner of our
output.
Remove title or header info from watch command output
watch -t free
The -t flag removes the title/header information to clean up the output. The information will still refresh every 2
seconds, but you can change that by combining it with the -n option.
total used free shared buff/cache available
Mem: 32596848 3683324 25089268 1251908 3824256 27286132
Swap: 0 0 0
Highlight the changes in watch command output
You can add the -d option and watch will automatically highlight changes for us. Let's take a
look at this using the date command. I've included a screen capture to show how the highlighting behaves.
<img src="https://i2.wp.com/linuxhandbook.com/wp-content/uploads/watch_command.gif?ssl=1" alt="Watch Command" data-recalc-dims="1"/>
Using pipes with watch
You can combine items using pipes. This is not a feature exclusive to watch, but it enhances the functionality of
this software. Pipes rely on the | symbol. Not coincidentally, this is called a pipe symbol or
sometimes a vertical bar symbol.
watch "cat /var/log/syslog | tail -n 3"
While this command runs, it will list the last 3 lines of the syslog file. The list will be refreshed every 2
seconds and any changes will be displayed.
Every 2.0s: cat /var/log/syslog | tail -n 3 pop-os: Wed Dec 25 15:18:06 2019
Dec 25 15:17:24 pop-os dbus-daemon[1705]: [session uid=1000 pid=1705] Successfully activated service 'org.freedesktop.Tracker1.Miner.Extract'
Dec 25 15:17:24 pop-os systemd[1591]: Started Tracker metadata extractor.
Dec 25 15:17:45 pop-os systemd[1591]: tracker-extract.service: Succeeded.
Conclusion
Watch is a simple, but very useful utility. I hope I've given you ideas that will help you improve your workflow.
This is a straightforward command, but there are a wide range of potential uses. If you have any interesting uses
that you would like to share, let us know about them in the comments.
If you're like me, you still cling to soon-to-be-deprecated commands like ifconfig , nslookup , and
netstat . The new replacements are ip , dig , and ss , respectively. It's time
to (reluctantly) let go of legacy utilities and head into the future with ss . The ip command is worth
a mention here because part of netstat 's functionality has been replaced by ip . This article covers the
essentials for the ss command so that you don't have to dig (no pun intended) for them.
Formally, ss is the socket statistics command that replaces netstat . In this article, I provide
netstat commands and their ss replacements. Michael Prokop, the developer of ss , made it
easy for us to transition into ss from netstat by making some of netstat 's options operate
in much the same fashion in ss .
For example, to display TCP sockets, use the -t option:
$ netstat -t
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 rhel8:ssh khess-mac:62036 ESTABLISHED
$ ss -t
State Recv-Q Send-Q Local Address:Port Peer Address:Port
ESTAB 0 0 192.168.1.65:ssh 192.168.1.94:62036
You can see that the information given is essentially the same, but to better mimic what you see in the netstat command,
use the -r (resolve) option:
$ ss -tr
State Recv-Q Send-Q Local Address:Port Peer Address:Port
ESTAB 0 0 rhel8:ssh khess-mac:62036
And to see port numbers rather than their translations, use the -n option:
$ ss -ntr
State Recv-Q Send-Q Local Address:Port Peer Address:Port
ESTAB 0 0 rhel8:22 khess-mac:62036
It isn't 100% necessary that netstat and ss mesh, but it does make the transition a little easier. So,
try your standby netstat options before hitting the man page or the internet for answers, and you might be pleasantly
surprised at the results.
For example, the netstat command with the old standby options -an yields comparable results (which are
too long to show here in full):
The TCP entries fall at the end of the ss command's display and at the beginning of netstat 's. So,
there are layout differences even though the displayed information is really the same.
If you're wondering which netstat commands have been replaced by the ip command, here's one for you:
$ netstat -g
IPv6/IPv4 Group Memberships
Interface RefCnt Group
--------------- ------ ---------------------
lo 1 all-systems.mcast.net
enp0s3 1 all-systems.mcast.net
lo 1 ff02::1
lo 1 ff01::1
enp0s3 1 ff02::1:ffa6:ab3e
enp0s3 1 ff02::1:ff8d:912c
enp0s3 1 ff02::1
enp0s3 1 ff01::1
$ ip maddr
1: lo
inet 224.0.0.1
inet6 ff02::1
inet6 ff01::1
2: enp0s3
link 01:00:5e:00:00:01
link 33:33:00:00:00:01
link 33:33:ff:8d:91:2c
link 33:33:ff:a6:ab:3e
inet 224.0.0.1
inet6 ff02::1:ffa6:ab3e
inet6 ff02::1:ff8d:912c
inet6 ff02::1
inet6 ff01::1
The ss command isn't perfect (sorry, Michael). In fact, there is one significant ss bummer. You can
try this one for yourself to compare the two:
$ netstat -s
Ip:
Forwarding: 2
6231 total packets received
2 with invalid addresses
0 forwarded
0 incoming packets discarded
3104 incoming packets delivered
2011 requests sent out
243 dropped because of missing route
<truncated>
$ ss -s
Total: 182
TCP: 3 (estab 1, closed 0, orphaned 0, timewait 0)
Transport Total IP IPv6
RAW 1 0 1
UDP 3 2 1
TCP 3 2 1
INET 7 4 3
FRAG 0 0 0
If you figure out how to display the same info with ss , please let me know.
Maybe as ss evolves, it will include more features. I guess Michael or someone else could always just look at the
netstat command to glean those statistics from it. For me, I prefer netstat , and I'm not sure exactly
why it's being deprecated in favor of ss . The output from ss is less human-readable in almost every instance.
What do you think? What about ss makes it a better option than netstat ? I suppose I could ask the same
question of the other net-tools utilities as well. I don't find anything wrong with them. In my mind, unless you're
significantly improving an existing utility, why bother deprecating the other?
There, you have the ss command in a nutshell. As netstat fades into oblivion, I'm sure I'll eventually
embrace ss as its successor.
Ken Hess is an Enable SysAdmin Community Manager and an Enable SysAdmin contributor. Ken has used Red Hat Linux since
1996 and has written ebooks, whitepapers, actual books, thousands of exam review questions, and hundreds of articles on open
source and other topics. More about me
Thirteen Useful Tools for Working with Text on the Command Line
By Karl Wakim – Posted on Jan 9, 2020 in Linux
GNU/Linux distributions include a wealth of programs for handling text, most of which are provided by the GNU core
utilities. There's somewhat of a learning curve, but these utilities can prove very useful and efficient when used
correctly.
Here are thirteen powerful text manipulation tools every command-line user should know.
1. cat
Cat was designed to concatenate files but is most often used to display a single file. Without any arguments, cat reads
standard input until Ctrl + D is pressed (from the terminal or from another program output if using a pipe). Standard input
can also be explicitly specified with a - .
Cat has a number of useful options, notably:
-A prints "$" at the end of each line and displays non-printing characters using caret notation.
-n numbers all lines.
-b numbers lines that are not blank.
-s reduces a series of blank lines to a single blank line.
In the following example, we are concatenating and numbering the contents of file1, standard input, and file3.
cat -n file1 - file3
2. sort
As its name suggests, sort sorts file contents alphabetically and numerically.
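For example, to sort a file alphabetically, numerically, or in reverse numeric order (the file names are only illustrative):
sort names.txt
sort -n sizes.txt
sort -rn sizes.txt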
3. uniq
Uniq takes a sorted file and removes duplicate lines. It is often chained with sort in a single command.
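A typical pipeline (hypothetical file name) sorts first and then removes duplicates; adding -c also counts how many times each line occurred:
sort words.txt | uniq
sort words.txt | uniq -c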
4. comm
Comm is used to compare two sorted files, line by line. It outputs three columns: the first two columns contain
lines unique to the first and second file respectively, and the third displays those found in both files.
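For example, with two sorted files (names are only illustrative); the second command suppresses the first two columns and shows only the lines common to both files:
comm list1.txt list2.txt
comm -12 list1.txt list2.txt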
5. cut
Cut is used to retrieve specific sections of lines, based on characters, fields, or bytes. It can read from a file
or from standard input if no file is specified.
Cutting by character position
The -c option specifies a single character position or one or more ranges of characters.
For example:
-c 3 : the 3rd character.
-c 3-5 : from the 3rd to the 5th character.
-c -5 or -c 1-5 : from the 1st to the 5th character.
-c 5- : from the 5th character to the end of the line.
-c 3,5-7 : the 3rd and from the 5th to the 7th character.
Cutting by field
Fields are separated by a delimiter consisting of a single character, which is specified with the -d option. The -f option
selects a field position or one or more ranges of fields using the same format as above.
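For example, to pull fields out of a comma-separated file (the file name and layout are only illustrative):
cut -d ',' -f 1 people.csv
cut -d ',' -f 1,3-4 people.csv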
6. dos2unix
GNU/Linux and Unix usually terminate text lines with a line feed (LF), while Windows uses carriage return and line
feed (CRLF). Compatibility issues can arise when handling CRLF text on Linux, which is where dos2unix comes in. It
converts CRLF terminators to LF.
In the following example, the file command is used to check the text format before and after using dos2unix .
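The example itself isn't reproduced here; an equivalent sequence (assuming a hypothetical file report.txt with CRLF line endings) would be:
file report.txt
dos2unix report.txt
file report.txt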
7. fold
To make long lines of text easier to read and handle, you can use fold , which wraps lines to a specified width.
Fold strictly matches the specified width by default, breaking words where necessary.
fold -w 30 longline.txt
If breaking words is undesirable, you can use the -s option to break at spaces.
fold -w 30 -s longline.txt
8. iconv
This tool converts text from one encoding to another, which is very useful when dealing with unusual encodings.
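The command form that the parameter descriptions below refer to isn't shown; based on standard iconv usage it is presumably:
iconv -f input_encoding -t output_encoding -o output_file input_file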
"input_encoding" is the encoding you are converting from.
"output_encoding" is the encoding you are converting to.
"output_file" is the filename iconv will save to.
"input_file" is the filename iconv will read from.
Note: you can list the available encodings with iconv -l
9. sed
sed is a powerful and flexible stream editor, most commonly used to find and replace strings with the following syntax.
The following command will read from the specified file (or standard input), replacing the parts of text that match the
regular expression pattern with the replacement string and outputting the result to the terminal.
sed s/pattern/replacement/g filename
To modify the original file instead, you can use the -i flag.
10. wc
The wc utility prints the number of bytes, characters, words, or lines in a file.
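For example (the file name is only illustrative):
wc report.txt
wc -l report.txt
wc -w report.txt
wc -c report.txt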
11. split
You can use split to divide a file into smaller files, by number of lines, by size, or to a specific number of files.
Splitting by number of lines
split -l num_lines input_file output_prefix
Splitting by bytes
split -b bytes input_file output_prefix
Splitting to a specific number of files
split -n num_files input_file output_prefix
12. tac
Tac, which is cat in reverse, does exactly that: it displays files with the lines in reverse order.
13. tr
The tr tool is used to translate or delete sets of characters.
A set of characters is usually either a string or ranges of characters. For instance:
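The examples that originally followed are missing here; typical invocations (file name illustrative) look like this, the first translating lower case to upper case and the second deleting all digits:
tr 'a-z' 'A-Z' < notes.txt
tr -d '0-9' < notes.txt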
Mastering the Command Line: Use timedatectl to Control System Time and Date in Linux
By Himanshu Arora
– Posted on Nov 11, 2014 Nov 9, 2014 in Linux
The timedatectl command in Linux allows you to query and change the system
clock and its settings. It comes as part of systemd, a replacement for the sysvinit daemon used
in the GNU/Linux and Unix systems.
In this article, we will discuss this command and the features it provides using relevant
examples.
Timedatectl examples
Note – All examples described in this article are tested on GNU bash, version
4.3.11(1).
Display system date/time information
Simply run the command without any command line options or flags, and it gives you
information on the system's current date and time, as well as time-related settings. For
example, here is the output when I executed the command on my system:
$ timedatectl
Local time: Sat 2014-11-08 05:46:40 IST
Universal time: Sat 2014-11-08 00:16:40 UTC
Timezone: Asia/Kolkata (IST, +0530)
NTP enabled: yes
NTP synchronized: yes
RTC in local TZ: no
DST active: n/a
So you can see that the output contains information on local time, UTC, and the time zone, as well as
settings related to NTP, RTC and DST for the localhost.
Update the system date or time
using the set-time option
To set the system clock to a specified date or time, use the set-time option
followed by a string containing the new date/time information. For example, to change the
system time to 6:40 am, I used the following command:
$ sudo timedatectl set-time "2014-11-08 06:40:00"
and here is the output:
$ timedatectl
Local time: Sat 2014-11-08 06:40:02 IST
Universal time: Sat 2014-11-08 01:10:02 UTC
Timezone: Asia/Kolkata (IST, +0530)
NTP enabled: yes
NTP synchronized: no
RTC in local TZ: no
DST active: n/a
Observe that the Local time field now shows the updated time. Similarly, you can update the
system date, too.
Update the system time zone using the set-timezone option
To set the system time zone to the specified value, you can use the
set-timezone option followed by the time zone value. To help you with the task,
the timedatectl command also provides another useful option.
list-timezones provides you with a list of available time zones to choose
from.
For example, here is the scrollable list of time zones the timedatectl command
produced on my system:
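The list itself isn't reproduced here; it begins like this (truncated):
$ timedatectl list-timezones
Africa/Abidjan
Africa/Accra
Africa/Addis_Ababa
Africa/Algiers
...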
To change the system's current time zone from Asia/Kolkata to Asia/Kathmandu, here is the
command I used:
$ timedatectl set-timezone Asia/Kathmandu
and to verify the change, here is the output of the timedatectl command:
$ timedatectl
Local time: Sat 2014-11-08 07:11:23 NPT
Universal time: Sat 2014-11-08 01:26:23 UTC
Timezone: Asia/Kathmandu (NPT, +0545)
NTP enabled: yes
NTP synchronized: no
RTC in local TZ: no
DST active: n/a
You can see that the time zone was changed to the new value.
Configure RTC
You can also use the timedatectl command to configure RTC (real-time clock).
For those who are unaware, RTC is a battery-powered computer clock that keeps track of the time
even when the system is turned off. The timedatectl command offers a
set-local-rtc option which can be used to maintain the RTC in either local time or
universal time.
This option requires a boolean argument. If 0 is supplied, the system is configured to
maintain the RTC in universal time:
$ timedatectl set-local-rtc 0
but in case 1 is supplied, it will maintain the RTC in local time instead.
$ timedatectl set-local-rtc 1
A word of caution : Maintaining the RTC in the local time zone is not fully supported and
will create various problems with time zone changes and daylight saving adjustments. If at all
possible, use RTC in UTC.
Another point worth noting is that if set-local-rtc is invoked and the
--adjust-system-clock option is passed, the system clock is synchronized from the
RTC again, taking the new setting into account. Otherwise the RTC is synchronized from the
system clock.
Configure NTP-based network time synchronization
NTP, or Network Time Protocol, is a networking protocol for clock synchronization between
computer systems over packet-switched, variable-latency data networks. It is intended to
synchronize all participating computers to within a few milliseconds of
UTC.
The timedatectl command provides a set-ntp option that controls
whether NTP based network time synchronization is enabled. This option expects a boolean
argument. To enable NTP-based time synchronization, run the following command:
$ timedatectl set-ntp true
To disable, run:
$ timedatectl set-ntp false
Conclusion
As evident from the examples described above, the timedatectl command is a
handy tool for system administrators who can use it to adjust various system clocks and RTC
configurations as well as poll remote servers for time information. To learn more about the
command, head over to its man page .
Time is an important aspect in Linux systems especially in critical services such as cron
jobs. Having the correct time on the server ensures that the server operates in a healthy
environment that consists of distributed systems and maintains accuracy in the workplace.
In this tutorial, we will focus on how to set time/date/time zone and to synchronize the
server clock with your Ubuntu Linux machine.
Check Current Time
You can verify the current time and date using the date and the
timedatectl commands. These Linux commands
can be executed straight from the terminal as a regular user or as a superuser. The usefulness of
the two commands is most apparent when you want to correct a wrong time from the command line.
Using the date command
Log in as a root user and use the command as follows
$ date
Output
You can also use the same command to check a date 2 days ago
$ date --date="2 days ago"
Output
Using the timedatectl command
To check the status of the time on your system as well as the present time settings, use the
timedatectl command as shown
# timedatectl
or
# timedatectl status
Changing Time
We use timedatectl to change the system time using the format HH:MM:SS, where HH stands for the
hour in 24-hour format, MM for minutes, and SS for seconds.
To set the time to 09:08:07, use the command as follows:
# timedatectl set-time 09:08:07
Using the date command
Changing the time means all the system processes run on the same clock, putting the desktop and
server at the same time. From the command line, use the date command as follows:
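The exact command isn't shown above; setting the time with date looks like this (the value is only an example, following the same format used below):
# date +%T -s "09:08:07"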
To change the locale to either AM or PM use the %p in the following format.
# date +%T%p -s "6:10:30AM"
# date +%T%p -s "12:10:30PM"
Change Date
Generally, you want your system date and time set automatically. If for some reason you
have to change it manually, you can use the date command:
# date --set="20140125 09:17:00"
It will set the current date and time of your system to 'January 25, 2014' and '09:17:00
AM'. Please note that you must have root privileges to do this.
You can use timedatectl to set the date as well. The accepted format is YYYY-MM-DD, where YYYY
represents the year, MM the month in two digits, and DD the day in two digits. To change the date
to 15 January 2019, you should use the following command
# timedatectl set-time 2019-01-15
Create custom date format
To create custom date format, use a plus sign (+)
$ date +"Day : %d Month : %m Year : %Y"
Day: 05 Month: 12 Year: 2013
$ date +%D
12/05/13
The %D format follows the Month/Day/Year format.
You can also put the day name if you want. Here are some examples :
$ date +"%a %b %d %y"
Fri Dec 06 13
$ date +"%A %B %d %Y"
Friday December 06 2013
$ date +"%A %B %d %Y %T"
Friday December 06 2013 00:30:37
$ date +"%A %B-%d-%Y %c"
Friday December-06-2013 12:30:37 AM WIB
List/Change time zone
Changing the time zone is crucial when you want to ensure that everything synchronizes with
the Network Time Protocol. The first thing to do is to list all the available time zones using
the list-timezones option, optionally with grep to make the output easier to read
# timedatectl list-timezones
The above command will present a scrollable format.
The recommended time zone for servers is UTC, as it doesn't have daylight saving time. If you know
the specific time zone you need, set it by name using the following command
# timedatectl set-timezone America/Los_Angeles
To display the time zone, execute
# timedatectl | grep "Time"
Set the Local RTC
The real-time clock (RTC), which is also referred to as the hardware clock, is independent of
the operating system and continues to run even when the server is shut down.
Use the following command to keep the RTC in universal time:
# timedatectl set-local-rtc 0
And the following command to keep the RTC in local time:
# timedatectl set-local-rtc 1
Check/Change CMOS Time
The computer CMOS battery will automatically synchronize time with system clock as long as
the CMOS is working correctly.
Use the hwclock command to check the CMOS date as follows
# hwclock
To synchronize the CMOS date with system date use the following format
# hwclock --systohc
Having the correct time in your Linux environment is critical because many operations depend on
it, including logging events and cron jobs. We hope you found this article useful.
Mirroring a running system into a ramdisk
Greg Marsden
In this blog post, Oracle Linux kernel developer William Roche presents a method to
mirror a running system into a ramdisk.
A RAM mirrored System ?
There are cases where a system can boot correctly but after some time, can lose its system
disk access - for example an iSCSI system disk configuration that has network issues, or any
other disk driver problem. Once the system disk is no longer accessible, we rapidly face a hang
situation followed by I/O failures, without the possibility of local investigation on this
machine. I/O errors can be reported on the console:
XFS (dm-0): Log I/O Error Detected....
Or losing access to basic commands like:
# ls
-bash: /bin/ls: Input/output error
The approach presented here allows a small system disk space to be mirrored in memory to
avoid the above I/O failures situation, which provides the ability to investigate the reasons
for the disk loss. The system disk loss will be noticed as an I/O hang, at which point there
will be a transition to use only the ram-disk.
To enable this, the Oracle Linux developer Philip "Bryce" Copeland created the following
method (more details will follow):
Create a "small enough" system disk image using LVM (a minimized Oracle Linux
installation does that)
After the system is started, create a ramdisk and use it as a mirror for the system
volume
when/if the (primary) system disk access is lost, the ramdisk continues to provide all
necessary system functions.
Disk and memory sizes:
As we are going to mirror the entire system installation to the memory, this system
installation image has to fit in a fraction of the memory - giving enough memory room to hold
the mirror image and necessary running space.
Of course this is a trade-off between the memory available to the server and the minimal
disk size needed to run the system. For example a 12GB disk space can be used for a minimal
system installation on a 16GB memory machine.
A standard Oracle Linux installation uses XFS as root fs, which (currently) can't be shrunk.
In order to generate a usable "small enough" system, it is recommended to proceed to the OS
installation on a correctly sized disk space. Of course, a correctly sized installation
location can be created using partitions of large physical disk. Then, the needed application
filesystems can be mounted from their current installation disk(s). Some system adjustments may
also be required (services added, configuration changes, etc...).
This configuration phase should not be underestimated as it can be difficult to separate the
system from the needed applications, and keeping both on the same space could be too large for
a RAM disk mirroring.
The idea is not to keep an entire system load active when losing disks access, but to be
able to have enough system to avoid system commands access failure and analyze the
situation.
We are also going to avoid the use of swap. When the system disk access is lost, we don't
want to require it for swap data. Also, we don't want to use more memory space to hold a swap
space mirror. The memory is better used directly by the system itself.
The system installation can have a swap space (for example a 1.2GB space on our 12GB disk
example) but we are neither going to mirror it nor use it.
Our 12GB disk example could be used with: 1GB /boot space, 11GB LVM Space (1.2GB swap
volume, 9.8 GB root volume).
Ramdisk
memory footprint:
The ramdisk size has to be a little larger (8M) than the root volume size that we are going
to mirror, making room for metadata. But we can deal with 2 types of ramdisk:
A classical Block Ram Disk (brd) device
A memory compressed Ram Block Device (zram)
We can expect roughly 30% to 50% memory space gain from zram compared to brd, but zram must
use 4k I/O blocks only. This means that the filesystem used for root has to only deal with a
multiple of 4k I/Os.
Basic commands:
Here is a simple list of commands to manually create and use a ramdisk and mirror the root
filesystem space. We create a temporary configuration that needs to be undone or the subsequent
reboot will not work. But we also provide below a way of automating at startup and
shutdown.
Note the root volume size (considered to be ol/root in this example):
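The creation side of the sequence isn't reproduced here; a hypothetical sketch, inferred from the teardown commands shown below (brd's rd_size is in KiB - 10485760 is 10 GiB - and the ol/root naming follows the 12GB example above; treat this as a sketch, not the author's original script), would be:
# lvs --units m ol/root
# modprobe brd rd_nr=1 rd_size=10485760
# swapoff -a
# umount /boot/efi
# umount /boot
# pvcreate /dev/ram0
# vgextend ol /dev/ram0
# lvconvert -y -m 1 ol/root /dev/ram0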
To undo this temporary configuration before a reboot, remove the ramdisk from the mirror and bring swap and /boot back:
# lvconvert -y -m 0 ol/root /dev/ram0
  Logical volume ol/root successfully converted.
# vgreduce ol /dev/ram0
  Removed "/dev/ram0" from volume group "ol"
# mount /boot
# mount /boot/efi
# swapon -a
What about in-memory compression ?
As indicated above, zRAM devices can compress data in-memory, but 2 main problems need to be
fixed:
LVM does not take zRAM devices into account by default
zRAM only works with 4K I/Os
Make lvm work with zram:
The lvm configuration file has to be changed to take the "zram" type of devices into account, by
including the following "types" entry in the "devices" section of the /etc/lvm/lvm.conf file:
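The entry itself isn't reproduced above; in the devices section of /etc/lvm/lvm.conf it would presumably look like this (a sketch based on the standard types syntax):
devices {
    types = [ "zram", 16 ]
}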
Looking at the root filesystem parameters (not reproduced here), we can notice that the sector
size (sectsz) used on this root fs is a standard 512 bytes. Such a filesystem cannot be mirrored
onto a zRAM device, and needs to be recreated with a 4k sector size.
Transforming the root file system to 4k sector size:
This is simply a backup (to a zram disk) and restore procedure after recreating the root FS.
To do so, the system has to be booted from another system image. Booting from an installation
DVD image can be a good possibility.
Boot from an OL installation DVD [Choose "Troubleshooting", "Rescue a Oracle Linux
system", "3) Skip to shell"]
A service Unit file can also be created: /etc/systemd/system/raid1-ramdisk.service
[https://github.com/oracle/linux-blog-sample-code/blob/ramdisk-system-image/raid1-ramdisk.service]
[Unit]
Description=Enable RAMdisk RAID 1 on LVM
After=local-fs.target
Before=shutdown.target reboot.target halt.target

[Service]
ExecStart=/usr/sbin/start-raid1-ramdisk
ExecStop=/usr/sbin/stop-raid1-ramdisk
Type=oneshot
RemainAfterExit=yes
TimeoutSec=0

[Install]
WantedBy=multi-user.target
Conclusion:
When the system disk access problem manifests itself, the ramdisk mirror branch will provide
the possibility to investigate the situation. The goal of this procedure is not to keep the system
running on this memory mirror configuration, but to help investigate a bad situation.
When the problem is identified and fixed, I really recommend coming back to a standard
configuration -- enjoying the entire memory of the system, a standard system disk, a possible
swap space etc.
I hope the method described here can help. I also want to thank, for their reviews, Philip
"Bryce" Copeland, who also created the first prototype of the above scripts, and Mark Kanda, who
also helped test many aspects of this work.
chkservice, a terminal user interface (TUI) for managing systemd units, has been updated recently with window resize and search
support.
chkservice is a simplistic
systemd unit manager that uses ncurses for its terminal interface.
Using it you can enable or disable, and start or stop, a systemd unit. It also shows the unit's status (enabled, disabled, static or
masked).
You can navigate the chkservice user interface using keyboard shortcuts:
Up or l to move cursor up
Down or j to move cursor down
PgUp or b to move page up
PgDown or f to move page down
To enable or disable a unit press Space , and to start or stop a unit press s . You can access the help
screen which shows all available keys by pressing ? .
The command line tool had its first release in August 2017, with no new releases until a few days ago when version 0.2 was released,
quickly followed by 0.3.
With the latest 0.3 release, chkservice adds a search feature that allows easily searching through all systemd units.
To search, type / followed by your search query, and press Enter . To search for the next item matching your
search query you'll have to type / again, followed by Enter or Ctrl + m (without entering
any search text).
Another addition to the latest chkservice is window resize support. In the 0.1 version, the tool would close when the user tried
to resize the terminal window. That's no longer the case; chkservice now allows resizing the terminal window it runs in.
And finally, the last addition to the latest chkservice 0.3 is G-g navigation support . Press G
( Shift + g ) to navigate to the bottom, and g to navigate to the top.
Download and install chkservice
The initial (0.1) chkservice version can be found
in the official repositories of a few Linux distributions, including Debian and Ubuntu (and Debian or Ubuntu based Linux distribution
-- e.g. Linux Mint, Pop!_OS, Elementary OS and so on).
There are some third-party repositories available as well, including a Fedora Copr, Ubuntu / Linux Mint PPA, and Arch Linux AUR,
but at the time I'm writing this, only the AUR package
was updated to the latest chkservice version 0.3.
You may also install chkservice from source. Use the instructions provided in the tool's
readme to either create a DEB package or install
it directly.
No new interesting ideas for such an important topic whatsoever. One of the main problems here is documenting the actions of
each administrator in such a way that the set of actions is visible to everybody in a convenient and transparent manner. With
multiple terminals open, the Unix history is not a file from which you can deduce each sysadmin's actions, as the parts of the
history from additional terminals are missing. Actually, Solaris had some ideas implemented in Solaris 10, but they never made it to Linux.
In our team we have three seasoned Linux sysadmins having to administer a few dozen Debian
servers. Previously we have all worked as root using SSH public key authentication. But we had a discussion on what is the best practice
for that scenario and couldn't agree on anything.
Everybody's SSH public key is put into ~root/.ssh/authorized_keys2
Advantage: easy to use, SSH agent forwarding works easily, little overhead
Disadvantage: missing auditing (you never know which "root" made a change), accidents are more likely
Using personalized accounts and sudo
That way we would login with personalized accounts using SSH public keys and use sudo to do single tasks with root
permissions. In addition we could give ourselves the "adm" group that allows us to view log files.
Advantage: good auditing, sudo prevents us from doing idiotic things too easily
Disadvantage: SSH agent forwarding breaks, it's a hassle because barely anything can be done as non-root
Using multiple UID 0 users
This is a very unique proposal from one of the sysadmins. He suggests creating three users in /etc/passwd, all having UID 0 but
different login names. He claims that this is not actually forbidden and allows everyone to be UID 0 while still being able to audit.
Advantage: SSH agent forwarding works, auditing might work (untested), no sudo hassle
Disadvantage: feels pretty dirty - couldn't find it documented anywhere as an allowed way
Comments:
About your "couldn't find it documented anywhere as an allowed way" statement: Have a look to the -o flag in
the useradd manual page. This flag is there to allow multiple users sharing the same uid. –
jlliagre May 21 '12 at 11:38
Can you explain what you mean by "SSH agent forwarding breaks" in the second option? We use this at my work and ssh agent
forwarding works just fine. – Patrick May 21 '12 at 12:01
Another consequence of sudo method: You can no longer SCP/FTP as root. Any file transfers will first need to be moved into
the person's home directory and then copied over in the terminal. This is an advantage and a disadvantage depending on perspective.
– user606723 May 21 '12 at 14:43
The second option is the best one IMHO. Personal accounts, sudo access. Disable root access via SSH completely. We have a few hundred
servers and half a dozen system admins, this is how we do it.
How does agent forwarding break exactly?
Also, if it's such a hassle using sudo in front of every task you can invoke a sudo shell with sudo -s
or switch to a root shell with sudo su -
10 Rather than disable root access by SSH completely, I recommend making root access by SSH require keys, creating one key
with a very strong keyphrase and keeping it locked away for emergency use only. If you have permanent console access this is less
useful, but if you don't, it can be very handy. – EightBitTony
May 21 '12 at 13:02
17 I recommend disabling root login over SSH for security purposes. If you really need to be logged in as root, log in as
a non-root user and su. – taz May 21 '12 at 14:29
+1.. I'd go further than saying "The second option is the best". I'd say it's the only reasonable option. Options one and three
vastly decrease the security of the system from both outside attacks and mistakes. Besides, #2 is how the system was designed
to be primarily used. – Ben Lee May 23 '12 at 18:28
2 Please, elaborate on sudo -s . Am I correct to understand that sudo -i is no difference to using
su - or basically logging in as root apart from additional log entry compared to plain root login? If that's true,
how and why is it better than plain root login? – PF4Public
Jun 3 '16 at 20:32
With regard to the 3rd suggested strategy, other than perusal of the useradd -o -u userXXX options as recommended
by @jlliagre, I am not familiar with running multiple users as the same uid. (hence if you do go ahead with that, I would be interested
if you could update the post with any issues (or successes) that arise...)
I guess my first observation regarding the first option "Everybody's SSH public key is put into ~root/.ssh/authorized_keys2",
is that unless you absolutely are never going to work on any other systems;
then at least some of the time, you are going to have to work with user accounts and sudo
The second observation would be, that if you work on systems that aspire to HIPAA, PCI-DSS compliance, or stuff like CAPP and
EAL, then you are going to have to work around the issues of sudo because;
It is an industry standard to provide non-root individual user accounts that can be audited, disabled, expired, etc., typically
using some centralized user database.
So; Using personalized accounts and sudo
It is unfortunate that as a sysadmin, almost everything you will need to do on a remote machine is going to require some elevated
permissions, however it is annoying that most of the SSH based tools and utilities are busted while you are in sudo
Hence I can pass on some tricks that I use to work around the annoyances of sudo that you mention. The first problem
is that if root login is blocked using PermitRootLogin=no , or root does not have an SSH key, then SCPing files
becomes something of a PITA.
Problem 1 : You want to scp files from the remote side, but they require root access, however you cannot login to the remote box
as root directly.
Boring Solution : copy the files to home directory, chown, and scp down.
ssh userXXX@remotesystem , sudo su - etc, cp /etc/somefiles to /home/userXXX/somefiles
, chown -R userXXX /home/userXXX/somefiles , use scp to retrieve files from remote.
Less Boring Solution : sftp supports the -s sftp_server flag, hence you can do something like the following (if you
have configured password-less sudo in /etc/sudoers );
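The invocation itself isn't reproduced here; something along these lines should work (the sftp-server path varies by distribution, so treat it as a sketch):
sftp -s "sudo /usr/libexec/openssh/sftp-server" userXXX@remotehost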
(you can also use this hack-around with sshfs, but I am not sure its recommended... ;-)
If you don't have password-less sudo rights, or for some configured reason that method above is broken, I can suggest one more
less boring file transfer method, to access remote root files.
Port Forward Ninja Method :
Login to the remote host, but specify that the remote port 3022 (can be anything free, and non-reserved for admins, ie >1024)
is to be forwarded back to port 22 on the local side.
[localuser@localmachine ~]$ ssh userXXX@remotehost -R 3022:localhost:22
Last login: Mon May 21 05:46:07 2012 from 123.123.123.123
------------------------------------------------------------------------
This is a private system; blah blah blah
------------------------------------------------------------------------
Get root in the normal fashion...
-bash-3.2$ sudo su -
[root@remotehost ~]#
Now you can scp the files in the other direction, avoiding the boring step of making an intermediate copy of the files;
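The scp command itself isn't shown; from the root shell on the remote host it would look something like this (paths and user names are only examples), pushing through the forwarded port back to your own machine:
[root@remotehost ~]# scp -P 3022 /etc/somefiles/somefile.conf localuser@localhost:/tmp/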
Problem 2: SSH agent forwarding : If you load the root profile, e.g. by specifying a login shell, the necessary environment variables
for SSH agent forwarding such as SSH_AUTH_SOCK are reset, hence SSH agent forwarding is "broken" under sudo su
- .
Half baked answer :
Anything that properly loads a root shell is going to rightfully reset the environment; however, there is a slight work-around
you can use when you need BOTH root permission AND the ability to use the SSH agent, AT THE SAME TIME
This achieves a kind of chimera profile, that should really not be used, because it is a nasty hack , but is useful when you need
to SCP files from the remote host as root, to some other remote host.
Anyway, you can enable that your user can preserve their ENV variables, by setting the following in sudoers;
Defaults:userXXX !env_reset
this allows you to create nasty hybrid login environments like so;
login as normal;
[localuser@localmachine ~]$ ssh userXXX@remotehost
Last login: Mon May 21 12:33:12 2012 from 123.123.123.123
------------------------------------------------------------------------
This is a private system; blah blah blah
------------------------------------------------------------------------
-bash-3.2$ env | grep SSH_AUTH
SSH_AUTH_SOCK=/tmp/ssh-qwO715/agent.1971
Create a bash shell that runs /root/.profile and /root/.bashrc but preserves SSH_AUTH_SOCK :
-bash-3.2$ sudo -E bash -l
So this shell has root permissions, and root $PATH (but a borked home directory...)
bash-3.2# id
uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel) context=user_u:system_r:unconfined_t
bash-3.2# echo $PATH
/usr/kerberos/sbin:/usr/local/sbin:/usr/sbin:/sbin:/home/xtrabm/xtrabackup-manager:/usr/kerberos/bin:/opt/admin/bin:/usr/local/bin:/bin:/usr/bin:/opt/mx/bin
But you can use that invocation to do things that require remote sudo root, but also the SSH agent access like so;
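For instance (host and path purely illustrative), an scp run from this shell acts as root locally while still using your forwarded agent for authentication:
bash-3.2# scp /etc/somefiles/somefile.conf userXXX@some-other-host:/tmp/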
The 3rd option looks ideal - but have you actually tried it out to see what's happening? While you might see the
additional usernames in the authentication step, any reverse lookup is going to return the same value.
Allowing root direct ssh access is a bad idea, even if your machines are not connected to the internet / use strong passwords.
Usually I use 'su' rather than sudo for root access.
4 Adding multiple users with the same UID adds problems. When applications go to look up the username for a UID number, they
can look up the wrong username. Applications which run under root can think they're running as the wrong user, and lots of weird
errors will start popping up (I tried it once). – Patrick
May 21 '12 at 11:59
8 The third option is just a bloody bad idea . You're essentially breaking the 1:1 relation between UIDs and usernames,
and literally everything in unix expects that relation to hold. Just because there's no explicit rule not to do it doesn't
mean it's a good idea. – Shadur May 21 '12 at 14:28
Sorry, but the third option is a horrible idea. Having multiple UID 0 people logging in is just asking for problems to be
multiplied. Option number 2 is the only sane one. – Doug May
21 '12 at 20:01
The third option doesn't deserve that many downvotes. There is no command in Unix I'm aware of that is confused by this trick,
people might be but commands should not care. It is just a different login name but as soon as you are logged in, the first name
found matching the uid in the password database is used so just make sure the real username (here root) appears first there. –
jlliagre May 21 '12 at 21:38
@Patrick Have you seen this in practice? As much as I tested, then applications pick root user if root
user is the first one in /etc/passwd with UID 0. I tend to agree with jlliagre. The only downside that I see,
is that each user is a root user and sometimes it might be confusing to understand who did what. –
Martin Sep 4 '16 at 18:31
on one ill-fated day. I can see it being bad enough if you have more than a handful of admins.
(2) is probably better engineered - and you can become full-fledged root through sudo su -. Accidents are still possible though.
(3) I would not touch with a barge pole. I used it on Suns, in order to have a non-barebone-sh root account (if I remember correctly)
but it was never robust - plus I doubt it would be very auditable.
Means that you're allowing SSH access as root . If this machine is in any way public facing, this is just a terrible
idea; back when I ran SSH on port 22, my VPS got multiple attempts hourly to authenticate as root. I had a basic IDS set up to
log and ban IPs that made multiple failed attempts, but they kept coming. Thankfully, I'd disabled SSH access as the root user
as soon as I had my own account and sudo configured. Additionally, you have virtually no audit trail doing this.
Provides root access as and when it is needed. Yes, you barely have any privileges as a standard user, but this is pretty
much exactly what you want; if an account does get compromised, you want it to be limited in its abilities. You want any super
user access to require a password re-entry. Additionally, sudo access can be controlled through user groups, and restricted to
particular commands if you like, giving you more control over who has access to what. Additionally, commands run as sudo can be
logged, so it provides a much better audit trail if things go wrong. Oh, and don't just run "sudo su -" as soon as you log in.
That's terrible, terrible practice.
Your sysadmin's idea is bad. And he should feel bad. No, *nix machines probably won't stop you from doing this, but both your
file system, and virtually every application out there expects each user to have a unique UID. If you start going down this road,
I can guarantee that you'll run into problems. Maybe not immediately, but eventually. For example, despite displaying nice friendly
names, files and directories use UID numbers to designate their owners; if you run into a program that has a problem with duplicate
UIDs down the line, you can't just change a UID in your passwd file later on without having to do some serious manual file system
cleanup.
sudo is the way forward. It may cause additional hassle with running commands as root, but it provides you with a
more secure box, both in terms of access and auditing.
Definitely option 2, but use groups to give each user as much control as possible without needing to use sudo. sudo in front of every
command loses half the benefit because you are always in the danger zone. If you make the relevant directories writable by the sysadmins
without sudo you return sudo to the exception which makes everyone feel safer.
In the old days, sudo did not exist. As a consequence, having multiple UID 0 users was the only available alternative. But it's
still not that good, notably with logging based on the UID to obtain the username. Nowadays, sudo is the only appropriate solution.
Forget anything else.
It is permissible by established practice, if not by documentation. BSD unices have had their toor account for a long time, and bashroot
users tend to be accepted practice on systems where csh is standard (accepted malpractice ;)).
Perhaps I'm weird, but method (3) is what popped into my mind first as well. Pros: you'd have every user's name in the logs and
would know who did what as root. Cons: they'd each be root all the time, so mistakes can be catastrophic.
I'd like to question why you need all admins to have root access. All 3 methods you propose have one distinct disadvantage: once
an admin runs a sudo bash -l or sudo su - or such, you lose your ability to track who does what and after
that, a mistake can be catastrophic. Moreover, in case of possible misbehaviour, this might even end up a lot worse.
Instead you might want to consider going another way:
Create your admin users as regular users
Decide who needs to do what job (apache management / postfix management etc)
Add users to related groups (such as add "martin" to "postfix" and "mail", "amavis" if you use it, etc.)
give only limited sudo powers (visudo -> let martin use /etc/init.d/postfix, /usr/bin/postsuper, etc.) -- see the sketch below
This way, martin would be able to safely handle postfix, and in case of mistake or misbehaviour, you'd only lose your postfix
system, not entire server.
Same logic can be applied to any other subsystem, such as apache, mysql, etc.
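For illustration, a minimal sudoers fragment along those lines could look like this; it reuses the username and paths from the example above, and you should always edit it with visudo:
# Let martin manage only the postfix init script and queue tool
martin ALL = /etc/init.d/postfix, /usr/bin/postsuper
With such a rule in place, "sudo /etc/init.d/postfix restart" works for martin, while anything else still requires a real root login.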
Of course, this is purely theoretical at this point, and might be hard to set up. It does look like a better way to go tho. At
least to me. If anyone tries this, please let me know how it went.
I should add handling SSH connections is pretty basic in this context. Whatever method you use, do not permit root logins
over SSH, let the individual users ssh with their own credentials, and handle sudo/nosudo/etc from there. –
Tuncay Göncüoğlu Nov 30 '12 at 17:53
Did you know that Perl is a great programming language for system administrators? Perl is
platform-independent so you can do things on different operating systems without rewriting your
scripts. Scripting in Perl is quick and easy, and its portability makes your scripts amazingly
useful. Here are a few examples, just to get your creative juices flowing!
Renaming a bunch of files
Suppose you need to rename a whole bunch of files in a directory. In this case, we've got a
directory full of .xml files, and we want to rename them all to .html. Easy-peasy!
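The original script isn't reproduced in this excerpt, but a minimal sketch of what it could look like, assuming the files to rename live in the current directory, is:
#!/usr/bin/perl
use strict;
use warnings;

# Rename every .xml file in the current directory to .html
foreach my $xml ( glob '*.xml' ) {
    ( my $html = $xml ) =~ s/\.xml$/.html/;
    rename $xml, $html or warn "Could not rename $xml: $!";
}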
Then just cd to the directory where you need to make the change, and run the script. You
could put this in a cron job, if you needed to run it regularly, and it is easily enhanced to
accept parameters.
Speaking of accepting parameters, let's take a look at a script that does just
that.
Suppose you need to regularly create Linux user accounts on your system, and the format of
the username is first initial/last name, as is common in many businesses. (This is, of course,
a good idea, until you get John Smith and Jane Smith working at the same company -- or want
John to have two accounts, as he works part-time in two different departments. But humor me,
okay?) Each user account needs to be in a group based on their department, and home directories
are of the format /home/<department>/<username> . Let's take a look at a
script to do that:
#!/usr/bin/perl
# (Shebang and imports added for completeness; GetOptions comes from Getopt::Long.)
use strict;
use warnings;
use Getopt::Long;

# If the user calls the script with no parameters,
# give them help!
if ( not @ARGV ) {
    usage();
}

# Gather our options; if they specify any undefined option,
# they'll get sent some help!
my %opts;
GetOptions( \%opts,
    'fname=s',
    'lname=s',
    'dept=s',
    'run',
) or usage();

# Let's validate our inputs. All three parameters are
# required, and must be alphabetic.
# You could be clever, and do this with a foreach loop,
# but let's keep it simple for now.
if ( not $opts{fname} or $opts{fname} !~ /^[a-zA-Z]+$/ ) {
    usage("First name must be alphabetic");
}
if ( not $opts{lname} or $opts{lname} !~ /^[a-zA-Z]+$/ ) {
    usage("Last name must be alphabetic");
}
if ( not $opts{dept} or $opts{dept} !~ /^[a-zA-Z]+$/ ) {
    usage("Department must be alphabetic");
}

# Build the command to run. NOTE: the original construction of $cmd is not
# shown in this excerpt; the two lines below are a plausible reconstruction
# based on the description above (first initial plus last name, department
# group, /home/<department>/<username>), using useradd-style options.
# Adjust to your distribution's user-management tools.
my $username = lc( substr( $opts{fname}, 0, 1 ) . $opts{lname} );
my $cmd = "useradd -m -g $opts{dept} -d /home/$opts{dept}/$username $username";

print "$cmd\n";
if ( $opts{run} ) {
    system $cmd;
}
else {
    print "You need to add the --run flag to actually execute\n";
}

sub usage {
    my ($msg) = @_;
    if ($msg) {
        print "$msg\n\n";
    }
    print "Usage: $0 --fname FirstName --lname LastName --dept Department --run\n";
    exit;
}
As with the previous script, there are opportunities for enhancement, but something like
this might be all that you need for this task.
One more, just for fun!
Change copyright text in every Perl source file in a directory
tree
Now we're going to try a mass edit. Suppose you've got a directory full of code, and each
file has a copyright statement somewhere in it. (Rich Bowen wrote a great article, Copyright
statements proliferate inside open source code a couple of years ago that discusses the
wisdom of copyright statements in open source code. It is a good read, and I recommend it
highly. But again, humor me.) You want to change that text in each and every file in the
directory tree. File::Find and File::Slurp are your
friends!
#!/usr/bin/perl
use strict;
use warnings;
use File::Find qw(find);
use File::Slurp qw(read_file write_file);

# If the user gives a directory name, use that. Otherwise,
# use the current directory.
my $dir = $ARGV[0] || '.';

# File::Find::find is kind of dark-arts magic.
# You give it a reference to some code,
# and a directory to hunt in, and it will
# execute that code on every file in the
# directory, and all subdirectories. In this
# case, \&change_file is the reference
# to our code, a subroutine. You could, if
# what you wanted to do was really short,
# include it in a { } block instead. But doing
# it this way is nice and readable.
find( \&change_file, $dir );

sub change_file {
    my $name = $_;

    # If the file is a directory, symlink, or other
    # non-regular file, don't do anything
    if ( not -f $name ) {
        return;
    }

    # If it's not Perl, don't do anything.
    # (The check itself was not shown in this excerpt;
    # matching on the .pl/.pm extension is an assumption.)
    return unless $name =~ /\.p[lm]$/;

    # Gobble up the file, complete with carriage
    # returns and everything.
    # Be wary of this if you have very large files
    # on a system with limited memory!
    my $data = read_file($name);

    # Use a regex to make the change. If the string appears
    # more than once, this will change it everywhere!
    # (The substitution and write-back were not shown in this excerpt;
    # the old and new copyright strings below are placeholders.)
    $data =~ s/Copyright OldCo/Copyright NewCo/g;
    write_file( $name, $data );
}
Because of Perl's portability, you could use this script on a Windows system as well as a
Linux system -- it Just Works because of the underlying Perl interpreter code. The
create-an-account code above, however, is not portable; it is Linux-specific because it uses
Linux commands such as adduser .
In my experience, I've found it useful to have a Git repository of these things somewhere
that I can clone on each new system I'm working with. Over time, you'll think of changes to
make to the code to enhance the capabilities, or you'll add new scripts, and Git can help you
make sure that all your tools and tricks are available on all your systems.
I hope these little scripts have given you some ideas how you can use Perl to make your
system administration life a little easier. In addition to these longer scripts, take a look at
a fantastic list of Perl one-liners, and links to other
Perl magic assembled by Mischa Peterson.
Chronyd is a better choice for most networks than ntpd for keeping computers synchronized
with the Network Time Protocol.
"Does anybody really know what time it is? Does anybody really care?"
– Chicago, 1969
Perhaps that rock group didn't care what time it was, but our computers do need to know the
exact time. Timekeeping is very important to computer networks. In banking, stock markets, and
other financial businesses, transactions must be maintained in the proper order, and exact time
sequences are critical for that. For sysadmins and DevOps professionals, it's easier to follow
the trail of email through a series of servers or to determine the exact sequence of events
using log files on geographically dispersed hosts when exact times are kept on the computers in
question.
I used to work at an organization that received over 20 million emails per day and had four
servers just to accept and do a basic filter on the incoming flood of email. From there, emails
were sent to one of four other servers to perform more complex anti-spam assessments, then they
were delivered to one of several additional servers where the emails were placed in the correct
inboxes. At each layer, the emails would be sent to one of the next-level servers, selected
only by the randomness of round-robin DNS. Sometimes we had to trace a new message through the
system until we could determine where it "got lost," according to the pointy-haired bosses. We
had to do this with frightening regularity.
Most of that email turned out to be spam. Some people actually complained that their [joke,
cat pic, recipe, inspirational saying, or other-strange-email]-of-the-day was missing and asked
us to find it. We did reject those opportunities.
Our email and other transactional searches were aided by log entries with timestamps that --
today -- can resolve down to the nanosecond in even the slowest of modern Linux computers. In
very high-volume transaction environments, even a few microseconds of difference in the system
clocks can mean sorting thousands of transactions to find the correct one(s).
The NTP server hierarchy
Computers worldwide use the Network Time Protocol (NTP) to
synchronize their times with internet standard reference clocks via a hierarchy of NTP servers.
The primary servers are at stratum 1, and they are connected directly to various national time
services at stratum 0 via satellite, radio, or even modems over phone lines. The time service
at stratum 0 may be an atomic clock, a radio receiver tuned to the signals broadcast by an
atomic clock, or a GPS receiver using the highly accurate clock signals broadcast by GPS
satellites.
To prevent time requests from time servers lower in the hierarchy (i.e., with a higher
stratum number) from overwhelming the primary reference servers, there are several thousand
public NTP stratum 2 servers that are open and available for anyone to use. Many organizations
with large numbers of hosts that need an NTP server will set up their own time servers so that
only one local host accesses the stratum 2 time servers, then they configure the remaining
network hosts to use the local time server which, in my case, is a stratum 3 server.
NTP choices
The original NTP daemon, ntpd , has been joined by a newer one, chronyd . Both keep the
local host's time synchronized with the time server. Both services are available, and I have
seen nothing to indicate that this will change anytime soon.
Chrony has features that make it the better choice for most environments for the following
reasons:
Chrony can synchronize to the time server much faster than NTP. This is good for laptops
or desktops that don't run constantly.
It can compensate for fluctuating clock frequencies, such as when a host hibernates or
enters sleep mode, or when the clock speed varies due to frequency stepping that slows clock
speeds when loads are low.
It handles intermittent network connections and bandwidth saturation.
It adjusts for network delays and latency.
After the initial time sync, Chrony never steps the clock. This ensures stable and
consistent time intervals for system services and applications.
Chrony can work even without a network connection. In this case, the local host or server
can be updated manually.
The NTP and Chrony RPM packages are available from standard Fedora repositories. You can
install both and switch between them, but modern Fedora, CentOS, and RHEL releases have moved
from NTP to Chrony as their default time-keeping implementation. I have found that Chrony works
well, provides a better interface for the sysadmin, presents much more information, and
increases control.
Just to make it clear, NTP is a protocol that is implemented with either NTP or Chrony. If
you'd like to know more, read this comparison between NTP and Chrony as
implementations of the NTP protocol.
This article explains how to configure Chrony clients and servers on a Fedora host, but the
configuration for CentOS and RHEL current releases works the same.
Chrony structure
The Chrony daemon, chronyd , runs in the background and monitors the time and status of the
time server specified in the chrony.conf file. If the local time needs to be adjusted, chronyd
does it smoothly without the programmatic trauma that would occur if the clock were instantly
reset to a new time.
Chrony's chronyc tool allows someone to monitor the current status of Chrony and make
changes if necessary. The chronyc utility can be used as a command that accepts subcommands, or
it can be used as an interactive text-mode program. This article will explain both
uses.
Client configuration
The NTP client configuration is simple and requires little or no intervention. The NTP
server can be defined during the Linux installation or provided by the DHCP server at boot
time. The default /etc/chrony.conf file (shown below in its entirety) requires no intervention
to work properly as a client. For Fedora, Chrony uses the Fedora NTP pool, and CentOS and RHEL
have their own NTP server pools. As in many Red Hat-based distributions, the configuration file is well commented.
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
pool 2.fedora.pool.ntp.org iburst
# Record the rate at which the system clock gains/losses time.
driftfile /var/lib/chrony/drift
# Allow the system clock to be stepped in the first three updates
# if its offset is larger than 1 second.
makestep 1.0 3
# Enable kernel synchronization of the real-time clock (RTC).
# Enable hardware timestamping on all interfaces that support it.
#hwtimestamp *
# Increase the minimum number of selectable sources required to adjust
# the system clock.
#minsources 2
# Allow NTP client access from local network.
#allow 192.168.0.0/16
# Serve time even if not synchronized to a time source.
#local stratum 10
# Specify file containing keys for NTP authentication.
keyfile /etc/chrony.keys
# Get TAI-UTC offset and leap seconds from the system tz database.
leapsectz right/UTC
# Specify directory for log files.
logdir /var/log/chrony
# Select which information is logged.
#log measurements statistics tracking
Let's look at the current status of NTP on a virtual machine I use for testing. The chronyc
command, when used with the tracking subcommand, provides statistics that report how far off
the local system is from the reference server.
[root@studentvm1 ~]# chronyc tracking
Reference ID : 23ABED4D (ec2-35-171-237-77.compute-1.amazonaws.com)
Stratum : 3
Ref time (UTC) : Fri Nov 16 16:21:30 2018
System time : 0.000645622 seconds slow of NTP time
Last offset : -0.000308577 seconds
RMS offset : 0.000786140 seconds
Frequency : 0.147 ppm slow
Residual freq : -0.073 ppm
Skew : 0.062 ppm
Root delay : 0.041452706 seconds
Root dispersion : 0.022665167 seconds
Update interval : 1044.2 seconds
Leap status : Normal
[root@studentvm1 ~]#
The Reference ID in the first line of the result is the server the host is synchronized to
-- in this case, a stratum 3 reference server that the host last contacted at 16:21:30 UTC on Nov 16, 2018. The other lines are described in the chronyc(1) man page.
The sources subcommand is also useful because it provides information about the time source
configured in chrony.conf .
The first source in the list is the time server I set up for my personal network. The others
were provided by the pool. Even though my NTP server doesn't appear in the Chrony configuration
file above, my DHCP server provides its IP address for the NTP server. The "S" column -- Source
State -- indicates with an asterisk ( * ) the server our host is synced to. This is consistent
with the data from the tracking subcommand.
The -v option provides a nice description of the fields in this output.
[root@studentvm1 ~]# chronyc sources -v
210 Number of sources = 5
If I wanted my server to be the preferred reference time source for this host, I would add
the line below to the /etc/chrony.conf file.
server 192.168.0.51 iburst prefer
I usually place this line just above the first pool server statement near the top of the
file. There is no special reason for this, except I like to keep the server statements
together. It would work just as well at the bottom of the file, and I have done that on several
hosts. This configuration file is not sequence-sensitive.
The prefer option marks this as the preferred reference source. As such, this host will
always be synchronized with this reference source (as long as it is available). We can also use
the fully qualified hostname for a remote reference server or the hostname only (without the
domain name) for a local reference time source as long as the search statement is set in the
/etc/resolv.conf file. I prefer the IP address to ensure that the time source is accessible
even if DNS is not working. In most environments, the server name is probably the better
option, because NTP will continue to work even if the server's IP address changes.
If you don't have a specific reference source you want to synchronize to, it is fine to use
the defaults.
Configuring an NTP server with Chrony
The nice thing about the Chrony configuration file is that this single file configures the
host as both a client and a server. To add a server function to our host -- it will always be a
client, obtaining its time from a reference server -- we just need to make a couple of changes
to the Chrony configuration, then configure the host's firewall to accept NTP requests.
Open the /etc/chrony.conf file in your favorite text editor and uncomment the local stratum
10 line. This enables the Chrony NTP server to continue to act as if it were connected to a
remote reference server if the internet connection fails; this enables the host to continue to
be an NTP server to other hosts on the local network.
Let's restart chronyd and track how the service is working for a few minutes. Before we
enable our host as an NTP server, we want to test a bit.
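The exact invocation isn't shown in this excerpt, but based on the description below, something like this should do it:
[root@studentvm1 ~]# systemctl restart chronyd
[root@studentvm1 ~]# watch chronyc tracking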
The results should look like this. The watch command runs the chronyc tracking command every
two seconds so we can watch changes occur over time.
Every 2.0s: chronyc tracking
studentvm1: Fri Nov 16 20:59:31 2018
Reference ID : C0A80033 (192.168.0.51)
Stratum : 4
Ref time (UTC) : Sat Nov 17 01:58:51 2018
System time : 0.001598277 seconds fast of NTP time
Last offset : +0.001791533 seconds
RMS offset : 0.001791533 seconds
Frequency : 0.546 ppm slow
Residual freq : -0.175 ppm
Skew : 0.168 ppm
Root delay : 0.094823152 seconds
Root dispersion : 0.021242738 seconds
Update interval : 65.0 seconds
Leap status : Normal
Notice that my NTP server, the studentvm1 host, synchronizes to the host at 192.168.0.51,
which is my internal network NTP server, at stratum 4. Synchronizing directly to the Fedora
pool machines would result in synchronization at stratum 3. Notice also that the amount of
error decreases over time. Eventually, it should stabilize with a tiny variation around a
fairly small range of error. The size of the error depends upon the stratum and other network
factors. After a few minutes, use Ctrl+C to break out of the watch loop.
To turn our host into an NTP server, we need to allow it to listen on the local network.
Uncomment the following line to allow hosts on the local network to access our NTP server.
# Allow NTP client access from local network.
allow 192.168.0.0/16
Note that the server can listen for requests on any local network it's attached to. The IP
address in the "allow" line is just intended for illustrative purposes. Be sure to change the
IP network and subnet mask in that line to match your local network's.
Restart chronyd .
[root@studentvm1 ~]# systemctl restart chronyd
To allow other hosts on your network to access this server, configure the firewall to allow
inbound UDP packets on port 123. Check your firewall's documentation to find out how to do
that.
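On a Fedora, CentOS, or RHEL host running firewalld, for example, something along these lines should work (adjust the zone if you don't use the default one):
[root@studentvm1 ~]# firewall-cmd --permanent --add-service=ntp
[root@studentvm1 ~]# firewall-cmd --reload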
Testing
Your host is now an NTP server. You can test it with another host or a VM that has access to
the network on which the NTP server is listening. Configure the client to use the new NTP
server as the preferred server in the /etc/chrony.conf file, then monitor that client using the
chronyc tools we used above.
Chronyc as an interactive tool
As I mentioned earlier, chronyc can be used as an interactive command tool. Simply run the
command without a subcommand and you get a chronyc command prompt.
[root@studentvm1 ~]# chronyc
chrony version 3.4
Copyright (C) 1997-2003, 2007, 2009-2018 Richard P. Curnow and others
chrony comes with ABSOLUTELY NO WARRANTY. This is free software, and
you are welcome to redistribute it under certain conditions. See the
GNU General Public License version 2 for details.
chronyc>
You can enter just the subcommands at this prompt. Try using the tracking , ntpdata , and
sources commands. The chronyc command line allows command recall and editing for chronyc
subcommands. You can use the help subcommand to get a list of possible commands and their
syntax.
Conclusion
Chrony is a powerful tool for synchronizing the times of client hosts, whether they are all
on the local network or scattered around the globe. It's easy to configure because, despite the
large number of options available, only a few configurations are required for most
circumstances.
After my client computers have synchronized with the NTP server, I like to set the system
hardware clock from the system (OS) time by using the following command:
/sbin/hwclock --systohc
This command can be added as a cron job or a script in cron.daily to keep the hardware clock
synced with the system time.
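A minimal version of such a script might look like this (the file name under /etc/cron.daily is arbitrary; remember to make it executable):
#!/bin/bash
# /etc/cron.daily/hwclock-sync -- keep the hardware clock in step with system time
/sbin/hwclock --systohc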
Chrony and NTP (the service) both use the same configuration, and the files' contents are
interchangeable. The man pages for chronyd , chronyc , and chrony.conf contain an amazing amount of
information that can help you get started or learn about esoteric configuration options.
Do you run your own NTP server? Let us know in the comments and be sure to tell us which
implementation you are using, NTP or Chrony.
Alexey, thanks for the great video. I have a question: how did you integrate fzf and bat?
When I am in zsh using tmux and I type fzf and search for a file, I am not able to
select multiple files using TAB. I can do this inside VIM, but not in the tmux iTerm terminal.
I am also not able to see the preview, even though I have already installed bat using brew on my MacBook
Pro. Also, when I type cd ** it doesn't work.
Thanks for the video. When searching in vim, dotfiles are hidden. How can we configure it so
that dotfiles are shown, but the .git directory and its subfolders are ignored?
Having a hard time remembering a command? Normally you might resort to a man page, but some
man pages have a hard time getting to the point. It's the reason Chris Allen Lane came up with
the idea (and more importantly, the code) for a cheat command .
The cheat command displays cheatsheets for common tasks in your terminal. It's a man page
without the preamble. It cuts to the chase and tells you exactly how to do whatever it is
you're trying to do. And if it lacks a common example that you think ought to be included, you
can submit an update.
$ cheat tar
# To extract an uncompressed archive:
tar -xvf '/path/to/foo.tar'
# To extract a .gz archive:
tar -xzvf '/path/to/foo.tgz'
[ ... ]
You can also treat cheat as a local cheatsheet system, which is great for all the in-house
commands you and your team have invented over the years. You can easily add a local cheatsheet
to your own home directory, and cheat will find and display it just as if it were a popular
system command.
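For example, assuming a recent version of cheat with its default configuration, creating and viewing a personal cheatsheet might look something like this (the sheet name is made up):
$ cheat -e in-house-backup    # opens your editor on a new or existing personal cheatsheet
$ cheat in-house-backup       # displays it just like any other cheatsheet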
In Figure 1, two complete physical hard drives and one partition from a third hard drive
have been combined into a single volume group. Two logical volumes have been created from the
space in the volume group, and a filesystem, such as EXT3 or EXT4, has been created on each of the two logical volumes.
Figure 1: LVM allows combining partitions and entire hard drives into Volume
Groups.
Adding disk space to a host is fairly straightforward but, in my experience, is done
relatively infrequently. The basic steps needed are listed below. You can either create an
entirely new volume group or you can add the new space to an existing volume group and either
expand an existing logical volume or create a new one.
Adding a new logical volume
There are times when it is necessary to add a new logical volume to a host. For example,
after noticing that the directory containing virtual disks for my VirtualBox virtual machines
was filling up the /home filesystem, I decided to create a new logical volume in which to store
the virtual machine data, including the virtual disks. This would free up a great deal of space
in my /home filesystem and also allow me to manage the disk space for the VMs
independently.
The basic steps for adding a new logical volume are as follows.
If necessary, install a new hard drive.
Optional: Create a partition on the hard drive.
Create a physical volume (PV) of the complete hard drive or a partition on the hard
drive.
Assign the new physical volume to an existing volume group (VG) or create a new volume
group.
Create a new logical volume (LV) from the space in the volume group.
Create a filesystem on the new logical volume.
Add appropriate entries to /etc/fstab for mounting the filesystem.
Mount the filesystem.
Now for the details. The following sequence is taken from an example I used as a lab project
when teaching about Linux filesystems.
Example
This example shows how to use the CLI to extend an existing volume group to add more space
to it, create a new logical volume in that space, and create a filesystem on the logical
volume. This procedure can be performed on a running, mounted filesystem.
WARNING: Only the EXT3 and EXT4 filesystems can be resized on the fly on a running, mounted
filesystem. Many other filesystems including BTRFS and ZFS cannot be resized.
Install the hard drive
If there is not enough space in the volume group on the existing hard drive(s) in the system
to add the desired amount of space, it may be necessary to add a new hard drive and create the
space to add to the Logical Volume. First, install the physical hard drive, and then perform
the following steps.
Create Physical Volume from hard drive
It is first necessary to create a new Physical Volume (PV). Use the command below, which
assumes that the new hard drive is assigned as /dev/hdd.
pvcreate /dev/hdd
It is not necessary to create a partition of any kind on the new hard drive. This creation
of the Physical Volume which will be recognized by the Logical Volume Manager can be performed
on a newly installed raw disk or on a Linux partition of type 83. If you are going to use the
entire hard drive, creating a partition first does not offer any particular advantages and uses
disk space for metadata that could otherwise be used as part of the PV.
Extend the existing Volume Group
In this example we will extend an existing volume group rather than creating a new one; you
can choose to do it either way. After the Physical Volume has been created, extend the existing
Volume Group (VG) to include the space on the new PV. In this example the existing Volume Group
is named MyVG01.
vgextend /dev/MyVG01 /dev/hdd
Create the Logical Volume
First create the Logical Volume (LV) from existing free space within the Volume Group. The
command below creates a LV with a size of 50GB. The Volume Group name is MyVG01 and the Logical
Volume Name is Stuff.
lvcreate -L 50G --name Stuff MyVG01
Create the filesystem
Creating the Logical Volume does not create the filesystem. That task must be performed
separately. The command below creates an EXT4 filesystem that fits the newly created Logical
Volume.
mkfs -t ext4 /dev/MyVG01/Stuff
Add a filesystem label
Adding a filesystem label makes it easy to identify the filesystem later in case of a crash
or other disk related problems.
e2label /dev/MyVG01/Stuff Stuff
Mount the filesystem
At this point you can create a mount point, add an appropriate entry to the /etc/fstab file,
and mount the filesystem.
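A minimal sequence, assuming the logical volume created above, a mount point of /Stuff, and the filesystem label added earlier, might look like this:
mkdir /Stuff
echo "LABEL=Stuff  /Stuff  ext4  defaults  1 2" >> /etc/fstab
mount /Stuff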
You should also check to verify the volume has been created correctly. You can use the
df , lvs, and vgs commands to do this.
Resizing a logical volume in an LVM filesystem
The need to resize a filesystem has been around since the beginning of the first versions of
Unix and has not gone away with Linux. It has gotten easier, however, with Logical Volume
Management.
If necessary, install a new hard drive.
Optional: Create a partition on the hard drive.
Create a physical volume (PV) of the complete hard drive or a partition on the hard
drive.
Assign the new physical volume to an existing volume group (VG) or create a new volume
group.
Create one or more logical volumes (LV) from the space in the volume group, or expand an
existing logical volume with some or all of the new space in the volume group.
If you created a new logical volume, create a filesystem on it. If adding space to an
existing logical volume, use the resize2fs command to enlarge the filesystem to fill the
space in the logical volume.
Add appropriate entries to /etc/fstab for mounting the filesystem.
Mount the filesystem.
Example
This example describes how to resize an existing Logical Volume in an LVM environment using
the CLI. It adds about 50GB of space to the /Stuff filesystem. This procedure can be used on a
mounted, live filesystem only with the Linux 2.6 Kernel (and higher) and EXT3 and EXT4
filesystems. I do not recommend that you do so on any critical system, but it can be done and I
have done so many times; even on the root (/) filesystem. Use your judgment.
WARNING: Only the EXT3 and EXT4 filesystems can be resized on the fly on a running, mounted
filesystem. Many other filesystems including BTRFS and ZFS cannot be resized.
Install the hard drive
If there is not enough space on the existing hard drive(s) in the system to add the desired
amount of space, it may be necessary to add a new hard drive and create the space to add to the
Logical Volume. First, install the physical hard drive and then perform the following
steps.
Create a Physical Volume from the hard drive
It is first necessary to create a new Physical Volume (PV). Use the command below, which
assumes that the new hard drive is assigned as /dev/hdd.
pvcreate /dev/hdd
It is not necessary to create a partition of any kind on the new hard drive. This creation
of the Physical Volume which will be recognized by the Logical Volume Manager can be performed
on a newly installed raw disk or on a Linux partition of type 83. If you are going to use the
entire hard drive, creating a partition first does not offer any particular advantages and uses
disk space for metadata that could otherwise be used as part of the PV.
Add PV to existing Volume Group
For this example, we will use the new PV to extend an existing Volume Group. After the
Physical Volume has been created, extend the existing Volume Group (VG) to include the space on
the new PV. In this example, the existing Volume Group is named MyVG01.
vgextend /dev/MyVG01 /dev/hdd
Extend the Logical Volume
Extend the Logical Volume (LV) from existing free space within the Volume Group. The command
below expands the LV by 50GB. The Volume Group name is MyVG01 and the Logical Volume Name is
Stuff.
lvextend -L +50G /dev/MyVG01/Stuff
Expand the filesystem
Extending the Logical Volume will also expand the filesystem if you use the -r option. If
you do not use the -r option, that task must be performed separately. The command below resizes
the filesystem to fit the newly resized Logical Volume.
resize2fs /dev/MyVG01/Stuff
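Alternatively, the extend and resize steps mentioned above can be combined into a single command with the -r option, along these lines:
lvextend -r -L +50G /dev/MyVG01/Stuff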
You should check to verify the resizing has been performed correctly. You can use the
df , lvs, and vgs commands to do this.
Tips
Over the years I have learned a few things that can make logical volume management even
easier than it already is. Hopefully these tips can prove of some value to you.
Use the Extended file systems unless you have a clear reason to use another filesystem.
Not all filesystems support resizing but EXT2, 3, and 4 do. The EXT filesystems are also very
fast and efficient. In any event, they can be tuned by a knowledgeable sysadmin to meet the
needs of most environments if the default tuning parameters do not.
Use meaningful volume and volume group names.
Use EXT filesystem labels.
I know that, like me, many sysadmins have resisted the change to Logical Volume Management.
I hope that this article will encourage you to at least try LVM. I am really glad that I did;
my disk management tasks are much easier since I made the switch.
NixCraft
Use the site's internal search function. With more than a decade of regular updates, there's
gold to be found here -- useful scripts and handy hints that can solve your problem straight
away. This is often the second place I look after Google.
Webmin
This gives you a nice web interface to remotely edit your configuration files. It cuts down on
a lot of time spent having to juggle directory paths and sudo nano , which is
handy when you're handling several customers.
Windows Subsystem for Linux
The reality of the modern workplace is that most employees are on Windows, while the grown-up
gear in the server room is on Linux. So sometimes you find yourself trying to do admin tasks
from (gasp) a Windows desktop.
What do you do? Install a virtual machine? It's actually much faster and far less work to
configure if you install the Windows Subsystem for Linux compatibility layer, now available at
no cost on Windows 10.
This gives you a Bash terminal in a window where you can run Bash scripts and Linux binaries
on the local machine, have full access to both Windows and Linux filesystems, and mount network
drives. It's available in Ubuntu, OpenSUSE, SLES, Debian, and Kali flavors.
mRemoteNG
This is an excellent SSH and remote desktop client for when you have 100+ servers to
manage.
Setting up a network so you don't have to do it again
A poorly planned network is the sworn enemy of the admin who hates working overtime.
IP Addressing Schemes that Scale
The diabolical thing about running out of IP addresses is that, when it happens, the network's
grown large enough that a new addressing scheme is an expensive, time-consuming pain in the
proverbial.
Ain't nobody got time for that!
At some point, IPv6 will finally arrive to save the day. Until then, these
one-size-fits-most IP addressing schemes should keep you going, no matter how many
network-connected wearables, tablets, smart locks, lights, security cameras, VoIP headsets, and
espresso machines the world throws at us.
Linux Chmod Permissions Cheat Sheet
A short but sweet cheat sheet of Bash commands to set permissions across the network. This is
so when Bill from Customer Service falls for that ransomware scam, you're recovering just his
files and not the entire company's.
VLSM Subnet Calculator
Just put in the number of networks you want to create from an address space and the number of
hosts you want per network, and it calculates what the subnet mask should be for
everything.
Single-purpose Linux distributions
Need a Linux box that does just one thing? It helps if someone else has already sweated the
small stuff on an operating system you can install and have ready immediately.
Each of these has, at one point, made my work day so much easier.
Porteus Kiosk
This is for when you want a computer totally locked down to just a web browser. With a little
tweaking, you can even lock the browser down to just one website. This is great for public
access machines. It works with touchscreens or with a keyboard and mouse.
Parted Magic
This is an operating system you can boot from a USB drive to partition hard drives, recover
data, and run benchmarking tools.
IPFire
Hahahaha, I still can't believe someone called a router/firewall/proxy combo "I pee fire."
That's my second favorite thing about this Linux distribution. My favorite is that it's a
seriously solid software suite. It's so easy to set up and configure, and there is a heap of
plugins available to extend it.
What about your top tools and cheat sheets?
So, how about you? What tools, resources, and cheat sheets have you found to make the
workday easier? I'd love to know. Please share in the comments.
"... If you lose a drive in a volume group, you can force the volume group online with the missing physical volume, but you will be unable to open the LV's that were contained on the dead PV, whether they be in whole or in part. ..."
"... So, if you had for instance 10 LV's, 3 total on the first drive, #4 partially on first drive and second drive, then 5-7 on drive #2 wholly, then 8-10 on drive 3, you would be potentially able to force the VG online and recover LV's 1,2,3,8,9,10.. #4,5,6,7 would be completely lost. ..."
"... LVM doesn't really have the concept of a partition it uses PVs (Physical Volumes), which can be a partition. These PVs are broken up into extents and then these are mapped to the LVs (Logical Volumes). When you create the LVs you can specify if the data is striped or mirrored but the default is linear allocation. So it would use the extents in the first PV then the 2nd then the 3rd. ..."
"... As Peter has said the blocks appear as 0's if a PV goes missing. So you can potentially do data recovery on files that are on the other PVs. But I wouldn't rely on it. You normally see LVM used in conjunction with RAIDs for this reason. ..."
"... it's effectively as if a huge chunk of your disk suddenly turned to badblocks. You can patch things back together with a new, empty drive to which you give the same UUID, and then run an fsck on any filesystems on logical volumes that went across the bad drive to hope you can salvage something. ..."
1) How does the system determine what partition to use first?
2) Can I find what disk a file or folder is physically on?
3) If I lose a drive in the LVM, do I lose all data, or just data physically on that disk?
– asked by Luke has no name, Dec 2 '10
The system fills from the first disk in the volume group to the last, unless you
configure striping with extents.
I don't think this is possible, but where I'd start to look is in the lvs/vgs commands
man pages.
If you lose a drive in a volume group, you can force the volume group online with the
missing physical volume, but you will be unable to open the LV's that were contained on the
dead PV, whether they be in whole or in part.
So, if you had for instance 10 LV's, 3 total on
the first drive, #4 partially on first drive and second drive, then 5-7 on drive #2 wholly,
then 8-10 on drive 3, you would be potentially able to force the VG online and recover LV's
1,2,3,8,9,10.. #4,5,6,7 would be completely lost.
– Peter Grace
1) How does the system determine what partition to use first?
LVM doesn't really have the concept of a partition; it uses PVs (Physical Volumes), which can
be a partition. These PVs are broken up into extents, which are then mapped to the LVs
(Logical Volumes). When you create the LVs you can specify if the data is striped or mirrored
but the default is linear allocation. So it would use the extents in the first PV then the 2nd
then the 3rd.
2) Can I find what disk a file or folder is physically on?
You can determine what PVs a LV has allocation extents on. But I don't know of a way to get
that information for an individual file.
3) If I lose a drive in the LVM, do I lose all data, or just data physically on that
disk?
As Peter has said the blocks appear as 0's if a PV goes missing. So you can potentially do
data recovery on files that are on the other PVs. But I wouldn't rely on it. You normally see
LVM used in conjunction with RAIDs for this reason.
So here's a derivative of my question: I have 3x1 TB drives and I want to use 3TB of that
space. What's the best way to configure the drives so I am not splitting my data over
folders/mountpoints? or is there a way at all, other than what I've implied above? –
Luke has no
name Dec 2 '10 at 5:12
If you want to use 3TB and aren't willing to split data over folders/mount points I don't
see any other way. There may be some virtual filesystem solution to this problem like unionfs
although I'm not sure if it would solve your particular problem. But LVM is certainly the
most straight forward and simple solution as such it's the one I'd go with. –
3dinfluence Dec 2
'10 at 14:51
I don't know the answer to #2, so I'll leave that
to someone else. I suspect "no", but I'm willing to be happily surprised.
1 is: you tell it, when you combine the physical volumes into a volume group.
3 is: it's effectively as if a huge chunk of your disk suddenly turned to badblocks. You can
patch things back together with a new, empty drive to which you give the same UUID, and then
run an fsck on any filesystems on logical volumes that went across the bad drive to hope you can
salvage something.
And to the overall, unasked question: yeah, you probably don't really want to do that.
Hi, generally I configure /etc/aliases to forward root messages to my work email address. I
found this useful, because sometimes I become aware of something wrong...
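For example, an /etc/aliases entry along these lines does the forwarding (the address is, of course, made up); with a sendmail- or postfix-style MTA, run newaliases after editing the file:
# /etc/aliases
root: sysadmin@example.com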
I create specific email filter on my MUA to put everything with "fail" in subject in my
ALERT subfolder, "update" or "upgrade" in my UPGRADE subfolder, and so on.
It is a bit annoying, because with more than 50 servers there is a lot of "noise", anyway.
Can I recover a RAID 5 array if two drives have failed?Ask Question Asked 9 years ago Active
2 years, 3 months ago Viewed 58k times I have a Dell 2600 with 6 drives configured in a
RAID 5 on a PERC 4 controller. 2 drives failed at the same time, and according to what I know a
RAID 5 is recoverable if 1 drive fails. I'm not sure if the fact I had six drives in the array
might save my skin.
I bought 2 new drives and plugged them in but no rebuild happened as I expected. Can anyone
shed some light?
Regardless of how many drives are in use, a RAID 5 array only allows
for recovery in the event that just one disk at a time fails.
What 3molo says is a fair point but even so, not quite correct I think - if two disks in a
RAID5 array fail at the exact same time then a hot spare won't help, because a hot spare
replaces one of the failed disks and rebuilds the array without any intervention, and a rebuild
isn't possible if more than one disk fails.
For now, I am sorry to say that your options for recovering this data are going to involve
restoring a backup.
For the future you may want to consider one of the more robust forms of RAID (not sure what
options a PERC4 supports) such as RAID 6 or a nested RAID array . Once you get above a
certain amount of disks in an array you reach the point where the chance that more than one of
them can fail before a replacement is installed and rebuilt becomes unacceptably high.
– Rob Moir, Sep 21 '10
Thanks Robert, I will take this advice into consideration when I rebuild the server;
lucky for me I have full backups that are less than 6 hours old. Regards – bonga86 Sep 21 '10 at 15:00
If this is (somehow) likely to occur again in the future, you may consider RAID6. Same
idea as RAID5 but with two Parity disks, so the array can survive any two disks failing.
– gWaldo Sep 21
'10 at 15:04
g man(mmm...), i have recreated the entire system from scratch with a RAID 10. So
hopefully if 2 drives go out at the same time again the system will still function? Otherwise
everything has been restored and working thanks for ideas and input – bonga86 Sep 23 '10 at
11:34
Depends which two drives go... RAID 10 means, for example, 4 drives in two mirrored pairs
(2 RAID 1 mirrors) striped together (RAID 0) yes? If you lose both disks in 1 of the mirrors
then you've still got an outage. – Rob Moir Sep 23 '10 at 11:43
You can try to force one or both of the failed disks to be online from the BIOS interface of the controller.
Then check that the data and the file system are consistent. – Mircea Vutcovici, Sep 21 '10
Dell systems, especially, in my experience, built on PERC3 or PERC4 cards had a nasty
tendency to simply have a hiccup on the SCSI bus which would knock two or more drives
off-line. A drive being offline does NOT mean it failed. I've never had two drives fail at
the same time, but probably a half dozen times, I've had two or more drives go off-line. I
suggest you try Mircea's suggestion first... could save you a LOT of time. – Multiverse IT Sep 21 '10 at
16:32
Hey guys, I tried the force option many times. Both "failed" drives would then come back
online, but when I do a restart it says logical drive: degraded, and obviously because of that
the system still could not boot. – bonga86 Sep 23 '10 at 11:27
The direct answer is "No". The indirect answer is "It depends".
Mainly it depends on whether the disks are partially out of order, or completely. If they're
only partially broken, you can give it a try -- I would copy (using a tool like ddrescue) both failed
disks. Then I'd try to run the bunch of disks using Linux SoftRAID -- re-trying with the proper
order of disks and stripe-size in read-only mode and counting CRC mismatches. It's quite
doable, I should say -- this text in Russian mentions 12 disk RAID50's
recovery using LSR, for example. – poige, Jun 8 '12
It is possible if the RAID was set up with one spare drive, and one of your failed disks died
before the second one. In that case, you just need to try to reconstruct the array virtually
with third-party software. I found a small article about this process on
this page: http://www.angeldatarecovery.com/raid5-data-recovery/
And, if you really need the data on one of the dead drives, you can send it to a recovery shop. With those
images you can reconstruct the RAID properly, with a good chance of success.
In this article we will talk about foremost , a very useful open source
forensic utility which is able to recover deleted files using the technique called data
carving . The utility was originally developed by the United States Air Force Office of
Special Investigations, and is able to recover several file types (support for specific file
types can be added by the user, via the configuration file). The program can also work on
partition images produced by dd or similar tools.
Software Requirements and Linux Command Line Conventions
Category: Requirements, Conventions or Software Version Used
System: Distribution-independent
Software: The "foremost" program
Other: Familiarity with the command line interface
Conventions: # - requires given Linux commands to be executed with root privileges, either directly as a root user or by use of the sudo command; $ - requires given Linux commands to be executed as a regular non-privileged user
Installation
Since foremost is already present in all the major Linux distributions
repositories, installing it is a very easy task. All we have to do is to use our favorite
distribution package manager. On Debian and Ubuntu, we can use apt :
$ sudo apt install foremost
In recent versions of Fedora, we use the dnf package manager to install
packages , the dnf is a successor of yum . The name of the
package is the same:
$ sudo dnf install foremost
If we are using ArchLinux, we can use pacman to install foremost .
The program can be found in the distribution "community" repository:
$ sudo pacman -S foremost
WARNING
No matter which file recovery tool or process you are going to use to recover your files,
before you begin it is recommended to perform a low-level hard drive or partition backup,
hence avoiding an accidental data overwrite. In this case you may re-try to recover your
files even after an unsuccessful recovery attempt. Check the following dd command guide on
how to perform a low-level hard drive or partition backup.
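For instance, a low-level backup of the partition used in the examples below could be taken with something like the following (the destination path is just an example; make sure it is on a different disk):
$ sudo dd if=/dev/sdb1 of=/path/to/backup/sdb1.img bs=4M status=progress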
The foremost utility tries to recover and reconstruct files on the base of
their headers, footers and data structures, without relying on filesystem metadata
. This forensic technique is known as file carving . The program supports various
types of files, as for example:
jpg
gif
png
bmp
avi
exe
mpg
wav
riff
wmv
mov
pdf
ole
doc
zip
rar
htm
cpp
The most basic way to use foremost is by providing a source to scan for deleted
files (it can be either a partition or an image file, as those generated with dd
). Let's see an example. Imagine we want to scan the /dev/sdb1 partition: before
we begin, a very important thing to remember is to never store retrieved data on the same
partition we are retrieving the data from, to avoid overwriting delete files still present on
the block device. The command we would run is:
$ sudo foremost -i /dev/sdb1
By default, the program creates a directory called output inside the directory
we launched it from and uses it as destination. Inside this directory, a subdirectory for each
supported file type we are attempting to retrieve is created. Each directory will hold the
corresponding file type obtained from the data carving process:
When foremost completes its job, empty directories are removed. Only the ones
containing files are left on the filesystem: this let us immediately know what type of files
were successfully retrieved. By default the program tries to retrieve all the supported file
types; to restrict our search, we can, however, use the -t option and provide a
list of the file types we want to retrieve, separated by a comma. In the example below, we
restrict the search only to gif and pdf files:
$ sudo foremost -t gif,pdf -i /dev/sdb1
https://www.youtube.com/embed/58S2wlsJNvo
In this video we will test the forensic data recovery program Foremost to
recover a single png file from /dev/sdb1 partition formatted with the
EXT4 filesystem.
As we already said, if a destination is not explicitly declared, foremost creates an
output directory inside our cwd . What if we want to specify an
alternative path? All we have to do is to use the -o option and provide said path
as an argument. If the specified directory doesn't exist, it is created; if it exists but is not
empty, the program complains:
ERROR: /home/egdoc/data is not empty
Please specify another directory or run with -T.
To solve the problem, as suggested by the program itself, we can either use another
directory or re-launch the command with the -T option. If we use the
-T option, the output directory specified with the -o option is
timestamped. This makes it possible to run the program multiple times with the same destination.
In our case the directory that would be used to store the retrieved files would be:
/home/egdoc/data_Thu_Sep_12_16_32_38_2019
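For example, re-using the options from earlier, a command like this would produce such a timestamped destination:
$ sudo foremost -t gif,pdf -i /dev/sdb1 -o /home/egdoc/data -T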
The configuration file
The foremost configuration file can be used to specify file formats not
natively supported by the program. Inside the file we can find several commented examples
showing the syntax that should be used to accomplish the task. Here is an example involving the
png type (the lines are commented since the file type is supported by
default):
# PNG (used in web pages)
# (NOTE THIS FORMAT HAS A BUILTIN EXTRACTION FUNCTION)
# png y 200000 \x50\x4e\x47? \xff\xfc\xfd\xfe
The pieces of information to provide in order to add support for a file type are, from left to right,
separated by a tab character: the file extension ( png in this case), whether the
header and footer are case sensitive ( y ), the maximum file size in bytes (
200000 ), the header ( \x50\x4e\x47? ) and the footer (
\xff\xfc\xfd\xfe ). Only the latter is optional and can be omitted.
If the path of the configuration file is not explicitly provided with the -c
option, a file named foremost.conf is searched and used, if present, in the
current working directory. If it is not found the default configuration file,
/etc/foremost.conf is used instead.
Adding the support for a file type
By reading the examples provided in the configuration file, we can easily add support for a
new file type. In this example we will add support for flac audio files.
Flac (Free Lossless Audio Coded) is a non-proprietary lossless audio format which
is able to provide compressed audio without quality loss. First of all, we know that the header
of this file type in hexadecimal form is 66 4C 61 43 00 00 00 22 (
fLaC in ASCII), and we can verify it by using a program like hexdump
on a flac file:
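For instance, with a hypothetical file name, the check could look like this:
$ hexdump -C -n 8 song.flac
00000000  66 4c 61 43 00 00 00 22                           |fLaC..."|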
As you can see the file signature is indeed what we expected. Here we will assume a maximum
file size of 30 MB, or 30000000 Bytes. Let's add the entry to the file:
flac y 30000000 \x66\x4c\x61\x43\x00\x00\x00\x22
The footer signature is optional so here we didn't provide it. The program
should now be able to recover deleted flac files. Let's verify it. To test that
everything works as expected I previously placed, and then removed, a flac file from the
/dev/sdb1 partition, and then proceeded to run the command:
As expected, the program was able to retrieve the deleted flac file (it was the only file on
the device, on purpose), although it renamed it with a random string. The original filename
cannot be retrieved because, as we know, file metadata is contained in the filesystem, and not
in the file itself:
The audit.txt file contains information about the actions performed by the program, in this
case:
Foremost version 1.5.7 by Jesse Kornblum, Kris
Kendall, and Nick Mikus
Audit File
Foremost started at Thu Sep 12 23:47:04 2019
Invocation: foremost -i /dev/sdb1 -o /home/egdoc/Documents/output
Output directory: /home/egdoc/Documents/output
Configuration file: /etc/foremost.conf
------------------------------------------------------------------
File: /dev/sdb1
Start: Thu Sep 12 23:47:04 2019
Length: 200 MB (209715200 bytes)
Num Name (bs=512) Size File Offset Comment
0: 00020482.flac 28 MB 10486784
Finish: Thu Sep 12 23:47:04 2019
1 FILES EXTRACTED
flac:= 1
------------------------------------------------------------------
Foremost finished at Thu Sep 12 23:47:04 2019
Conclusion
In this article we learned how to use foremost, a forensic program able to retrieve deleted
files of various types. We learned that the program works by using a technique called
data carving, and relies on file signatures to achieve its goal. We saw an
example of the program's usage and we also learned how to add support for a specific file
type using the syntax illustrated in the configuration file. For more information about the
program, please consult its manual page.
Delete Files That Have Not Been Accessed For A Given Time On Linux
by sk · Published
September 16, 2019 · Updated September 17, 2019
We have already covered how to manually find and
delete files older than X days using the "find" command in Linux. Today we will do the same,
but only if the files have not been accessed for a certain period of time. Say hello to
"Tmpwatch", a command line utility to recursively delete files that haven't been accessed for
a given time. It will also delete empty directories.
By default, Tmpwatch decides which files/directories should be deleted based on their
atime (access time). You can, of course, change this behaviour to use ctime (inode change
time) or mtime (modification time) instead. Tmpwatch is typically used to clean out the
contents of the /tmp directory and other unused/unwanted stuff such as old log files.
An
important warning!!
Before you start using this tool, you must know that Tmpwatch deletes files and directories
recursively based on the given criteria. Do not run tmpwatch in / (the root directory) . This
directory contains important files which are required to keep the Linux system running. If
you're not careful, tmpwatch will delete any important system files and directories that
match the given criteria anywhere under the root directory. There is no safeguard built
into Tmpwatch to prevent you from running it on the root directory, and there is no way to
undo the operation. You have been warned!
Install Tmpwatch
Tmpwatch is available in the default repositories of most Linux distributions.
On Fedora, you can install it using command:
$ sudo dnf install tmpwatch
On CentOS:
$ sudo yum install tmpwatch
On openSUSE:
$ sudo zypper install tmpwatch
On Debian and its derivatives like Ubuntu, Tmpwatch is available under a different name, i.e.
Tmpreaper . Tmpreaper is mostly based on tmpwatch-1.2/1.4 by Erik Troan from Red Hat.
Tmpreaper is now maintained for Debian by Paul Slootman .
To install tmpreaper on Debian, Ubuntu, Linux Mint, run:
$ sudo apt install tmpreaper
Delete Files That Have Not Been Accessed For A Given Time Using Tmpwatch / Tmpreaper
Usage of Tmpwatch and Tmpreaper is almost the same. If you're on a Debian-based system, replace
"tmpwatch" with "tmpreaper" in the following examples.
Delete files which have not been accessed for more than X days
To delete files that have not been accessed for more than 10 days, run:
tmpwatch 10d /var/log/
The above command will delete all the files and empty directories in the /var/log/ folder
which have not been accessed for more than 10 days.
Delete files which have not been modified for more than X days
Like I already said, Tmpwatch deletes files based on their access time. You can also
delete files based on their modification time (mtime) using the -m option.
For example, the following command will delete files which have not been modified for 10 days
in the /var/log/ folder.
tmpwatch -m 10d /var/log/
Here, -m refers to the modification time and d is the <time_spec> parameter. The
<time_spec> parameter defines the age threshold for removing files. You can use the
following time_spec parameters when removing files.
d – for days,
h – for hours,
m – for minutes,
s – for seconds.
Hours is the default.
For instance, to delete files which have not been modified for the past 10 hours , simply run:
tmpwatch -m 10 /var/log/
As you might have noticed, I haven't used the time_spec parameter in the above command. Because
h (for hours) is the default, we don't have to mention it when deleting files that
haven't been modified for the past X hours.
Delete Symlinks
If you want to delete symlinks, not just regular files and directories, use -s option like
below.
tmpwatch -s 10 /var/log/
Delete all files
To remove all file types, not just regular files, symlinks, and directories, use the -a
option.
tmpwatch -a 10 /var/log/
The above command will delete all types of files including regular files, symlinks, and
directories in the /var/log/ folder.
Exclude directories from deletion
Sometimes you might want to delete files but not directories. If so, the command would
be:
tmpwatch -am 10 --nodirs /var/log/
The above command will delete all files (but no directories) which have not been modified for
the past 10 hours.
Perform a test run without actually deleting anything
Sometimes, you might want to view which files are actually going to be deleted. This will be
helpful when running Tmpwatch on an important directory. If so, run Tmpwatch in test mode with
-t option.
tmpwatch -t 30 /var/log/
Sample output from CentOS 7 server:
removing file /var/log/wtmp
removing directory /var/log/ppp if empty
removing directory /var/log/tuned if empty
removing directory /var/log/anaconda if empty
removing file /var/log/dmesg.old
removing file /var/log/boot.log
removing file /var/log/dnf.librepo.log
On Debian-based systems, you will see an output like below.
$ tmpreaper -t 30 /var/log/
(PID 1803) Pretending to clean up directory `/var/log/'.
(PID 1804) Pretending to clean up directory `apache2'.
Pretending to remove file `apache2/error.log'.
Pretending to remove file `apache2/access.log'.
Pretending to remove file `apache2/other_vhosts_access.log'.
(PID 1804) Back from recursing down `apache2'.
(PID 1804) Pretending to clean up directory `dbconfig-common'.
Pretending to remove file `dbconfig-common/dbc.log'.
(PID 1804) Back from recursing down `dbconfig-common'.
(PID 1804) Pretending to clean up directory `dist-upgrade'.
(PID 1804) Back from recursing down `dist-upgrade'.
(PID 1804) Pretending to clean up directory `lxd'.
(PID 1804) Back from recursing down `lxd'.
Pretending to remove file `/var/log//cloud-init.log'.
(PID 1804) Pretending to clean up directory `landscape'.
Pretending to remove file `landscape/sysinfo.log'.
(PID 1804) Back from recursing down `landscape'.
[...]
This only simulates the operation and doesn't actually delete anything. Tmpwatch simply
performs a dry run and shows you in the output which files would be deleted.
Force file deletion
If you want to forcibly delete the files, use -f option.
tmpwatch -f 10h /var/log/
Normally, files owned by the current user that have no write access are not removed. The -f
option will delete them as well.
Skip certain files from deletion
Tmpreaper has an option to skip files from deletion. This is useful when you want to
keep certain types of files and delete everything else. If so, use the --protect option like
below.
tmpreaper --protect '*.txt' -t 10h /var/log/
This command will exclude all files that have a .txt extension from deletion.
Sample output:
(PID 2623) Pretending to clean up directory `/var/log/'.
(PID 2624) Pretending to clean up directory `apache2'.
Pretending to remove file `apache2/error.log'.
Pretending to remove file `apache2/access.log'.
Pretending to remove file `apache2/other_vhosts_access.log'.
(PID 2624) Back from recursing down `apache2'.
(PID 2624) Pretending to clean up directory `dbconfig-common'.
Pretending to remove file `dbconfig-common/dbc.log'.
(PID 2624) Back from recursing down `dbconfig-common'.
(PID 2624) Pretending to clean up directory `dist-upgrade'.
(PID 2624) Back from recursing down `dist-upgrade'.
Pretending to remove empty directory `dist-upgrade'.
Entry matching `--protect' pattern skipped. `ostechnix.txt'
(PID 2624) Pretending to clean up directory `lxd'.
As you can see, Tmpreaper skips the *.txt files from deletion.
This option is not available in Tmpwatch, by the way.
Setting up cron job to delete
files periodically
You may not want to run Tmpwatch/Tmpreaper manually all the time. In that case, you could
set up a cron job to automate the cleanup process.
When you install Tmpreaper , it creates a daily cron job ( /etc/cron.daily/tmpreaper ).
This job reads its options from the /etc/tmpreaper.conf file and acts accordingly. Open the
file and change the values as per your requirements. By default, Tmpreaper deletes files
that are more than 7 days old. You can, however, change this by modifying the value "TMPREAPER_TIME=7d" in
the tmpreaper.conf file.
If you use "Tmpwatch", you need to create the cron job manually and put the cron entry in
it.
# crontab -e
Add the following line:
0 1 * * * /usr/sbin/tmpwatch 30d /var/log/
As per the above cron job, Tmpwatch will run every day at 1 AM and delete files which have not
been accessed for the past 30 days.
For more details about setting up cron jobs, refer to the following link.
Artistic Style is a source code indenter, formatter, and beautifier for the C, C++, C++/CLI,
Objective‑C, C# and Java programming languages.
When indenting source code, we as programmers have a tendency to use both spaces and tab
characters to create the wanted indentation. Moreover, some editors by default insert spaces
instead of tabs when pressing the tab key. Other editors (Emacs for example) have the ability
to "pretty up" lines by automatically setting up the white space before the code on the line,
possibly inserting spaces in code that up to now used only tabs for indentation.
The NUMBER of spaces for each tab character in the source code can change between editors
(unless the user sets up the number to his liking...). One of the standard problems programmers
face when moving from one editor to another is that code containing both spaces and tabs, which
was perfectly indented, suddenly becomes a mess to look at. Even if you as a programmer take
care to ONLY use spaces or tabs, looking at other people's source code can still be
problematic.
To address this problem, Artistic Style was created – a filter written in C++ that
automatically re-indents and re-formats C / C++ / Objective‑C / C++/CLI / C# / Java
source files. It can be used from a command line, or it can be incorporated as a library in
another program.
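As a rough illustration (the file name and the chosen options here are assumptions, not a prescribed setup), a typical invocation that re-indents a C++ source file with the Allman brace style and four-space indentation would look like:
astyle --style=allman --indent=spaces=4 main.cpp
By default astyle rewrites the file in place and keeps a backup of the original with an .orig suffix.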
The granddaddy of HTML tools, with support for modern standards.
There used to be a fork called tidy-html5 which has since become the official version. Here is its
GitHub repository.
Tidy is a console application for Mac OS X, Linux, Windows, UNIX, and more. It corrects and
cleans up HTML and XML documents by fixing markup errors and upgrading legacy code to modern
standards.
For your needs, here is the command line to call Tidy:
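A typical invocation (the file name and exact flags here are illustrative assumptions, not necessarily the original author's command) looks like:
tidy -indent -modify -quiet page.html
Here -indent pretty-prints the markup, -modify writes the changes back to the file, and -quiet suppresses the summary output.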
You may also use -ctime +N , which matches (and, in this example, deletes) files
whose status was last changed more than N days ago (the file attributes/metadata and/or file content
was modified), as opposed to -mtime , which only matches files based on when
their content was last modified:
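A sketch of that kind of invocation (the path and the 30-day threshold are illustrative assumptions) could be:
find /var/log -type f -ctime +30 -delete
The -ctime +30 test matches files whose status changed more than 30 days ago; swapping in -mtime +30 would match on content modification time instead.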
DiffMerge is a cross-platform GUI application for comparing and merging files. It has two
functionality engines, the Diff engine which shows the difference between two files, which
supports intra-line highlighting and editing and a Merge engine which outputs the changed lines
between three files.
Meld is a lightweight GUI diff and merge tool. It enables users to compare files,
directories plus version controlled programs. Built specifically for developers, it comes with
the following features:
Two-way and three-way comparison of files and directories
Update of file comparison as a user types more words
Makes merges easier using auto-merge mode and actions on changed blocks
Easy comparisons using visualizations
Supports Git, Mercurial, Subversion, Bazaar plus many more
Diffuse is another popular, free, small and simple GUI diff and merge tool that you can use
on Linux. Written in Python, it offers two major functionalities, that is, file comparison and
version control, allowing file editing, merging of files and also outputting the differences
between files.
You can view a comparison summary, select lines of text in files using a mouse pointer,
match lines in adjacent files and edit the files directly. Other features include:
Syntax highlighting
Keyboard shortcuts for easy navigation
Supports unlimited undo
Unicode support
Supports Git, CVS, Darcs, Mercurial, RCS, Subversion, SVK and Monotone
XXdiff is a free, powerful file and directory comparator and merge tool that runs on Unix-like
operating systems such as Linux, Solaris, HP/UX, IRIX and DEC Tru64. One limitation of XXdiff
is its lack of support for Unicode files and inline editing of diff files.
It has the following list of features:
Shallow and recursive comparison of two or three files, or of two directories
Horizontal difference highlighting
Interactive merging of files and saving of resulting output
Supports merge reviews/policing
Supports external diff tools such as GNU diff, SIG diff, Cleareddiff and many more
Extensible using scripts
Fully customizable using resource file plus many other minor features
KDiff3 is yet another cool, cross-platform diff and merge tool made from KDevelop . It works
on all major platforms, including Linux, Mac OS X and Windows.
It can compare or merge two to three files or directories and has the following notable
features:
Indicates differences line by line and character by character
Supports auto-merge
In-built editor to deal with merge-conflicts
Supports Unicode, UTF-8 and many other codecs
Allows printing of differences
Windows explorer integration support
Also supports auto-detection via byte-order-mark "BOM"
TkDiff is also a cross-platform, easy-to-use GUI wrapper for the Unix diff tool. It provides
a side-by-side view of the differences between two input files. It can run on Linux, Windows
and Mac OS X.
Additionally, it has some other exciting features including diff bookmarks, a graphical map
of differences for easy and quick navigation plus many more.
Having read this review of some of the best file and directory comparison and merge tools,
you probably want to try out some of them. These may not be the only diff tools you can find on
Linux, but they are known to offer some of the best features. You may also want to let us know
of any other diff tools out there that you have tested and think deserve to be mentioned among
the best.
The Bash options for debugging are turned off by default, but once they are turned on by
using the set command, they stay on until explicitly turned off. If you are not sure which
options are enabled, you can examine the $- variable to see the current state of
all the options.
$ echo $-
himBHs
$ set -xv && echo $-
himvxBHs
There is another useful switch we can use to help us find variables referenced without
having any value set. This is the -u switch, and just like -x and
-v it can also be used on the command line, as we see in the following
example:
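A minimal reconstruction of the kind of session being described (the exact echo text is an assumption):
$ level=7
$ echo "The score is $score"
The score is
$ set -u
$ echo "The score is $score"
bash: score: unbound variable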
We mistakenly assigned a value of 7 to the variable called "level" then tried to echo a
variable named "score" that simply resulted in printing nothing at all to the screen.
Absolutely no debug information was given. Setting our -u switch allows us to see
a specific error message, "score: unbound variable" that indicates exactly what went wrong.
We can use those options in short Bash scripts to give us debug information to identify
problems that do not otherwise trigger feedback from the Bash interpreter. Let's walk through a
couple of examples.
#!/bin/bash
read -p "Path to be added: " $path
if [ "$path" = "/home/mike/bin" ]; then
echo $path >> $PATH
echo "new path: $PATH"
else
echo "did not modify PATH"
fi
In the example above we run the addpath script normally and it simply does not modify our
PATH . It does not give us any indication of why or clues to mistakes made.
Running it again using the -x option clearly shows us that the left side of our
comparison is an empty string. $path is an empty string because we accidentally
put a dollar sign in front of "path" in our read statement. Sometimes we look right at a
mistake like this and it doesn't look wrong until we get a clue and think, "Why is
$path evaluated to an empty string?"
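A minimal sketch of the kind of loop being discussed next (the loop variable i and the values are assumptions; j is intentionally never assigned):
#!/bin/bash
for i in 1 2 3; do
    echo "$i $j"    # j was never set, so only one value appears per line
done
Run normally it prints one value per line; run with bash -u (or after set -u) it stops with a "j: unbound variable" error.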
Looking at this next example, we also get no indication of an error from the interpreter. We
only get one value printed per line instead of two. This is not an error that will halt
execution of the script, so we're left to simply wonder without being given any clues. Using
the -u switch, we immediately get a notification that our variable j
is not bound to a value. So these options are real time savers when we make mistakes that do not
result in actual errors from the Bash interpreter's point of view.
Now surely you are thinking that sounds fine, but we seldom need help debugging mistakes
made in one-liners at the command line or in short scripts like these. We typically struggle
with debugging when we deal with longer and more complicated scripts, and we rarely need to set
these options and leave them set while we run multiple scripts. Setting -xv
options and then running a more complex script will often add confusion by doubling or tripling
the amount of output generated.
Fortunately we can use these options in a more precise way by placing them inside our
scripts. Instead of explicitly invoking a Bash shell with an option from the command line, we
can set an option by adding it to the shebang line instead.
#!/bin/bash -x
This will set the -x option for the entire file or until it is unset during the
script execution, allowing you to simply run the script by typing the filename instead of
passing it to Bash as a parameter. A long script or one that has a lot of output will still
become unwieldy using this technique however, so let's look at a more specific way to use
options.
For a more targeted approach, surround only the suspicious blocks of code with the options
you want. This approach is great for scripts that generate menus or detailed output, and it is
accomplished by using the set keyword with plus or minus once again.
#!/bin/bash
read -p "Path to be added: " $path
set -xv
if [ "$path" = "/home/mike/bin" ]; then
echo $path >> $PATH
echo "new path: $PATH"
else
echo "did not modify PATH"
fi
set +xv
We surrounded only the blocks of code we suspect in order to reduce the output, making our
task easier in the process. Notice we turn on our options only for the code block containing
our if-then-else statement, then turn off the option(s) at the end of the suspect block. We can
turn these options on and off multiple times in a single script if we can't narrow down the
suspicious areas, or if we want to evaluate the state of variables at various points as we
progress through the script. There is no need to turn off an option if we want it to continue
for the remainder of the script execution.
For completeness' sake we should also mention that there are debuggers written by third
parties that will allow us to step through the code execution line by line. You might want to
investigate these tools, but most people find that they are not actually needed.
As seasoned programmers will suggest, if your code is too complex to isolate suspicious
blocks with these options then the real problem is that the code should be refactored. Overly
complex code means bugs can be difficult to detect and maintenance can be time consuming and
costly.
One final thing to mention regarding Bash debugging options is that a file globbing option
also exists and is set with -f . Setting this option turns off globbing
(expansion of wildcards to generate file names) while it is enabled. The -f
option can be used as a switch at the command line with bash, after the shebang in a file or, as
in this example, to surround a block of code.
#!/bin/bash
echo "ignore fileglobbing option turned off"
ls *
echo "ignore file globbing option set"
set -f
ls *
set +f
There are more involved techniques worth considering if your scripts are complicated,
including using an assert function as mentioned earlier. One such method to keep in mind is the
use of trap. Shell scripts allow us to trap signals and do something at that point.
A simple but useful example you can use in your Bash scripts is to trap on EXIT
.
#!/bin/bash
trap 'echo score is $score, status is $status' EXIT
if [ -z "$1" ]; then
status="default"
else
status=$1
fi
score=0
if [ ${USER} = 'superman' ]; then
score=99
elif [ $# -gt 1 ]; then
score=$2  # assuming the score was passed as the second argument
fi
As you can see just dumping the current values of variables to the screen can be useful to
show where your logic is failing. The EXIT signal obviously does not need an
explicit exit statement to be generated; in this case the echo
statement is executed when the end of the script is reached.
Another useful trap to use with Bash scripts is DEBUG . This happens after
every statement, so it can be used as a brute force way to show the values of variables at each
step in the script execution.
#!/bin/bash
trap 'echo "line ${LINENO}: score is $score"' DEBUG
score=0
if [ "${USER}" = "mike" ]; then
let "score += 1"
fi
let "score += 1"
if [ "" = "7" ]; then
score=7
fi
exit 0
When you notice your Bash script not behaving as expected and the reason is not clear to you
for whatever reason, consider what information would be useful to help you identify the cause
then use the most comfortable tools available to help you pinpoint the issue. The xtrace option
-x is easy to use and probably the most useful of the options presented here, so
consider trying it out next time you're faced with a script that's not doing what you thought
it would.
If you want to match the pattern regardless of its case (capital letters or lowercase
letters) you can set the nocasematch shell option with the shopt builtin. You can do
this as the first line of your script. Since the script runs in a subshell it won't affect
your normal environment.
#!/bin/bash
shopt -s nocasematch
read -p "Name a Star Trek character: " CHAR
case $CHAR in
"Seven of Nine" | Neelix | Chokotay | Tuvok | Janeway )
echo "$CHAR was in Star Trek Voyager"
;;&
Archer | Phlox | Tpol | Tucker )
echo "$CHAR was in Star Trek Enterprise"
;;&
Odo | Sisko | Dax | Worf | Quark )
echo "$CHAR was in Star Trek Deep Space Nine"
;;&
Worf | Data | Riker | Picard )
echo "$CHAR was in Star Trek The Next Generation" && echo "/etc/redhat-release"
;;
*) echo "$CHAR is not in this script."
;;
esac
The Linux exec command is a bash builtin and a very interesting utility. It is not something most people who are new to Linux know.
Most seasoned admins understand it but only use it occasionally. If you are a developer, programmer or DevOps engineer it is probably
something you use more often. Let's take a deep dive into the builtin exec command, what it does and how to use it.
In order to understand the exec command, you need a fundamental understanding of how sub-shells work.
... ... ...
What the Exec Command Does
In its most basic function the exec command changes the default behavior of creating a sub-shell to run a command. If you run
exec followed by a command, that command will REPLACE the original process; it will NOT create a sub-shell.
An additional feature of the exec command is redirection and manipulation of file descriptors. Explaining redirection and file
descriptors is outside the scope of this tutorial. If these are new to you, please read "Linux IO, Standard Streams and Redirection"
to get acquainted with these terms and functions.
In the following sections we will expand on both of these functions and try to demonstrate how to use them.
How to Use the Exec Command with Examples
Let's look at some examples of how to use the exec command and its options.
Basic Exec Command Usage – Replacement of Process
If you call exec and supply a command without any options, it simply replaces the shell with that command.
Let's run an experiment. First, I ran the ps command to find the process id of my second terminal window. In this case it was
17524. I then ran "exec tail" in that second terminal and checked the ps command again. If you look at the screenshot below, you
will see the tail process replaced the bash process (same process ID).
Screenshot 3
Since the tail command replaced the bash shell process, the shell will close when the tail command terminates.
Exec Command Options
If the -l option is supplied, exec adds a dash at the beginning of the first (zeroth) argument given. So if we ran the following
command:
exec -l tail -f /etc/redhat-release
It would produce the following output in the process list. Notice the highlighted dash in the CMD column.
The -c option causes the supplied command to run with an empty environment. Environment variables like PATH are cleared before the
command is run.
Let's try an experiment. We know that the printenv command prints all the settings for a user's environment. So here we will open
a new bash process and run the printenv command to show we have some variables set. We will then run printenv again, but this time
with the exec -c option.
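A minimal sketch of that experiment (the command sequence is an assumption based on the description above):
$ bash                 # start a new bash process so exec replaces it, not the login shell
$ printenv | wc -l     # a non-zero count: the environment is populated
$ exec -c printenv     # printenv replaces this shell and runs with an empty environment,
                       # so it prints nothing, and the shell closes when printenv exits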
In the example above you can see that an empty environment is used when exec is run with the -c option. This is why there was no
output from the printenv command when it was run with exec.
The last option, -a [name], will pass name as the first argument to the command. The command will still run as expected,
but the name of the process will change. In this next example we opened a second terminal and ran the following command:
exec -a PUTORIUS tail -f /etc/redhat-release
Here is the process list showing the results of the above command:
Screenshot 5
As you can see, exec passed PUTORIUS as the first argument to the command, therefore it shows in the process list with that name.
Using the Exec Command for Redirection & File Descriptor Manipulation
The exec command is often used for redirection. When a file descriptor is redirected with exec it affects the current shell. It
will exist for the life of the shell or until it is explicitly stopped.
If no command is specified, redirections may be used to affect the current shell environment. – Bash Manual
Here are some examples of how to use exec for redirection and manipulating file descriptors. As we stated above, a deep dive into
redirection and file descriptors is outside the scope of this tutorial. Please read "Linux IO, Standard Streams and Redirection"
for a good primer and see the resources section for more information.
Redirect all standard output (STDOUT) to a file:
exec >file
In the example animation below, we use exec to redirect all standard output to a file. We then enter some commands that should
generate some output. We then use exec to redirect STDOUT to /dev/tty, restoring standard output to the terminal. This effectively
stops the redirection. Using the cat command we can see that the file contains all the redirected output.
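A minimal sketch of that sequence (the file name /tmp/captured.txt is an illustrative assumption):
exec > /tmp/captured.txt   # from here on, the shell's standard output goes to the file
ls
date
exec > /dev/tty            # restore standard output to the terminal, ending the redirection
cat /tmp/captured.txt      # shows the output produced by the commands above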
Open a file as file descriptor 6 for writing:
exec 6> file2write
Open file as file descriptor 8 for reading:
exec 8< file2read
Copy file descriptor 5 to file descriptor 7:
exec 7<&5
Close file descriptor 8:
exec 8<&-
Conclusion
In this article we covered the basics of the exec command. We discussed how to use it for process replacement, redirection and
file descriptor manipulation.
In the past I have seen exec used in some interesting ways. It is often used as a wrapper script for starting other binaries.
Using process replacement you can call a binary and when it takes over there is no trace of the original wrapper script in the process
table or memory. I have also seen many System Administrators use exec when transferring work from one script to another. If you call
a script inside of another script the original process stays open as a parent. You can use exec to replace that original script.
I am sure there are people out there using exec in some interesting ways. I would love to hear your experiences with exec. Please
feel free to leave a comment below with anything on your mind.
Type the following command to display the seconds since the epoch:
date +%s
Sample outputs: 1268727836
Convert Epoch To Current Time
Type the command:
date -d @Epoch
date -d @1268727836
date -d "1970-01-01 1268727836 sec GMT"
Sample outputs:
Tue Mar 16 13:53:56 IST 2010
Please note that the @ feature only works with a recent version of date (GNU coreutils v5.3.0+).
To convert a number of seconds back to a more readable form, use a command like this:
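Presumably along the lines of the conversion shown above, for example:
date -d @1268727836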
In ksh93 however, the argument is taken as a date expression where various
and hardly documented formats are supported.
For a Unix epoch time, the syntax in ksh93 is:
printf '%(%F %T)T\n' '#1234567890'
ksh93 however seems to use its own algorithm for the timezone and can get it
wrong. For instance, in Britain, it was summer time all year in 1970, but:
Time conversion using Bash
This article shows how you can obtain the UNIX epoch time
(number of seconds since 1970-01-01 00:00:00 UTC) using the Linux bash "date" command. It also
shows how you can convert a UNIX epoch time to a human-readable time.
Obtain UNIX epoch time using bash
Obtaining the UNIX epoch time using bash is easy. Use the built-in date command and instruct it
to output the number of seconds since 1970-01-01 00:00:00 UTC. You can do this by passing a
format string as a parameter to the date command. The format string for UNIX epoch time is
'%s'.
lode@srv-debian6:~$ date "+%s"
1234567890
To convert a specific date and time into UNIX epoch time, use the -d parameter.
The next example shows how to convert the timestamp "February 20th, 2013 at 08:41:15" into UNIX
epoch time.
lode@srv-debian6:~$ date "+%s" -d "02/20/2013 08:41:15"
1361346075
Converting UNIX epoch time to human readable time
Even though I didn't find it in the date manual, it is possible to use the date command to
reformat a UNIX epoch time into a human readable time. The syntax is the following:
lode@srv-debian6:~$ date -d @1234567890
Sat Feb 14 00:31:30 CET 2009
The same thing can also be achieved using a bit of perl programming:
lode@srv-debian6:~$ perl -e 'print scalar(localtime(1234567890)), "\n"'
Sat Feb 14 00:31:30 2009
Please note that the printed time is formatted in the timezone in which your Linux system is
configured. My system is configured for UTC+2, so you may get different output for the same
command.
The Code-TidyAll
distribution provides a command line script called tidyall that will use
Perl::Tidy to change the
layout of the code.
This tandem needs two configuration files.
The .perltidyrc file contains the instructions to Perl::Tidy that describes the layout of a
Perl-file. We used the following file copied from the source code of the Perl Maven
project.
-pbp
-nst
-et=4
--maximum-line-length=120
# Break a line after opening/before closing token.
-vt=0
-vtc=0
The tidyall command uses a separate file called .tidyallrc that describes which files need
to be beautified.
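A minimal .tidyallrc sketch (the select patterns are illustrative assumptions, not the Perl Maven project's actual configuration) might look like:
[PerlTidy]
select = **/*.{pl,pm,t}
This tells tidyall to run the Perl::Tidy plugin on every .pl, .pm and .t file under the project root.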
Once I installed Code::TidyAll and placed those files in
the root directory of the project, I could run tidyall -a .
That created a directory called .tidyall.d/ where it stores cached versions of the files,
and changed all the files that were matched by the select statements in the .tidyallrc
file.
Then, I added .tidyall.d/ to the .gitignore file to avoid adding that subdirectory to the
repository and ran tidyall -a again to make sure the .gitignore file is sorted.
Switch statement for bash script
<a rel='nofollow' target='_blank'
href='//rev.linuxquestions.org/www/delivery/ck.php?n=a054b75'><img border='0'
alt=''
src='//rev.linuxquestions.org/www/delivery/avw.php?zoneid=10&n=a054b75'
/></a>
[ Log in to get rid of
Hello, I am currently trying out the switch statement using a
bash script.
while true
do
showmenu
read choice
echo "Enter a choice:"
case "$choice" in
"1")
echo "Number One"
;;
"2")
echo "Number Two"
;;
"3")
echo "Number Three"
;;
"4")
echo "Number One, Two, Three"
;;
"5")
echo "Program Exited"
exit 0
;;
*)
echo "Please enter number ONLY ranging from 1-5!"
;;
esac
done
OUTPUT:
1. Number1
2. Number2
3. Number3
4. All
5. Quit
Enter a choice:
So, when the code is run, a menu with options 1-5 will be shown, then the user
will be asked to enter a choice and finally an output is shown. But is it possible
for the user to enter multiple choices? For example, the user enters choices "1" and
"3", so the output will be "Number One" and "Number Three". Any idea?
Just something to get you started.
Code:
#! /bin/bash
showmenu ()
{
typeset ii
typeset -i jj=1
typeset -i kk
typeset -i valid=0 # valid=1 if input is good
while (( ! valid ))
do
for ii in "${options[@]}"
do
echo "$jj) $ii"
let jj++
done
read -e -p 'Select a list of actions : ' -a answer
jj=0
valid=1
for kk in "${answer[@]}"
do
if (( kk < 1 || kk > "${#options[@]}" ))
then
echo "Error Item $jj is out of bounds" 1>&2
valid=0
break
fi
let jj++
done
done
}
typeset -r c1=Number1
typeset -r c2=Number2
typeset -r c3=Number3
typeset -r c4=All
typeset -r c5=Quit
typeset -ra options=($c1 $c2 $c3 $c4 $c5)
typeset -a answer
typeset -i kk
while true
do
showmenu
for kk in "${answer[@]}"
do
case $kk in
1)
echo 'Number One'
;;
2)
echo 'Number Two'
;;
3)
echo 'Number Three'
;;
4)
echo 'Number One, Two, Three'
;;
5)
echo 'Program Exit'
exit 0
;;
esac
done
done
Vim can indent bash scripts. But not reformat them before indenting.
Backup your bash script, open it with vim, type gg=GZZ and indent will be
corrected. (Note for the impatient: this overwrites the file, so be sure to do that backup!)
Though, some bugs with << (expecting EOF as first character on a line)
e.g.
A shell parser, formatter and interpreter. Supports POSIX Shell, Bash and mksh. Requires Go 1.11 or later.
Quick start
To parse shell scripts, inspect them, and print them out, see the syntax examples.
For high-level operations like performing shell expansions on strings, see the shell examples.
shfmt
Go 1.11 and later can download the latest v2 stable release:
cd $(mktemp -d); go mod init tmp; go get mvdan.cc/sh/cmd/shfmt
The latest v3 pre-release can be downloaded in a similar manner, using the /v3 module:
cd $(mktemp -d); go mod init tmp; go get mvdan.cc/sh/v3/cmd/shfmt
Finally, any older release can be built with its respective older Go version by manually cloning, checking out a tag, and running go build ./cmd/shfmt .
shfmt formats shell programs. It can use tabs or any number of spaces to indent. See canonical.sh for a quick look at its default style.
You can feed it standard input, any number of files or any number of directories to recurse into. When recursing, it will operate on
.sh and .bash files and ignore files starting with a period. It will also operate on files with no extension and a shell shebang.
shfmt -l -w script.sh
Typically, CI builds should use the command below, to error if any shell scripts in a project don't adhere to the format:
shfmt -d .
Use -i N to indent with a number of spaces instead of tabs. There are other formatting options - see shfmt -h . For example, to get
the formatting appropriate for Google's Style guide, use shfmt -i 2 -ci .
bash -n can be useful to check for syntax errors in shell scripts. However, shfmt >/dev/null can do a better job as it checks for
invalid UTF-8 and does all parsing statically, including checking POSIX Shell validity:
$(( and (( ambiguity is not supported. Backtracking would complicate the parser and make streaming support via io.Reader impossible.
The POSIX spec recommends to space the operands if $( ( is meant.
$ echo '$((foo); (bar))' | shfmt
1:1: reached ) without matching $(( with ))
Some builtins like export and let are parsed as keywords. This is to allow statically parsing them and building their syntax tree,
as opposed to just keeping the arguments as a slice of arguments.
JavaScript
A subset of the Go packages are available as an npm package called mvdan-sh . See the _js directory for more information.
Docker
To build a Docker image, check out a specific version of the repository and run:
Parsing bash script options with getopts
Posted on January 4, 2015 | 5 minutes | Kevin Sookocheff
A common task in shell scripting is to parse command line arguments to your script. Bash provides the getopts built-in function to
do just that. This tutorial explains how to use the getopts built-in function to parse arguments and options to a bash script.
The getopts function takes three parameters. The first is a specification of
which options are valid, listed as a sequence of letters. For example, the string
'ht' signifies that the options -h and -t are valid.
The second argument to getopts is a variable that will be populated with the
option or argument to be processed next. In the following loop, opt will hold the
value of the current option that has been parsed by getopts .
while getopts ":ht" opt; do
case ${opt} in
h ) # process option h
;;
t ) # process option t
;;
\? ) echo "Usage: cmd [-h] [-t]"
;;
esac
done
This example shows a few additional features of getopts . First, if an invalid
option is provided, the option variable is assigned the value ? . You can catch
this case and provide an appropriate usage message to the user. Second, this behaviour is only
true when you prepend the list of valid options with : to disable the default
error handling of invalid options. It is recommended to always disable the default error
handling in your scripts.
The third argument to getopts is the list of arguments and options to be
processed. When not provided, this defaults to the arguments and options provided to the
application ( $@ ). You can provide this third argument to use
getopts to parse any list of arguments and options you provide.
Shifting
processed options
The variable OPTIND holds the number of options parsed by the last call to
getopts . It is common practice to call the shift command at the end
of your processing loop to remove options that have already been handled from $@
.
shift $((OPTIND -1))
Parsing options with arguments
Options that themselves have arguments are signified with a : . The argument to
an option is placed in the variable OPTARG . In the following example, the option
t takes an argument. When the argument is provided, we copy its value to the
variable target . If no argument is provided getopts will set
opt to : . We can recognize this error condition by catching the
: case and printing an appropriate error message.
while getopts ":t:" opt; do
case ${opt} in
t )
target=$OPTARG
;;
\? )
echo "Invalid option: $OPTARG" 1>&2
;;
: )
echo "Invalid option: $OPTARG requires an argument" 1>&2
;;
esac
done
shift $((OPTIND -1))
An extended example – parsing nested arguments and options
Let's walk through an extended example of processing a command that takes options, has a
sub-command, and whose sub-command takes an additional option that has an argument. This is a
mouthful so let's break it down using an example. Let's say we are writing our own version of
the pip command . In
this version you can call pip with the -h option to display a help
message.
> pip -h
Usage:
pip -h Display this help message.
pip install Install a Python package.
We can use getopts to parse the -h option with the following
while loop. In it we catch invalid options with \? and
shift all arguments that have been processed with shift $((OPTIND
-1)) .
while getopts ":h" opt; do
case ${opt} in
h )
echo "Usage:"
echo " pip -h Display this help message."
echo " pip install Install a Python package."
exit 0
;;
\? )
echo "Invalid Option: -$OPTARG" 1>&2
exit 1
;;
esac
done
shift $((OPTIND -1))
Now let's add the sub-command install to our script. install takes
as an argument the Python package to install.
> pip install urllib3
install also takes an option, -t . -t takes as an
argument the location to install the package to relative to the current directory.
> pip install urllib3 -t ./src/lib
To process this line we must find the sub-command to execute. This value is the first
argument to our script.
subcommand=$1
shift # Remove `pip` from the argument list
Now we can process the sub-command install . In our example, the option
-t is actually an option that follows the package argument so we begin by removing
install from the argument list and processing the remainder of the line.
case "$subcommand" in
install)
package=$1
shift # Remove `install` from the argument list
;;
esac
After shifting the argument list we can process the remaining arguments as if they are of
the form package -t src/lib . The -t option takes an argument itself.
This argument will be stored in the variable OPTARG and we save it to the variable
target for further work.
case "$subcommand" in
install)
package=$1
shift # Remove `install` from the argument list
while getopts ":t:" opt; do
case ${opt} in
t )
target=$OPTARG
;;
\? )
echo "Invalid Option: -$OPTARG" 1>&2
exit 1
;;
: )
echo "Invalid Option: -$OPTARG requires an argument" 1>&2
exit 1
;;
esac
done
shift $((OPTIND -1))
;;
esac
Putting this all together, we end up with the following script that parses arguments to our
version of pip and its sub-command install .
package="" # Default to empty package
target="" # Default to empty target
# Parse options to the `pip` command
while getopts ":h" opt; do
case ${opt} in
h )
echo "Usage:"
echo " pip -h Display this help message."
echo " pip install <package> Install <package>."
exit 0
;;
\? )
echo "Invalid Option: -$OPTARG" 1>&2
exit 1
;;
esac
done
shift $((OPTIND -1))
subcommand=$1; shift # Remove 'pip' from the argument list
case "$subcommand" in
# Parse options to the install sub command
install)
package=$1; shift # Remove 'install' from the argument list
# Process package options
while getopts ":t:" opt; do
case ${opt} in
t )
target=$OPTARG
;;
\? )
echo "Invalid Option: -$OPTARG" 1>&2
exit 1
;;
: )
echo "Invalid Option: -$OPTARG requires an argument" 1>&2
exit 1
;;
esac
done
shift $((OPTIND -1))
;;
esac
After processing the above sequence of commands, the variable package will hold
the package to install and the variable target will hold the target to install the
package to. You can use this as a template for processing any set of arguments and options to
your scripts.
Update: It's been more than 5 years since I started this answer. Thank you for LOTS of great
edits/comments/suggestions. In order to save maintenance time, I've modified the code block to
be 100% copy-paste ready. Please do not post comments like "What if you changed X to Y ".
Instead, copy-paste the code block, see the output, make the change, rerun the script, and
comment "I changed X to Y and " I don't have time to test your ideas and tell you if they
work.
Method #1: Using bash without getopt[s]
Two common ways to pass key-value-pair arguments are:
cat >/tmp/demo-space-separated.sh <<'EOF'
#!/bin/bash
POSITIONAL=()
while [[ $# -gt 0 ]]
do
key="$1"
case $key in
-e|--extension)
EXTENSION="$2"
shift # past argument
shift # past value
;;
-s|--searchpath)
SEARCHPATH="$2"
shift # past argument
shift # past value
;;
-l|--lib)
LIBPATH="$2"
shift # past argument
shift # past value
;;
--default)
DEFAULT=YES
shift # past argument
;;
*) # unknown option
POSITIONAL+=("$1") # save it in an array for later
shift # past argument
;;
esac
done
set -- "${POSITIONAL[@]}" # restore positional parameters
echo "FILE EXTENSION = ${EXTENSION}"
echo "SEARCH PATH = ${SEARCHPATH}"
echo "LIBRARY PATH = ${LIBPATH}"
echo "DEFAULT = ${DEFAULT}"
echo "Number files in SEARCH PATH with EXTENSION:" $(ls -1 "${SEARCHPATH}"/*."${EXTENSION}" | wc -l)
if [[ -n $1 ]]; then
echo "Last line of file specified as non-opt/last argument:"
tail -1 "$1"
fi
EOF
chmod +x /tmp/demo-space-separated.sh
/tmp/demo-space-separated.sh -e conf -s /etc -l /usr/lib /etc/hosts
output from copy-pasting the block above:
FILE EXTENSION = conf
SEARCH PATH = /etc
LIBRARY PATH = /usr/lib
DEFAULT =
Number files in SEARCH PATH with EXTENSION: 14
Last line of file specified as non-opt/last argument:
#93.184.216.34 example.com
cat >/tmp/demo-equals-separated.sh <<'EOF'
#!/bin/bash
for i in "$@"
do
case $i in
-e=*|--extension=*)
EXTENSION="${i#*=}"
shift # past argument=value
;;
-s=*|--searchpath=*)
SEARCHPATH="${i#*=}"
shift # past argument=value
;;
-l=*|--lib=*)
LIBPATH="${i#*=}"
shift # past argument=value
;;
--default)
DEFAULT=YES
shift # past argument with no value
;;
*)
# unknown option
;;
esac
done
echo "FILE EXTENSION = ${EXTENSION}"
echo "SEARCH PATH = ${SEARCHPATH}"
echo "LIBRARY PATH = ${LIBPATH}"
echo "DEFAULT = ${DEFAULT}"
echo "Number files in SEARCH PATH with EXTENSION:" $(ls -1 "${SEARCHPATH}"/*."${EXTENSION}" | wc -l)
if [[ -n $1 ]]; then
echo "Last line of file specified as non-opt/last argument:"
tail -1 $1
fi
EOF
chmod +x /tmp/demo-equals-separated.sh
/tmp/demo-equals-separated.sh -e=conf -s=/etc -l=/usr/lib /etc/hosts
output from copy-pasting the block above:
FILE EXTENSION = conf
SEARCH PATH = /etc
LIBRARY PATH = /usr/lib
DEFAULT =
Number files in SEARCH PATH with EXTENSION: 14
Last line of file specified as non-opt/last argument:
#93.184.216.34 example.com
To better understand ${i#*=} search for "Substring Removal" in this guide . It is
functionally equivalent to `sed 's/[^=]*=//' <<< "$i"` which calls a
needless subprocess or `echo "$i" | sed 's/[^=]*=//'` which calls two
needless subprocesses.
More recent getopt versions don't have these limitations.
Additionally, the POSIX shell (and others) offer getopts which doesn't have
these limitations. I've included a simplistic getopts example.
Usage demo-getopts.sh -vf /etc/hosts foo bar
cat >/tmp/demo-getopts.sh <<'EOF'
#!/bin/sh
# A POSIX variable
OPTIND=1 # Reset in case getopts has been used previously in the shell.
# Initialize our own variables:
output_file=""
verbose=0
while getopts "h?vf:" opt; do
case "$opt" in
h|\?)
show_help
exit 0
;;
v) verbose=1
;;
f) output_file=$OPTARG
;;
esac
done
shift $((OPTIND-1))
[ "${1:-}" = "--" ] && shift
echo "verbose=$verbose, output_file='$output_file', Leftovers: $@"
EOF
chmod +x /tmp/demo-getopts.sh
/tmp/demo-getopts.sh -vf /etc/hosts foo bar
output from copy-pasting the block above:
verbose=1, output_file='/etc/hosts', Leftovers: foo bar
The advantages of getopts are:
It's more portable, and will work in other shells like dash .
It can handle multiple single options like -vf filename in the typical
Unix way, automatically.
The disadvantage of getopts is that it can only handle short options (
-h , not --help ) without additional code.
There is a getopts tutorial which explains
what all of the syntax and variables mean. In bash, there is also help getopts ,
which might be informative.
No answer mentions enhanced getopt . And the top-voted answer is misleading: It either
ignores -vfd style short options (requested by the OP) or options after
positional arguments (also requested by the OP); and it ignores parsing-errors. Instead:
Use enhanced getopt from util-linux or formerly GNU glibc .
1
It works with getopt_long() the C function of GNU glibc.
Has all useful distinguishing features (the others don't have them):
handles spaces, quoting characters and even binary in arguments
2 (non-enhanced getopt can't do this)
it can handle options at the end: script.sh -o outFile file1 file2 -v
( getopts doesn't do this)
allows = -style long options: script.sh --outfile=fileOut
--infile fileIn (allowing both is lengthy if self parsing)
allows combined short options, e.g. -vfd (real work if self
parsing)
allows touching option-arguments, e.g. -oOutfile or
-vfdoOutfile
Is so old already 3 that no GNU system is missing this (e.g. any
Linux has it).
You can test for its existence with: getopt --test → return value
4.
Other getopt or shell-builtin getopts are of limited
use.
verbose: y, force: y, debug: y, in: ./foo/bar/someFile, out: /fizz/someOtherFile
with the following myscript
#!/bin/bash
# saner programming env: these switches turn some bugs into errors
set -o errexit -o pipefail -o noclobber -o nounset
# -allow a command to fail with !'s side effect on errexit
# -use return value from ${PIPESTATUS[0]}, because ! hosed $?
! getopt --test > /dev/null
if [[ ${PIPESTATUS[0]} -ne 4 ]]; then
echo "I'm sorry, \`getopt --test\` failed in this environment."
exit 1
fi
OPTIONS=dfo:v
LONGOPTS=debug,force,output:,verbose
# -regarding ! and PIPESTATUS see above
# -temporarily store output to be able to check for errors
# -activate quoting/enhanced mode (e.g. by writing out "--options")
# -pass arguments only via -- "$@" to separate them correctly
! PARSED=$(getopt --options=$OPTIONS --longoptions=$LONGOPTS --name "$0" -- "$@")
if [[ ${PIPESTATUS[0]} -ne 0 ]]; then
# e.g. return value is 1
# then getopt has complained about wrong arguments to stdout
exit 2
fi
# read getopt's output this way to handle the quoting right:
eval set -- "$PARSED"
d=n f=n v=n outFile=-
# now enjoy the options in order and nicely split until we see --
while true; do
case "$1" in
-d|--debug)
d=y
shift
;;
-f|--force)
f=y
shift
;;
-v|--verbose)
v=y
shift
;;
-o|--output)
outFile="$2"
shift 2
;;
--)
shift
break
;;
*)
echo "Programming error"
exit 3
;;
esac
done
# handle non-option arguments
if [[ $# -ne 1 ]]; then
echo "$0: A single input file is required."
exit 4
fi
echo "verbose: $v, force: $f, debug: $d, in: $1, out: $outFile"
1 enhanced getopt is available on most "bash-systems", including Cygwin; on OS X try brew install gnu-getopt or sudo port install getopt
2 the POSIX exec() conventions have no reliable way to pass binary NULL in command line arguments; those bytes prematurely end the argument
3 first version released in 1997 or before (I only tracked it back to 1997)
#!/bin/bash
for i in "$@"
do
case $i in
-p=*|--prefix=*)
PREFIX="${i#*=}"
;;
-s=*|--searchpath=*)
SEARCHPATH="${i#*=}"
;;
-l=*|--lib=*)
DIR="${i#*=}"
;;
--default)
DEFAULT=YES
;;
*)
# unknown option
;;
esac
done
echo PREFIX = ${PREFIX}
echo SEARCH PATH = ${SEARCHPATH}
echo DIRS = ${DIR}
echo DEFAULT = ${DEFAULT}
To better understand ${i#*=} search for "Substring Removal" in this guide . It is
functionally equivalent to `sed 's/[^=]*=//' <<< "$i"` which calls a
needless subprocess or `echo "$i" | sed 's/[^=]*=//'` which calls two
needless subprocesses.
I'm about 4 years late to this question, but want to give back. I used the earlier answers as
a starting point to tidy up my old adhoc param parsing. I then refactored out the following
template code. It handles both long and short params, using = or space separated arguments,
as well as multiple short params grouped together. Finally it re-inserts any non-param
arguments back into the $1,$2.. variables. I hope it's useful.
#!/usr/bin/env bash
# NOTICE: Uncomment if your script depends on bashisms.
#if [ -z "$BASH_VERSION" ]; then bash $0 $@ ; exit $? ; fi
echo "Before"
for i ; do echo - $i ; done
# Code template for parsing command line parameters using only portable shell
# code, while handling both long and short params, handling '-f file' and
# '-f=file' style param data and also capturing non-parameters to be inserted
# back into the shell positional parameters.
while [ -n "$1" ]; do
# Copy so we can modify it (can't modify $1)
OPT="$1"
# Detect argument termination
if [ x"$OPT" = x"--" ]; then
shift
for OPT ; do
REMAINS="$REMAINS \"$OPT\""
done
break
fi
# Parse current opt
while [ x"$OPT" != x"-" ] ; do
case "$OPT" in
# Handle --flag=value opts like this
-c=* | --config=* )
CONFIGFILE="${OPT#*=}"
shift
;;
# and --flag value opts like this
-c* | --config )
CONFIGFILE="$2"
shift
;;
-f* | --force )
FORCE=true
;;
-r* | --retry )
RETRY=true
;;
# Anything unknown is recorded for later
* )
REMAINS="$REMAINS \"$OPT\""
break
;;
esac
# Check for multiple short options
# NOTICE: be sure to update this pattern to match valid options
NEXTOPT="${OPT#-[cfr]}" # try removing single short opt
if [ x"$OPT" != x"$NEXTOPT" ] ; then
OPT="-$NEXTOPT" # multiple short opts, keep going
else
break # long form, exit inner loop
fi
done
# Done with that param. move to next
shift
done
# Set the non-parameters back into the positional parameters ($1 $2 ..)
eval set -- $REMAINS
echo -e "After: \n configfile='$CONFIGFILE' \n force='$FORCE' \n retry='$RETRY' \n remains='$REMAINS'"
for i ; do echo - $i ; done
I have found the matter of writing portable argument parsing in scripts so frustrating that I have
written Argbash - a FOSS code generator that can generate the argument-parsing code for your script, plus it has some
nice features:
Example:6) Forcefully overwrite write protected file at destination (mv -f)
Use the '-f' option in the mv command to forcefully overwrite a write-protected file at the
destination. Let's assume we have a file named " bands.txt " in our present working directory
and in /tmp/sysadmin.
[linuxbuzz@web ~]$ ls -l bands.txt /tmp/sysadmin/bands.txt
-rw-rw-r--. 1 linuxbuzz linuxbuzz 0 Aug 25 00:24 bands.txt
-r--r--r--. 1 linuxbuzz linuxbuzz 0 Aug 25 00:24 /tmp/sysadmin/bands.txt
[linuxbuzz@web ~]$
As we can see, under /tmp/sysadmin, bands.txt is a write-protected file.
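A plausible invocation for this scenario (reconstructed from the files above, not necessarily the article's exact command) is:
[linuxbuzz@web ~]$ mv -f bands.txt /tmp/sysadmin/bands.txt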
Example:8) Create backup at destination while using mv command (mv -b)
Use the '-b' option to take a backup of a file at the destination while performing the mv command. At the
destination, the backup file will be created with a tilde character appended to it. An example is shown
below.
Example:9) Move file only when its newer than destination (mv -u)
There are some scenarios where we have the same file at the source and the destination and we want to move the
file only when the file at the source is newer than the destination. To accomplish this, use the -u option in the
mv command. An example is shown below.
[linuxbuzz@web ~]$ ls -l tools.txt /tmp/sysadmin/tools.txt
-rw-rw-r--. 1 linuxbuzz linuxbuzz 55 Aug 25 00:55 /tmp/sysadmin/tools.txt
-rw-rw-r--. 1 linuxbuzz linuxbuzz 87 Aug 25 00:57 tools.txt
[linuxbuzz@web ~]$
Execute the mv command below to move the file only when it is newer than the destination:
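A plausible form of that command (reconstructed from the listing above, not necessarily the article's exact invocation) is:
[linuxbuzz@web ~]$ mv -u tools.txt /tmp/sysadmin/tools.txt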
An array variable whose members are the line numbers in source files corresponding to each member of FUNCNAME .
${BASH_LINENO[$i]} is the line number in the source file where ${FUNCNAME[$i]} was called. The corresponding
source file name is ${BASH_SOURCE[$i]} . Use LINENO to obtain the current line number.
I have a test script which has a lot of commands and will generate lots of output. I use
set -x or set -v and set -e , so the script stops
when an error occurs. However, it's still rather difficult for me to locate at which line the
execution stopped in order to locate the problem. Is there a method which can output the line
number of the script before each line is executed? Or output the line number before the
command output generated by set -x ? Any method which can deal with my
script line location problem would be a great help. Thanks.
You mention that you're already using -x . The value of the variable PS4 is
the prompt printed before the command line is echoed when the -x
option is set; it defaults to + followed by a space.
You can change PS4 to emit the LINENO (The line number in the
script or shell function currently executing).
For example, if your script reads:
$ cat script
foo=10
echo ${foo}
echo $((2 + 2))
Executing it thus would print line numbers:
$ PS4='Line ${LINENO}: ' bash -x script
Line 1: foo=10
Line 2: echo 10
10
Line 3: echo 4
4
Simple (but powerful) solution: Place an echo around the code you think causes
the problem and move the echo line by line until the message no longer appears
on screen - because the script has stopped due to an error before that point.
Even more powerful solution: Install bashdb, the bash debugger, and debug the
script line by line.
In a fairly sophisticated script I wouldn't like to see all line numbers; rather I would
like to be in control of the output.
Define a function
echo_line_no () {
grep -n "$1" $0 | sed "s/echo_line_no//"
# grep the line(s) containing input $1 with line numbers
# replace the function name with nothing
} # echo_line_no
Use it with quotes like
echo_line_no "this is a simple comment with a line number"
Output is
16 "this is a simple comment with a line number"
if the number of this line in the source file is 16.
Sure. Why do you need this? How do you work with this? What can you do with this? Is this
simple approach really sufficient or useful? Why do you want to tinker with this at all?
April 24, 2011 by Admin
Important
This is an edited version of a post that originally appeared on a blog called The Michigan
Telephone Blog, which was written by a friend before he decided to stop blogging. It is
reposted with his permission. Comments dated before the year 2013 were originally posted
to his blog.
If you've installed Midnight Commander and haven't changed the default colors, when you try to access a dropdown menu you
may see this:
Midnight Commander -- Original Colors
REALLY hard to read that menu, isn't it? Wouldn't you rather see this?
Midnight Commander -- Changed Colors
To fix the unreadable menus, just make sure Midnight Commander is not open,
then use any text editor (such as nano) to open ~/.mc/ini:
nano ~/.mc/ini
Assuming that there is no existing [Colors] section in the file, just add this at the bottom
of the file (if the second line exceeds the blog column width, just use copy and paste to get it
all):
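The original snippet is not reproduced here; a minimal illustrative sketch (the values are a guess, not the author's original) that makes the dropdown menus readable could look like this:
[Colors]
base_color=lightgray,blue:menu=black,cyan:menuhot=yellow,cyan:menusel=white,black:menuhotsel=yellow,black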
If there is an existing [Colors] section, you can try tweaking it using the parameters shown
above. If you have a very recent version of Midnight Commander (which you probably will have if
you are running Ubuntu), then instead of menu= you'll need to use menunormal= , as shown here:
Note that for some reason the base_color parameter must appear, or the other items are
ignored. Save the change, exit the editor, and open Midnight Commander. If you then close
Midnight Commander, you may find that the position of the [Colors] section has moved within the
ini file -- apparently Midnight Commander rewrites the file when you close it -- but if you don't
like the changes you can remove the [Colors] section to reverse the change.
The above would compress the current directory (%d) to a file also in the current directory. If you want to compress the directory
pointed to by the cursor rather than the current directory, use %f instead:
tar -czf %f_$(date '+%%Y%%m%%d').tar.gz %f
mc handles escaping of special characters so there is no need to put %f in quotes.
By the way, midnight commander's special treatment of percent signs occurs not just in the user menu file but also at the command
line. This is an issue when using shell commands with constructs like ${var%.c} . At the command line, the same as
in the user menu file, percent signs can be escaped by doubling them.
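For example, to have the shell strip a .c suffix via ${var%.c}, at the mc command line you would type the percent sign twice (var here is just a placeholder variable):
echo ${var%%.c}    # mc passes ${var%.c} on to the shell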
I need to know how to check the current colour for mc and how to change it.
I googled it and they talk about changing some initial file /.mc/ini which I know nothing about
(no one ever gives the full filename), and I can't find it at all. Wasted an hour of my life. I
just need the simplest way to change it, not another 10+ steps to change a stupid
colour.
gengisdave
12-22-2014 03:22 AM
in some distros (mine, e.g.) it is located in ~/.local/mc/ini
sycamorex
12-22-2014 03:24 AM
This is the full filename. Mind you on my distro it's in ~/.config/mc/ini
Find / Create this file and add the following (obviously change the colour values):
The syntax is: variable=foreground_colour,background_colour
Code:
[Colors]
base_color=lightgray,green:normal=green,default:selected=white,gray:marked=yellow,default:markselect=yellow,gray:directory=blue,default:executable=brightgreen,default:link=cyan,default:device=brightmagenta,default:special=lightgray,default:errors=red,default:reverse=green,default:gauge=green,default:input=white,gray:dnormal=green,gray:dfocus=brightgreen,gray:dhotnormal=cyan,gray:dhotfocus=brightcyan,gray:menu=green,default:menuhot=cyan,default:menusel=green,gray:menuhotsel=cyan,default:helpnormal=cyan,default:editnormal=green,default:editbold=blue,default:editmarked=gray,blue:stalelink=red,default
Also, have a look at this: http://blog.mybox.ro/2010/05/10/skin...ght-commander/
Editing Midnight Commander's color scheme
In a previous post I was sort of laying out a "formula" on how to transform your Midnight Commander
default color scheme into a transparent skin, without talking too much about how you can change
the other colors.
To my great shame, I didn't pay too much attention to this blog or to the comments asking
for further advice. I found Mateus' comment rather late (just now!) and decided to dig further,
in order to find out how exactly to deal with more refined color changes, while still keeping
the transparent background (in both Midnight Commander and its editor).
So the first thing to know is which are the colors that Midnight Commander supports; the
available colors are:
black
gray
lightgray
white
red
brightred
green
brightgreen
blue
brightblue
magenta
brightmagenta
cyan
brightcyan
brown
yellow
default
The " default " color is the one giving out the nice transparency.
Now, there are certain "components" in Midnight Commander's display that can have their
colors altered. Here they are:
Each and every one of these "components" can have its own colors set accordingly to the
user's wish. Each component is assigned a color pair and must be followed by a colon (':') in
order to separate it from the color pair of the next component. Here's how this basic syntax looks:
component=foreground_color,background_color:
When you start modifying the color scheme in your Midnight Commander configuration file
(located at ~/.mc/ini ), you just have to add a section called " [Colors] " and proceed with
enumerating the color pairs. So you'd have something like this:
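The author's full example line is not reproduced here; by way of illustration only (the values are arbitrary, not the post's original scheme), a one-line [Colors] section following this syntax could be:
[Colors]
normal=lightgray,default:selected=black,green:directory=brightblue,default:executable=brightgreen,default:menu=lightgray,default:menuhot=yellow,default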
For increased readability, I will "truncate" that long line, adding a backslash ('\') to
indicate that in fact what follows on the next line should be adjacent to the text on the
previous line. This being said, the [Colors] section could look like this:
Now that you've gotten the hang of this, let's see how the [Colors] section looks like in
the default Midnight Commander color scheme (you know, the "ugly" one, with blue and dull
cyan):
IMPORTANT NOTE: For visual impact's sake and due to Blogspot breaking long lines, I wrote
each color pair on a single row, followed by a backslash ('\'). Please note that this does NOT
work in the ~/.mc/ini file, so the final [Colors] section in your Midnight Commander
configuration file MUST be a SINGLE line with no spaces and with each color pair separated from
the next one by a colon (':').
Now let's see. What you want to change first of all is most of the background of these
"components", such that the display will be one with a neat looking transparent background. So
first of all you might want to make a few changes to these color pairs by replacing the
background color "blue" with "default". After doing these changes, your [Colors] section will
look a bit like this:
Now you've got the basic "Midnight Commander transparent scheme" that was the result of
this
post .
Proceeding to Mateus' question, regarding how to change the rest of the colors now, it's
about the same as before. What he didn't like there (and as a matter of fact I don't quite like
it, either) is the dull cyan that's still seen in the following places:
the bottom line (the one displaying the F1...F10 function keys);
the line that signifies the current selection, the "prompt" which shows you on which
file/directory you're "on" at a given moment;
the uppermost line (the "menu" line);
the menus themselves, once you open them.
To "fix" issues 1, 2, and 3 it is sufficient to alter the value of the " selected "
parameter. Notice how it is initially
selected=black,cyan:\
My personal choice is to replace the background cyan, which I don't really like, with green.
To do this, I'll change this color pair to
selected=black,green:\
You can, of course, change the foreground color as well. For me, it's alright to keep the
foreground (the text) "black". You can change it to whatever suits your taste.
To "fix" issue number 4 in the list above, you need to change the " menu " parameter. To get
it transparent, just change the "cyan" background to "default". Make other adjustments as you
see fit. In other words, change
menu=white,cyan:\
into, for instance,
menu=lightgray,default:\
However, there are a few "leftovers" from the default color scheme.
One of them is the parameter regarding the hotkeys in the menus (the "underlined" character
on most of the menu options, showing you what key you can press in order to access that option
faster than by moving to it with the arrow keys). This color pair is called " menuhot ". I
changed it from
menuhot=yellow,cyan:\
into
menuhot=yellow,default:\
Another thing which might bother you is the color of the line in the panel you're in when
you've "selected all" files (when you've pressed the "*" key). This parameter is called "
markselect ". I changed it from
markselect=yellow,cyan:\
into
markselect=white,green:\
The color pair of the selected buttons in dialogs is called " dfocus ". I changed mine
from
dfocus=black,cyan:\
into
dfocus=black,green:\
In the "focused" buttons or options, the underlined character is called " dhotfocus ". I
changed mine from
dhotfocus=blue,cyan:\
into
dhotfocus=brightgreen,green:\
since the background color was already green, after I modified the " dfocus " color
pair.
The other buttons or options in the dialogs which have hotkeys assigned to them, but which
are not "focused" (the buttons/options that you're not located on at a given moment) are still
displayed in blue on a light gray background. This color pair is referred to as " dhotnormal ".
Since the blue looks a bit odd there, I changed
dhotnormal=blue,lightgray:\
into
dhotnormal=brightgreen,default:\
Well, this is nice, in window titles and on normal (unfocused) hotkeys I get the transparent
background. The problem now is that the rest of the dialog window is still light gray. To
change this (to make the window transparent as well), you only need to alter the " dnormal "
color pair, such as changing it from
dnormal=black,lightgray:\
into
dnormal=white,default:\
You may notice that the input fields stay cyan, as well; you find these fields in quite a
lot of dialog boxes. To alter this, I changed
input=black,cyan:\
into
input=black,green:\
One thing which I consider useful is to have symbolic links displayed in bright cyan (as in
the colored listings in the terminal). So I just changed
link=lightgray,default:\
into
link=brightcyan,default:\
Now, regarding the rest of the color pairs, I don't really know what they do. However, if at
some point after using Midnight Commander more with this new, neat, transparent/green color
scheme you'll notice unwanted leftovers, you can try out other changes in the color pairs
values, one at a time, until you determine the troublesome one.
After making the changes above, my [Colors] section in ~/.mc/ini now looks like this:
I need to direct you to the " IMPORTANT NOTE " above. The final [Colors] section above is
written like this - one pair on each row, followed by a backslash - for clarity's sake. The
actual final [Colors] section in your ~/.mc/ini file will have to be a one-liner, with no
blanks and no backslashes. So it will probably look similar to this:
Note #1: In the above 'code' block, there is only one line below [Colors] . I truncated the
line with the backslash because of blogspot rendering issues. You just write all that on one
single line, without the "\" (backslash-es).
Note #2: At the end of this line, setting the editnormal pair's background to default means that mcedit will
have a transparent background in your console, as well.
Koszti Lajos: Midnight Commander is the most popular file manager on
Unix-like systems. It's fast and it has all the features you need. But it's only blue, and we
know that everyone loves eye candy and likes customizing his/her own desktop. So is
there any way to customize mc ?
Yes, and I'll try to show you how you can create your own theme .
You can change the Midnight Commander colors by editing the ~/.mc/ini file, where you have
to add a new section named [Colors] . You define the new colors in this section, for
example:
Help colors: helpnormal, helpitalic, helpbold, helplink, helpslink
Viewer color: viewunderline
Special highlighting colors: executable, directory, link, stalelink, device, special,
core
Editor colors: editnormal, editbold, editmarked
And which are the colors? I don't know them all, but here are some: white, gray, blue, green, yellow, magenta, cyan, red, brown, brightgreen, brightblue,
brightmagenta, brightcyan, brightred, default
On the screenshot you can see that the directory color is blue, the files are green, the
executable files are brightgreen, and the selected line is white on a gray background.
And here is a small shell script which will help you test your new theme:
#!/bin/sh
mc --colors normal=green,default:selected=brightmagenta,gray:marked=yellow,default:markselect=yellow,gray:directory=blue,default:executable=brightgreen,default:link=cyan,default:device=brightmagenta,default:special=lightgray,default:errors=red,default:reverse=green,default:gauge=green,default:input=white,gray:dnormal=green,gray:dfocus=brightgreen,gray:dhotnormal=cyan,gray:dhotfocus=brightcyan,gray:menu=green,default:menuhot=cyan,default:menusel=green,gray:menuhotsel=cyan,default:helpnormal=cyan,default:editnormal=green,default:editbold=blue,default:editmarked=gray,blue:stalelink=red,default
Save it as mccolortest.sh, make it executable with the chmod +x mccolortest.sh
command, and run it with the ./mccolortest.sh command. If you want to change a color,
just edit this file. When you're done, copy the colors and paste them below the [Colors]
section in ~/.mc/ini . If that section doesn't exist, create it yourself.
For more information on redesigning mc, check its manual page.
Also, in 4.8.3 here, I copied the first example scheme line and my colors are different. I
can't even set the background of the select bar to gray (or "grey"): it gets replaced with
black. Also, the panel headings remain blue here, unlike the (first) screenshot, and I can
see no corresponding tag in the line anyway.
Good intro, regardless. Someone should post a pointer to a more up-to-date one, though, as
Google seems to find this old thread within the top few hits. Király! ;)
The colors depend on the color settings of your terminal. I no longer have the settings
I was using when I posted this article, but here is my current one. If I remember right, it's
similar to that. Put it into your .Xdefaults
Midnight Commander supports skins starting from 4.7.0-pre3 version. You can download a
skin with black as a main color from here:
http://zool.in.ua/software/bluemoon/
I am using MC on my router, an ASUS WL-500GP, and I am developing PHP scripts on it. But as far as I
can see, MC in OpenWrt (Kamikaze 8.09) does not use syntax highlighting, and it is very
uncomfortable.
Do you know how I could turn it on? I have already downloaded the php.syntax file and put it into
the /usr/share/syntax dir, but it does not seem to work. Is it possible that this support is not
compiled into my version, or that the syntax file must be compiled to another format?
Br, Zé.
Hey ajnasz, your color theme is very nice; it keeps my eyes on my PC longer than usual. Well, I
don't have much time to explore these tricks further. I think your taste is cool. If you
have any other theme, I should try it. :-)
I didn't find anything about it. By the way, since the extension doesn't determine the
file type on UNIX-like systems, it wouldn't make much sense to do it.
Don't be silly. Mp3 is just music, txt is text, doc is a document. The only thing which is
not exactly determinable is executables, but whatever, they have the +x flag.
Also, you should know that most modern terminal applications allow you to redefine the
exact shade of those 16 colors.
Some of them (such as the Gnome or KDE terminals) may have a place under their preferences
where you can redefine the colors.
Older terminals, such as aterm, use ~/.Xdefaults for this. You can edit that file and add
lines like this: "aterm*color1: OrangeRed" (without the quotes). What I've done with that is
tell aterm that the "color1" (which was red) should now be "OrangeRed". See
/usr/share/X11/rgb.txt for valid color names. You can use *color0 through *color15. So when
you say "red" in MC's ini file and you use aterm, it will get mapped to color1 in
~/.Xdefaults and rendered as OrangeRed. (Sorry, I don't remember the mappings between the
names used by MC and 0-15 in Xdefaults by heart.)
You must be sure of the process name before killing it; entering a wrong process name may screw things up.
# pkill mysqld
Kill more than one process at a time.
# kill PID1 PID2 PID3
or
# kill -9 PID1 PID2 PID3
or
# kill -SIGKILL PID1 PID2 PID3
What if a process has too many instances and a number of child processes? For that we have the ' killall ' command. This is the only command
of this family which takes a process name as its argument in place of a process number.
Syntax:
# killall [signal or option] Process Name
To kill all mysql instances along with child processes, use the command as follow.
# killall mysqld
You can always verify whether the process is still running by using any of the commands below.
# service mysql status
# pgrep mysql
# ps -aux | grep mysql
The locate command also accepts patterns containing globbing characters such as
the wildcard character * . When the pattern contains no globbing characters the
command searches for *PATTERN* , that's why in the previous example all files
containing the search pattern in their names were displayed.
The wildcard is a symbol used to represent zero, one or more characters. For example, to
search for all .md files on the system you would use:
locate *.md
To limit the search results use the -n option followed by the number of results
you want to be displayed. For example, the following command will search for all
.py files and display only 10 results:
locate -n 10 *.py
By default, locate performs case-sensitive searches. The -i (
--ignore-case ) option tells locate to ignore case and run a
case-insensitive search.
To display the count of all matching entries, use the -c ( --count
) option. The following command would return the number of all files containing
.bashrc in their names:
locate -c .bashrc
6
By default, locate doesn't check whether the found files still exist on the
file system. If you deleted a file after the latest database update and the file matches the
search pattern, it will still be included in the search results.
To display only the names of the files that exist at the time locate is run use
the -e ( --existing ) option. For example, the following would return
only the existing .json files:
locate -e *.json
If you need to run a more complex search you can use the -r (
--regexp ) option which allows you to search using a basic regexp instead of
patterns. This option can be specified multiple times.
For example, to search for all .mp4 and .avi files on your system and
ignore case you would run:
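The command itself is not reproduced above; a sketch of what it could look like, using the extended-regex --regex variant (which handles the (...|...) alternation more readably than -r's basic regex):
locate -i --regex '\.(mp4|avi)$'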
Yes, just give the full stored path of the file after the tarball name.
Example: suppose you want file etc/apt/sources.list from etc.tar
:
tar -xf etc.tar etc/apt/sources.list
Will extract sources.list and create directories etc/apt under
the current directory.
You can use the -t listing option instead of -x , maybe along
with grep , to find the path of the file you want (see the sketch after this list)
You can also extract a single directory
tar has other options like --wildcards , etc. for more advanced
partial extraction scenarios; see man tar
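A short sketch of the listing and --wildcards ideas above (archive and member names follow the etc.tar example; the .list pattern is just an illustration):
# Find the stored path of the member first...
tar -tf etc.tar | grep sources.list
# ...then extract a whole directory, or use wildcards for partial extraction (GNU tar)
tar -xf etc.tar etc/apt/
tar -xf etc.tar --wildcards 'etc/apt/*.list'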
2. Extract it with the Archive Manager
Open the tar in Archive Manager from Nautilus, go down into the folder hierarchy to find
the file you need, and extract it.
On a server or command-line system, use a text-based file manager such as Midnight
Commander ( mc ) to accomplish the same.
3. Using Nautilus/Archive-Mounter
Right-click the tar in Nautilus, and select Open with ArchiveMounter.
The tar will now appear similar to a removable drive on the left, and you can
explore/navigate it like a normal drive and drag/copy/paste any file(s) you need to any
destination.
Midnight Commander uses a virtual filesystem ( VFS ) for displaying
files, such as the contents of a .tar.gz archive or of an .iso image.
This is configured in mc.ext with rules such as this one ( Open is
Enter , View is F3 ):
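The rule itself is not shown above; from memory, an mc.ext entry for gzipped tarballs looks roughly like the sketch below (treat it as an approximation rather than the exact shipped rule):
regex/\.t(ar\.gz|gz)$
	Open=%cd %p/utar://
	View=%view{ascii} tar tzvf %f 2>/dev/null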
If you've used an *nix system, at some point you've stumbled
upon Midnight Commander , a
file manager based on the venerable Norton Commander. You're probably familiar with the basic
operations ( F5 for copying, F6 for moving, F8 for
deleting, etc.) and how to switch panels (ummm, the Tab key). But mc
offers so much more than that. This article aims to show all the useful (YMMV) shortcuts and
functionalities that are often overlooked. Most of them can be accessed using the menu (
F9 ), but who has the time to do that?
Before we get started, let's establish some facts. This article was written and tested on
the following software:
Midnight Commander 4.8.13
GNU bash 4.2.53
Oh, and make sure you're running a modern and UTF-8 friendly terminal - for example,
rxvt-unicode.
Hold your horses
There's actually one thing I'd recommend doing before you run mc .
mc has the ability to exit to its current directory. Meaning, you can navigate the
filesystem using mc (sometimes it's easier than cd ing into that one
directory buried deep down somewhere ) and when you quit mc (
F10 ), your shell will automagically cd to that directory. This is
done thanks to the mc-wrapper script that should be bundled with your installation
of mc . The exact location is dependent on your distribution - in mine (Gentoo)
it's /usr/libexec/mc/ , in Ubuntu supposedly it's in
/usr/share/mc/bin/ . Once found, modify your ~/.bashrc :
alias mc='. /usr/libexec/mc/mc-wrapper.sh'
Restart your shell, launch mc , change to another directory, exit and your
shell should be set to that new directory.
Selecting files
Insert ( Ctrl + t alternatively) - select files (for example,
for copying, moving or deleting).
+ - select files based on a pattern.
\ - unselect files based on a pattern.
* - reverse selection. If nothing was selected, all files will get
selected.
Accessing the shell
There's a shell awaiting your command at the bottom of the screen - just start typing
(when no other command dialog is open, of course).
Since Tab is bound to switching panels (or moving the focus in dialogs), you
have to use Esc Tab to use autocompletion. Hit it twice to get all the possible
completions (just like in a shell). This works in dialogs too.
If you want to inspect the output of the command, type some input, or just prefer a bigger
console, there's no need to quit mc . Just hit Ctrl + o - the effect will
be similar to putting mc in the background but with a nice perk. Your current
working directory from mc will be passed on to the shell and vice versa! Hit
Ctrl + o again to return to mc .
Ctrl + Enter or Alt + Enter - copy the currently selected
file's name to the shell.
Ctrl + Shift + Enter - same as above, but the full path is copied.
Internal viewer ( F3 ) and editor ( F4 )
The internal viewer has many built in modes for "previewing" the content of the file. Try
"viewing" a binary, an archive, a DOC document or an image. In some cases, external programs
are needed in order for this "previewing" to work.
If you want to preview the "raw" contents of the file, hit Shift + F3 .
While the internal viewer and editor are powerful, sometimes you want to use your
preferred software ( cough vim cough ). You can do so by setting the
PAGER (for viewer) and EDITOR (for editor) variables (for example,
in your ~/.bashrc file). Then toggle the Options -> Configuration ->
Use internal edit/view option (access the top menu by pressing F9 ).
Panels
Alt + , - switch mc 's layout from left-right to top-bottom.
Mind = blown. Useful for operating on files with long names.
Alt + t - switch the panel's listing mode in a loop: default, brief, long,
user-defined. "long" is especially useful, because it maximises one panel so that it takes
full width of the window and longer filenames fit on screen.
Alt + i - synchronize the active panel with the other panel. That is, show
the current directory in the other panel.
Ctrl + u - swap panels.
Alt + o - if the currently selected file is a directory, load that directory
on the other panel and move the selection to the next file. If the currently selected file is
not a directory, load the parent directory on the other panel and moves the selection to the
next file. This is useful for quick checking the contents of a list of directories.
Ctrl + PgUp (or just left arrow, if you've enabled Lynx-like
motion , see later) - move to the parent directory.
Alt + Shift + h - show the directory history. Might be easier to navigate
than going back one entry at a time.
Alt + y - move to the previous directory in history.
Alt + u - move to the next directory in history.
Searching files
Alt + ? - shows the full Find dialog.
Alt + s or Ctrl + s - quick search mode. Start typing and the
selection will move to the first matching file. Press the shortcut again to jump to another
match. Use wildcards ( * , ? ) for easier matching.
Common actions
Ctrl + Space - calculate the size of the selected directories. Press this
shortcut when the selection is on .. to calculate the size of all the
directories in the current directory.
Ctrl + x s (that is press Ctrl + x , let it go and then press s ) -
create a symbolic link (change s to l for a hardlink). I find it
very useful and intuitive - the link will, of course, be created in the other panel. You can
change its destination and name, like with any other file operation.
Ctrl + x c - open the chmod dialog.
Ctrl + x o - open the chown dialog.
Virtual File System (VFS)
mc has a concept known as Virtual File System. Try "entering" an archive (
*.tar.gz , *.rpm or even *.jar ) - you'll be able to
browse the contents of the archive like a normal folder, without unpacking it first. You
extract selected files from the archive by just copying them to the other panel. Bonus points:
try "entering" a *.patch file.
This concept is even more powerful when you realize that remote locations can be viewed the
same way. A quick way to browse an FTP location is to just cd to it: cd
ftp://mirrors.tera-byte.com/pub/gentoo (first Gentoo FTP mirror I found). You'll be able
to interact with files as you normally do. To exit this remote location, cd to a
local directory. Just typing cd will suffice as it will take you to your home
directory.
VFS works for SFTP and Samba shares too. Check the manpages for more information on how to
specify user/pass, etc.
Useful options
Configuration
Verbose operation and Compute totals - so that operations
like copy/move have a more detailed progress dialogs.
Layout
Equal split - uncheck to define your own ratio for panels. Maybe you
prefer one panel bigger than the other? Useful especially if you keep one of the panels
in tree mode (or maybe info/quick view, too).
Uncheck Hintbar visible - one more line available, one less line of
noise.
Panel options
Show backup files and Show hidden files - I keep both
enabled, as I often work with configuration files, etc.
Lynx-like motion - mentioned above, makes left arrow go to parent
directory, while the right arrow enters the directory under selection. Faster than
Home , Enter , Home , Enter , etc.
This option is quite smart: if the shell command line is not empty, the arrows
work as usual and move the cursor within the command line.
File highlight -> File types is useful, as it uses a
different color for, for example, executable files. Permissions , for me, is
not that useful, but I can definitely see its use, for example, for sysadmins.
Appearance
Only one option here, Skins . You can check out different skins shipped
with mc - just select one from the list. I prefer gotar ,
because it plays well with my solarized terminal colors.
Useful tip - set up a different skin when logged in as the root user.
It'll be easier to differentiate between root's and normal user's session, when you're
swapping between them (as is often the case).
Bonus assignments
Define your own listing mode ( Right/Left -> Listing mode...
-> User defined ). Hit F1 to see available columns and
options.
Play around in tree mode: Right/Left -> Tree or
Command -> Directory tree .
Compare directories ( Ctrl + x d )
Fill up the directory hotlist ( Ctrl + \ )
Well, that was a lot to take in. Of course, this list is not complete (that's what man
mc is there for), but I've selected the commands and functionalities that are the most
useful to me . Embrace the ones you find useful, forget the rest and learn about the
other ones I've missed!
Midnight Commander how to compress a file/directory; Make a tar archive with midnight
commander
To compress a file in Midnight Commander (e.g. to make a tar.gz archive), navigate
to the directory you want to pack and press 'F2'. This will bring up the 'User menu'. Choose
the option 'Compress the current subdirectory'. This will compress the WHOLE directory you're
currently in - not the highlighted directory.
Type the following vmstat command: # vmstat
# vmstat 1 5
... ... ...
Vivek Gite is the creator of nixCraft and a seasoned sysadmin, DevOps engineer, and a
trainer for the Linux operating system/Unix shell scripting. Get the latest tutorials on
SysAdmin, Linux/Unix and open source topics via RSS/XML feed or weekly email newsletter.
The choice of shell as a programming language is strange, but the idea is good...
Notable quotes:
"... The tool is developed by Igor Chubin, also known for its console-oriented weather forecast service wttr.in , which can be used to retrieve the weather from the console using only cURL or Wget. ..."
While it does have its own cheat sheet repository too, the project is actually concentrated around the creation of a unified mechanism
to access well developed and maintained cheat sheet repositories.
The tool is developed by Igor Chubin, also known for its
console-oriented weather forecast
service wttr.in , which can be used to retrieve the weather from the console using
only cURL or Wget.
It's worth noting that cheat.sh is not new. In fact it had its initial commit around May, 2017, and is a very popular repository
on GitHub. But I personally only came across it recently, and I found it very useful, so I figured there must be some Linux Uprising
readers who are not aware of this cool gem.
cheat.sh features & more
cheat.sh major features:
Supports 58 programming
languages , several DBMSes, and more than 1000 most important UNIX/Linux commands
Very fast, returns answers within 100ms
Simple curl / browser interface
An optional command line client (cht.sh) is available, which allows you to quickly search cheat sheets and easily copy
snippets without leaving the terminal
Can be used from code editors, allowing inserting code snippets without having to open a web browser, search for the code,
copy it, then return to your code editor and paste it. It supports Vim, Emacs, Visual Studio Code, Sublime Text and IntelliJ Idea
Comes with a special stealth mode in which any text you select (adding it into the selection buffer of X Window System
or into the clipboard) is used as a search query by cht.sh, so you can get answers without touching any other keys
The command line client features a special shell mode with a persistent queries context and readline support. It also has a query
history, it integrates with the clipboard, supports tab completion for shells like Bash, Fish and Zsh, and it includes the stealth
mode I mentioned in the cheat.sh features.
The web, curl and cht.sh (command line) interfaces all make use of https://cheat.sh/
but if you prefer, you can self-host it .
It should be noted that each editor plugin supports a different feature set (configurable server, multiple answers, toggle comments,
and so on). You can view a feature comparison of each cheat.sh editor plugin on the
Editors integration section of the project's
GitHub page.
Want to contribute a cheat sheet? See the cheat.sh guide on
editing or adding a new cheat sheet.
cheat.sh curl / command line client usage examples
Examples of using cheat.sh via the curl interface (this requires having curl installed, as you'd expect) from the command line:
Show the tar command cheat sheet:
curl cheat.sh/tar
Example with output:
$ curl cheat.sh/tar
# To extract an uncompressed archive:
tar -xvf /path/to/foo.tar
# To create an uncompressed archive:
tar -cvf /path/to/foo.tar /path/to/foo/
# To extract a .gz archive:
tar -xzvf /path/to/foo.tgz
# To create a .gz archive:
tar -czvf /path/to/foo.tgz /path/to/foo/
# To list the content of an .gz archive:
tar -ztvf /path/to/foo.tgz
# To extract a .bz2 archive:
tar -xjvf /path/to/foo.tgz
# To create a .bz2 archive:
tar -cjvf /path/to/foo.tgz /path/to/foo/
# To extract a .tar in specified Directory:
tar -xvf /path/to/foo.tar -C /path/to/destination/
# To list the content of an .bz2 archive:
tar -jtvf /path/to/foo.tgz
# To create a .gz archive and exclude all jpg,gif,... from the tgz
tar czvf /path/to/foo.tgz --exclude=\*.{jpg,gif,png,wmv,flv,tar.gz,zip} /path/to/foo/
# To use parallel (multi-threaded) implementation of compression algorithms:
tar -z ... -> tar -Ipigz ...
tar -j ... -> tar -Ipbzip2 ...
tar -J ... -> tar -Ipixz ...
cht.sh also works instead of cheat.sh:
curl cht.sh/tar
Want to search for a keyword in all cheat sheets? Use:
curl cheat.sh/~keyword
List the Python programming language cheat sheet for random list :
curl cht.sh/python/random+list
Example with output:
$ curl cht.sh/python/random+list
# python - How to randomly select an item from a list?
#
# Use random.choice
# (https://docs.python.org/2/library/random.htmlrandom.choice):
import random
foo = ['a', 'b', 'c', 'd', 'e']
print(random.choice(foo))
# For cryptographically secure random choices (e.g. for generating a
# passphrase from a wordlist), use random.SystemRandom
# (https://docs.python.org/2/library/random.htmlrandom.SystemRandom)
# class:
import random
foo = ['battery', 'correct', 'horse', 'staple']
secure_random = random.SystemRandom()
print(secure_random.choice(foo))
# [Pēteris Caune] [so/q/306400] [cc by-sa 3.0]
Replace python with some other programming language supported by cheat.sh, and random+list with the cheat
sheet you want to show.
Want to eliminate the comments from your answer? Add ?Q at the end of the query (below is an example using the same
/python/random+list):
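curl cht.sh/python/random+list?Q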
For more flexibility and tab completion you can use cht.sh, the command line cheat.sh client; you'll find instructions for how to
install it further down this article. Examples of using the cht.sh command line client:
Show the tar command cheat sheet:
cht.sh tar
List the Python programming language cheat sheet for random list :
cht.sh python random list
There is no need to use quotes with multiple keywords.
You can start the cht.sh client in a special shell mode using:
cht.sh --shell
And then you can start typing your queries. Example:
$ cht.sh --shell
cht.sh> bash loop
If all your queries are about the same programming language, you can start the client in the special shell mode, directly in that
context. As an example, start it with the Bash context using:
cht.sh --shell bash
Example with output:
$ cht.sh --shell bash
cht.sh/bash> loop
...........
cht.sh/bash> switch case
Want to copy the previously listed answer to the clipboard? Type c , then press Enter to copy the whole
answer, or type C and press Enter to copy it without comments.
Type help in the cht.sh interactive shell mode to see all available commands. Also look under the
Usage section from the cheat.sh GitHub project page for more
options and advanced usage.
How to install cht.sh command line client
You can use cheat.sh in a web browser, from the command line with the help of curl and without having to install anything else, as
explained above, as a code editor plugin, or using its command line client which has some extra features, which I already mentioned.
The steps below are for installing this cht.sh command line client.
If you'd rather install a code editor plugin for cheat.sh, see the
Editors integration page.
1. Install dependencies.
To install the cht.sh command line client, the curl command line tool will be used, so this needs to be installed
on your system. Another dependency is rlwrap , which is required by the cht.sh special shell mode. Install these dependencies
as follows.
Debian, Ubuntu, Linux Mint, Pop!_OS, and any other Linux distribution based on Debian or Ubuntu:
sudo apt install curl rlwrap
Fedora:
sudo dnf install curl rlwrap
Arch Linux, Manjaro:
sudo pacman -S curl rlwrap
openSUSE:
sudo zypper install curl rlwrap
The packages seem to be named the same on most (if not all) Linux distributions, so if your Linux distribution is not on this list,
just install the curl and rlwrap packages using your distro's package manager.
2. Download and install the cht.sh command line interface.
You can install this either for your user only (so only you can run it), or for all users:
Install it for your user only. The command below assumes you have a ~/.bin folder added to your PATH
(and the folder exists). If you have some other local folder in your PATH where you want to install cht.sh, change
install path in the commands:
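The original commands are not reproduced above; a sketch following the same pattern as the global install below, assuming ~/.bin is the local folder on your PATH:
curl https://cht.sh/:cht.sh > ~/.bin/cht.sh
chmod +x ~/.bin/cht.sh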
Install it for all users (globally, in /usr/local/bin ):
curl https://cht.sh/:cht.sh | sudo tee /usr/local/bin/cht.sh
sudo chmod +x /usr/local/bin/cht.sh
If the first command appears to have frozen displaying only the cURL output, press the Enter key and you'll be prompted
to enter your password in order to save the file to /usr/local/bin .
You may also download and install the cheat.sh command completion for Bash or Zsh:
In technical terms, "/dev/null" is a virtual device file. As far as programs are concerned, these are treated just like real files.
Utilities can request data from this kind of source, and the operating system feeds them data. But, instead of reading from disk,
the operating system generates this data dynamically. An example of such a file is "/dev/zero."
In this case, however, you will write to a device file. Whatever you write to "/dev/null" is discarded, forgotten, thrown into
the void. To understand why this is useful, you must first have a basic understanding of standard output and standard error in Linux
or *nix type operating systems.
A command-line utility can generate two types of output. Standard output is sent to stdout. Errors are sent to stderr.
By default, stdout and stderr are associated with your terminal window (or console). This means that anything sent to stdout and
stderr is normally displayed on your screen. But through shell redirections, you can change this behavior. For example, you can redirect
stdout to a file. This way, instead of displaying output on the screen, it will be saved to a file for you to read later – or you
can redirect stdout to a physical device, say, a digital LED or LCD display.
Since there are two types of output, standard output and standard error, the first use case is to filter out one type or the other.
It's easier to understand through a practical example. Let's say you're looking for a string in "/sys" to find files that refer to
power settings.
grep -r power /sys/
There will be a lot of files that a regular, non-root user cannot read. This will result in many "Permission denied" errors.
These clutter the output and make it harder to spot the results that you're looking for. Since "Permission denied"
errors are part of stderr, you can redirect them to "/dev/null."
grep -r power /sys/ 2>/dev/null
As you can see, this is much easier to read.
In other cases, it might be useful to do the reverse: filter out standard output so you can only see errors.
ping google.com 1>/dev/null
The screenshot above shows that, without redirecting, ping displays its normal output when it can reach the destination
machine. In the second command, nothing is displayed while the network is online, but as soon as it gets disconnected, only error
messages are displayed.
You can redirect both stdout and stderr to two different locations.
ping google.com 1>/dev/null 2>error.log
In this case, stdout messages won't be displayed at all, and error messages will be saved to the "error.log" file.
Redirect All Output to /dev/null
Sometimes it's useful to get rid of all output. There are two ways to do this.
grep -r power /sys/ >/dev/null 2>&1
The string >/dev/null means "send stdout to /dev/null," and the second part, 2>&1 , means send stderr
to stdout. In this case you have to refer to stdout as "&1" instead of simply "1." Writing "2>1" would just redirect stdout to a
file named "1."
What's important to note here is that the order is important. If you reverse the redirect parameters like this:
grep -r power /sys/ 2>&1 >/dev/null
it won't work as intended. That's because as soon as 2>&1 is interpreted, stderr is sent to stdout and displayed
on screen. Next, stdout is suppressed when sent to "/dev/null." The final result is that you will see errors on the screen instead
of suppressing all output. If you can't remember the correct order, there's a simpler redirect that is much easier to type:
grep -r power /sys/ &>/dev/null
In this case, &>/dev/null is equivalent to saying "redirect both stdout and stderr to this location."
Other Examples Where It Can Be Useful to Redirect to /dev/null
Say you want to see how fast your disk can read sequential data. The test is not extremely accurate but accurate enough. You can
use dd for this, but dd either outputs to stdout or can be instructed to write to a file. With of=/dev/null
you can tell dd to write to this virtual file. You don't even have to use shell redirections here. if= specifies
the location of the input file to be read; of= specifies the name of the output file, where to write.
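A rough sketch of such a read test (the device name /dev/sda is an assumption for your first disk; reading a raw device usually requires root, and status=progress is a GNU dd option):
sudo dd if=/dev/sda of=/dev/null bs=1M count=1024 status=progress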
In some scenarios, you may want to see how fast you can download from a server. But you don't want to write to your disk unnecessarily.
Simply enough, don't write to a regular file, write to "/dev/null."
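For example, with either wget or curl (the URL is a placeholder):
wget -O /dev/null https://example.com/large-file.iso
curl -o /dev/null https://example.com/large-file.iso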
Type the following netstat command sudo netstat -tulpn | grep LISTEN
... ... ...
For example, TCP port 631 is opened by the cupsd process, and cupsd is only listening on the loopback
address (127.0.0.1). Similarly, TCP port 22 is opened by the sshd process, and sshd is listening on all IP
addresses for ssh connections:
Proto Recv-Q Send-Q Local Address Foreign Address State User Inode PID/Program name
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 0 43385 1821/cupsd
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 0 44064 1823/sshd
Where,
-t : All TCP ports
-u : All UDP ports
-l : Display listening server sockets
-p : Show the PID and name of the program to which each socket belongs
-n : Don't resolve names
| grep LISTEN : Only display open ports by applying grep command filter.
Use ss to list open ports
The ss command is used to dump socket statistics. It allows showing information similar to
netstat. It can display more TCP and state information than other tools. The syntax is: sudo ss -tulpn
... ... ...
"... There is effectively no CPU time spent tarring, so it wouldn't help much. The tar format is just a copy of the input file with header blocks in between files. ..."
"... You can also use the tar flag "--use-compress-program=" to tell tar what compression program to use. ..."
I normally compress using tar zcvf and decompress using tar zxvf
(using gzip due to habit).
I've recently gotten a quad core CPU with hyperthreading, so I have 8 logical cores, and I
notice that many of the cores are unused during compression/decompression.
Is there any way I can utilize the unused cores to make it faster?
The solution proposed by Xiong Chiamiov above works beautifully. I had just backed up my
laptop with .tar.bz2 and it took 132 minutes using only one cpu thread. Then I compiled and
installed tar from source: gnu.org/software/tar I included the options mentioned
in the configure step: ./configure --with-gzip=pigz --with-bzip2=lbzip2 --with-lzip=plzip I
ran the backup again and it took only 32 minutes. That's better than 4X improvement! I
watched the system monitor and it kept all 4 cpus (8 threads) flatlined at 100% the whole
time. THAT is the best solution. – Warren Severin
Nov 13 '17 at 4:37
You can use pigz instead of gzip, which
does gzip compression on multiple cores. Instead of using the -z option, you would pipe it
through pigz:
tar cf - paths-to-archive | pigz > archive.tar.gz
By default, pigz uses the number of available cores, or eight if it could not query that.
You can ask for more with -p n, e.g. -p 32. pigz has the same options as gzip, so you can
request better compression with -9. E.g.
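For example, combining the options mentioned above:
tar cf - paths-to-archive | pigz -9 -p 32 > archive.tar.gz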
pigz does use multiple cores for decompression, but only with limited improvement over a
single core. The deflate format does not lend itself to parallel decompression.
The
decompression portion must be done serially. The other cores for pigz decompression are used
for reading, writing, and calculating the CRC. When compressing on the other hand, pigz gets
close to a factor of n improvement with n cores.
There is effectively no CPU time spent tarring, so it wouldn't help much. The tar format is just a copy of the input file
with header blocks in between files.
Unfortunately by doing so the concurrent feature of pigz is lost. You can see for yourself by
executing that command and monitoring the load on each of the cores. – Valerio
Schiavoni
Aug 5 '14 at 22:38
I prefer tar cf - dir_to_zip | pv | pigz > tar.file ; pv helps me estimate progress, you
can skip it. But it is still easier to write and remember. – Offenso
Jan 11 '17 at 17:26
-I, --use-compress-program PROG
filter through PROG (must accept -d)
You can use multithread version of archiver or compressor utility.
Most popular multithread archivers are pigz (instead of gzip) and pbzip2 (instead of bzip2). For instance:
$ tar -I pbzip2 -cf OUTPUT_FILE.tar.bz2 paths_to_archive
$ tar --use-compress-program=pigz -cf OUTPUT_FILE.tar.gz paths_to_archive
The archiver must accept -d. If your replacement utility doesn't have this parameter and/or you need
to specify additional parameters, then use pipes (add parameters if necessary):
$ tar cf - paths_to_archive | pbzip2 > OUTPUT_FILE.tar.bz2
$ tar cf - paths_to_archive | pigz > OUTPUT_FILE.tar.gz
Input and output of singlethread and multithread are compatible. You can compress using
multithread version and decompress using singlethread version and vice versa.
p7zip
For p7zip for compression you need a small shell script like the following:
#!/bin/sh
case $1 in
-d) 7za -txz -si -so e;;
*) 7za -txz -si -so a .;;
esac 2>/dev/null
Save it as 7zhelper.sh. Here the example of usage:
$ tar -I 7zhelper.sh -cf OUTPUT_FILE.tar.7z paths_to_archive
$ tar -I 7zhelper.sh -xf OUTPUT_FILE.tar.7z
xz
Regarding multithreaded XZ support. If you are running version 5.2.0 or above of XZ Utils,
you can utilize multiple cores for compression by setting -T or
--threads to an appropriate value via the environmental variable XZ_DEFAULTS
(e.g. XZ_DEFAULTS="-T 0" ).
This is a fragment of man for 5.1.0alpha version:
Multithreaded compression and decompression are not implemented yet, so this option has
no effect for now.
However this will not work for decompression of files that haven't also been compressed
with threading enabled. From man for version 5.2.2:
Threaded decompression hasn't been implemented yet. It will only work on files that
contain multiple blocks with size information in block headers. All files compressed in
multi-threaded mode meet this condition, but files compressed in single-threaded mode don't
even if --block-size=size is used.
Recompiling with replacement
If you build tar from source, you can recompile it with parameters that substitute the multithreaded tools:
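These are the same configure flags quoted earlier in this thread:
./configure --with-gzip=pigz --with-bzip2=lbzip2 --with-lzip=plzip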
After recompiling tar with these options you can check the output of tar's help:
$ tar --help | grep "lbzip2\|plzip\|pigz"
-j, --bzip2 filter the archive through lbzip2
--lzip filter the archive through plzip
-z, --gzip, --gunzip, --ungzip filter the archive through pigz
I just found pbzip2 and
mpibzip2 . mpibzip2 looks very
promising for clusters or if you have a laptop and a multicore desktop computer for instance.
– user1985657
Apr 28 '15 at 20:57
find /my/path/ -type f -name "*.sql" -o -name "*.log" -exec
This command will look for the files you want to archive, in this case
/my/path/*.sql and /my/path/*.log . Add as many -o -name
"pattern" as you want.
-exec will execute the next command using the results of find :
tar
Step 2: tar
tar -P --transform='s@/my/path/@@g' -cf - {} +
--transform is a simple string replacement parameter. It will strip the path
of the files from the archive so the tarball's root becomes the current directory when
extracting. Note that you can't use -C option to change directory as you'll lose
benefits of find : all files of the directory would be included.
-P tells tar to use absolute paths, so it doesn't trigger the
warning "Removing leading `/' from member names". The leading '/' will be removed by
--transform anyway.
-cf - tells tar to create the archive and write it to standard output; the tarball name is given at the end of the pipe
{} + passes every file that find found to the command
Step 3:
pigz
pigz -9 -p 4
Use as many parameters as you want. In this case -9 is the compression level
and -p 4 is the number of cores dedicated to compression. If you run this on a
heavily loaded webserver, you probably don't want to use all available cores.
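Putting the three steps together, the full pipeline sketched above looks roughly like this (the escaped parentheses group the -name tests so -exec applies to both patterns, and archive.tar.gz is a placeholder output name):
find /my/path/ -type f \( -name "*.sql" -o -name "*.log" \) \
    -exec tar -P --transform='s@/my/path/@@g' -cf - {} + | pigz -9 -p 4 > archive.tar.gz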
/run is home to a wide assortment of data. For example, if you take a look at /run/user, you
will notice a group of directories with numeric names.
$ ls /run/user
1000 1002 121
A long file listing will clarify the significance of these numbers.
$ ls -l
total 0
drwx------ 5 shs shs 120 Jun 16 12:44 1000
drwx------ 5 dory dory 120 Jun 16 16:14 1002
drwx------ 8 gdm gdm 220 Jun 14 12:18 121
This allows us to see that each directory is related to a user who is currently logged in or
to the display manager, gdm. The numbers represent their UIDs. The content of each of these
directories are files that are used by running processes.
The /run/user files represent only a very small portion of what you'll find in /run. There
are lots of other files, as well. A handful contain the process IDs for various system
processes.
Cat can also number a file's lines during output. There are two options to do this, as shown in the help documentation: -b, --number-nonblank
number nonempty output lines, overrides -n
-n, --number number all output lines
If I use the -b command with the hello.world file, the output will be numbered like this:
$ cat -b hello.world
1 Hello World !
In the example above, there is an empty line. We can determine why this empty line appears by using the -n argument:
$ cat -n hello.world
1 Hello World !
2
$
Now we see that there is an extra empty line. These two arguments are operating on the final output rather than the file contents,
so if we were to use the -n option with both files, numbering will count lines as follows:
$ cat -n hello.world goodbye.world
1 Hello World !
2
3 Good Bye World !
4
$
One other option that can be useful is -s for squeeze-blank . This argument tells cat to reduce repeated empty line output
down to one line. This is helpful when reviewing files that have a lot of empty lines, because it effectively fits more text on the
screen. Suppose I have a file with three lines that are spaced apart by several empty lines, such as in this example, greetings.world
:
$ cat greetings.world
Greetings World !
Take me to your Leader !
We Come in Peace !
$
Using the -s option saves screen space:
$ cat -s greetings.world
Cat is often used to copy contents of one file to another file. You may be asking, "Why not just use cp ?" Here is how I could
create a new file, called both.files , that contains the contents of the hello and goodbye files:
$ cat hello.world goodbye.world > both.files
$ cat both.files
Hello World !
Good Bye World !
$
zcat
There is another variation on the cat command known as zcat . This command is capable of displaying files that have been compressed
with Gzip without needing to uncompress the files with the gunzip
command. As an aside, this also preserves disk space, which is the entire reason files are compressed!
The zcat command is a bit more exciting because it can be a huge time saver for system administrators who spend a lot of time
reviewing system log files. Where can we find compressed log files? Take a look at /var/log on most Linux systems. On my system,
/var/log contains several files, such as syslog.2.gz and syslog.3.gz . These files are the result of the log
management system, which rotates and compresses log files to save disk space and prevent logs from growing to unmanageable file sizes.
Without zcat , I would have to uncompress these files with the gunzip command before viewing them. Thankfully, I can use zcat :
$ cd /var/log
$ ls *.gz
syslog.2.gz  syslog.3.gz
$
$ zcat syslog.2.gz | more
Jan 30 00:02:26 workstation systemd[1850]: Starting GNOME Terminal Server...
Jan 30 00:02:26 workstation dbus-daemon[1920]: [session uid=2112 pid=1920] Successful
ly activated service 'org.gnome.Terminal'
Jan 30 00:02:26 workstation systemd[1850]: Started GNOME Terminal Server.
Jan 30 00:02:26 workstation org.gnome.Terminal.desktop[2059]: # watch_fast: "/org/gno
me/terminal/legacy/" (establishing: 0, active: 0)
Jan 30 00:02:26 workstation org.gnome.Terminal.desktop[2059]: # unwatch_fast: "/org/g
nome/terminal/legacy/" (active: 0, establishing: 1)
Jan 30 00:02:26 workstation org.gnome.Terminal.desktop[2059]: # watch_established: "/
org/gnome/terminal/legacy/" (establishing: 0)
--More--
We can also pass both files to zcat if we want to review both of them uninterrupted. Due to how log rotation works, you need to
pass the filenames in reverse order to preserve the chronological order of the log contents:
$ ls -l *.gz
-rw-r----- 1 syslog adm 196383 Jan 31 00:00 syslog.2.gz
-rw-r----- 1 syslog adm 1137176 Jan 30 00:00 syslog.3.gz
$ zcat syslog.3.gz syslog.2.gz | more
The cat command seems simple but is very useful. I use it regularly. You also don't need to feed or pet it like a real cat. As
always, I suggest you review the man pages ( man cat ) for the cat and zcat commands to learn more about how they can be used. You
can also use the --help argument for a quick synopsis of command line arguments.
Interesting article but please don't misuse cat to pipe to more......
I am trying to teach people to use less pipes and here you go abusing cat to pipe to other commands. IMHO, 99.9% of the time
this is not necessary!
Instead of "cat file | command", most of the time you can use "command file" (yes, I am an old dinosaur
from a time where memory was very expensive and forking multiple commands could fill it all up)
I use Tilda (drop-down terminal) on Ubuntu as my "command central" - pretty much the way
others might use GNOME Do, Quicksilver or Launchy.
However, I'm struggling with how to completely detach a process (e.g. Firefox) from the
terminal it's been launched from - i.e. prevent such a (non-)child process from being
terminated when the originating terminal is closed, and from
"polluting" the originating terminal via STDOUT/STDERR.
For example, in order to start Vim in a "proper" terminal window, I have tried a simple
script like the following:
exec gnome-terminal -e "vim $@" &> /dev/null &
However, that still causes pollution (also, passing a file name doesn't seem to work).
First of all; once you've started a process, you can background it by first stopping it (hit
Ctrl - Z ) and then typing bg to let it resume in the
background. It's now a "job", and its stdout / stderr /
stdin are still connected to your terminal.
You can start a process as backgrounded immediately by appending a "&" to the end of
it:
firefox &
To run it in the background silenced, use this:
firefox </dev/null &>/dev/null &
Some additional info:
nohup is a program you can use to run your application with such that its
stdout/stderr can be sent to a file instead and such that closing the parent script won't
SIGHUP the child. However, you need to have had the foresight to have used it before you
started the application. Because of the way nohup works, you can't just apply
it to a running process .
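For instance, a minimal use looks like this (myapp and the log path are placeholders):
$ nohup myapp > /tmp/myapp.log 2>&1 &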
disown is a bash builtin that removes a shell job from the shell's job list.
What this basically means is that you can't use fg , bg on it
anymore, but more importantly, when you close your shell it won't hang or send a
SIGHUP to that child anymore. Unlike nohup , disown is
used after the process has been launched and backgrounded.
What you can't do, is change the stdout/stderr/stdin of a process after having
launched it. At least not from the shell. If you launch your process and tell it that its
stdout is your terminal (which is what you do by default), then that process is configured to
output to your terminal. Your shell has no business with the processes' FD setup, that's
purely something the process itself manages. The process itself can decide whether to close
its stdout/stderr/stdin or not, but you can't use your shell to force it to do so.
To manage a background process' output, you have plenty of options from scripts, "nohup"
probably being the first to come to mind. But for interactive processes you start but forgot
to silence ( firefox < /dev/null &>/dev/null & ) you can't do
much, really.
I recommend you get GNU screen . With screen you can just close your running
shell when the process' output becomes a bother and open a new one ( ^Ac ).
Oh, and by the way, don't use " $@ " where you're using it.
$@ means, $1 , $2 , $3 ..., which
would turn your command into:
gnome-terminal -e "vim $1" "$2" "$3" ...
That's probably not what you want because -e only takes one argument. Use $1
to show that your script can only handle one argument.
It's really difficult to get multiple arguments working properly in the scenario that you
gave (with the gnome-terminal -e ) because -e takes only one
argument, which is a shell command string. You'd have to encode your arguments into one. The
best and most robust, but rather cludgy, way is like so:
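As an illustration of the idea (not necessarily the original answer's exact code, and not bulletproof for every possible filename), printf %q can encode each argument into a single, safely quoted command string for -e :
exec gnome-terminal -e "vim $(printf '%q ' "$@")" &> /dev/null &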
Reading these answers, I was under the initial impression that issuing nohup
<command> & would be sufficient. Running zsh in gnome-terminal, I found that
nohup <command> & did not prevent my shell from killing child
processes on exit. Although nohup is useful, especially with non-interactive
shells, it only guarantees this behavior if the child process does not reset its handler for
the SIGHUP signal.
In my case, nohup should have prevented hangup signals from reaching the
application, but the child application (VMWare Player in this case) was resetting its
SIGHUP handler. As a result when the terminal emulator exits, it could still
kill your subprocesses. This can only be resolved, to my knowledge, by ensuring that the
process is removed from the shell's jobs table. If nohup is overridden with a
shell builtin, as is sometimes the case, this may be sufficient, however, in the event that
it is not...
disown is a shell builtin in bash , zsh , and
ksh93 ,
<command> &
disown
or
<command> & disown
if you prefer one-liners (in bash, the & already terminates the command, so adding ; there would be a syntax error). This has the generally desirable effect of removing the
subprocess from the jobs table. This allows you to exit the terminal emulator without
accidentally signaling the child process at all. No matter what the SIGHUP
handler looks like, this should not kill your child process.
After the disown, the process is still a child of your terminal emulator (play with
pstree if you want to watch this in action), but after the terminal emulator
exits, you should see it attached to the init process. In other words, everything is as it
should be, and as you presumably want it to be.
What to do if your shell does not support disown ? I'd strongly advocate
switching to one that does, but in the absence of that option, you have a few choices.
screen and tmux can solve this problem, but they are much
heavier weight solutions, and I dislike having to run them for such a simple task. They are
much more suitable for situations in which you want to maintain a tty, typically on a
remote machine.
For many users, it may be desirable to see if your shell supports a capability like
zsh's setopt nohup . This can be used to specify that SIGHUP
should not be sent to the jobs in the jobs table when the shell exits. You can either apply
this just before exiting the shell, or add it to shell configuration like
~/.zshrc if you always want it on.
Find a way to edit the jobs table. I couldn't find a way to do this in
tcsh or csh , which is somewhat disturbing.
Write a small C program to fork off and exec() . This is a very poor
solution, but the source should only consist of a couple dozen lines. You can then pass
commands as commandline arguments to the C program, and thus avoid a process specific entry
in the jobs table.
I've been using number 2 for a very long time, but number 3 works just as well. Also,
disown has a 'nohup' flag of '-h', can disown all processes with '-a', and can disown all
running processes with '-ar'.
Silencing is accomplished by '$COMMAND &>/dev/null'.
in tcsh (and maybe in other shells as well), you can use parentheses to detach the process.
Compare this:
> jobs # shows nothing
> firefox &
> jobs
[1] + Running firefox
To this:
> jobs # shows nothing
> (firefox &)
> jobs # still shows nothing
>
This removes firefox from the jobs listing, but it is still tied to the terminal; if you
logged in to this node via 'ssh', trying to log out will still hang the ssh process.
To disassociate from the controlling tty, run the command through a sub-shell, e.g.:
(command) &
When you exit and the terminal is closed, the process is still alive.
Have a look at reptyr ,
which does exactly that. The github page has all the information.
reptyr - A tool for "re-ptying" programs.
reptyr is a utility for taking an existing running program and attaching it to a new
terminal. Started a long-running process over ssh, but have to leave and don't want to
interrupt it? Just start a screen, use reptyr to grab it, and then kill the ssh session
and head on home.
USAGE
reptyr PID
"reptyr PID" will grab the process with id PID and attach it to your current
terminal.
After attaching, the process will take input from and write output to the new
terminal, including ^C and ^Z. (Unfortunately, if you background it, you will still have
to run "bg" or "fg" in the old terminal. This is likely impossible to fix in a reasonable
way without patching your shell.)
EDIT : As Stephane Gimenez said, it's not that simple. It's only allowing you to print to a
different terminal.
You can try to write to this process using /proc . It should be located in
/proc/<pid>/fd/0 , so a simple:
echo "hello" > /proc/PID/fd/0
should do it. I have not tried it, but it should work, as long as this process still has a
valid stdin file descriptor. You can check it with ls -l on /proc/<pid>/fd/ .
if it's a link to /dev/null => it's closed
if it's a link to /dev/pts/X or a socket => it's open
See nohup for more
details about how to keep processes running.
Just ending the command line with & will not completely detach the process,
it will just run it in the background. (With zsh you can use &!
to actually detach it, otherwise you have to disown it later).
When a process runs in the background, it won't receive input from its controlling
terminal anymore. But you can send it back into the foreground with fg and then
it will read input again.
Otherwise, it's not possible to externally change its filedescriptors (including stdin) or
to reattach a lost controlling terminal unless you use debugging tools (see Ansgar's answer , or have a
look at the retty command).
For a few days now I've been successfully running the new Minecraft Bedrock Edition dedicated
server on my Ubuntu 18.04 LTS home server. Because it should be available 24/7 and
automatically start up after boot, I created a systemd service for a detached tmux session:
Everything works as expected but there's one tiny thing that keeps bugging me:
How can I prevent tmux from terminating its whole session when I press
Ctrl+C ? I just want to terminate the Minecraft server process itself instead of
the whole tmux session. When starting the server from the command line in a manually
created tmux session this does work (session stays alive) but not when the session was
brought up by systemd .
When starting the server from the command line in a manually created tmux session this
does work (session stays alive) but not when the session was brought up by systemd
.
The difference between these situations is actually unrelated to systemd. In one case,
you're starting the server from a shell within the tmux session, and when the server
terminates, control returns to the shell. In the other case, you're starting the server
directly within the tmux session, and when it terminates there's no shell to return to, so
the tmux session also dies.
tmux has an option to keep the session alive after the process inside it dies (look for
remain-on-exit in the manpage), but that's probably not what you want: you want
to be able to return to an interactive shell, to restart the server, investigate why it died,
or perform maintenance tasks, for example. So it's probably better to change your command to
this:
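As a hypothetical sketch (the session name, paths, and server binary are made up for illustration), the tmux invocation in the unit would end with '; exec bash' so that an interactive shell takes over when the server stops:
ExecStart=/usr/bin/tmux new-session -d -s minecraft '/opt/minecraft/bedrock_server; exec bash'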
That is, first run the server, and then, after it terminates, replace the process (the
shell which tmux implicitly spawns to run the command, but which will then exit) with
another, interactive shell. (For some other ways to get an interactive shell after the
command exits, see e.g. this question – but note that the
<(echo commands) syntax suggested in the top answer is not available in
systemd unit files.)
In theory, you could reduce the size of sda1, increase the size of the extended partition, shift the contents
of the extended partition down, then increase the size of the PV on the extended partition and you'd have the extra
room.
However, the number of possible things that can go wrong there is just astronomical.
So I'd recommend either buying a second
hard drive (and possibly transferring everything onto it in a more sensible layout, then repartitioning your current drive better)
or just making some bind mounts of various bits and pieces out of /home into / to free up a bit more space.
"... A very common way of using pdsh is to set the environment variable WCOLL to point to the file that contains the list of hosts you want to use in the pdsh command. For example, I created a subdirectory PDSH where I create a file hosts that lists the hosts I want to use ..."
The -w option means I am specifying the node(s) that will run the command. In this case, I specified the IP address
of the node (192.168.1.250). After the list of nodes, I add the command I want to run, which is uname -r in this case.
Notice that pdsh starts the output line by identifying the node name.
If you need to mix rcmd modules in a single command, you can specify which module to use in the command line,
by putting the rcmd module before the node name. In this case, I used ssh and typical ssh syntax.
A very common way of using pdsh is to set the environment variable WCOLL to point to the file that contains the list
of hosts you want to use in the pdsh command. For example, I created a subdirectory PDSH where I create a file
hosts that lists the hosts I want to use:
[laytonjb@home4 ~]$ mkdir PDSH
[laytonjb@home4 ~]$ cd PDSH
[laytonjb@home4 PDSH]$ vi hosts
[laytonjb@home4 PDSH]$ more hosts
192.168.1.4
192.168.1.250
I'm only using two nodes: 192.168.1.4 and 192.168.1.250. The first is my test system (like a cluster head node), and the second
is my test compute node. You can put hosts in the file as you would on the command line separated by commas. Be sure not to put a
blank line at the end of the file because pdsh will try to connect to it. You can put the environment variable WCOLL
in your .bashrc file:
export WCOLL=/home/laytonjb/PDSH/hosts
As before, you can source your .bashrc file, or you can log out and log back in.
Specifying Hosts
I won't list all the several other ways to specify a list of nodes, because the pdsh website
[9] discusses virtually
all of them; however, some of the methods are pretty handy. The simplest way to specify the nodes on the command line is to use
the -w option:
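For example, a sketch using the two test nodes mentioned above:
$ pdsh -w 192.168.1.4,192.168.1.250 uname -r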
In this case, I specified the node names separated by commas. You can also use a range of hosts as follows:
pdsh -w host[1-11]
pdsh -w host[1-4,8-11]
In the first case, pdsh expands the host range to host1, host2, host3, ..., host11. In the second case, it expands the hosts similarly
(host1, host2, host3, host4, host8, host9, host10, host11). You can go to the pdsh website for more information on hostlist expressions
[10] .
Another option is to have pdsh read the hosts from a file other than the one to which WCOLL points. The command shown in
Listing 2 tells
pdsh to take the hostnames from the file /tmp/hosts , which is listed after -w ^ (with no space between
the "^" and the filename). You can also use several host files,
Listing 2 Read Hosts from File
$ more /tmp/hosts
192.168.1.4
$ more /tmp/hosts2
192.168.1.250
$ pdsh -w ^/tmp/hosts,^/tmp/hosts2 uname -r
192.168.1.4: 2.6.32-431.17.1.el6.x86_64
192.168.1.250: 2.6.32-431.11.2.el6.x86_64
The option -w -192.168.1.250 excluded node 192.168.1.250 from the list and only output the information for 192.168.1.4.
You can also exclude nodes using a node file:
or giving a list of hostnames to exclude also works.
More Useful pdsh Commands
Now I can shift into second gear and try some fancier pdsh tricks. First, I want to run a more complicated command on all of the
nodes ( Listing 3
). Notice that I put the entire command in quotes. This means the entire command is run on each node, including the first (
cat /proc/cpuinfo ) and second ( grep bogomips ) parts.
Listing 3 Quotation Marks 1
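The listing itself isn't reproduced here, but based on the description it would look something like this (assuming WCOLL is set as shown earlier, so no -w is needed):
$ pdsh "cat /proc/cpuinfo | grep bogomips"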
In the output, the node precedes the command results, so you can tell what output is associated with which node. Notice that the
BogoMips values are different on the two nodes, which is perfectly understandable because the systems are different. The first node
has eight cores (four cores and four Hyper-Thread cores), and the second node has four cores.
You can use this command across a homogeneous cluster to make sure all the nodes are reporting back the same BogoMips value. If
the cluster is truly homogeneous, this value should be the same. If it's not, then I would take the offending node out of production
and check it.
A slightly different command shown in
Listing 4 runs
the first part contained in quotes, cat /proc/cpuinfo , on each node and the second part of the command, grep
bogomips , on the node on which you issue the pdsh command.
Listing 4 Quotation Marks 2
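Again as a sketch of what the listing describes (WCOLL assumed to be set): only the first part is quoted, so grep runs locally on the machine issuing pdsh:
$ pdsh "cat /proc/cpuinfo" | grep bogomips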
The point here is that you need to be careful on the command line. In this example, the differences are trivial, but other commands
could have differences that might be difficult to notice.
One very important thing to note is that pdsh does not guarantee a return of output in any particular order. If you have a list
of 20 nodes, the output does not necessarily start with node 1 and increase incrementally to node 20. For example, in
Listing 5 , I run
vmstat on each node and get three lines of output from each node.
How do I find out running processes were associated with each open port? How do I find out what process has open tcp port 111
or udp port 7000 under Linux?
You can use the following programs to find out about port numbers and their associated processes:
netstat – a command-line tool that displays network connections, routing tables, and a number of network interface statistics.
fuser – a command line tool to identify processes using files or sockets.
lsof – a command line tool to list open files under Linux / UNIX to report a list of all open files and the processes that
opened them.
/proc/$pid/ file system – Under Linux /proc includes a directory for each running process (including kernel processes) at
/proc/PID, containing information about that process, notably including the name of the process that opened the port.
You must run the above command(s) as the root user.
netstat example
Type the following command: # netstat -tulpn
Sample outputs:
OR try the following ps command: # ps -eo pid,user,group,args,etime,lstart | grep '[3]813'
Sample outputs:
3813 vivek vivek transmission 02:44:05 Fri Oct 29 10:58:40 2010
Another option is /proc/$PID/environ, enter: # cat /proc/3813/environ
OR # grep --color -w -a USER /proc/3813/environ
Sample outputs (note the --color option): Fig.01: grep output
Now, you get more information about pid # 1607 or 1616 and so on: # ps aux | grep '[1]616'
Sample outputs: www-data 1616 0.0 0.0 35816 3880 ? S 10:20 0:00 /usr/sbin/apache2 -k start
I recommend the following command to grab info about pid # 1616: # ps -eo pid,user,group,args,etime,lstart | grep '[1]616'
Sample outputs:
/usr/sbin/apache2 -k start : The command name and its args
03:16:22 : Elapsed time since the process was started, in the form [[dd-]hh:]mm:ss.
Fri Oct 29 10:20:17 2010 : Time the command started.
Help: I Discover an Open Port Which I Don't Recognize At All
The file /etc/services is used to map port numbers and protocols to service names. Try matching port numbers: $ grep port /etc/services
$ grep 443 /etc/services
Sample outputs:
https 443/tcp # http protocol over TLS/SSL
https 443/udp
Check For rootkit
I strongly recommend that you find out which processes are really running, especially servers connected to the high speed Internet
access. You can look for rootkit which is a program designed to take fundamental control (in Linux / UNIX terms "root" access, in
Windows terms "Administrator" access) of a computer system, without authorization by the system's owners and legitimate managers.
See how to detect
/ check for rootkits under Linux .
Keep an Eye On Your Bandwidth Graphs
Usually, rooted servers are used to send a large number of spam or malware or DoS style attacks on other computers.
See also:
See the following man pages for more information: $ man ps
$ man grep
$ man lsof
$ man netstat
$ man fuser
In this series of blog posts I'm taking a look at a few very useful tools that can make your
life as the sysadmin of a cluster of Linux machines easier. This may be a Hadoop cluster, or
just a plain simple set of 'normal' machines on which you want to run the same commands and
monitoring.
Previously we looked at using SSH keys for
intra-machine authorisation , which is a pre-requisite for what we'll look at here -- executing
the same command across multiple machines using PDSH. In the next post of the series we'll see
how we can monitor OS metrics across a cluster with colmux.
PDSH is a very smart little tool that enables you to issue the same command on multiple
hosts at once, and see the output. You need to have set up ssh key authentication from the
client to host on all of them, so if you followed the steps in the first section of this
article you'll be good to go.
The syntax for using it is nice and simple:
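In its most general form it is something like this (a generic sketch rather than an example from the article):
pdsh -w [user@]host[,host,...] "command"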
-w specifies the addresses. You can use numerical ranges [1-4]
and/or comma-separated lists of hosts. If you want to connect as a user other than the
current user on the calling machine, you can specify it here (or as a separate
-l argument)
After that is the command to run.
For example run against a small cluster of four machines that I have:
robin@RNMMBP $ pdsh -w root@rnmcluster02-node0[1-4] date
rnmcluster02-node01: Fri Nov 28 17:26:17 GMT 2014
rnmcluster02-node02: Fri Nov 28 17:26:18 GMT 2014
rnmcluster02-node03: Fri Nov 28 17:26:18 GMT 2014
rnmcluster02-node04: Fri Nov 28 17:26:18 GMT 2014
... ... ...
Example - install and start collectl on all nodes
I started looking into pdsh when it came to setting up a cluster of machines from scratch.
One of the must-have tools I like to have on any machine that I work with is the excellent
collectl .
This is an OS resource monitoring tool that I initially learnt of through Kevin Closson and Greg Rahn , and provides the kind of information you'd get
from top etc – and then some! It can run interactively, log to disk, run as a service
– and it also happens to integrate
very nicely with graphite , making it a no-brainer choice for any server.
So, instead of logging into each box individually I could instead run this:
pdsh -w root@rnmcluster02-node0[1-4] yum install -y collectl
pdsh -w root@rnmcluster02-node0[1-4] service collectl start
pdsh -w root@rnmcluster02-node0[1-4] chkconfig collectl on
Yes, I know there are tools out there like puppet and chef that are designed for doing this
kind of templated build of multiple servers, but the point I want to illustrate here is that
pdsh enables you to do ad-hoc changes to a set of servers at once. Sure, once I have my cluster
built and want to create an image/template for future builds, then it would be daft if
I were building the whole lot through pdsh-distributed yum commands.
Example - setting up
the date/timezone/NTPD
Often the accuracy of the clock on each server in a cluster is crucial, and we can easily do
this with pdsh:
robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node0[1-4] ntpdate pool.ntp.org
rnmcluster02-node03: 30 Nov 20:46:22 ntpdate[27610]: step time server 176.58.109.199 offset -2.928585 sec
rnmcluster02-node02: 30 Nov 20:46:22 ntpdate[28527]: step time server 176.58.109.199 offset -2.946021 sec
rnmcluster02-node04: 30 Nov 20:46:22 ntpdate[27615]: step time server 129.250.35.250 offset -2.915713 sec
rnmcluster02-node01: 30 Nov 20:46:25 ntpdate[29316]: 178.79.160.57 rate limit response from server.
rnmcluster02-node01: 30 Nov 20:46:22 ntpdate[29316]: step time server 176.58.109.199 offset -2.925016 sec
Set NTPD to start automatically at boot:
robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node0[1-4] chkconfig ntpd on
Start NTPD:
robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node0[1-4] service ntpd start
Example - using a HEREDOC (here-document) and sending quotation marks in a command with
PDSH
Here documents
(heredocs) are a nice way to embed multi-line content in a single command, enabling the
scripting of a file creation rather than the clumsy instruction to " open an editor and
paste the following lines into it and save the file as /foo/bar ".
Fortunately heredocs work just fine with pdsh, so long as you remember to enclose the whole
command in quotation marks. And speaking of which, if you need to include quotation marks in
your actual command, you need to escape them with a backslash. Here's an example of both,
setting up the configuration file for my ever-favourite gnu screen on all the nodes of the
cluster:
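The original command isn't shown here, but a sketch of the pattern looks like this (the .screenrc contents are just an example; note the outer quotes around the whole remote command and the backslash-escaped inner quotes):
pdsh -w root@rnmcluster02-node0[1-4] "cat > ~/.screenrc <<EOF
hardstatus alwayslastline \"%{= kG}%H  %c  %d/%m/%Y\"
startup_message off
EOF
"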
Now when I login to each individual node and run screen, I get a nice toolbar at the
bottom:
Combining
commands
To combine commands together that you send to each host you can use the standard bash
operator semicolon ;
robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node0[1-4] "date;sleep 5;date"
rnmcluster02-node01: Sun Nov 30 20:57:06 GMT 2014
rnmcluster02-node03: Sun Nov 30 20:57:06 GMT 2014
rnmcluster02-node04: Sun Nov 30 20:57:06 GMT 2014
rnmcluster02-node02: Sun Nov 30 20:57:06 GMT 2014
rnmcluster02-node01: Sun Nov 30 20:57:11 GMT 2014
rnmcluster02-node03: Sun Nov 30 20:57:11 GMT 2014
rnmcluster02-node04: Sun Nov 30 20:57:11 GMT 2014
rnmcluster02-node02: Sun Nov 30 20:57:11 GMT 2014
Note the use of the quotation marks to enclose the entire command string. Without them the
bash interpreter will take the ; as the delineator of the local commands,
and try to run the subsequent commands locally:
robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node0[1-4] date;sleep 5;date
rnmcluster02-node03: Sun Nov 30 20:57:53 GMT 2014
rnmcluster02-node04: Sun Nov 30 20:57:53 GMT 2014
rnmcluster02-node02: Sun Nov 30 20:57:53 GMT 2014
rnmcluster02-node01: Sun Nov 30 20:57:53 GMT 2014
Sun 30 Nov 2014 20:58:00 GMT
You can also use && and || to run subsequent commands
conditionally if the previous one succeeds or fails respectively:
robin@RNMMBP $ pdsh -w root@rnmcluster02-node[01-4] "chkconfig collectl on && service collectl start"
rnmcluster02-node03: Starting collectl: [ OK ]
rnmcluster02-node02: Starting collectl: [ OK ]
rnmcluster02-node04: Starting collectl: [ OK ]
rnmcluster02-node01: Starting collectl: [ OK ]
Piping and file redirects
Similar to combining commands above, you can pipe the output of commands, and you need to
use quotation marks to enclose the whole command string.
The difference is that you'll be shifting the whole of the pipe across the network in order
to process it locally, so if you're just grepping etc this doesn't make any sense. For use of
utilities held locally and not on the remote server though, this might make sense.
File redirects work the same way – within quotation marks and the redirect will be to
a file on the remote server, outside of them it'll be local:
robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node[01-4] "chkconfig>/tmp/pdsh.out"
robin@RNMMBP ~ $ ls -l /tmp/pdsh.out
ls: /tmp/pdsh.out: No such file or directory
robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node[01-4] chkconfig>/tmp/pdsh.out
robin@RNMMBP ~ $ ls -l /tmp/pdsh.out
-rw-r--r-- 1 robin wheel 7608 30 Nov 19:23 /tmp/pdsh.out
Cancelling PDSH operations
As you can see from above, the precise syntax of pdsh calls can be hugely important. If you
run a command and it appears 'stuck', or if you have that heartstopping realisation that the
shutdown -h now you meant to run locally you ran across the cluster, you can press
Ctrl-C once to see the status of your commands:
robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node[01-4] sleep 30
^Cpdsh@RNMMBP: interrupt (one more within 1 sec to abort)
pdsh@RNMMBP: (^Z within 1 sec to cancel pending threads)
pdsh@RNMMBP: rnmcluster02-node01: command in progress
pdsh@RNMMBP: rnmcluster02-node02: command in progress
pdsh@RNMMBP: rnmcluster02-node03: command in progress
pdsh@RNMMBP: rnmcluster02-node04: command in progress
and press it twice (or within a second of the first) to cancel:
robin@RNMMBP ~ $ pdsh -w root@rnmcluster02-node[01-4] sleep 30
^Cpdsh@RNMMBP: interrupt (one more within 1 sec to abort)
pdsh@RNMMBP: (^Z within 1 sec to cancel pending threads)
pdsh@RNMMBP: rnmcluster02-node01: command in progress
pdsh@RNMMBP: rnmcluster02-node02: command in progress
pdsh@RNMMBP: rnmcluster02-node03: command in progress
pdsh@RNMMBP: rnmcluster02-node04: command in progress
^Csending SIGTERM to ssh rnmcluster02-node01
sending signal 15 to rnmcluster02-node01 [ssh] pid 26534
sending SIGTERM to ssh rnmcluster02-node02
sending signal 15 to rnmcluster02-node02 [ssh] pid 26535
sending SIGTERM to ssh rnmcluster02-node03
sending signal 15 to rnmcluster02-node03 [ssh] pid 26533
sending SIGTERM to ssh rnmcluster02-node04
sending signal 15 to rnmcluster02-node04 [ssh] pid 26532
pdsh@RNMMBP: interrupt, aborting.
If you've got threads yet to run on the remote hosts, but want to keep running whatever has
already started, you can use Ctrl-C, Ctrl-Z:
robin@RNMMBP ~ $ pdsh -f 2 -w root@rnmcluster02-node[01-4] "sleep 5;date"
^Cpdsh@RNMMBP: interrupt (one more within 1 sec to abort)
pdsh@RNMMBP: (^Z within 1 sec to cancel pending threads)
pdsh@RNMMBP: rnmcluster02-node01: command in progress
pdsh@RNMMBP: rnmcluster02-node02: command in progress
^Zpdsh@RNMMBP: Canceled 2 pending threads.
rnmcluster02-node01: Mon Dec 1 21:46:35 GMT 2014
rnmcluster02-node02: Mon Dec 1 21:46:35 GMT 2014
NB the above example illustrates the use of the -f argument to limit how many
threads are run against remote hosts at once. We can see the command is left running on the
first two nodes and returns the date, whilst the Ctrl-C - Ctrl-Z stops it from being executed
on the remaining nodes.
PDSH_SSH_ARGS_APPEND
By default, when you ssh to new host for the first time you'll be prompted to validate the
remote host's SSH key fingerprint.
The authenticity of host 'rnmcluster02-node02 (172.28.128.9)' can't be established.
RSA key fingerprint is 00:c0:75:a8:bc:30:cb:8e:b3:8e:e4:29:42:6a:27:1c.
Are you sure you want to continue connecting (yes/no)?
This is one of those prompts that the majority of us just hit enter at and ignore; if that
includes you then you will want to make sure that your PDSH call doesn't fall in a heap because
you're connecting to a bunch of new servers all at once. PDSH is not an interactive tool, so if
it requires input from the hosts it's connecting to it'll just fail. To avoid this SSH prompt,
you can set up the environment variable PDSH_SSH_ARGS_APPEND as follows:
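Something along these lines (the exact value is an assumption reconstructed from the description that follows):
export PDSH_SSH_ARGS_APPEND="-q -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null"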
The -q makes failures less verbose, and the -o passes in a couple
of options, StrictHostKeyChecking to disable the above check, and
UserKnownHostsFile to stop SSH keeping a list of host IP/hostnames and
corresponding SSH fingerprints (by pointing it at /dev/null ). You'll want this if
you're working with VMs that are sharing a pool of IPs and get re-used, otherwise you get this
scary failure:
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
00:c0:75:a8:bc:30:cb:8e:b3:8e:e4:29:42:6a:27:1c.
Please contact your system administrator.
For both of these above options, make sure you're aware of the security implications that
you're opening yourself up to. For a sandbox environment I just ignore them; for anything where
security is of importance make sure you are aware of quite which server you are connecting to
by SSH, and protecting yourself from MitM attacks.
When working with multiple Linux machines I would first and foremost make sure SSH
keys are set up in order to ease management through password-less logins.
After SSH keys, I would recommend pdsh for parallel execution of the same SSH command across
the cluster. It's a big time saver particularly when initially setting up the cluster given the
installation and configuration changes that are inevitably needed.
In the next article of this series we'll see how the tool colmux is a powerful way to
monitor OS metrics across a cluster.
So now your turn – what particular tools or tips do you have for working with a
cluster of Linux machines? Leave your answers in the comments below, or tweet them to me at
@rmoff .
How can I find which process is constantly writing to disk?
I like my workstation to be close to silent and I just built a new system (P8B75-M + Core i5 3450s -- the 's' because it has
a lower max TDP) with quiet fans etc. and installed Debian Wheezy 64-bit on it.
And something is getting on my nerves: I can hear some kind of pattern, as if the hard disk was writing or seeking something
( tick...tick...tick...trrrrrr rinse and repeat every second or so).
I had a similar issue in the past (many, many years ago) and it turned out it was some CUPS log or something and
I simply redirected that one (not important) logging to a (real) RAM disk.
But here I'm not sure.
I tried the following:
ls -lR /var/log > /tmp/a.tmp && sleep 5 && ls -lR /var/log > /tmp/b.tmp && diff /tmp/?.tmp
but nothing is changing there.
Now the strange thing is that I also hear the pattern when the prompt asking me to enter my LVM decryption passphrase is showing.
Could it be something in the kernel/system I just installed or do I have a faulty harddisk?
hdparm -tT /dev/sda reports a correct HD speed (130 GB/s non-cached, sata 6GB) and I've already installed and compiled
from big sources (Emacs) without issue so I don't think the system is bad.
Are you sure it's a hard drive making that noise, and not something else? (Check the fans, including PSU fan. Had very strange
clicking noises once when a very thin cable was too close to a fan and would sometimes very slightly touch the blades and bounce
for a few "clicks"...) – Mat
Jul 27 '12 at 6:03
@Mat: I'll take the hard drive outside of the case (the connectors should be long enough) to be sure and I'll report back ; )
– Cedric Martin
Jul 27 '12 at 7:02
Make sure your disk filesystems are mounted relatime or noatime. File reads can be causing writes to inodes to record the access
time. – camh
Jul 27 '12 at 9:48
thanks for that tip. I didn't know about iotop . On Debian I did an apt-cache search iotop to find out that I had
to apt-get iotop . Very cool command! –
Cedric Martin
Aug 2 '12 at 15:56
I use iotop -o -b -d 10 which every 10secs prints a list of processes that read/wrote to disk and the amount of IO
bandwidth used. – ndemou
Jun 20 '16 at 15:32
You can enable IO debugging via echo 1 > /proc/sys/vm/block_dump and then watch the debugging messages in /var/log/syslog
. This has the advantage of obtaining some type of log file with past activities whereas iotop only shows the current
activity.
It is absolutely crazy to leave sysloging enabled when block_dump is active. Logging causes disk activity, which causes logging,
which causes disk activity etc. Better stop syslog before enabling this (and use dmesg to read the messages) –
dan3
Jul 15 '13 at 8:32
You are absolutely right, although the effect isn't as dramatic as you describe it. If you just want to have a short peek at the
disk activity there is no need to stop the syslog daemon. –
scai
Jul 16 '13 at 6:32
I've tried it about 2 years ago and it brought my machine to a halt. One of these days when I have nothing important running I'll
try it again :) – dan3
Jul 16 '13 at 7:22
I tried it, nothing really happened. Especially because of file system buffering. A write to syslog doesn't immediately trigger
a write to disk. – scai
Jul 16 '13 at 10:50
auditctl -S sync -S fsync -S fdatasync -a exit,always
Watch the logs in /var/log/audit/audit.log . Be careful not to do this if the audit logs themselves are flushed!
Check in /etc/auditd.conf that the flush option is set to none .
If files are being flushed often, a likely culprit is the system logs. For example, if you log failed incoming connection attempts
and someone is probing your machine, that will generate a lot of entries; this can cause a disk to emit machine gun-style noises.
With the basic log daemon sysklogd, check /etc/syslog.conf : if a log file name is not preceded by -
, then that log is flushed to disk after each write.
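For example, with a hypothetical /etc/syslog.conf entry like
mail.*          /var/log/mail.log
the file is synced after every message, whereas
mail.*          -/var/log/mail.log
(note the leading -) is written without the sync.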
It might be your drives automatically spinning down, lots of consumer-grade drives do that these days. Unfortunately on even a
lightly loaded system, this results in the drives constantly spinning down and then spinning up again, especially if you're running
hddtemp or similar to monitor the drive temperature (most drives stupidly don't let you query the SMART temperature value without
spinning up the drive - cretinous!).
I disable idle-spindown on all my drives with the following bit of shell code. You could put it in an /etc/rc.boot script,
or in /etc/rc.local or similar.
for disk in /dev/sd? ; do
/sbin/hdparm -q -S 0 "$disk"
done
that you can't query SMART readings without spinning up the drive leaves me speechless :-/ Now obviously the "spinning down" issue
can become quite complicated. Regarding disabling the spinning down: wouldn't that in itself cause the HD to wear out faster?
I mean: it's never ever "resting" as long as the system is on then? –
Cedric Martin
Aug 2 '12 at 16:03
IIRC you can query some SMART values without causing the drive to spin up, but temperature isn't one of them on any of the drives
i've tested (incl models from WD, Seagate, Samsung, Hitachi). Which is, of course, crazy because concern over temperature is one
of the reasons for idling a drive. re: wear: AIUI 1. constant velocity is less wearing than changing speed. 2. the drives have
to park the heads in a safe area and a drive is only rated to do that so many times (IIRC up to a few hundred thousand - easily
exceeded if the drive is idling and spinning up every few seconds) –
cas
Aug 2 '12 at 21:42
It's a long debate regarding whether it's better to leave drives running or to spin them down. Personally I believe it's best
to leave them running - I turn my computer off at night and when I go out but other than that I never spin my drives down. Some
people prefer to spin them down, say, at night if they're leaving the computer on or if the computer's idle for a long time, and
in such cases the advantage of spinning them down for a few hours versus leaving them running is debatable. What's never good
though is when the hard drive repeatedly spins down and up again in a short period of time. –
Micheal Johnson
Mar 12 '16 at 20:48
Note also that spinning the drive down after it's been idle for a few hours is a bit silly, because if it's been idle for
a few hours then it's likely to be used again within an hour. In that case, it would seem better to spin the drive down promptly
if it's idle (like, within 10 minutes), but it's also possible for the drive to be idle for a few minutes when someone is using
the computer and is likely to need the drive again soon. –
Micheal Johnson
Mar 12 '16 at 20:51
I just found that s.m.a.r.t was causing an external USB disk to spin up again and again on my raspberry pi. Although SMART is
generally a good thing, I decided to disable it again and since then it seems that unwanted disk activity has stopped.
Using lsof (or some variant of its parameters) I can determine which process is bound to a particular port. This is useful, say, if
I'm trying to start something that wants to bind to 8080 and something else is already using that port, but I don't know what.
Is there an easy way to do this without using lsof? I spend time working on many systems and lsof is often not installed.
netstat -lnp will list the pid and process name next to each listening port. This will work under Linux, but not
all others (like AIX.) Add -t if you want TCP only.
# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:24800 0.0.0.0:* LISTEN 27899/synergys
tcp 0 0 0.0.0.0:8000 0.0.0.0:* LISTEN 3361/python
tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN 2264/mysqld
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 22964/apache2
tcp 0 0 192.168.99.1:53 0.0.0.0:* LISTEN 3389/named
tcp 0 0 192.168.88.1:53 0.0.0.0:* LISTEN 3389/named
etc.
Cool, thanks. Looks like that that works under RHEL, but not under Solaris (as you indicated). Anybody know if there's something
similar for Solaris? – user5721
Mar 14 '11 at 21:01
Thanks for this! Is there a way, however, to just display what process listen on the socket (instead of using rmsock which attempt
to remove it) ? – Olivier Dulac
Sep 18 '13 at 4:05
@vitor-braga: Ah thx! I thought it was trying but just said which process holds it when it couldn't remove it. Apparently it doesn't
even try to remove it when a process holds it. That's cool! Thx! –
Olivier Dulac
Sep 26 '13 at 16:00
Another tool available on Linux is ss . From the ss man page on Fedora:
NAME
ss - another utility to investigate sockets
SYNOPSIS
ss [options] [ FILTER ]
DESCRIPTION
ss is used to dump socket statistics. It allows showing information
similar to netstat. It can display more TCP and state informations
than other tools.
Example output below - the final column shows the process binding:
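(The output itself is not reproduced here; a typical invocation that produces that kind of listing would be the following, where -t is TCP, -l is listening sockets, -n is numeric addresses, and -p shows the owning process.)
# ss -tlnp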
I was once faced with trying to determine what process was behind a particular port (this time it was 8000). I tried a variety
of lsof and netstat, but then took a chance and tried hitting the port via a browser (i.e.
http://hostname:8000/ ). Lo and behold, a splash screen greeted me, and it
became obvious what the process was (for the record, it was Splunk ).
One more thought: "ps -e -o pid,args" (YMMV) may sometimes show the port number in the arguments list. Grep is your friend!
In the same vein, you could telnet hostname 8000 and see if the server prints a banner. However, that's mostly useful
when the server is running on a machine where you don't have shell access, and then finding the process ID isn't relevant. –
Gilles
May 8 '11 at 14:45
To change two vertically split windows to a horizontal split: Ctrl-W t then Ctrl-W K
Horizontally to vertically: Ctrl-W t then Ctrl-W H
Explanations:
Ctrl-W t -- makes the first (topleft) window current
Ctrl-W K -- moves the current window to full-width at the very top
Ctrl-W H -- moves the current window to full-height at far left
Note that the t is lowercase, and the K and H are uppercase.
Also, with only two windows, it seems like you can drop the Ctrl-W t part because if you're already in one of only two windows, what's the point of
making it current?
Just toggle your NERDTree panel closed before 'rotating' the splits, then toggle it
back open. :NERDTreeToggle (I have it mapped to a function key for convenience).
The command ^W-o is great! I did not know it. –
Masi Aug 13 '09 at 2:20
The following ex commands will (re-)split
any number of windows:
To split vertically (e.g. make vertical dividers between windows), type :vertical
ball
To split horizontally, type :ball
If there are hidden buffers, issuing these commands will also make the hidden buffers
visible.
This is very ugly, but hey, it seems to do in one step exactly what I asked for (I tried). +1, and accepted. I was looking for
a native way to do this quickly but since there does not seem to be one, yours will do just fine. Thanks! –
greg0ireJan 23
'13 at 15:27
You're right, "very ugly" shoud have been "very unfamiliar". Your command is very handy, and I think I definitely going to carve
it in my .vimrc – greg0ireJan 23
'13 at 16:21
By "move a piece of text to a new file" I assume you mean cut that piece of text from the current file and create a new file containing
only that text.
Various examples:
:1,1 w new_file to create a new file containing only the text from line number 1
:5,50 w newfile to create a new file containing the text from line 5 to line 50
:'a,'b w newfile to create a new file containing the text from mark a to mark b
set your marks by using ma and mb where ever you like
The above only copies the text and creates a new file containing that text. You will then need to delete afterward.
This can be done using the same range and the d command:
:5,50 d to delete the text from line 5 to line 50
:'a,'b d to delete the text from mark a to mark b
Or by using dd for the single line case.
If you instead select the text using visual mode, and then hit : while the text is selected, you will see the
following on the command line:
:'<,'>
Which indicates the selected text. You can then expand the command to:
:'<,'>w >> old_file
Which will append the text to an existing file. Then delete as above.
One liner:
:2,3 d | new +put! "
The breakdown:
:2,3 d - delete lines 2 through 3
| - technically this redirects the output of the first command to the second command but since the first command
doesn't output anything, we're just chaining the commands together
new - opens a new buffer
+put! " - put the contents of the unnamed register ( " ) into the buffer
The bang ( ! ) is there so that the contents are put before the current line. This causes an
empty line at the end of the file. Without it, there is an empty line at the top of the file.
Your assumption is right. This looks good, I'm going to test. Could you explain 2. a bit more? I'm not very familiar with ranges.
EDIT: If I try this on the second line, it writes the first line to the other file, not the second line. –
greg0ireJan 23
'13 at 14:09
Ok, if I understand well, the trick is to use ranges to select and write in the same command. That's very similar to what I did.
+1 for the detailed explanation, but I don't think this is more efficient, since the trick with hitting ':' is what I do for the
moment. – greg0ireJan 23
'13 at 14:41
I have 4 steps for the moment: select, write, select, delete. With your method, I have 6 steps: select, delete, split, paste,
write, close. I asked for something more efficient :P – greg0ireJan 23
'13 at 13:42
That's better, but 5 still > 4 :P – greg0ireJan 23
'13 at 13:46
Based on @embedded.kyle's answer and this Q&A , I ended
up with this one liner to append a selection to a file and delete from current file. After selecting some lines with Shift+V
, hit : and run:
'<,'>w >> test | normal gvd
The first part appends selected lines. The second command enters normal mode and runs gvd to select the last selection
and then deletes.
Visual selection is a common feature in applications, but Vim's visual selection has several
benefits.
To cut-and-paste or copy-and-paste:
Position the cursor at the beginning of the text you want to cut/copy.
Press v to begin character-based visual selection, or V to select whole
lines, or Ctrl-v or Ctrl-q to select a block.
Move the cursor to the end of the text to be cut/copied. While selecting text, you can
perform searches and other advanced movement.
Press d (delete) to cut, or y (yank) to copy.
Move the cursor to the desired paste location.
Press p to paste after the cursor, or P to paste before.
Visual selection (steps 1-3) can be performed using a mouse.
If you want to change the selected text, press c instead of d or y in
step 4. In a visual selection, pressing c performs a change by deleting the selected
text and entering insert mode so you can type the new text.
Pasting over a block of text
You can copy a block of text by pressing Ctrl-v (or Ctrl-q if you use Ctrl-v for paste),
then moving the cursor to select, and pressing y to yank. Now you can move
elsewhere and press p to paste the text after the cursor (or P to
paste before). The paste inserts a block (which might, for example, be 4 rows by 3
columns of text).
Instead of inserting the block, it is also possible to replace (paste over) the
destination. To do this, move to the target location then press 1vp (
1v selects an area equal to the original, and p pastes over it).
When a count is used before v , V , or ^V (character,
line or block selection), an area equal to the previous area, multiplied by the count, is
selected. See the paragraph after :help
<LeftRelease> .
Note that this will only work if you actually did something to the previous visual
selection, such as a yank, delete, or change operation. It will not work after visually
selecting an area and leaving visual mode without taking any actions.
NOTE: after selecting the visual copy mode, you can hold the shift key while selecting
the region to get a multiple line copy. For example, to copy three lines, press V, then hold
down the Shift key while pressing the down arrow key twice. Then do your action on the
buffer.
I have struck out the above new comment because I think it is talking about something
that may apply to those who have used :behave mswin . To visually select
multiple lines, you type V , then press j (or cursor down). You
hold down Shift only to type the uppercase V . Do not press Shift after that. If
I am wrong, please explain here. JohnBeckett 10:48, October 7, 2010
(UTC)
If you just want to copy (yank) the visually marked text, you do not need to 'y'ank it.
Marking it will already copy it.
Using a mouse, you can insert it at another position by clicking the middle mouse
button.
This also works across Vim applications on Windows systems (the clipboard contents are inserted)
This is a really useful thing in Vim. I feel lost without it in any other editor. I have
some more points I'd like to add to this tip:
While in (any of the three) Visual mode(s), pressing 'o' will move the cursor to the
opposite end of the selection. In Visual Block mode, you can also press 'O', allowing you to
position the cursor in any of the four corners.
If you have some yanked text, pressing 'p' or 'P' while in Visual mode will replace the
selected text with the already yanked text. (After this, the previously selected text will be
yanked.)
Press 'gv' in Normal mode to restore your previous selection.
It's really worth it to check out the register functionality in Vim: ':help
registers'.
If you're still eager to use the mouse-juggling middle-mouse trick of common unix
copy-n-paste, or are into bending space and time with i_CTRL-R<reg>, consider checking
out ':set paste' and ':set pastetoggle'. (Or in the latter case, try with
i_CTRL-R_CTRL-O..)
You can replace a set of text in a visual block very easily by selecting a block, pressing c,
and then making changes to the first line. Pressing <Esc> twice replaces all the text of
the original selection. See :help v_b_c .
On Windows the <mswin.vim> script seems to be getting sourced for many users.
Result: more Windows like behavior (ctrl-v is "paste", instead of visual-block selection).
Hunt down your system vimrc and remove sourcing thereof if you don't like that behavior (or
substitute <mrswin.vim> in its place; see VimTip63).
With VimTip588 one can
sort lines or blocks based on visual-block selection.
With reference to the earlier post asking how to paste an inner block
Select the inner block to copy using Ctrl-v and highlighting with the hjkl keys
yank the visual region (y)
Select the inner block you want to overwrite (Ctrl-v then highlight with hjkl keys)
paste the selection with P (that is, Shift+p); this will overwrite, keeping the block
formation
The "yank" buffers in Vim are not the same as the Windows clipboard (i.e., cut-and-paste)
buffers. If you're using the yank, it only puts it in a Vim buffer - that buffer is not
accessible to the Windows paste command. You'll want to use the Edit | Copy and Edit | Paste
(or their keyboard equivalents) if you're using the Windows GUI, or select with your mouse and
use your X-Windows cut-n-paste mouse buttons if you're running UNIX.
Double-quote and star gives one access to the Windows clipboard or the unix equivalent. As an
example, if I wanted to yank the current line into the clipboard I would type "*yy
If I wanted to paste the contents of the clipboard into Vim at my current cursor location I
would type "*p
The double-quote and star trick works well with visual mode as well. ex: visually select text
to copy to the clipboard and then type "*y
I find this very useful and I use it all the time but it is a bit slow typing "* all the
time so I am thinking about creating a macro to speed it up a bit.
Copy and Paste using the System Clipboard
There are some caveats regarding how the "*y (copy into System Clipboard) command works. We
have to be sure that we are using vim-full (sudo aptitude install vim-full on debian-based
systems) or a Vim that has X11 support enabled. Only then will the "*y command work.
For our convenience, as we are all familiar with using Ctrl+c to copy a block of text in most
other GUI applications, we can also map Ctrl+c to "+y (the clipboard register) so that in Vim Visual Mode we can simply
press Ctrl+c to copy the block of text we want into the system clipboard. To do that, we simply add this
line in our .vimrc file:
map <C-c> "+y<CR>
Restart Vim (or re-source the .vimrc) and we are good. Now whenever we are in Visual Mode, we can press Ctrl+c to grab
what we want and paste it into another application or another editor in a convenient and
intuitive manner.
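A quick way to check whether your Vim was built with that support (a minimal sketch; it prints 1 when the "+ and "* registers are usable):
:echo has('clipboard')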
I have two files, say a.txt and b.txt , in the same session of vim
and I split the screen so I have file a.txt in the upper window and
b.txt in the lower window.
I want to move lines here and there from a.txt to b.txt : I
select a line with Shift + v , then I move to b.txt in the
lower window with Ctrl + w↓ , paste with p
, get back to a.txt with Ctrl + w↑ and I
can repeat the operation when I get to another line I want to move.
My question: is there a quicker way to say vim "send the line I am on (or the text I
selected) to the other window" ?
I presume that you're deleting the line that you've selected in a.txt . If not,
you'd be pasting something else into b.txt . If so, there's no need to select
the line first. – Anthony Geoghegan
Nov 24 '15 at 13:00
This sounds like a good use case for a macro. Macros are commands that can be recorded and
stored in a Vim register. Each register is identified by a letter from a to z.
Recording
To start recording, press q in Normal mode followed by a letter (a to z).
That starts recording keystrokes to the specified register. Vim displays
"recording" in the status line. Type any Normal mode commands, or enter Insert
mode and type text. To stop recording, again press q while in Normal mode.
For this particular macro, I chose the m (for move) register to store it.
I pressed qm to record the following commands:
dd to delete the current line (and save it to the default register)
CtrlWj to move to the window below
p to paste the contents of the default register
and CtrlWk to return to the window above.
When I typed q to finish recording the macro, the contents of the
m register were:
dd^Wjp^Wk
Usage
To move the current line, simply type @m in Normal mode.
To repeat the macro on a different line, @@ can be used to execute the most
recently used macro.
To execute the macro 5 times (i.e., move the current line with the following four lines
below it), use 5@m or 5@@ .
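If you move lines like this constantly, the same keystrokes can also live in your .vimrc as a mapping instead of a recorded macro. A minimal sketch (the <leader>m key choice is just an example):
nnoremap <leader>m dd<C-w>jp<C-w>k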
I asked to see if there is a command unknown to me that does the job: it seems there is none.
In absence of such a command, this can be a good solution. – brad
Nov 24 '15 at 14:26
@brad, you can find all the commands available to you in the documentation. If it's not there,
it doesn't exist; no need to ask random strangers. – romainl
Nov 26 '15 at 9:54
@romainl, yes, I know this but vim documentation is really huge and, although it doesn't
scare me, there is always the possibility to miss something. Moreover, it could also be that
you can obtain the effect using the combination of 2 commands and in this case it would be
hardly documented – brad
Nov 26 '15 at 10:17
I normally work with more than 5 files at a time. I use buffers to open different files. I
use commands such as :buf file1, :buf file2 etc. Is there a faster way to move to different
files?
Below I describe some excerpts from sections of my .vimrc . It includes mapping
the leader key, setting wilds tab completion, and finally my buffer nav key choices (all
mostly inspired by folks on the interweb, including romainl). Edit: Then I ramble on about my
shortcuts for windows and tabs.
" easier default keys {{{1
let mapleader=','
nnoremap <leader>2 :@"<CR>
The leader key is a prefix key for mostly user-defined key commands (some
plugins also use it). The default is \ , but many people suggest the easier to
reach , .
The second line there is a command to @ execute from the "
clipboard, in case you'd like to quickly try out various key bindings (without relying on
:so % ). (My mnemonic is that Shift - 2 is @
.)
" wilds {{{1
set wildmenu wildmode=list:full
set wildcharm=<C-z>
set wildignore+=*~ wildignorecase
For built-in completion, wildmenu is probably the part that shows up yellow
on your Vim when using tab completion on command-line. wildmode is set to a
comma-separated list, each coming up in turn on each tab completion (that is, my list is
simply one element, list:full ). list shows rows and columns of
candidates. full 's meaning includes maintaining existence of the
wildmenu . wildcharm is the way to include Tab presses
in your macros. The *~ is for my use in :edit and
:find commands.
The ,3 is for switching between the "two" last buffers (Easier to reach than
built-in Ctrl - 6 ). Mnemonic is Shift - 3 is
# , and # is the register symbol for last buffer. (See
:marks .)
,bh is to select from hidden buffers ( ! ).
,bw is to bwipeout buffers by number or name. For instance, you
can wipeout several while looking at the list, with ,bw 1 3 4 8 10 <CR> .
Note that wipeout is more destructive than :bdelete . They have their pros and
cons. For instance, :bdelete leaves the buffer in the hidden list, while
:bwipeout removes global marks (see :help marks , and the
description of uppercase marks).
I haven't settled on these keybindings; I would sort of prefer that my ,bb
was simply ,b (simply defining while leaving the others defined makes Vim pause
to see if you'll enter more).
Those shortcuts for :BufExplorer are actually the defaults for that plugin,
but I have it written out so I can change them if I want to start using ,b
without a hang.
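For illustration, mappings matching those descriptions could look something like this (a hypothetical sketch, not the exact lines from the vimrc excerpt above):
nnoremap <leader>3 <C-^>
nnoremap <leader>bb :ls<CR>:buffer<Space>
nnoremap <leader>bh :ls!<CR>:buffer<Space>
nnoremap <leader>bw :bwipeout<Space>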
You didn't ask for this:
If you still find Vim buffers a little awkward to use, try to combine the functionality
with tabs and windows (until you get more comfortable?).
Notice how nice ,w is for a prefix. Also, I reserve Ctrl key for
resizing, because Alt ( M- ) is hard to realize in all
environments, and I don't have a better way to resize. I'm fine using ,w to
switch windows.
" tabs {{{3
nnoremap <leader>t :tab
nnoremap <M-n> :tabn<cr>
nnoremap <M-p> :tabp<cr>
nnoremap <C-Tab> :tabn<cr>
nnoremap <C-S-Tab> :tabp<cr>
nnoremap tn :tabe<CR>
nnoremap te :tabe<Space><C-z><S-Tab>
nnoremap tf :tabf<Space>
nnoremap tc :tabc<CR>
nnoremap to :tabo<CR>
nnoremap tm :tabm<CR>
nnoremap ts :tabs<CR>
nnoremap th :tabr<CR>
nnoremap tj :tabn<CR>
nnoremap tk :tabp<CR>
nnoremap tl :tabl<CR>
" or, it may make more sense to use
" nnoremap th :tabp<CR>
" nnoremap tj :tabl<CR>
" nnoremap tk :tabr<CR>
" nnoremap tl :tabn<CR>
In summary of my window and tabs keys, I can navigate both of them with Alt ,
which is actually pretty easy to reach. In other words:
" (modifier) key choice explanation {{{3
"
" KEYS CTRL ALT
" hjkl resize windows switch windows
" np switch buffer switch tab
"
" (resize windows is hard to do otherwise, so we use ctrl which works across
" more environments. i can use ',w' for windowcmds o.w.. alt is comfortable
" enough for fast and gui nav in tabs and windows. we use np for navs that
" are more linear, hjkl for navs that are more planar.)
"
This way, if the Alt is working, you can actually hold it down while you find
your "open" buffer pretty quickly, amongst the tabs and windows.
There are many ways to solve this. The best is the one that WORKS for YOU. You have lots of fuzzy
match plugins that help you navigate. The 2 things that impress me most are
The NERD tree allows you to explore your filesystem and to open files and directories. It presents the filesystem to you in
the form of a tree which you manipulate with the keyboard and/or mouse. It also allows you to perform simple filesystem operations.
The tree can be toggled easily with :NERDTreeToggle which can be mapped to a more suitable key. The keyboard shortcuts in the
NERD tree are also easy and intuitive.
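For example, a minimal sketch of such a mapping (the F3 key choice is arbitrary):
nnoremap <F3> :NERDTreeToggle<CR>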
For those of us not wanting to follow every link to find out about each plugin, care to furnish us with a brief synopsis? –
SpoonMeiserSep 17 '08
at 19:32
Pathogen is the FIRST plugin you have to install on every Vim installation! It resolves the plugin management problems every Vim
developer has. – Patrizio RulloSep 26
'11 at 12:11
A very nice grep replacement for GVim is Ack . A search plugin written
in Perl that beats Vim's internal grep implementation and externally invoked greps, too. It also by default skips any CVS directories
in the project directory, e.g. '.svn'.
This blog shows a way to integrate Ack with vim.
A.vim is a great little plugin. It allows you
to quickly switch between header and source files with a single command. The default is :A , but I remapped it to
F2 to reduce keystrokes.
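A sketch of what that remap might look like (using the :A command mentioned above):
nnoremap <F2> :A<CR>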
I really like the SuperTab plugin, it allows
you to use the tab key to do all your insert completions.
community wiki Greg Hewgill, Aug 25, 2008
at 19:23
I have recently started using a plugin that highlights differences in your buffer from a previous version in your RCS system (Subversion,
git, whatever). You just need to press a key to toggle the diff display on/off. You can find it here:
http://github.com/ghewgill/vim-scmdiff . Patches welcome!
It doesn't explicitly support bitkeeper at the moment, but as long as bitkeeper has a "diff" command that outputs a normal patch
file, it should be easy enough to add. – Greg HewgillSep 16 '08
at 9:26
@Yogesh: No, it doesn't support ClearCase at this time. However, if you can add ClearCase support, a patch would certainly be
accepted. – Greg HewgillMar 10
'10 at 1:39
Elegant (mini) buffer explorer - This
is the multiple file/buffer manager I use. Takes very little screen space. It looks just like most IDEs where you have a top
tab-bar with the files you've opened. I've tested some other similar plugins before, and this is my pick.
TagList - Small file explorer, without
the "extra" stuff the other file explorers have. Just lets you browse directories and open files with the "enter" key. Note
that this has already been noted by
previous commenters
to your questions.
SuperTab - Already noted by
WMR in this
post, looks very promising. It's an auto-completion replacement key for Ctrl-P.
Moria color scheme - Another good, dark
one. Note that it's gVim only.
Enhanced Python syntax - If you're using
Python, this is an enhanced syntax version. Works better than the original. I'm not sure, but this might be already included
in the newest version. Nonetheless, it's worth adding to your syntax folder if you need it.
Not a plugin, but I advise any Mac user to switch to the MacVim
distribution which is vastly superior to the official port.
As for plugins, I used VIM-LaTeX for my thesis and was very
satisfied with the usability boost. I also like the Taglist
plugin which makes use of the ctags library.
clang complete - the best c++ code completion
I have seen so far. By using an actual compiler (that would be clang) the plugin is able to complete complex expressions including
STL and smart pointers.
With version 7.3, undo branches were added to Vim. A very powerful feature, but hard to use, until
Steve Losh made
Gundo which makes this feature possible to use with an ASCII
representation of the tree and a diff of the change. A must for using undo branches.
My latest favourite is Command-T . Granted, to install it
you need to have Ruby support and you'll need to compile a C extension for Vim. But oy-yoy-yoy does this plugin make a difference
in opening files in Vim!
Definitely! Let not the ruby + c compiling stop you, you will be amazed on how well this plugin enhances your toolset. I have
been ignoring this plugin for too long, installed it today and already find myself using NERDTree less and less. –
Victor FarazdagiApr 19
'11 at 19:16
just my 2 cents.. being a naive user of both plugins, with a few first characters of file name i saw a much better result with
commandt plugin and a lot of false positives for ctrlp. –
FUDDec
26 '12 at 4:48
Conque Shell : Run interactive commands inside a Vim buffer
Conque is a Vim plugin which allows you to run interactive programs, such as bash on linux or powershell.exe on Windows, inside
a Vim buffer. In other words it is a terminal emulator which uses a Vim buffer to display the program output.
The vcscommand plugin provides global ex commands
for manipulating version-controlled source files and it supports CVS, SVN, and some other repositories.
You can do almost all repository-related tasks from within Vim:
* Taking the diff of current buffer with repository copy
* Adding new files
* Reverting the current buffer to the repository copy by nullifying the local changes....
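For example (a sketch assuming the plugin's usual command names; check its :help to confirm them):
:VCSDiff     diff the current buffer against the repository copy
:VCSAdd      schedule the file in the current buffer for addition
:VCSRevert   revert the current buffer to the repository copy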
Just gonna name a few I didn't see here, but which I still find extremely helpful:
Gist plugin - Github Gists (Kind of
Githubs answer to Pastebin, integrated with Git for awesomeness!)
Mustang color scheme (Can't link directly due to low reputation, Google it!) - Dark, and beautiful color scheme. Looks
really good in the terminal, and even better in gVim! (Due to 256 color support)
One Plugin that is missing in the answers is NERDCommenter
, which lets you do almost anything with comments. For example {add, toggle, remove} comments. And more. See
this blog entry for some examples.
This script is based on the eclipse Task List. It will search the file for FIXME, TODO, and XXX (or a custom list) and put
them in a handy list for you to browse which at the same time will update the location in the document so you can see exactly
where the tag is located. Something like an interactive 'cw'
I really love the snippetsEmu Plugin. It emulates
some of the behaviour of Snippets from the OS X editor TextMate, in particular the variable bouncing and replacement behaviour.
For vim I like a little help with completions.
Vim has tons of completion modes, but really, I just want vim to complete anything it can, whenever it can.
I hate typing ending quotes, but fortunately
this plugin obviates the need for such misery.
Those two are my heavy hitters.
This one may step up to roam my code like
an unquiet shade, but I've yet to try it.
The Txtfmt plugin gives you a sort of "rich text" highlighting capability, similar to what is provided by RTF editors and word
processors. You can use it to add colors (foreground and background) and formatting attributes (all combinations of bold, underline,
italic, etc...) to your plain text documents in Vim.
The advantage of this plugin over something like Latex is that with Txtfmt, your highlighting changes are visible "in real
time", and as with a word processor, the highlighting is WYSIWYG. Txtfmt embeds special tokens directly in the file to accomplish
the highlighting, so the highlighting is unaffected when you move the file around, even from one computer to another. The special
tokens are hidden by the syntax; each appears as a single space. For those who have applied Vince Negri's conceal/ownsyntax patch,
the tokens can even be made "zero-width".
To copy two lines, it's even faster just to go yj or yk,
especially since you don't double up on one character. Plus, yk is a backwards
version that 2yy can't do, and you can put the number of lines to reach
backwards in y9j or y2k, etc.. Only difference is that your count
has to be n-1 for a total of n lines, but your head can learn that
anyway. – zelk
Mar 9 '14 at 13:29
If you would like to duplicate a line and paste it right away below the current line, just
like in Sublime Ctrl + Shift + D, then you can add this to
your .vimrc file.
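One way to do it (a minimal sketch; terminal Vim cannot distinguish Ctrl+Shift+D from Ctrl+D, so this uses a <leader>d mapping of my own choosing, built on the :t copy command):
nnoremap <leader>d :t.<CR>
xnoremap <leader>d :t'><CR>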
y7yp (or 7yyp) is rarely useful; the cursor remains on the first line copied so that p pastes
the copied lines between the first and second line of the source. To duplicate a block of
lines use 7yyP – Nefrubyr
Jul 29 '14 at 14:09
For someone who doesn't know vi, some answers from above might mislead him with phrases like
"paste ... after/before current line ".
It's actually "paste ... after/before cursor ".
yy or Y to copy the line
or dd to delete the line
then
p to paste the copied or deleted text after the cursor
or P to paste the copied or deleted text before the cursor
For those starting to learn vi, here is a good introduction to vi by listing side by side vi
commands to typical Windows GUI Editor cursor movement and shortcut keys. It lists all the
basic commands including yy (copy line) and p (paste after) or
P (paste before).
When you press : in visual mode, it is transformed to '<,'>
so it pre-selects the line range the visual selection spanned over. So, in visual mode,
:t0 will copy the lines at the beginning. – Benoit
Jun 30 '12 at 14:17
For the record: when you type a colon (:) you go into command line mode where you can enter
Ex commands. vimdoc.sourceforge.net/htmldoc/cmdline.html
Ex commands can be really powerful and terse. The yyp solutions are "Normal mode" commands.
If you want to copy/move/delete a far-away line or range of lines an Ex command can be a lot
faster. – Niels Bom
Jul 31 '12 at 8:21
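For example (a sketch of such far-away edits, done without moving the cursor first):
:42t.       copy line 42 to just below the current line
:10,20m$    move lines 10 through 20 to the end of the file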
Y is usually remapped to y$ (yank (copy) until end of line (from
current cursor position, not beginning of line)) though. With this line in
.vimrc : :nnoremap Y y$ – Aaron Thoma
Aug 22 '13 at 23:31
gives you the advantage of preserving the cursor position.
– Sep 18, 2008 at 20:32
You can also try <C-x><C-l> which will repeat the last line from insert mode and
bring you a completion window with all of the lines. It works almost like <C-p>
This is very useful, but to avoid having to press many keys I have mapped it to just CTRL-L,
this is my map: inoremap ^L ^X^L – Jorge Gajon
May 11 '09 at 6:38
1 gotcha: when you use "p" to put the line, it puts it after the line your cursor is
on, so if you want to add the line after the line you're yanking, don't move the cursor down
a line before putting the new line.
Use the > command. To indent 5 lines, 5>> . To mark a block of lines and indent it, Vjj>
to indent 3 lines (vim only). To indent a curly-braces block, put your cursor on one of the curly braces and use >%
.
If you're copying blocks of text around and need to align the indent of a block in its new location, use ]p
instead of just p . This aligns the pasted block with the surrounding text.
Also, the shiftwidth
setting allows you to control how many spaces to indent.
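For example (a minimal sketch), to make > , < and >> shift by two spaces and use spaces instead of tab characters, put this in your .vimrc:
set shiftwidth=2   " indent and outdent by 2 columns
set expandtab      " insert spaces rather than tab characters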
My problem(in gVim) is that the command > indents much more than 2 blanks (I want just two blanks but > indent something like
5 blanks) – Kamran Bigdely
Feb 28 '11 at 23:25
The problem with . in this situation is that you have to move your fingers. With @mike's solution (same one i use) you've already
got your fingers on the indent key and can just keep whacking it to keep indenting rather than switching and doing something else.
Using period takes longer because you have to move your hands and it requires more thought because it's a second, different, operation.
– masukomi
Dec 6 '13 at 21:24
I've an XML file and turned on syntax highlighting. Typing gg=G just puts every line starting from position 1. All
the white spaces have been removed. Is there anything else specific to XML? –
asgs
Jan 28 '14 at 21:57
This is cumbersome, but is the way to go if you do formatting outside of core VIM (for instance, using vim-prettier
instead of the default indenting engine). Using > will otherwise royally screw up the formatting done by Prettier.
– oligofren
Mar 27 at 15:23
I find it better than the accepted answer, as I can see what is happening, the lines I'm selecting and the action I'm doing, and
not just type some sort of vim incantation. – user4052054
Aug 17 at 17:50
Suppose | represents the position of the cursor in Vim. If the text to be indented is enclosed in a code block like:
int main() {
line1
line2|
line3
}
you can do >i{ which means " indent ( > ) inside ( i ) block ( { )
" and get:
int main() {
    line1
    line2|
    line3
}
Now suppose the lines are contiguous but outside a block, like:
do
line2|
line3
line4
done
To indent lines 2 thru 4 you can visually select the lines and type > . Or even faster you can do >2j
to get:
do
    line2|
    line3
    line4
done
Note that >Nj means indent from current line to N lines below. If the number of lines to be indented
is large, it could take some seconds for the user to count the proper value of N . To save valuable seconds you can
activate the option of relative number with set relativenumber (available since Vim version 7.3).
Not on my Solaris or AIX boxes it doesn't. The equals key has always been one of my standard ad hoc macro assignments. Are you
sure you're not looking at a vim that's been linked to as vi ? –
rojomoke
Jul 31 '14 at 10:09
In ex mode you can use :left or :le to align lines by a specified amount. Specifically,
:left will left-align lines in the [range]. It sets the indent in the lines to [indent] (default 0).
:%le3 or :%le 3 or :%left3 or :%left 3 will align the entire file by padding
with three spaces.
:5,7 le 3 will align lines 5 through 7 by padding them with 3 spaces.
:le without any value or :le 0 will left align with a padding of 0.
Awesome, just what I was looking for (a way to insert a specific number of spaces -- 4 spaces for markdown code -- to override
my normal indent). In my case I wanted to indent a specific number of lines in visual mode, so shift-v to highlight the lines,
then :'<,'>le4 to insert the spaces. Thanks! –
Subfuzion
Aug 11 '17 at 22:02
There is one more way that hasn't been mentioned yet - you can use the :norm i command to insert given text at the beginning
of the line. To insert 10 spaces before lines 2-10:
:2,10norm 10i
Remember that there has to be a space character at the end of the command - this will be the character we want to have inserted.
We can also indent line with any other text, for example to indent every line in file with 5 underscore characters:
:%norm 5i_
Or something even more fancy:
:%norm 2i[ ]
More practical example is commenting Bash/Python/etc code with # character:
:1,20norm i#
To remove characters instead of inserting them, use x instead of i . For example, to remove the first 5 characters from every line:
:%norm 5x
...what? 'indent by 4 spaces'? No, this jumps to line 4 and then indents everything from there to the end of the file, using the
currently selected indent mode (if any). – underscore_d
Oct 17 '15 at 19:35
There are clearly a lot of ways to solve this, but this is the easiest to implement, as line numbers show by default in vim and
it doesn't require math. – HoldOffHunger
Dec 5 '17 at 15:50
How to indent highlighted code in vi immediately by a # of spaces:
Option 1: Indent a block of code in vi to three spaces with Visual Block mode:
Select the block of code you want to indent. Do this using Ctrl+V in normal mode and arrowing down to select
text. While it is selected, enter : to give a command to the block of selected text.
The following will appear in the command line: :'<,'>
To set indent to 3 spaces, type le 3 and press enter. This is what appears: :'<,'>le 3
The selected text is immediately indented to 3 spaces.
Option 2: Indent a block of code in vi to three spaces with Visual Line mode:
Open your file in VI.
Put your cursor over some code
Be in normal mode press the following keys:
Vjjjj:le 3
Interpretation of what you did:
V means start selecting text.
jjjj arrows down 4 lines, highlighting 4 lines.
: tells vi you will enter an instruction for the highlighted text.
le 3 means set the indent of the highlighted text to 3 spaces.
The selected code is immediately increased or decreased to three spaces indentation.
Option 3: use Visual Block mode and special insert mode to increase indent:
Open your file in VI.
Put your cursor over some code
Be in normal mode press the following keys:
Ctrl+V
jjjj
Shift+I
(press spacebar 5 times)
Esc
All the highlighted text is indented an additional 5 spaces.
This answer summarises the other answers and comments of this question, and adds extra information based on the
Vim documentation and the
Vim wiki . For conciseness, this answer doesn't distinguish between Vi and
Vim-specific commands.
In the commands below, "re-indent" means "indent lines according to your
indentation settings ."
shiftwidth is the
primary variable that controls indentation.
General Commands
>> Indent line by shiftwidth spaces
<< De-indent line by shiftwidth spaces
5>> Indent 5 lines
5== Re-indent 5 lines
>% Increase indent of a braced or bracketed block (place cursor on brace first)
=% Reindent a braced or bracketed block (cursor on brace)
<% Decrease indent of a braced or bracketed block (cursor on brace)
]p Paste text, aligning indentation with surroundings
=i{ Re-indent the 'inner block', i.e. the contents of the block
=a{ Re-indent 'a block', i.e. block and containing braces
=2a{ Re-indent '2 blocks', i.e. this block and containing block
>i{ Increase inner block indent
<i{ Decrease inner block indent
You can replace { with } or B, e.g. =iB is a valid block indent command.
Take a look at "Indent a Code Block" for a nice example
to try these commands out on.
Also, remember that . repeats the last command, so indentation commands can be easily and conveniently repeated.
Re-indenting complete files
Another common situation is requiring indentation to be fixed throughout a source file:
gg=G Re-indent entire buffer
You can extend this idea to multiple files:
" Re-indent all your c source code:
:args *.c
:argdo normal gg=G
:wall
Or multiple buffers:
" Re-indent all open buffers:
:bufdo normal gg=G
:wall
In Visual Mode
Vjj> Visually mark and then indent 3 lines
In insert mode
These commands apply to the current line:
CTRL-t insert indent at start of line
CTRL-d remove indent at start of line
0 CTRL-d remove all indentation from line
Ex commands
These are useful when you want to indent a specific range of lines, without moving your cursor.
:< and :> Given a range, apply indentation e.g.
:4,8> indent lines 4 to 8, inclusive
set expandtab "Use softtabstop spaces instead of tab characters for indentation
set shiftwidth=4 "Indent by 4 spaces when using >>, <<, == etc.
set softtabstop=4 "Indent by 4 spaces when pressing <TAB>
set autoindent "Keep indentation from previous line
set smartindent "Automatically inserts indentation in some cases
set cindent "Like smartindent, but stricter and more customisable
Vim has intelligent indentation based on filetype. Try adding this to your .vimrc:
if has("autocmd")
" File type detection. Indent based on filetype. Recommended.
filetype plugin indent on
endif
Both this answer and the one above it were great. But I +1'd this because it reminded me of the 'dot' operator, which repeats
the last command. This is extremely useful when needing to indent an entire block several shiftwidths (or indentations)
without needing to keep pressing >} . Thanks a lot –
Amit
Aug 10 '11 at 13:26
5>> Indent 5 lines : This command indents the fifth line, not 5 lines. Could this be due to my VIM settings, or is your
wording incorrect? – Wipqozn
Aug 24 '11 at 16:00
Great summary! Also note that the "indent inside block" and "indent all block" (<i{ >a{ etc.) also works with parentheses and
brackets: >a( <i] etc. (And while I'm at it, in addition to <>'s, they also work with d,c,y etc.) –
aqn
Mar 6 '13 at 4:42
Using Python a lot, I frequently find myself needing to shift blocks by more than one indent. You can do this by using
any of the block selection methods, and then just enter the number of indents you wish to jump right before the >
Eg. V5j3> will indent 5 lines 3 times - which is 12 spaces if you use 4 spaces for indents
The beauty of vim's UI is that it's consistent. Editing commands are made up of the command and a cursor move. The cursor moves
are always the same:
H to top of screen, L to bottom, M to middle
n G to go to line n, G alone to bottom of file, gg to top
n to move to next search match, N to previous
} to end of paragraph
% to next matching bracket, either of the parentheses or the tag kind
enter to the next line
'x to the line of mark x (where x is a letter or another ' )
many more, including w and W for word, $ or 0 to tips of the
line, etc, that don't apply here because are not line movements.
So, in order to use vim you have to learn to move the cursor and remember a repertoire of commands like, for example, >
to indent (and < to "outdent").
Thus, for indenting the lines from the cursor position to the top of the screen you do >H, >G to indent
to the bottom of the file.
If, instead of typing >H, you type dH then you are deleting the same block of lines, cH
for replacing it, etc.
Some cursor movements fit better with specific commands. In particular, the % command is handy to indent a whole
HTML or XML block.
If the file has syntax highlighting enabled ( :syn on ), then setting the cursor in the text of a tag (like on the "i" of
<div> ) and entering >% will indent up to the closing </div> tag.
This is how vim works: one has to remember only the cursor movements and the commands, and how to mix them.
So my answer to this question would be "go to one end of the block of lines you want to indent, and then type the >
command and a movement to the other end of the block" if indent is interpreted as shifting the lines, =
if indent is interpreted as in pretty-printing.
When the 'expandtab' option is off (this is the default) Vim uses <Tab>s as much as possible to make the indent. ( :help :> )
– Kent Fredric
Mar 16 '11 at 8:36
The only tab/space related vim setting I've changed is :set tabstop=3. It's actually inserting this every time I use >>: "<tab><space><space>".
Same with indenting a block. Any ideas? – Shane
Reustle
Dec 2 '12 at 3:17
The three settings you want to look at for "spaces vs tabs" are 1. tabstop 2. shiftwidth 3. expandtab. You probably have "shiftwidth=5
noexpandtab", so a "tab" is 3 spaces, and an indentation level is "5" spaces, so it makes up the 5 with 1 tab, and 2 spaces. –
Kent Fredric
Dec 2 '12 at 17:08
For me, the MacVim (Visual) solution was, select with mouse and press ">", but after putting the following lines in "~/.vimrc"
since I like spaces instead of tabs:
set expandtab
set tabstop=2
set shiftwidth=2
Also it's useful to be able to call MacVim from the command-line (Terminal.app), so I have the following helper directory
"~/bin", where I place a script called "macvim":
#!/usr/bin/env bash
/usr/bin/open -a /Applications/MacPorts/MacVim.app "$@"
And of course in "~/.bashrc":
export PATH=$PATH:$HOME/bin
Macports messes with "~/.profile" a lot, so the PATH environment variable can get quite long.
A quick way to do this using VISUAL MODE uses the same process as commenting a block of code.
This is useful if you would prefer not to change your shiftwidth or use any set directives and is
flexible enough to work with TABS or SPACES or any other character.
Position cursor at the beginning on the block
v to switch to -- VISUAL MODE --
Select the text to be indented
Type : to switch to the prompt
Replacing with 3 leading spaces:
:'<,'>s/^/ /g
Or replacing with leading tabs:
:'<,'>s/^/\t/g
Brief Explanation:
'<,'> - Within the Visually Selected Range
s/^/ /g - Insert 3 spaces at the beginning of every line within the whole range
(or)
s/^/\t/g - Insert Tab at the beginning of every line within the whole range
Yup, and this is why one of my big peeves is white spaces on an otherwise empty line: they mess up vim's notion of a "paragraph".
– aqn
Mar 6 '13 at 4:47
In addition to the answer already given and accepted, it is also possible to place a marker and then indent everything from the
current cursor to the marker. Thus, enter ma where you want the top of your indented block, cursor down as far as
you need and then type >'a (note that " a " can be substituted for any valid marker name). This is sometimes
easier than 5>> or vjjj> .
And you can select the lines in visual mode, then press : to get :'<,'> (equivalent to the :1,3
part in your answer), and add mo N . If you want to move a single line, just :mo N . If you are really
lazy, you can omit the space (e.g. :mo5 ). Use marks with mo '{a-zA-Z} . –
Júda Ronén
Jan 18 '17 at 21:20
I've heard a lot about Vim, both pros and
cons. It really seems you should be (as a developer) faster with Vim than with any other
editor. I'm using Vim to do some basic stuff and I'm at best 10 times less
productive with Vim.
The only two things you should care about when you talk about speed (you may not care
enough about them, but you should) are:
Using alternatively left and right hands is the fastest way to use the keyboard.
Never touching the mouse is the second way to be as fast as possible. It takes ages for
you to move your hand, grab the mouse, move it, and bring it back to the keyboard (and you
often have to look at the keyboard to be sure you returned your hand properly to the right
place)
Here are two examples demonstrating why I'm far less productive with Vim.
Copy/Cut & paste. I do it all the time. With all the contemporary editors you press
Shift with the left hand, and you move the cursor with your right hand to select
text. Then Ctrl + C copies, you move the cursor and Ctrl +
V pastes.
With Vim it's horrible:
yy to copy one line (you almost never want the whole line!)
[number xx]yy to copy xx lines into the buffer. But you never
know exactly if you've selected what you wanted. I often have to do [number
xx]dd then u to undo!
Another example? Search & replace.
In PSPad :
Ctrl + f then type what you want you search for, then press
Enter .
In Vim: /, then type what you want to search for, then if there are some
special characters put \ before each special character, then press
Enter .
And everything with Vim is like that: it seems I don't know how to handle it the right
way.
You mention cutting with yy and complain that you almost never want to cut
whole lines. In fact programmers, editing source code, very often want to work on whole
lines, ranges of lines and blocks of code. However, yy is only one of many way
to yank text into the anonymous copy buffer (or "register" as it's called in vi ).
The "Zen" of vi is that you're speaking a language. The initial y is a verb.
The statement yy is a synonym for y_ . The y is
doubled up to make it easier to type, since it is such a common operation.
This can also be expressed as ddP (delete the current line and
paste a copy back into place; leaving a copy in the anonymous register as a side effect). The
y and d "verbs" take any movement as their "subject." Thus
yW is "yank from here (the cursor) to the end of the current/next (big) word"
and y'a is "yank from here to the line containing the mark named ' a
'."
If you only understand basic up, down, left, and right cursor movements then vi will be no
more productive than a copy of "notepad" for you. (Okay, you'll still have syntax
highlighting and the ability to handle files larger than a piddling ~45KB or so; but work
with me here).
vi has 26 "marks" and 26 "registers." A mark is set to any cursor location using the
m command. Each mark is designated by a single lower case letter. Thus
ma sets the ' a ' mark to the current location, and mz
sets the ' z ' mark. You can move to the line containing a mark using the
' (single quote) command. Thus 'a moves to the beginning of the
line containing the ' a ' mark. You can move to the precise location of any mark
using the ` (backquote) command. Thus `z will move directly to the
exact location of the ' z ' mark.
Because these are "movements" they can also be used as subjects for other
"statements."
So, one way to cut an arbitrary selection of text would be to drop a mark (I usually use '
a ' as my "first" mark, ' z ' as my next mark, ' b ' as another,
and ' e ' as yet another (I don't recall ever having interactively used more than
four marks in 15 years of using vi ; one creates one's own conventions regarding how marks
and registers are used by macros that don't disturb one's interactive context)). Then we go to
the other end of our desired text; we can start at either end, it doesn't matter. Then we can
simply use d`a to cut or y`a to copy. Thus the whole process has a
5-keystroke overhead (six if we started in "insert" mode and needed to Esc out to
command mode). Once we've cut or copied then pasting in a copy is a single keystroke:
p .
I say that this is one way to cut or copy text. However, it is only one of many.
Frequently we can more succinctly describe the range of text without moving our cursor around
and dropping a mark. For example if I'm in a paragraph of text I can use { and
} movements to the beginning or end of the paragraph respectively. So, to move a
paragraph of text I cut it using {d} (3 keystrokes). (If I happen
to already be on the first or last line of the paragraph I can then simply use
d} or d{ respectively.)
The notion of "paragraph" defaults to something which is usually intuitively reasonable.
Thus it often works for code as well as prose.
Frequently we know some pattern (regular expression) that marks one end or the other of
the text in which we're interested. Searching forwards or backwards are movements in vi .
Thus they can also be used as "subjects" in our "statements." So I can use d/foo
to cut from the current line to the next line containing the string "foo" and
y?bar to copy from the current line to the most recent (previous) line
containing "bar." If I don't want whole lines I can still use the search movements (as
statements of their own), drop my mark(s) and use the `x commands as described
previously.
In addition to "verbs" and "subjects" vi also has "objects" (in the grammatical sense of
the term). So far I've only described the use of the anonymous register. However, I can use
any of the 26 "named" registers by prefixing the "object" reference with
" (the double quote modifier). Thus if I use "add I'm cutting the
current line into the ' a ' register and if I use "by/foo then I'm
yanking a copy of the text from here to the next line containing "foo" into the ' b
' register. To paste from a register I simply prefix the paste with the same modifier
sequence: "ap pastes a copy of the ' a ' register's contents into the
text after the cursor and "bP pastes a copy from ' b ' to before the
current line.
This notion of "prefixes" also adds the analogs of grammatical "adjectives" and "adverbs'
to our text manipulation "language." Most commands (verbs) and movement (verbs or objects,
depending on context) can also take numeric prefixes. Thus 3J means "join the
next three lines" and d5} means "delete from the current line through the end of
the fifth paragraph down from here."
This is all intermediate level vi . None of it is Vim specific and there are far more
advanced tricks in vi if you're ready to learn them. If you were to master just these
intermediate concepts then you'd probably find that you rarely need to write any macros
because the text manipulation language is sufficiently concise and expressive to do most
things easily enough using the editor's "native" language.
A sampling of more advanced tricks:
There are a number of : commands, most notably the :%
s/foo/bar/g global substitution technique. (That's not advanced but other
: commands can be). The whole : set of commands was historically
inherited by vi 's previous incarnations as the ed (line editor) and later the ex (extended
line editor) utilities. In fact vi is so named because it's the visual interface to ex .
: commands normally operate over lines of text. ed and ex were written in an
era when terminal screens were uncommon and many terminals were "teletype" (TTY) devices. So
it was common to work from printed copies of the text, using commands through an extremely
terse interface (common connection speeds were 110 baud, or, roughly, 11 characters per
second -- which is slower than a fast typist; lags were common on multi-user interactive
sessions; additionally there was often some motivation to conserve paper).
So the syntax of most : commands includes an address or range of addresses
(line number) followed by a command. Naturally one could use literal line numbers:
:127,215 s/foo/bar to change the first occurrence of "foo" into "bar" on each
line between 127 and 215. One could also use some abbreviations such as . or
$ for current and last lines respectively. One could also use relative prefixes
+ and - to refer to offsets after or before the current line,
respectively. Thus: :.,$j meaning "from the current line to the last line, join
them all into one line". :% is synonymous with :1,$ (all the
lines).
The :... g and :... v commands bear some explanation as they are
incredibly powerful. :... g is a prefix for "globally" applying a subsequent
command to all lines which match a pattern (regular expression) while :... v
applies such a command to all lines which do NOT match the given pattern ("v" from
"conVerse"). As with other ex commands these can be prefixed by addressing/range references.
Thus :.,+21g/foo/d means "delete any lines containing the string "foo" from the
current one through the next 21 lines" while :.,$v/bar/d means "from here to the
end of the file, delete any lines which DON'T contain the string "bar."
It's interesting that the common Unix command grep was actually inspired by this ex
command (and is named after the way in which it was documented). The ex command
:g/re/p (grep) was the way they documented how to "globally" "print" lines
containing a "regular expression" (re). When ed and ex were used, the :p command
was one of the first that anyone learned and often the first one used when editing any file.
It was how you printed the current contents (usually just one page full at a time using
:.,+25p or some such).
Note that :% g/.../d or (its reVerse/conVerse counterpart: :%
v/.../d are the most common usage patterns. However there are couple of other
ex commands which are worth remembering:
We can use m to move lines around, and j to join lines. For
example if you have a list and you want to separate all the stuff matching (or conversely NOT
matching some pattern) without deleting them, then you can use something like: :%
g/foo/m$ ... and all the "foo" lines will have been moved to the end of the file.
(Note the other tip about using the end of your file as a scratch space). This will have
preserved the relative order of all the "foo" lines while having extracted them from the rest
of the list. (This would be equivalent to doing something like: 1G!GGmap!Ggrep
foo<ENTER>1G:1,'a g/foo'/d (copy the file to its own tail, filter the tail
through grep, and delete all the stuff from the head).
To join lines usually I can find a pattern for all the lines which need to be joined to
their predecessor (all the lines which start with "^ " rather than "^ * " in some bullet
list, for example). For that case I'd use: :% g/^ /-1j (for every matching line,
go up one line and join them). (BTW: for bullet lists trying to search for the bullet lines
and join to the next doesn't work for a couple reasons ... it can join one bullet line to
another, and it won't join any bullet line to all of its continuations; it'll only
work pairwise on the matches).
Almost needless to mention you can use our old friend s (substitute) with the
g and v (global/converse-global) commands. Usually you don't need
to do so. However, consider some case where you want to perform a substitution only on lines
matching some other pattern. Often you can use a complicated pattern with captures and use
back references to preserve the portions of the lines that you DON'T want to change. However,
it will often be easier to separate the match from the substitution: :%
g/foo/s/bar/zzz/g -- for every line containing "foo" substitute all "bar" with "zzz."
(Something like :% s/\(.*foo.*\)bar\(.*\)/\1zzz\2/g would only work for the
cases those instances of "bar" which were PRECEDED by "foo" on the same line; it's ungainly
enough already, and would have to be mangled further to catch all the cases where "bar"
preceded "foo")
The point is that there are more than just p, s, and
d lines in the ex command set.
The : addresses can also refer to marks. Thus you can use:
:'a,'bg/foo/j to join any line containing the string foo to its subsequent line,
if it lies between the lines between the ' a ' and ' b ' marks. (Yes, all
of the preceding ex command examples can be limited to subsets of the file's
lines by prefixing with these sorts of addressing expressions).
That's pretty obscure (I've only used something like that a few times in the last 15
years). However, I'll freely admit that I've often done things iteratively and interactively
that could probably have been done more efficiently if I'd taken the time to think out the
correct incantation.
Another very useful vi or ex command is :r to read in the contents of another
file. Thus: :r foo inserts the contents of the file named "foo" at the current
line.
More powerful is the :r! command. This reads the results of a command. It's
the same as suspending the vi session, running a command, redirecting its output to a
temporary file, resuming your vi session, and reading in the contents from the temp.
file.
Even more powerful are the ! (bang) and :... ! ( ex bang)
commands. These also execute external commands and read the results into the current text.
However, they also filter selections of our text through the command! Thus we can sort all
the lines in our file using 1G!Gsort ( G is the vi "goto" command;
it defaults to going to the last line of the file, but can be prefixed by a line number, such
as 1, the first line). This is equivalent to the ex variant :1,$!sort . Writers
often use ! with the Unix fmt or fold utilities for reformatting or "word
wrapping" selections of text. A very common macro is {!}fmt (reformat the
current paragraph). Programmers sometimes use it to run their code, or just portions of it,
through indent or other code reformatting tools.
Using the :r! and ! commands means that any external utility or
filter can be treated as an extension of our editor. I have occasionally used these with
scripts that pulled data from a database, or with wget or lynx commands that pulled data off
a website, or ssh commands that pulled data from remote systems.
Another useful ex command is :so (short for :source ). This
reads the contents of a file as a series of commands. When you start vi it normally,
implicitly, performs a :source on ~/.exinitrc file (and Vim usually
does this on ~/.vimrc, naturally enough). The use of this is that you can
change your editor profile on the fly by simply sourcing in a new set of macros,
abbreviations, and editor settings. If you're sneaky you can even use this as a trick for
storing sequences of ex editing commands to apply to files on demand.
For example I have a seven line file (36 characters) which runs a file through wc, and
inserts a C-style comment at the top of the file containing that word count data. I can apply
that "macro" to a file by using a command like: vim +'so mymacro.ex'
./mytarget
(The + command line option to vi and Vim is normally used to start the
editing session at a given line number. However it's a little known fact that one can follow
the + by any valid ex command/expression, such as a "source" command as I've
done here; for a simple example I have scripts which invoke: vi +'/foo/d|wq!'
~/.ssh/known_hosts to remove an entry from my SSH known hosts file non-interactively
while I'm re-imaging a set of servers).
Usually it's far easier to write such "macros" using Perl, AWK, sed (which is, in fact,
like grep a utility inspired by the ed command).
The @ command is probably the most obscure vi command. In occasionally
teaching advanced systems administration courses for close to a decade I've met very few
people who've ever used it. @ executes the contents of a register as if it were
a vi or ex command.
Example: I often use: :r!locate ... to find some file on my system and read its
name into my document. From there I delete any extraneous hits, leaving only the full path to
the file I'm interested in. Rather than laboriously Tab -ing through each
component of the path (or worse, if I happen to be stuck on a machine without Tab completion
support in its copy of vi ) I just use:
0i:r (to turn the current line into a valid :r command),
"cdd (to delete the line into the "c" register) and
@c execute that command.
That's only 10 keystrokes (and the expression "cdd@c is
effectively a finger macro for me, so I can type it almost as quickly as any common six
letter word).
A sobering thought
I've only scratched the surface of vi 's power and none of what I've described here is even
part of the "improvements" for which vim is named! All of what I've described here should
work on any old copy of vi from 20 or 30 years ago.
There are people who have used considerably more of vi 's power than I ever will.
@Wahnfieden -- grok is exactly what I meant: en.wikipedia.org/wiki/Grok (It's apparently even in
the OED --- the closest we anglophones have to a canonical lexicon). To "grok" an editor is
to find yourself using its commands fluently ... as if they were your natural language.
– Jim
Dennis
Feb 12 '10 at 4:08
wow, a very well written answer! i couldn't agree more, although i use the @
command a lot (in combination with q : record macro) – knittl
Feb 27 '10 at 13:15
Superb answer that utterly redeems a really horrible question. I am going to upvote this
question, that normally I would downvote, just so that this answer becomes easier to find.
(And I'm an Emacs guy! But this way I'll have somewhere to point new folks who want a good
explanation of what vi power users find fun about vi. Then I'll tell them about Emacs and
they can decide.) – Brandon Rhodes
Mar 29 '10 at 15:26
Can you make a website and put this tutorial there, so it doesn't get buried here on
stackoverflow. I have yet to read a better introduction to vi than this. – Marko
Apr 1 '10 at 14:47
You are talking about text selecting and copying, I think that you should give a look to the
Vim Visual Mode .
In the visual mode, you are able to select text using Vim commands, then you can do
whatever you want with the selection.
Consider the following common scenarios:
You need to select to the next matching parenthesis.
You could do:
v% if the cursor is on the starting/ending parenthesis
vib if the cursor is inside the parenthesis block
You want to select text between quotes:
vi" for double quotes
vi' for single quotes
You want to select a curly brace block (very common on C-style languages):
viB
vi{
You want to select the entire file:
ggVG
Visual
block selection is another really useful feature; it allows you to select a rectangular
area of text. You just have to press Ctrl - V to start it, and then
select the text block you want and perform any type of operation such as yank, delete, paste,
edit, etc. It's great to edit column oriented text.
Yes, but it was a specific complaint of the poster. Visual mode is Vim's best method of
direct text-selection and manipulation. And since vim's buffer traversal methods are superb,
I find text selection in vim fairly pleasurable. – guns
Aug 2 '09 at 9:54
I think it is also worth mentioning Ctrl-V to select a block - ie an arbitrary rectangle of
text. When you need it it's a lifesaver. – Hamish Downer
Mar 16 '10 at 13:34
Also, if you've got a visual selection and want to adjust it, o will hop to the
other end. So you can move both the beginning and the end of the selection as much as you
like. – Nathan Long
Mar 1 '11 at 19:05
* and # search for the word under the cursor
forward/backward.
w to the next word
W to the next space-separated word
b / e to the begin/end of the current word. ( B
/ E for space separated only)
gg / G jump to the begin/end of the file.
% jump to the matching { .. } or ( .. ), etc..
{ / } jump to next paragraph.
'. jump back to last edited line.
g; jump back to last edited position.
Quick editing commands
I insert at the beginning of the line.
A append at the end of the line.
o / O open a new line after/before the current.
v / V / Ctrl+V visual mode (to select
text!)
Shift+R replace text
C change remaining part of line.
Combining commands
Most commands accept a count and a direction, for example:
cW = change till end of word
3cW = change 3 words
BcW = go to the beginning of the full word, change the full word
ciW = change inner word.
ci" = change inner between ".."
ci( = change text between ( .. )
ci< = change text between < .. > (needs set
matchpairs+=<:> in vimrc)
4dd = delete 4 lines
3x = delete 3 characters.
3s = substitute 3 characters.
Useful programmer commands
r replace one character (e.g. rd replaces the current char
with d ).
~ changes case.
J joins two lines
Ctrl+A / Ctrl+X increments/decrements a number.
. repeat last command (a simple macro)
== fix line indent
> indent block (in visual mode)
< unindent block (in visual mode)
Macro recording
Press q[ key ] to start recording.
Then hit q to stop recording.
The macro can be played with @[ key ] .
By using very specific commands and movements, VIM can replay those exact actions for the
next lines. (e.g. A for append-to-end, b / e to move the cursor to
the begin or end of a word respectively)
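For instance (a small example of my own, not part of the answer above): to append a semicolon to each of the next ten lines you could record
qa          start recording into register a
A;<Esc>     append a ';' to the end of the current line
j           move to the next line
q           stop recording
and then play it back on the remaining lines with 9@a .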
Example of well-built settings
# reset to vim-defaults
if &compatible # only if not set before:
set nocompatible # use vim-defaults instead of vi-defaults (easier, more user friendly)
endif
# display settings
set background=dark # enable for dark terminals
set nowrap # dont wrap lines
set scrolloff=2 # 2 lines above/below cursor when scrolling
set number # show line numbers
set showmatch # show matching bracket (briefly jump)
set showmode # show mode in status bar (insert/replace/...)
set showcmd # show typed command in status bar
set ruler # show cursor position in status bar
set title # show file in titlebar
set wildmenu # completion with menu
set wildignore=*.o,*.obj,*.bak,*.exe,*.py[co],*.swp,*~,*.pyc,.svn
set laststatus=2 # use 2 lines for the status bar
set matchtime=2 # show matching bracket for 0.2 seconds
set matchpairs+=<:> # specially for html
# editor settings
set esckeys # map missed escape sequences (enables keypad keys)
set ignorecase # case insensitive searching
set smartcase # but become case sensitive if you type uppercase characters
set smartindent # smart auto indenting
set smarttab # smart tab handling for indenting
set magic # change the way backslashes are used in search patterns
set bs=indent,eol,start # Allow backspacing over everything in insert mode
set tabstop=4 # number of spaces a tab counts for
set shiftwidth=4 # spaces for autoindents
#set expandtab # turn tabs into spaces
set fileformat=unix # file mode is unix
#set fileformats=unix,dos # only detect unix file format, displays that ^M with dos files
# system settings
set lazyredraw # no redraws in macros
set confirm # get a dialog when :q, :w, or :wq fails
set nobackup # no backup~ files.
set viminfo='20,\"500 # remember copy registers after quitting in the .viminfo file -- 20 jump links, regs up to 500 lines'
set hidden # remember undo after quitting
set history=50 # keep 50 lines of command history
set mouse=v # use mouse in visual mode (not normal,insert,command,help mode)
# color settings (if terminal/gui supports it)
if &t_Co > 2 || has("gui_running")
syntax on # enable colors
set hlsearch # highlight search (very useful!)
set incsearch # search incremently (search while typing)
endif
# paste mode toggle (needed when using autoindent/smartindent)
map <F10> :set paste<CR>
map <F11> :set nopaste<CR>
imap <F10> <C-O>:set paste<CR>
imap <F11> <nop>
set pastetoggle=<F11>
# Use of the filetype plugins, auto completion and indentation support
filetype plugin indent on
# file type specific settings
if has("autocmd")
# For debugging
#set verbose=9
# if bash is sh.
let bash_is_sh=1
# change to directory of current file automatically
autocmd BufEnter * lcd %:p:h
# Put these in an autocmd group, so that we can delete them easily.
augroup mysettings
au FileType xslt,xml,css,html,xhtml,javascript,sh,config,c,cpp,docbook set smartindent shiftwidth=2 softtabstop=2 expandtab
au FileType tex set wrap shiftwidth=2 softtabstop=2 expandtab
# Confirm to PEP8
au FileType python set tabstop=4 softtabstop=4 expandtab shiftwidth=4 cinwords=if,elif,else,for,while,try,except,finally,def,class
augroup END
augroup perl
# reset (disable previous 'augroup perl' settings)
au!
au BufReadPre,BufNewFile
\ *.pl,*.pm
\ set formatoptions=croq smartindent shiftwidth=2 softtabstop=2 cindent cinkeys='0{,0},!^F,o,O,e' " tags=./tags,tags,~/devel/tags,~/devel/C
# formatoption:
# t - wrap text using textwidth
# c - wrap comments using textwidth (and auto insert comment leader)
# r - auto insert comment leader when pressing <return> in insert mode
# o - auto insert comment leader when pressing 'o' or 'O'.
# q - allow formatting of comments with "gq"
# a - auto formatting for paragraphs
# n - auto wrap numbered lists
#
augroup END
# Always jump to the last known cursor position.
# Don't do it when the position is invalid or when inside
# an event handler (happens when dropping a file on gvim).
autocmd BufReadPost *
\ if line("'\"") > 0 && line("'\"") <= line("$") |
\ exe "normal g`\"" |
\ endif
endif # has("autocmd")
The settings can be stored in ~/.vimrc, or system-wide in
/etc/vimrc.local and then read from the /etc/vimrc file
using:
source /etc/vimrc.local
(You'll have to replace the # comment character with " to make
it work in Vim; I used # here to get proper syntax highlighting.)
The commands I've listed here are pretty basic, and the main ones I use so far. They
already make me considerably more productive, without my having to know all the fancy stuff.
Better than '. is g;, which jumps back through the
changelist . Goes to the last edited position, instead of last edited line
– naught101
Apr 28 '12 at 2:09
The Control + R mechanism is very useful :-) In either insert mode or
command mode (i.e. on the : line when typing commands), continue with a numbered
or named register:
a - z the named registers
" the unnamed register, containing the text of the last delete or
yank
% the current file name
# the alternate file name
* the clipboard contents (X11: primary selection)
+ the clipboard contents
/ the last search pattern
: the last command-line
. the last inserted text
- the last small (less than a line) delete
=5*5 insert 25 into text (mini-calculator)
See :help i_CTRL-R and :help c_CTRL-R for more details, and
snoop around nearby for more CTRL-R goodness.
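Two small illustrations of my own (assuming a Vim with the usual strftime() support):
<C-R>=strftime('%Y-%m-%d')<CR>    in insert mode, evaluates the expression register and inserts today's date
:%s/<C-R>//replacement/g          on the : line, <C-R>/ pastes the last search pattern into the substitution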
+1 for current/alternate file name. Control-A also works in insert mode for last
inserted text, and Control-@ to both insert last inserted text and immediately
switch to normal mode. – Aryeh Leib Taurog
Feb 26 '12 at 19:06
There are a lot of good answers here, and one amazing one about the zen of vi. One thing I
don't see mentioned is that vim is extremely extensible via plugins. There are scripts and
plugins to make it do all kinds of crazy things the original author never considered. Here
are a few examples of incredibly handy vim plugins:
Rails.vim is a plugin written by tpope. It's an incredible tool for people doing rails
development. It does magical context-sensitive things that allow you to easily jump from a
method in a controller to the associated view, over to a model, and down to unit tests for
that model. It has saved me dozens if not hundreds of hours as a Rails
developer.
This plugin allows you to select a region of text in visual mode and type a quick command
to post it to gist.github.com . This
allows for easy pastebin access, which is incredibly handy if you're collaborating with
someone over IRC or IM.
This plugin provides special functionality to the spacebar. It turns the spacebar into
something analogous to the period, but instead of repeating actions it repeats motions. This
can be very handy for moving quickly through a file in a way you define on the
fly.
This plugin gives you the ability to work with text that is delimited in some fashion. It
gives you objects which denote things inside of parens, things inside of quotes, etc. It can
come in handy for manipulating delimited text.
This script brings fancy tab completion functionality to vim. The autocomplete stuff is
already there in the core of vim, but this brings it to a quick tab rather than multiple
different multikey shortcuts. Very handy, and incredibly fun to use. While it's not VS's
intellisense, it's a great step and brings a great deal of the functionality you'd
expect from a tab completion tool.
This tool brings external syntax checking commands into vim. I haven't used it personally,
but I've heard great things about it and the concept is hard to beat. Checking syntax without
having to do it manually is a great time saver and can help you catch syntactic bugs as you
introduce them rather than when you finally stop to test.
Direct access to git from inside of vim. Again, I haven't used this plugin, but I can see
the utility. Unfortunately I'm in a culture where svn is considered "new", so I won't likely
see git at work for quite some time.
A tree browser for vim. I started using this recently, and it's really handy. It lets you
put a treeview in a vertical split and open files easily. This is great for a project with a
lot of source files you frequently jump between.
This is an unmaintained plugin, but still incredibly useful. It provides the ability to
open files using a "fuzzy" descriptive syntax. It means that in a sparse tree of files you
need only type enough characters to disambiguate the files you're interested in from the rest
of the cruft.
Conclusion
There are a lot of incredible tools available for vim. I'm sure I've only scratched the
surface here, and it's well worth searching for tools applicable to your domain. The
combination of traditional vi's powerful toolset, vim's improvements on it, and plugins which
extend vim even further makes it one of the most powerful ways to edit text ever conceived. Vim
is easily as powerful as emacs, eclipse, visual studio, and textmate.
Thanks
Thanks to duwanis for his
vim configs from which I
have learned much and borrowed most of the plugins listed here.
The magical tests-to-class navigation in rails.vim is one of the more general things I wish
Vim had that TextMate absolutely nails across all languages: if I am working on Person.scala
and I do Cmd+T, usually the first thing in the list is PersonTest.scala. – Tom Morris
Apr 1 '10 at 8:50
@Benson Great list! I'd toss in snipMate as well. Very helpful
automation of common coding stuff. if<tab> instant if block, etc. – AlG
Sep 13 '11 at 17:37
Visual mode was mentioned previously, but block visual mode has saved me a lot of time
when editing fixed size columns in text file. (accessed with Ctrl-V).
Additionally, if you use a concise command (e.g. A for append-at-end) to edit the text, vim
can repeat that exact same action for the next line you press the . key at.
– vdboor
Apr 1 '10 at 8:34
Go to the last edited location (very useful if you performed some searching and then want to go
back to editing)
^P and ^N
Complete previous (^P) or next (^N) text.
^O and ^I
Go to previous ( ^O - "O" for old) location or to the next (
^I - "I" just near to "O" ). When you perform
searches, edit files etc., you can navigate through these "jumps" forward and back.
@Kungi `. will take you to the last edit; `` will take you back to the position you were in
before the last 'jump' - which /might/ also be the position of the last edit. –
Grant
McLean
Aug 23 '11 at 8:21
It's pretty new and really really good. The guy who is running the site switched from
textmate to vim and hosts very good and concise casts on specific vim topics. Check it
out!
@SolutionYogi: Consider that you want to add line number to the beginning of each line.
Solution: ggI1<space><esc>0qqyawjP0<c-a>0q9999@q – hcs42
Feb 27 '10 at 19:05
Extremely useful with Vimperator, where it increments (or decrements, Ctrl-X) the last number
in the URL. Useful for quickly surfing through image galleries etc. – blueyed
Apr 1 '10 at 14:47
Whoa, I didn't know about the * and # (search forward/back for word under cursor) binding.
That's kinda cool. The f/F and t/T and ; commands are quick jumps to characters on the
current line. f/F put the cursor on the indicated character while t/T puts it just up "to"
the character (the character just before or after it according to the direction chosen. ;
simply repeats the most recent f/F/t/T jump (in the same direction). – Jim Dennis
Mar 14 '10 at 6:38
:) The tagline at the top of the tips page at vim.org: "Can you imagine how many keystrokes
could have been saved, if I only had known the "*" command in time?" - Juergen Salk,
1/19/2001" – Steve K
Apr 3 '10 at 23:50
As Jim mentioned, the "t/T" combo is often just as good, if not better, for example,
ct( will erase the word and put you in insert mode, but keep the parentheses!
– puk
Feb 24 '12 at 6:45
CTRL-A : Add [count] to the number or alphabetic character at or after the cursor. {not
in Vi}
CTRL-X : Subtract [count] from the number or alphabetic character at or after the cursor.
{not in Vi}
b. Windows key unmapping
On Windows, Ctrl-A is already mapped to select the whole file, so you need to unmap it in your rc file:
either comment out the CTRL-A mapping part of mswin.vim, or add an unmap line to your rc file.
c. With Macro
The CTRL-A command is very useful in a macro. Example: Use the following steps to make a
numbered list.
Create the first list entry, make sure it starts with a number.
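The remaining steps were cut off above; the usual recipe (essentially the one in Vim's own help for CTRL-A) goes roughly like this:
qa          start recording into register a
Y           yank the numbered line
p           put a copy below it
CTRL-A      increment the number on the copy
q           stop recording
8@a         replay it to extend the list (here: eight more entries)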
Last week at work our project inherited a lot of Python code from another project.
Unfortunately the code did not fit into our existing architecture - it was all done with
global variables and functions, which would not work in a multi-threaded environment.
We had ~80 files that needed to be reworked to be object oriented - all the functions
moved into classes, parameters changed, import statements added, etc. We had a list of about
20 types of fix that needed to be done to each file. I would estimate that doing it by hand
one person could do maybe 2-4 per day.
So I did the first one by hand and then wrote a vim script to automate the changes. Most
of it was a list of vim commands e.g.
" delete an un-needed function "
g/someFunction(/ d
" add wibble parameter to function foo "
%s/foo(/foo( wibble,/
" convert all function calls bar(thing) into method calls thing.bar() "
g/bar(/ normal nmaf(ldi(`aPa.
The last one deserves a bit of explanation:
g/bar(/ executes the following command on every line that contains "bar("
normal execute the following text as if it was typed in in normal mode
n goes to the next match of "bar(" (since the :g command leaves the cursor position at the start of the line)
ma saves the cursor position in mark a
f( moves forward to the next opening bracket
l moves right one character, so the cursor is now inside the brackets
di( delete all the text inside the brackets
`a go back to the position saved as mark a (i.e. the first character of "bar")
P paste the deleted text before the current cursor position
a. go into insert mode and add a "."
For a couple of more complex transformations such as generating all the import statements
I embedded some python into the vim script.
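As a sketch of what embedding Python in a Vim script can look like -- illustrative only, not the author's actual code, and assuming a Vim built with Python support (use :python3 on modern builds):
python << EOF
import vim
# prepend an import statement to the buffer being converted
vim.current.buffer[0:0] = ["import threading"]
EOF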
After a few hours of working on it I had a script that will do at least 95% of the
conversion. I just open a file in vim then run :source fixit.vim and the file is
transformed in a blink of the eye.
We still have the work of changing the remaining 5% that was not worth automating and of
testing the results, but by spending a day writing this script I estimate we have saved weeks
of work.
Of course it would have been possible to automate this with a scripting language like
Python or Ruby, but it would have taken far longer to write and would be less flexible - the
last example would have been difficult since regex alone would not be able to handle nested
brackets, e.g. to convert bar(foo(xxx)) to foo(xxx).bar() . Vim was
perfect for the task.
@lpsquiggle: your suggestion would not handle complex expressions with more than one set of
brackets. e.g. if bar(foo(xxx)) or wibble(xxx): becomes if foo(xxx)) or
wibble(xxx.bar(): which is completely wrong. – Dave Kirby
Mar 23 '10 at 17:16
Use the builtin file explorer! The command is :Explore and it allows you to
navigate through your source code very quickly. I have these mappings in my
.vimrc :
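The mappings themselves didn't survive the formatting here; hypothetical examples of the kind of thing meant:
nnoremap <silent> <F8> :Explore<CR>
nnoremap <silent> <S-F8> :Sexplore<CR>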
I always thought the default methods for browsing kinda sucked for most stuff. It's just slow
to browse, if you know where you wanna go. LustyExplorer from vim.org's script section is a
much needed improvement. – Svend
Aug 2 '09 at 8:48
I recommend NERDtree instead of the built-in explorer. It has changed the way I used vim for
projects and made me much more productive. Just google for it. – kprobst
Apr 1 '10 at 3:53
I never feel the need to explore the source tree, I just use :find,
:tag and the various related keystrokes to jump around. (Maybe this is because
the source trees I work on are big and organized differently than I would have done? :) )
– dash-tom-bang
Aug 24 '11 at 0:35
I am a member of the American Cryptogram Association. The bimonthly magazine includes over
100 cryptograms of various sorts. Roughly 15 of these are "cryptarithms" - various types of
arithmetic problems with letters substituted for the digits. Two or three of these are
sudokus, except with letters instead of numbers. When the grid is completed, the nine
distinct letters will spell out a word or words, on some line, diagonal, spiral, etc.,
somewhere in the grid.
Rather than working with pencil, or typing the problems in by hand, I download the
problems from the members area of their website.
When working with these sudokus, I use vi, simply because I'm using facilities that vi has
that few other editors have. Mostly in converting the lettered grid into a numbered grid,
because I find it easier to solve, and then the completed numbered grid back into the
lettered grid to find the solution word or words.
The problem is formatted as nine groups of nine letters, with - s
representing the blanks, written in two lines. The first step is to format these into nine
lines of nine characters each. There's nothing special about this, just inserting eight
linebreaks in the appropriate places.
So, first step in converting this into numbers is to make a list of the distinct letters.
First, I make a copy of the block. I position the cursor at the top of the block, then type
:y}}p . : puts me in command mode, y yanks the next
movement command. Since } is a move to the end of the next paragraph,
y} yanks the paragraph. } then moves the cursor to the end of the
paragraph, and p pastes what we had yanked just after the cursor. So
y}}p creates a copy of the next paragraph, and ends up with the cursor between
the two copies.
Next, I turn one of those copies into a list of distinct letters. That command is a bit
more complex:
: again puts me in command mode. ! indicates that the content of
the next yank should be piped through a command line. } yanks the next
paragraph, and the command line then uses the tr command to strip out everything
except for upper-case letters, the sed command to print each letter on a single
line, and the sort command to sort those lines, removing duplicates, and then
tr strips out the newlines, leaving the nine distinct letters in a single line,
replacing the nine lines that had made up the paragraph originally. In this case, the letters
are: ACELNOPST .
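The command itself isn't reproduced above; one plausible reconstruction of such a filter, typed from normal mode and assuming GNU sed (so that \n works in the replacement), would be:
!}tr -cd 'A-Z' | sed 's/./&\n/g' | sort -u | tr -d '\n'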
Next step is to make another copy of the grid. And then to use the letters I've just
identified to replace each of those letters with a digit from 1 to 9. That's simple:
:!}tr ACELNOPST 1-9 . The result is:
This can then be solved in the usual way, or entered into any sudoku solver you might
prefer. The completed solution can then be converted back into letters with :!}tr 1-9
ACELNOPST .
There is power in vi that is matched by very few others. The biggest problem is that only
a very few of the vi tutorial books, websites, help-files, etc., do more than barely touch
the surface of what is possible.
and an irritation is that some distros such as Ubuntu alias the word "vi" to "vim",
so people won't really see vi. Excellent example, have to try... +1 – hhh
Jan 14 '11 at 17:12
I'm baffled by this repeated error: you say you need : to go into command mode,
but then invariably you specify normal mode commands (like y}}p ) which
cannot possibly work from the command mode?! – sehe
Mar 4 '12 at 20:47
My take on the unique chars challenge: :se tw=1 fo= (preparation)
VG:s/./& /g (insert spaces), gvgq (split onto separate lines),
V{:sort u (sort and remove duplicates) – sehe
Mar 4 '12 at 20:56
I find the following trick increasingly useful ... for cases where you want to join lines
that match (or that do NOT match) some pattern to the previous line: :%
g/foo/-1j or :'a,'z v/bar/-1j for example (where the former is "all lines
and matching the pattern" while the latter is "lines between mark a and mark z which fail to
match the pattern"). The part after the patter in a g or v ex
command can be any other ex commmands, -1j is just a relative line movement and join command.
– Jim
Dennis
Feb 12 '10 at 4:15
of course, if you name your macro '2', then when it comes time to use it, you don't even have
to move your finger from the '@' key to the 'q' key. Probably saves 50 to 100 milliseconds
every time right there. =P – JustJeff
Feb 27 '10 at 12:54
I recently discovered q: . It opens the "command window" and shows your most
recent ex-mode (command-mode) commands. You can move as usual within the window, and pressing
<CR> executes the command. You can edit, etc. too. Priceless when you're
messing around with some complex command or regex and you don't want to retype the whole
thing, or if the complex thing you want to do was 3 commands back. It's almost like bash's
set -o vi, but for vim itself (heh!).
See :help q: for more interesting bits for going back and forth.
I just discovered Vim's omnicompletion the other day, and while I'll admit I'm a bit hazy on
what does which, I've had surprisingly good results just mashing either Ctrl+x Ctrl+u
or Ctrl+n / Ctrl+p in insert mode. It's not quite IntelliSense, but I'm still learning it.
<Ctrl> + W and j/k will let you navigate absolutely (j down, k up, as with normal vim).
This is great when you have 3+ splits. – Andrew Scagnelli
Apr 1 '10 at 2:58
after bashing my keyboard I have deduced that <C-w>n or
<C-w>s is new horizontal window, <C-w>b is bottom right
window, <C-w>c or <C-w>q is close window,
<C-w>x is increase and then decrease window width (??),
<C-w>p is last window, <C-w>backspace is move left(ish)
window – puk
Feb 24 '12 at 7:00
As several other people have said, visual mode is the answer to your copy/cut & paste
problem. Vim gives you 'v', 'V', and C-v. Lower case 'v' in vim is essentially the same as
the shift key in notepad. The nice thing is that you don't have to hold it down. You can use
any movement technique to navigate efficiently to the starting (or ending) point of your
selection. Then hit 'v', and use efficient movement techniques again to navigate to the other
end of your selection. Then 'd' or 'y' allows you to cut or copy that selection.
The advantage vim's visual mode has over Jim Dennis's description of cut/copy/paste in vi
is that you don't have to get the location exactly right. Sometimes it's more efficient to
use a quick movement to get to the general vicinity of where you want to go and then refine
that with other movements than to think up a more complex single movement command that gets
you exactly where you want to go.
The downside to using visual mode extensively in this manner is that it can become a
crutch that you use all the time which prevents you from learning new vi(m) commands that
might allow you to do things more efficiently. However, if you are very proactive about
learning new aspects of vi(m), then this probably won't affect you much.
I'll also re-emphasize that the visual line and visual block modes give you variations on
this same theme that can be very powerful...especially the visual block mode.
On Efficient Use of the Keyboard
I also disagree with your assertion that alternating hands is the fastest way to use the
keyboard. It has an element of truth in it. Speaking very generally, repeated use of the same
thing is slow. The most significant example of this principle is that consecutive keystrokes
typed with the same finger are very slow. Your assertion probably stems from the natural
tendency to use the s/finger/hand/ transformation on this pattern. To some extent it's
correct, but at the extremely high end of the efficiency spectrum it's incorrect.
Just ask any pianist. Ask them whether it's faster to play a succession of a few notes
alternating hands or using consecutive fingers of a single hand in sequence. The fastest way
to type 4 keystrokes is not to alternate hands, but to type them with 4 fingers of the same
hand in either ascending or descending order (call this a "run"). This should be self-evident
once you've considered this possibility.
The more difficult problem is optimizing for this. It's pretty easy to optimize for
absolute distance on the keyboard. Vim does that. It's much harder to optimize at the "run"
level, but vi(m) with its modal editing gives you a better chance at being able to do it
than any non-modal approach (ahem, emacs) ever could.
On Emacs
Lest the emacs zealots completely disregard my whole post on account of that last
parenthetical comment, I feel I must describe the root of the difference between the emacs
and vim religions. I've never spoken up in the editor wars and I probably won't do it again,
but I've never heard anyone describe the differences this way, so here it goes. The
difference is the following tradeoff:
Vim gives you unmatched raw text editing efficiency Emacs gives you unmatched ability to
customize and program the editor
The blind vim zealots will claim that vim has a scripting language. But it's an obscure,
ad-hoc language that was designed to serve the editor. Emacs has Lisp! Enough said. If you
don't appreciate the significance of those last two sentences or have a desire to learn
enough about functional programming and Lisp to develop that appreciation, then you should
use vim.
The emacs zealots will claim that emacs has viper mode, and so it is a superset of vim.
But viper mode isn't standard. My understanding is that viper mode is not used by the
majority of emacs users. Since it's not the default, most emacs users probably don't develop
a true appreciation for the benefits of the modal paradigm.
In my opinion these differences are orthogonal. I believe the benefits of vim and emacs as
I have stated them are both valid. This means that the ultimate editor doesn't exist yet.
It's probably true that emacs would be the easiest platform on which to base the ultimate
editor. But modal editing is not entrenched in the emacs mindset. The emacs community could
move that way in the future, but that doesn't seem very likely.
So if you want raw editing efficiency, use vim. If you want the ultimate environment for
scripting and programming your editor use emacs. If you want some of both with an emphasis on
programmability, use emacs with viper mode (or program your own mode). If you want the best
of both worlds, you're out of luck for now.
Spend 30 mins doing the vim tutorial (run vimtutor instead of vim in terminal). You will
learn the basic movements, and some keystrokes, this will make you at least as productive
with vim as with the text editor you used before. After that, well, read Jim Dennis' answer
again :)
This is the first thing I thought of when reading the OP. It's obvious that the poster has
never run this; I ran through it when first learning vim two years ago and it cemented in my
mind the superiority of Vim to any of the other editors I've used (including, for me, Emacs
since the key combos are annoying to use on a Mac). – dash-tom-bang
Aug 24 '11 at 0:47
Use \c anywhere in a search to ignore case (overriding your ignorecase or
smartcase settings). E.g. /\cfoo or /foo\c will match
foo, Foo, fOO, FOO, etc.
Use \C anywhere in a search to force case matching. E.g. /\Cfoo
or /foo\C will only match foo.
Odd nobody's mentioned ctags. Download "exuberant ctags" and put it ahead of the crappy
preinstalled version you already have in your search path. Cd to the root of whatever you're
working on; for example the Android kernel distribution. Type "ctags -R ." to build an index
of source files anywhere beneath that dir in a file named "tags". This contains all tags,
no matter the language or where in the directory, in one file, so cross-language work is easy.
Then open vim in that folder and read :help ctags for some commands. A few I use
often:
Put cursor on a method call and type CTRL-] to go to the method definition.
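A few companions to that one -- my additions, not from the original list, but standard Vim:
CTRL-T          jump back to where you were before the tag jump
:tag Foo        jump straight to the definition of the tag Foo
:tn / :tp       next/previous match when a tag is defined in several places
g]              list all matches for the tag under the cursor (like :tselect)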
You asked about productive shortcuts, but I think your real question is: Is vim worth it? The
answer to this stackoverflow question is -> "Yes"
You must have noticed two things. Vim is powerful, and vim is hard to learn. Much of its
power lies in its expandability and endless combinations of commands. Don't feel overwhelmed.
Go slow. One command, one plugin at a time. Don't overdo it.
All that investment you put into vim will pay back a thousand fold. You're going to be
inside a text editor for many, many hours before you die. Vim will be your companion.
Multiple buffers, and in particular fast jumping between them to compare two files with
:bp and :bn (properly remapped to a single Shift +
p or Shift + n )
vimdiff mode (splits in two vertical buffers, with colors to show the
differences)
Area-copy with Ctrl + v
And finally, tab completion of identifiers (search for "mosh_tab_or_complete"). That's a
life changer.
Probably better to set the clipboard option to unnamed ( set
clipboard=unnamed in your .vimrc) to use the system clipboard by default. Or if you
still want the system clipboard separate from the unnamed register, use the appropriately
named clipboard register: "*p . – R. Martinho Fernandes
Apr 1 '10 at 3:17
Love it! After being exasperated by pasting code examples from the web and I was just
starting to feel proficient in vim. That was the command I dreamed up on the spot. This was
when vim totally hooked me. – kevpie
Oct 12 '10 at 22:38
There are a plethora of questions where people talk about common tricks, notably " Vim+ctags
tips and tricks ".
However, I don't refer to commonly used shortcuts that someone new to Vim would find cool.
I am talking about a seasoned Unix user (be they a developer, administrator, both, etc.), who
thinks they know something 99% of us never heard or dreamed about. Something that not only
makes their work easier, but also is COOL and hackish .
After all, Vim resides in
the most dark-corner-rich OS in the world, thus it should have intricacies that only a few
privileged know about and want to share with us.
Might not be one that 99% of Vim users don't know about, but it's something I use daily and
that any Linux+Vim poweruser must know.
Basic command, yet extremely useful.
:w !sudo tee %
I often forget to sudo before editing a file I don't have write permissions on. When I
come to save that file and get a permission error, I just issue that vim command in order to
save the file without the need to save it to a temp file and then copy it back again.
You obviously have to be on a system with sudo installed and have sudo rights.
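If you use this often, a command-line mapping along these lines is a common convenience (my addition, not part of the original answer):
cnoremap w!! w !sudo tee % > /dev/null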
Something I just discovered recently that I thought was very cool:
:earlier 15m
Reverts the document back to how it was 15 minutes ago. Can take various arguments for the
amount of time you want to roll back, and is dependent on undolevels. Can be reversed with
the opposite command :later
@skinp: If you undo and then make further changes from the undone state, you lose that redo
history. This lets you go back to a state which is no longer in the undo stack. –
ephemient
Apr 8 '09 at 16:15
Also very useful are g+ and g- to go backward and forward in time. This is so much more
powerful than an undo/redo stack since you don't lose the history when you do something
after an undo. – Etienne PIERRE
Jul 21 '09 at 13:53
You don't lose the redo history if you make a change after an undo. It's just not easily
accessed. There are plugins to help you visualize this, like Gundo.vim – Ehtesh Choudhury
Nov 29 '11 at 12:09
This is quite similar to :r! The only difference as far as I can tell is that :r! opens a new
line, :.! overwrites the current line. – saffsd
May 6 '09 at 14:41
An alternative to :.!date is to write "date" on a line and then run
!$sh (alternatively having the command followed by a blank line and run
!jsh ). This will pipe the line to the "sh" shell and substitute with the output
from the command. – hlovdal
Jan 25 '10 at 21:11
:.! is actually a special case of :{range}!, which filters a range
of lines (the current line when the range is . ) through a command and replaces
those lines with the output. I find :%! useful for filtering whole buffers.
– Nefrubyr
Mar 25 '10 at 16:24
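A couple of everyday illustrations of :{range}! (my additions, assuming the usual Unix tools are installed):
:%!sort -u          sort the whole buffer and drop duplicate lines
:'<,'>!column -t    line up a visually selected block in columns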
And also note that '!' is like 'y', 'd', 'c' etc. i.e. you can do: !!, number!!, !motion
(e.g. !Gshell_command<cr> replace from current line to end of file ('G') with output of
shell_command). – aqn
Apr 26 '13 at 20:52
dab "delete arounb brackets", daB for around curly brackets, t for xml type tags,
combinations with normal commands are as expected cib/yaB/dit/vat etc – sjh
Apr 8 '09 at 15:33
This is possibly the biggest reason for me staying with Vim. That and its equivalent "change"
commands: ciw, ci(, ci", as well as dt<space> and ct<space> – thomasrutter
Apr 26 '09 at 11:11
de Delete everything till the end of the word by pressing . at your heart's desire.
ci(xyz[Esc] -- This is a weird one. Here, the 'i' does not mean insert mode. Instead it
means inside the parentheses. So this sequence cuts the text inside the parentheses you're
standing in and replaces it with "xyz". It also works inside square and curly brackets --
just do ci[ or ci{ correspondingly. Naturally, you can do di( if you just want to delete all
the text without typing anything. You can also do a instead of i if you
want to delete the parentheses as well and not just the text inside them.
ci" - cuts the text in current quotes
ciw - cuts the current word. This works just like the previous one except that
( is replaced with w .
C - cut the rest of the line and switch to insert mode.
ZZ -- save and close current file (WAY faster than Ctrl-F4 to close the current tab!)
ddp - move current line one row down
xp -- move current character one position to the right
U - uppercase, so viwU uppercases the word
~ - switches case, so viw~ will reverse casing of entire word
Ctrl+u / Ctrl+d scroll the page half-a-screen up or down. This seems to be more useful
than the usual full-screen paging as it makes it easier to see how the two screens relate.
For those who still want to scroll entire screen at a time there's Ctrl+f for Forward and
Ctrl+b for Backward. Ctrl+Y and Ctrl+E scroll down or up one line at a time.
Crazy but very useful command is zz -- it scrolls the screen to make this line appear in
the middle. This is excellent for putting the piece of code you're working on in the center
of your attention. Sibling commands -- zt and zb -- make this line the top or the bottom one
on the screen, which is not quite as useful.
% finds and jumps to the matching parenthesis.
de -- delete from cursor to the end of the word (you can also do dE to delete
until the next space)
bde -- delete the current word, from left to right delimiter
df[space] -- delete up until and including the next space
dt. -- delete until next dot
dd -- delete this entire line
ye (or yE) -- yanks text from here to the end of the word
ce - cuts through the end of the word
bye -- copies current word (makes me wonder what "hi" does!)
yy -- copies the current line
cc -- cuts the current line, you can also do S instead. There's also lower
cap s which cuts current character and switches to insert mode.
viwy or viwc . Yank or change current word. Hit w multiple times to keep
selecting each subsequent word, use b to move backwards
vi{ - select all text in figure brackets. va{ - select all text including {}s
vi(p - highlight everything inside the ()s and replace with the pasted text
b and e move the cursor word-by-word, similarly to how Ctrl+Arrows normally do . The
definition of word is a little different though, as several consecutive delimiters are treated
as one word. If you start at the middle of a word, pressing b will always get you to the
beginning of the current word, and each consecutive b will jump to the beginning of the next
word. Similarly, and easy to remember, e gets the cursor to the end of the
current, and each subsequent, word.
similar to b / e, capital B and E
move the cursor word-by-word using only whitespaces as delimiters.
capital D (take a deep breath) Deletes the rest of the line to the right of the cursor,
same as Shift+End/Del in normal editors (notice 2 keypresses -- Shift+D -- instead of 3)
All the things you're calling "cut" is "change". eg: C is change until the end of the line.
Vim's equivalent of "cut" is "delete", done with d/D. The main difference between change and
delete is that delete leaves you in normal mode but change puts you into a sort of insert
mode (though you're still in the change command which is handy as the whole change can be
repeated with . ). – Laurence Gonsalves
Feb 19 '11 at 23:49
One that I rarely find in most Vim tutorials, but it's INCREDIBLY useful (at least to me), is
the
g; and g,
to move (forward, backward) through the changelist.
Let me show how I use it. Sometimes I need to copy and paste a piece of code or string,
say a hex color code in a CSS file, so I search, jump (not caring where the match is), copy
it and then jump back (g;) to where I was editing the code to finally paste it. No need to
create marks. Simpler.
Ctrl-O and Ctrl-I (tab) will work similarly, but not the same. They move backward and forward
in the "jump list", which you can view by doing :jumps or :ju For more information do a :help
jumplist – Kimball Robinson
Apr 16 '10 at 0:29
@JoshLee: If one is careful not to traverse newlines, is it safe to not use the -b option? I
ask because sometimes I want to make a hex change, but I don't want to close and
reopen the file to do so. – dotancohen
Jun 7 '13 at 5:50
Sometimes a setting in your .vimrc will get overridden by a plugin or autocommand. To debug
this a useful trick is to use the :verbose command in conjunction with :set. For example, to
figure out where cindent got set/unset:
:verbose set cindent?
This will output something like:
cindent
Last set from /usr/share/vim/vim71/indent/c.vim
This also works with maps and highlights. (Thanks joeytwiddle for pointing this out.) For
example:
:verbose nmap U
n U <C-R>
Last set from ~/.vimrc
:verbose highlight Normal
Normal xxx guifg=#dddddd guibg=#111111 font=Inconsolata Medium 14
Last set from ~/src/vim-holodark/colors/holodark.vim
:verbose can also be used before nmap l or highlight
Normal to find out where the l keymap or the Normal
highlight were last defined. Very useful for debugging! – joeytwiddle
Jul 5 '14 at 22:08
When you get into creating custom mappings, this will save your ass so many times, probably
one of the most useful ones here (IMO)! – SidOfc
Sep 24 '17 at 11:26
Not sure if this counts as dark-corner-ish at all, but I've only just learnt it...
:g/match/y A
will yank (copy) all lines containing "match" into the "a / @a
register. (The capitalization as A makes vim append yankings instead of
replacing the previous register contents.) I used it a lot recently when making Internet
Explorer stylesheets.
Sometimes it's better to do what tsukimi said and just filter out lines that don't match your
pattern. An abbreviated version of that command though: :v/PATTERN/d
Explanation: :v is an abbreviation for :g!, and the
:g command applies any ex command to lines. :y[ank] works and so
does :normal, but here the most natural thing to do is just
:d[elete] . – pandubear
Oct 12 '13 at 8:39
You can also do :g/match/normal "Ayy -- the normal keyword lets you
tell it to run normal-mode commands (which you are probably more familiar with). –
Kimball
Robinson
Feb 5 '16 at 17:58
Hitting <C-f> after : or / (or any time you're in command mode) will bring up the same
history menu. So you can remap q: if you hit it accidentally a lot and still access this
awesome mode. – idbrii
Feb 23 '11 at 19:07
For me it didn't open the source; instead it apparently used elinks to dump rendered page
into a buffer, and then opened that. – Ivan Vučica
Sep 21 '10 at 8:07
@Vdt: It'd be useful if you posted your error. If it's this one: " error (netrw)
neither the wget nor the fetch command is available" you obviously need to make one of those
tools available from your PATH environment variable. – Isaac Remuant
Jun 3 '13 at 15:23
I find this one particularly useful when people send links to a paste service and forgot to
select a syntax highlighting, I generally just have to open the link in vim after appending
"&raw". – Dettorer
Oct 29 '14 at 13:47
I didn't know macros could repeat themselves. Cool. Note: qx starts recording into register x
(he uses qq for register q). 0 moves to the start of the line. dw deletes a word. j moves down
a line. @q will run the macro again (defining a loop). But you forgot to end the recording
with a final "q", then actually run the macro by typing @q. – Kimball Robinson
Apr 16 '10 at 0:39
Another way of accomplishing this is to record a macro in register a that does some
transformation to a single line, then linewise highlight a bunch of lines with V and type
:normal! @a to apply your macro to every line in your selection. –
Nathan Long
Aug 29 '11 at 15:33
I found this post googling recursive VIM macros. I could find no way to stop the macro other
than killing the VIM process. – dotancohen
May 14 '13 at 6:00
Assuming you have Perl and/or Ruby support compiled in, :rubydo and
:perldo will run a Ruby or Perl one-liner on every line in a range (defaults to
entire buffer), with $_ bound to the text of the current line (minus the
newline). Manipulating $_ will change the text of that line.
You can use this to do certain things that are easy to do in a scripting language but not
so obvious using Vim builtins. For example to reverse the order of the words in a line:
:perldo $_ = join ' ', reverse split
To insert a random string of 8 characters (A-Z) at the end of every line:
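The one-liner itself is missing from this copy; a sketch of what it might look like (an assumption on my part, requiring Ruby support):
:rubydo $_ += (1..8).map { ('A'..'Z').to_a[rand(26)] }.join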
Sadly not, it just adds a funky control character to the end of the line. You could then use
a Vim search/replace to change all those control characters to real newlines though. –
Brian Carper
Jul 2 '09 at 17:26
Go to older/newer position. When you are moving through the file (by searching, moving
commands etc.) vim remembers these "jumps", so you can repeat these jumps backward (^O - O for
old) and forward (^I - just next to I on keyboard). I find it very useful when writing code
and performing a lot of searches.
gi
Go to position where Insert mode was stopped last. I find myself often editing and then
searching for something. To return to editing place press gi.
gf
put cursor on file name (e.g. include header file), press gf and the file is opened
gF
similar to gf but recognizes format "[file name]:[line number]". Pressing gF will open
[file name] and set cursor to [line number].
^P and ^N
Auto complete text while editing (^P - previous match and ^N next match)
^X^L
While editing, completes a whole line to match an identical line elsewhere (useful for programming).
You write code and then you recall that you have the same code somewhere in the file. Just press
^X^L and the full line is completed
^X^F
Complete file names. You write "/etc/pass" Hmm. You forgot the file name. Just press ^X^F
and the filename is completed
^Z or :sh
Move temporarily to the shell. If you need to do some quick shell work:
press ^Z (to put vi in the background) to return to the original shell, then press fg to return
to vim
press :sh to go to a sub-shell, then press ^D or exit to return to vi
With ^X^F my pet peeve is that filenames include = signs, making it
do rotten things in many occasions (ini files, makefiles etc). I use se
isfname-== to end that nuisance – sehe
Mar 4 '12 at 21:50
This is a nice trick to reopen the current file with a different encoding:
:e ++enc=cp1250 %:p
Useful when you have to work with legacy encodings. The supported encodings are listed in
a table under encoding-values (see :help encoding-values ). A similar thing also works for ++ff, so that you
can reopen a file with Windows/Unix line endings if you got it wrong the first time (see
:help ff ).
Never had to use this sort of a thing, but we'll certainly add to my arsenal of tricks...
– Sasha
Apr 7 '09 at 18:43
I have used this today, but I think I didn't need to specify "%:p"; just opening the file and
:e ++enc=cp1250 was enough. – Ivan Vučica
Jul 8 '09 at 19:29
This is a terrific answer. Not the bit about creating the IP addresses, but the bit that
implies that VIM can use for loops in commands . – dotancohen
Nov 30 '14 at 14:56
No need, usually, to be exactly on the braces. Though frequently I'd just =} or
vaBaB= because it is less dependent. Also, v}}:!astyle -bj matches
my code style better, but I can get it back into your style with a simple %!astyle
-aj – sehe
Mar 4 '12 at 22:03
I remapped capslock to esc instead, as it's an otherwise useless key. My mapping was OS wide
though, so it has the added benefit of never having to worry about accidentally hitting it.
The only drawback IS ITS HARDER TO YELL AT PEOPLE. :) – Alex
Oct 5 '09 at 5:32
@ojblass: Not sure how many people ever write MATLAB code in Vim, but ii and
jj are commonly used for counter variables, because i and
j are reserved for complex numbers. – brianmearns
Oct 3 '12 at 12:45
@rlbond - It comes down to how good is the regex engine in the IDE. Vim's regexes are pretty
powerful; others.. not so much sometimes. – romandas
Jun 19 '09 at 16:58
The * will be greedy, so this regex assumes you have just two columns. If you want it to be
nongreedy use {-} instead of * (see :help non-greedy for more information on the {}
multiplier) – Kimball Robinson
Apr 16 '10 at 0:32
Not exactly a dark secret, but I like to put the following mapping into my .vimrc file, so I
can hit "-" (minus) anytime to open the file explorer to show files adjacent to the one I
just edited. In the file explorer, I can hit another "-" to move up one directory,
providing seamless browsing of a complex directory structures (like the ones used by the MVC
frameworks nowadays):
map - :Explore<cr>
These may be also useful for somebody. I like to scroll the screen and advance the cursor
at the same time:
map <c-j> j<c-e>
map <c-k> k<c-y>
Tab navigation - I love tabs and I need to move easily between them:
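The mappings were lost in formatting; hypothetical examples of what is meant:
map <C-Right> :tabnext<CR>
map <C-Left>  :tabprevious<CR>
map <C-t>     :tabnew<CR>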
I suppose it would override autochdir temporarily (until you switched buffers again).
Basically, it changes directory to the root directory of the current file. It gives me a bit
more manual control than autochdir does. – rampion
May 8 '09 at 2:55
:set autochdir //this also serves the same functionality and it changes the current directory
to that of file in buffer – Naga Kiran
Jul 8 '09 at 13:44
I like to use 'sudo bash', and my sysadmin hates this. He locked down 'sudo' so it could only
be used with a handful of commands (ls, chmod, chown, vi, etc), but I was able to use vim to
get a root shell anyway:
bash$ sudo vi +'silent !bash' +q
Password: ******
root#
yeah... I'd hate you too ;) you should only need a root shell VERY RARELY, unless you're
already in the habit of running too many commands as root which means your permissions are
all screwed up. – jnylen
Feb 22 '11 at 15:58
Don't forget you can prepend numbers to perform an action multiple times in Vim. So to expand
the current window height by 8 lines: 8<C-W>+ – joeytwiddle
Jan 29 '12 at 18:12
well, if you haven't done anything else to the file, you can simply type u for undo.
Otherwise, I haven't figured that out yet. – Grant Limberg
Jun 17 '09 at 19:29
Commented out code is probably one of the worst types of comment you could possibly put in
your code. There are better uses for the awesome block insert. – Braden Best
Feb 4 '16 at 16:23
I use vim for just about any text editing I do, so I often use copy and paste. The
problem is that vim by default will often distort imported text when pasting. The way to
stop this is to use
:set paste
before pasting in your data. This will keep it from messing up.
Note that you will have to issue :set nopaste to recover auto-indentation.
Alternative ways of pasting pre-formatted text are the clipboard registers ( *
and + ), and :r!cat (you will have to end the pasted fragment with
^D).
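If you do this a lot, the pastetoggle option (already shown in the sample vimrc earlier) saves the back-and-forth; for example:
set pastetoggle=<F2>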
It is also sometimes helpful to turn on a high contrast color scheme. This can be done
with
:color blue
I've noticed that it does not work on all the versions of vim I use but it does on
most.
The "distortion" is happening because you have some form of automatic indentation enabled.
Using set paste or specifying a key for the pastetoggle option is a
common way to work around this, but the same effect can be achieved with set
mouse=a as then Vim knows that the flood of text it sees is a paste triggered by the
mouse. – jamessan
Dec 28 '09 at 8:27
If you have gvim installed you can often (though it depends on what your options your distro
compiles vim with) use the X clipboard directly from vim through the * register. For example
"*p to paste from the X xlipboard. (It works from terminal vim, too, it's just
that you might need the gvim package if they're separate) – kyrias
Oct 19 '13 at 12:15
Here's something not obvious. If you have a lot of custom plugins / extensions in your $HOME
and you need to work from su / sudo / ... sometimes, then this might be useful.
In your ~/.bashrc:
export VIMINIT=":so $HOME/.vimrc"
In your ~/.vimrc:
if $HOME=='/root'
    if $USER=='root'
        if isdirectory('/home/your_typical_username')
            let rtuser = 'your_typical_username'
        elseif isdirectory('/home/your_other_username')
            let rtuser = 'your_other_username'
        endif
    else
        let rtuser = $USER
    endif
    let &runtimepath = substitute(&runtimepath, $HOME, '/home/'.rtuser, 'g')
endif
It will allow your local plugins to load - whatever way you use to change the user.
You might also like to take the *.swp files out of your current path and into ~/vimtmp
(this goes into .vimrc):
if ! isdirectory(expand('~/vimtmp'))
call mkdir(expand('~/vimtmp'))
endif
if isdirectory(expand('~/vimtmp'))
set directory=~/vimtmp
else
set directory=.,/var/tmp,/tmp
endif
Also, some mappings I use to make editing easier - makes ctrl+s work like escape and
ctrl+h/l switch the tabs:
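The mappings themselves are missing from this copy; hypothetical equivalents (note that Ctrl-S may be taken by terminal flow control):
inoremap <C-s> <Esc>
nnoremap <C-h> :tabprevious<CR>
nnoremap <C-l> :tabnext<CR>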
I prefer never to run vim as root/under sudo - and would just run the command from vim e.g.
:!sudo tee %, :!sudo mv % /etc or even launch a login shell
:!sudo -i – shalomb
Aug 24 '15 at 8:02
Ctrl-n while in insert mode will auto complete whatever word you're typing based on all the
words that are in open buffers. If there is more than one match it will give you a list of
possible words that you can cycle through using ctrl-n and ctrl-p.
Ability to run Vim on a client/server based modes.
For example, suppose you're working on a project with a lot of buffers, tabs and other
info saved on a session file called session.vim.
You can open your session and create a server by issuing the following command:
vim --servername SAMPLESERVER -S session.vim
Note that you can open regular text files if you want to create a server and it doesn't
have to be necessarily a session.
Now, suppose you're in another terminal and need to open another file. If you open it
regularly by issuing:
vim new_file.txt
Your file would be opened in a separate Vim instance, which makes it hard to interact with
the files in your session. In order to open new_file.txt in a new tab on your server, use this
command:
vim --servername SAMPLESERVER --remote-tab-silent new_file.txt
If there's no server running, this file will be opened just like a regular file.
Since providing those flags every time you want to run them is very tedious, you can
create a separate alias for creating client and server.
I placed the followings on my bashrc file:
alias vims='vim --servername SAMPLESERVER'
alias vimc='vim --servername SAMPLESERVER --remote-tab-silent'
HOWTO: Auto-complete Ctags when using Vim in Bash. For anyone else who uses Vim and Ctags,
I've written a small auto-completer function for Bash. Add the following into your
~/.bash_completion file (create it if it does not exist):
Thanks go to stylishpants for his many fixes and improvements.
_vim_ctags() {
local cur prev
COMPREPLY=()
cur="${COMP_WORDS[COMP_CWORD]}"
prev="${COMP_WORDS[COMP_CWORD-1]}"
case "${prev}" in
-t)
# Avoid the complaint message when no tags file exists
if [ ! -r ./tags ]
then
return
fi
# Escape slashes to avoid confusing awk
cur=${cur////\\/}
COMPREPLY=( $(compgen -W "`awk -vORS=" " "/^${cur}/ { print \\$1 }" tags`" ) )
;;
*)
_filedir_xspec
;;
esac
}
# Files matching this pattern are excluded
excludelist='*.@(o|O|so|SO|so.!(conf)|SO.!(CONF)|a|A|rpm|RPM|deb|DEB|gif|GIF|jp?(e)g|JP?(E)G|mp3|MP3|mp?(e)g|MP?(E)G|avi|AVI|asf|ASF|ogg|OGG|class|CLASS)'
complete -F _vim_ctags -f -X "${excludelist}" vi vim gvim rvim view rview rgvim rgview gview
Once you restart your Bash session (or create a new one) you can type:
~$ vim -t MyC<tab key>
and it will auto-complete the tag the same way it does for files and directories:
MyClass MyClassFactory
~$ vim -t MyC
I find it really useful when I'm jumping into a quick bug fix.
Auto-reloading the current buffer is especially useful while viewing log files; it almost
provides the functionality of the Unix "tail" program from within vim.
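The command for this didn't survive the formatting; the usual ingredients (an assumption on my part, not necessarily what the original answer used) are:
set autoread        pick up changes made on disk when the buffer is unmodified
:checktime          trigger the check manually (or from an autocommand)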
Checking for compile errors from within vim: set the makeprg variable depending on the
language; let's say for Perl:
:setlocal makeprg=perl\ -c\ %
For PHP
set makeprg=php\ -l\ %
set errorformat=%m\ in\ %f\ on\ line\ %l
Issuing ":make" runs the associated makeprg and displays the compilation errors/warnings
in quickfix window and can easily navigate to the corresponding line numbers.
:make will run the makefile in the current directory, parse the compiler
output, you can then use :cn and :cp to step through the compiler
errors opening each file and seeking to the line number in question.
I was sure someone would have posted this already, but here goes.
Take any build system you please; make, mvn, ant, whatever. In the root of the project
directory, create a file of the commands you use all the time, like this:
mvn install
mvn clean install
... and so forth
To do a build, put the cursor on the line and type !!sh. I.e. filter that line; write it
to a shell and replace with the results.
The build log replaces the line, ready to scroll, search, whatever.
When you're done viewing the log, type u to undo and you're back to your file of
commands.
Why wouldn't you just set makeprg to the proper tool you use for your build (if
it isn't set already) and then use :make ? :copen will show you the
output of the build as well as allowing you to jump to any warnings/errors. –
jamessan
Dec 28 '09 at 8:29
==========================================================
In normal mode
==========================================================
gf ................ open file under cursor in same window --> see :h path
Ctrl-w f .......... open file under cursor in new window
Ctrl-w q .......... close current window
Ctrl-w 6 .......... open alternate file --> see :h #
gi ................ init insert mode in last insertion position
'0 ................ place the cursor where it was when the file was last edited
Due to the latency and lack of colors (I love color schemes :) I don't like programming on
remote machines in PuTTY .
So I developed this trick to work around this problem. I use it on Windows.
You will need
1x gVim
1x rsync on remote and local machines
1x SSH private key auth to the remote machine so you don't need to type the
password
Configure rsync to make your working directory accessible. I use an SSH tunnel and only
allow connections from the tunnel:
address = 127.0.0.1
hosts allow = 127.0.0.1
port = 40000
use chroot = false
[bledge_ce]
path = /home/xplasil/divine/bledge_ce
read only = false
Then start rsyncd: rsync --daemon --config=rsyncd.conf
Setting up local machine
Install rsync from Cygwin. Start Pageant and load your private key for the remote machine.
If you're using SSH tunneling, start PuTTY to create the tunnel. Create a batch file push.bat
in your working directory which will upload changed files to the remote machine using
rsync:
SConstruct is a build file for scons. Modify the list of files to suit your needs. Replace
localhost with the name of the remote machine if you don't use SSH tunneling.
Configuring Vim: that is now easy. We will use the quickfix feature (:make and the error list),
but the compilation will run on the remote machine. So we need to set makeprg:
This will first start the push.bat task to upload the files and then execute the commands
on the remote machine using SSH (Plink from the PuTTY suite). The command first changes
directory to the working dir and then starts the build (I use scons).
The results of the build will show conveniently in your local gVim error list.
I use Vim for everything. When I'm editing an e-mail message, I use:
gqap (or gwap )
extensively to easily and correctly reformat on a paragraph-by-paragraph basis, even with
quote leadin characters. In order to achieve this functionality, I also add:
-c 'set fo=tcrq' -c 'set tw=76'
to the command to invoke the editor externally. One noteworthy addition would be to add '
a ' to the fo (formatoptions) parameter. This will automatically reformat the paragraph as
you type and navigate the content, but may interfere or cause problems with errant or odd
formatting contained in the message.
autocmd FileType mail set tw=76 fo=tcrq in your ~/.vimrc will also
work, if you can't edit the external editor command. – Andrew Ferrier
Jul 14 '14 at 22:22
":e ." does the same thing for your current working directory which will be the same as your
current file's directory if you set autochdir – bpw1621
Feb 19 '11 at 15:13
retab 1. This sets the tab size to one. But it also goes through the code and adds extra
tabs and spaces so that the formatting does not move any of the actual text (i.e. the text
looks the same after retab).
%s/^I/ /g: Note the ^I is the result of hitting tab. This searches for all tabs and
replaces them with a single space. Since we just did a retab this should not cause the
formatting to change, but since putting tabs into a website is hit and miss it is good to
remove them.
%s/^/    /: Replace the beginning of the line with four spaces. Since you can't actually
replace the beginning of the line with anything, it inserts four spaces at the beginning of the
line (this is needed by SO formatting to make the code stand out).
Note that you can achieve the same thing with cat <file> | awk '{print " "
$line}' . So try :w ! awk '{print " " $line}' | xclip -i . That's
supposed to be four spaces between the "" – Braden Best
Feb 4 '16 at 16:40
When working on a project where the build process is slow I always build in the background
and pipe the output to a file called errors.err (something like make debug 2>&1
| tee errors.err ). This makes it possible for me to continue editing or reviewing the
source code during the build process. When it is ready (using pynotify on GTK to inform me
that it is complete) I can look at the result in vim using quickfix . Start by
issuing :cf[ile] which reads the error file and jumps to the first error. I personally like
to use cwindow to get the build result in a separate window.
A short explanation would be appreciated... I tried it and it could be very useful! You can
even do something like set colorcolumn=+1,+10,+20 :-) – Luc M
Oct 31 '12 at 15:12
colorcolumn allows you to specify columns that are highlighted (it's ideal for
making sure your lines aren't too long). In the original answer, set cc=+1
highlights the column after textwidth . See the documentation for
more information. – mjturner
Aug 19 '15 at 11:16
Yes, but that's like saying yank/paste functions make an editor "a little" more like an IDE.
Those are editor functions. Pretty much everything that goes with the editor that concerns
editing text and that particular area is an editor function. IDE functions would be, for
example, project/files management, connectivity with compiler&linker, error reporting,
building automation tools, debugger ... i.e. the stuff that doesn't actually have anything to do with
editing text. Vim has some functions & plugins so it can gravitate a little more towards
being an IDE, but these are not the ones in question. – Rook
May 12 '09 at 21:25
Also, just FYI, vim has an option to set invnumber. That way you don't have to "set nu" and
"set nonu", i.e. remember two functions - you can just toggle. – Rook
May 12 '09 at 21:31
:ls lists all the currently opened buffers. :be opens a file in a
new buffer, :bn goes to the next buffer, :bp to the previous,
:b filename opens buffer filename (it auto-completes too). buffers are distinct
from tabs, which i'm told are more analogous to views. – Nona Urbiz
Dec 20 '10 at 8:25
In insert mode, ctrl + x, ctrl + p will complete
(with menu of possible completions if that's how you like it) the current long identifier
that you are typing.
if (SomeCall(LONG_ID_ <-- type c-x c-p here
[LONG_ID_I_CANT_POSSIBLY_REMEMBER]
LONG_ID_BUT_I_NEW_IT_WASNT_THIS_ONE
LONG_ID_GOSH_FORGOT_THIS
LONG_ID_ETC
...
Neither of the following is really diehard, but I find it extremely useful.
Trivial bindings, but I just can't live without. It enables hjkl-style movement in insert
mode (using the ctrl key). In normal mode: ctrl-k/j scrolls half a screen up/down and
ctrl-l/h goes to the next/previous buffer. The µ and ù mappings are especially
for an AZERTY-keyboard and go to the next/previous make error.
A small function I wrote to highlight functions, globals, macros, structs and typedefs.
(Might be slow on very large files). Each type gets different highlighting (see ":help
group-name" to get an idea of your current colortheme's settings) Usage: save the file with
ww (default "\ww"). You need ctags for this.
nmap <Leader>ww :call SaveCtagsHighlight()<CR>
"Based on: http://stackoverflow.com/questions/736701/class-function-names-highlighting-in-vim
function SaveCtagsHighlight()
write
let extension = expand("%:e")
if extension!="c" && extension!="cpp" && extension!="h" && extension!="hpp"
return
endif
silent !ctags --fields=+KS *
redraw!
let list = taglist('.*')
for item in list
let kind = item.kind
if kind == 'member'
let kw = 'Identifier'
elseif kind == 'function'
let kw = 'Function'
elseif kind == 'macro'
let kw = 'Macro'
elseif kind == 'struct'
let kw = 'Structure'
elseif kind == 'typedef'
let kw = 'Typedef'
else
continue
endif
let name = item.name
if name != 'operator=' && name != 'operator ='
exec 'syntax keyword '.kw.' '.name
endif
endfor
echo expand("%")." written, tags updated"
endfunction
I have the habit of writing lots of code and functions and I don't like to write
prototypes for them. So I made some function to generate a list of prototypes within a
C-style sourcefile. It comes in two flavors: one that removes the formal parameter's name and
one that preserves it. I just refresh the entire list every time I need to update the
prototypes. It avoids having out of sync prototypes and function definitions. Also needs
ctags.
"Usage: in normal mode, where you want the prototypes to be pasted:
":call GenerateProptotypes()
function GeneratePrototypes()
execute "silent !ctags --fields=+KS ".expand("%")
redraw!
let list = taglist('.*')
let line = line(".")
for item in list
if item.kind == "function" && item.name != "main"
let name = item.name
let retType = item.cmd
let retType = substitute( retType, '^/\^\s*','','' )
let retType = substitute( retType, '\s*'.name.'.*', '', '' )
if has_key( item, 'signature' )
let sig = item.signature
let sig = substitute( sig, '\s*\w\+\s*,', ',', 'g')
let sig = substitute( sig, '\s*\w\+\(\s)\)', '\1', '' )
else
let sig = '()'
endif
let proto = retType . "\t" . name . sig . ';'
call append( line, proto )
let line = line + 1
endif
endfor
endfunction
function GeneratePrototypesFullSignature()
"execute "silent !ctags --fields=+KS ".expand("%")
let dir = expand("%:p:h")
execute "silent !ctags --fields=+KSi --extra=+q".dir."/* "
redraw!
let list = taglist('.*')
let line = line(".")
for item in list
if item.kind == "function" && item.name != "main"
let name = item.name
let retType = item.cmd
let retType = substitute( retType, '^/\^\s*','','' )
let retType = substitute( retType, '\s*'.name.'.*', '', '' )
if has_key( item, 'signature' )
let sig = item.signature
else
let sig = '(void)'
endif
let proto = retType . "\t" . name . sig . ';'
call append( line, proto )
let line = line + 1
endif
endfor
endfunction
" Pasting in normal mode should append to the right of cursor
nmap <C-V> a<C-V><ESC>
" Saving
imap <C-S> <C-o>:up<CR>
nmap <C-S> :up<CR>
" Insert mode control delete
imap <C-Backspace> <C-W>
imap <C-Delete> <C-O>dw
nmap <Leader>o o<ESC>k
nmap <Leader>O O<ESC>j
" tired of my typo
nmap :W :w
I rather often find it useful to on-the-fly define some key mapping just like one would
define a macro. The twist here is, that the mapping is recursive and is executed
until it fails.
I am completely aware of all the downsides - it just so happens that I found it rather
useful in some occasions. Also it can be interesting to watch it at work ;).
Macros are also allowed to be recursive and work in pretty much the same fashion when they
are, so it's not particularly necessary to use a mapping for this. – 00dani
Aug 2 '13 at 11:25
"... The .vimrc settings should be heavily commented ..."
"... Look also at perl-support.vim (a Perl IDE for Vim/gVim). Comes with suggestions for customizing Vim (.vimrc), gVim (.gvimrc), ctags, perltidy, and Devel:SmallProf beside many other things. ..."
"... Perl Best Practices has an appendix on Editor Configurations . vim is the first editor listed. ..."
"... Andy Lester and others maintain the official Perl, Perl 6 and Pod support files for Vim on Github: https://github.com/vim-perl/vim-perl ..."
There are a lot of threads pertaining to how to configure Vim/GVim for Perl
development on PerlMonks.org .
My purpose in posting this question is to try to create, as much as possible, an ideal configuration for Perl development using
Vim/GVim. Please post your suggestions for .vimrc settings as well as useful plugins.
I will try to merge the recommendations into a set of .vimrc settings and to a list of recommended plugins, ftplugins
and syntax files.
.vimrc settings
"Create a command :Tidy to invoke perltidy"
"By default it operates on the whole file, but you can give it a"
"range or visual range as well if you know what you're doing."
command -range=% -nargs=* Tidy <line1>,<line2>!
\perltidy -your -preferred -default -options <args>
vmap <tab> >gv "make tab in v mode indent code"
vmap <s-tab> <gv
nmap <tab> I<tab><esc> "make tab in normal mode indent code"
nmap <s-tab> ^i<bs><esc>
let perl_include_pod = 1 "include pod.vim syntax file with perl.vim"
let perl_extended_vars = 1 "highlight complex expressions such as @{[$x, $y]}"
let perl_sync_dist = 250 "use more context for highlighting"
set nocompatible "Use Vim defaults"
set backspace=2 "Allow backspacing over everything in insert mode"
set autoindent "Always set auto-indenting on"
set expandtab "Insert spaces instead of tabs in insert mode. Use spaces for indents"
set tabstop=4 "Number of spaces that a <Tab> in the file counts for"
set shiftwidth=4 "Number of spaces to use for each step of (auto)indent"
set showmatch "When a bracket is inserted, briefly jump to the matching one"
@Manni: You are welcome. I have been using the same .vimrc for many years and a recent bunch of vim related questions
got me curious. I was too lazy to wade through everything that was posted on PerlMonks (and see what was current etc.), so I figured
we could put together something here. – Sinan Ünür
Oct 15 '09 at 20:02
Rather than closepairs, I would recommend delimitMate or one of the various autoclose plugins. (There are about three named autoclose,
I think.) The closepairs plugin can't handle a single apostrophe inside a string (i.e. print "This isn't so hard, is it?"
), but delimitMate and others can. github.com/Raimondi/delimitMate
– Telemachus
Jul 8 '10 at 0:40
Three hours later: turns out that the 'p' in that mapping is a really bad idea. It will bite you when vim's got something to paste.
– innaM
Oct 21 '09 at 13:22
@Manni: I just gave it a try: if you type ,pt , vim waits for you to type something else (e.g. <cr>) as a signal that
the command is ended. Hitting ,ptv will immediately format the region. So I would expect that vim recognizes that
there is overlap between the mappings, and waits for disambiguation before proceeding. –
Ether
Oct 21 '09 at 19:44
" Create a command :Tidy to invoke perltidy.
" By default it operates on the whole file, but you can give it a
" range or visual range as well if you know what you're doing.
command -range=% -nargs=* Tidy <line1>,<line2>!
\perltidy -your -preferred -default -options <args>
Look also at perl-support.vim (a Perl
IDE for Vim/gVim). Comes with suggestions for customizing Vim (.vimrc), gVim (.gvimrc), ctags, perltidy, and Devel:SmallProf beside
many other things.
I hate the fact that \$ is changed automatically to a "my $" declaration (same with \@ and \%). Does the author never use references
or what?! – sundar
Mar 11 '10 at 20:54
" Allow :make to run 'perl -c' on the current buffer, jumping to
" errors as appropriate
" My copy of vimparse: http://irc.peeron.com/~zigdon/misc/vimparse.pl
set makeprg=$HOME/bin/vimparse.pl\ -c\ %\ $*
" point at wherever you keep the output of pltags.pl, allowing use of ^-]
" to jump to function definitions.
set tags+=/path/to/tags
@sinan it enables quickfix - all it does is reformat the output of perl -c so that vim parses it as compiler errors. Then the usual
quickfix commands work. – zigdon
Oct 16 '09 at 18:51
Here's an interesting module I found on the weekend:
App::EditorTools::Vim
. Its most interesting feature seems to be its ability to rename lexical variables. Unfortunately, my tests revealed that it doesn't
seem to be ready yet for any production use, but it sure seems worth to keep an eye on.
Here are a couple of my .vimrc settings. They may not be Perl specific, but I couldn't work without them:
set nocompatible " Use Vim defaults (much better!) "
set bs=2 " Allow backspacing over everything in insert mode "
set ai " Always set auto-indenting on "
set showmatch " show matching brackets "
" for quick scripts, just open a new buffer and type '_perls' "
iab _perls #!/usr/bin/perl<CR><BS><CR>use strict;<CR>use warnings;<CR>
The first one I know I picked up part of it from someone else, but I can't remember who. Sorry unknown person. Here's how I
made "C^N" auto complete work with Perl. Here's my .vimrc commands.
" to use CTRL+N with modules for autocomplete "
set iskeyword+=:
set complete+=k~/.vim_extras/installed_modules.dat
Then I set up a cron to create the installed_modules.dat file. Mine is for my mandriva system. Adjust accordingly.
locate *.pm | grep "perl5" | sed -e "s/\/usr\/lib\/perl5\///" | sed -e "s/5.8.8\///" | sed -e "s/5.8.7\///" | sed -e "s/vendor_perl\///" | sed -e "s/site_perl\///" | sed -e "s/x86_64-linux\///" | sed -e "s/\//::/g" | sed -e "s/\.pm//" >/home/jeremy/.vim_extras/installed_modules.dat
The second one allows me to use gf in Perl. gf is a shortcut to open other files: just place your cursor over the file name and type
gf and it will open that file.
" To use gf with perl "
set path+=$PWD/**,
set path+=/usr/lib/perl5/*,
set path+=/CompanyCode/*, " directory containing work code "
autocmd BufRead *.p? set include=^use
autocmd BufRead *.pl set includeexpr=substitute(v:fname,'\\(.*\\)','\\1.pm','i')
How do I
open and edit multiple files on a VIM text editor running under Ubuntu Linux / UNIX-like
operating systems to improve my productivity?
Vim offers multiple file editing with the help of windows. You can easily open multiple
files and edit them using the concept of buffers.
Understanding vim buffer
A buffer is nothing but a file loaded into memory for editing. The original file remains
unchanged until you write the buffer to the file using w or other file saving
related commands.
Understanding vim window
A window is nothing but a viewport onto a buffer. You can use multiple windows on one buffer,
or several windows on different buffers. By default, Vim starts with one window, for example
open /etc/passwd file, enter: $ vim /etc/passwd
Open two windows using vim at the shell prompt
Start vim as follows to open two windows stacked, i.e. split horizontally: $ vim -o /etc/passwd /etc/hosts
OR $ vim -o file1.txt resume.txt
Sample outputs:
(Fig.01: split horizontal windows under VIM)
The -O option allows you to open two windows side by side, i.e. split vertically,
enter: $ vim -O /etc/passwd /etc/hosts
How do I switch or jump between open windows?
This operation is also known as moving the cursor to another window. You need to use the
following keys:
Press CTRL + W + <Left arrow key> to activate the left window
Press CTRL + W + <Right arrow key> to activate the right window
Press CTRL + W + <Up arrow key> to activate the window above the current one
Press CTRL + W + <Down arrow key> to activate the window below the current one
Press CTRL-W + CTRL-W (hit CTRL+W twice) to move quickly between all open windows
How do I edit the current buffer?
Use all your regular vim commands such as i, w and so on for editing and saving text.
How do I close windows?
Press CTRL+W CTRL-Q to close the current window. You can also press [ESC]+:q to
quit the current window.
How do I open new empty window?
Press CTRL+W + n to create a new window and start editing an empty file in
it.
Press [ESC]+:new /path/to/file. This will create a new window and start editing file
/path/to/file in it. For example, open file called /etc/hosts.deny, enter: :new /etc/hosts.deny
Sample outputs:
(Fig.02: Create a new window and start editing file /etc/hosts.deny in it.)
(Fig.03: Two files opened in a two windows)
How do I resize a window?
You can increase or decrease a window's size by N. For example, to increase the window size by
5, press [ESC] 5 CTRL-W + . To decrease the window size by 5, press [ESC] 5 CTRL-W - .
Moving windows cheat sheet
Key combination
Action
CTRL-W h
move to the window on the left
CTRL-W j
move to the window below
CTRL-W k
move to the window above
CTRL-W l
move to the window on the right
CTRL-W t
move to the TOP window
CTRL-W b
move to the BOTTOM window
How do I quit all windows?
Type the following command (also known as quit all command): :qall
If any of the windows contain changes, Vim will not exit. The cursor will automatically be
positioned in a window with changes. You can then either use ":write" to save the changes: :write
or ":quit!" to throw them away: :quit!
How do I save and quit all windows?
To save all changes in all windows and quit, use this command: :wqall
This writes all modified files and quits Vim. Finally, there is a command that quits Vim and
throws away all changes: :qall!
Further readings:
Refer "Splitting windows" help by typing :help under vim itself.
SyslogLevel=
See syslog(3)
for details. This option is
only useful when StandardOutput= or StandardError= are set to syslog or kmsg . Note that individual lines output by the daemon
might be prefixed with a different log level which can be used to
override the default log level specified here.
The interpretation
of these prefixes may be disabled with SyslogLevelPrefix= , see
below. For details, see sd-daemon(3) . Defaults to
info .
I don't do a lot of development work, but while learning Python I've found pycharm to be a
robust and helpful IDE. Other than that, I'm old school like Proksch and use vi.
MICHAEL BAKER
SYSTEM ADMINISTRATOR, IT MAIL SERVICES
Yes, I'm the same as @Proksch. For my development environment at Red Hat, vim is easiest to
use as I'm using Linux to pop in and out of files. Otherwise, I've had a lot of great
experiences with Visual Studio.
Most of the time, on newly created file systems or NFS filesystems, we see an error
like the one below:
root@kerneltalks # touch file1
touch: cannot touch 'file1': Read-only file system
This is because the file system is mounted read-only. In such a scenario you have to mount it
in read-write mode. Before that, we will see how to check whether a file system is mounted
read-only and then how to re-mount it as a read-write filesystem.
How to check if a file system is read only
To confirm a file system is mounted in read-only mode, use the command below –
Grep your mount point in cat /proc/mounts and observe the options field (the fourth column),
which shows all options used for the mounted file system. Here ro denotes the file system is
mounted read-only.
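For example, a quick check might look like this (the /datastore mount point is the one used in the article; the awk one-liner is just an illustrative sketch):
# the mount options are the fourth field of each /proc/mounts line
grep ' /datastore ' /proc/mounts
# /dev/xvdf /datastore ext3 ro,relatime,seclabel,data=ordered 0 0
awk '$2 == "/datastore" {print $4}' /proc/mounts    # prints just the options, e.g. ro,relatime,...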
You can also get these details using the mount -v command:
root@kerneltalks # mount -v | grep datastore
/dev/xvdf on /datastore type ext3 (ro,relatime,seclabel,data=ordered)
In this output, the file system options are listed in parentheses in the last column.
Re-mount file system in read-write mode
To remount a file system in read-write mode, use the command below –
root@kerneltalks # mount -o remount,rw /datastore
root@kerneltalks # mount -v | grep datastore
/dev/xvdf on /datastore type ext3 (rw,relatime,seclabel,data=ordered)
Observe that after re-mounting, the option ro changed to rw . Now the file
system is mounted read-write and you can write files to it.
Note: It is recommended to fsck the file system before re-mounting it.
You can check a file system by running fsck on its volume.
root@kerneltalks # df -h /datastore
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda2       10G  881M  9.2G   9% /
root@kerneltalks # fsck /dev/xvdf
fsck from util-linux 2.23.2
e2fsck 1.42.9 (28-Dec-2013)
/dev/xvdf: clean, 12/655360 files, 79696/2621440 blocks
Sometimes corrections need to be made to the file system, which requires a reboot to
make sure no processes are accessing the file system.
You can see that the user has to type 'y' for each query. It's in situations like these that yes
can help. For the above scenario specifically, you can use yes in the following way:
yes | rm -ri test
Q3. Is there any use of yes when it's used alone?
Yes, there's at least one use: to test how well a computer system handles high load. The reason
is that the tool utilizes 100% of the processor on systems that have a single processor.
If you want to apply this test on a system with multiple processors, you need to run a yes
process for each processor.
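As a rough sketch of that idea (nproc and the 60-second duration are my own choices, not from the article):
# start one CPU-bound `yes` per processor, let them run for a minute, then stop them
for _ in $(seq "$(nproc)"); do
    yes > /dev/null &
done
sleep 60
kill $(jobs -p)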
To open multiple files, the command is the same as for a single file; we just add the names
of the other files as well.
$ vi file1 file2 file3
Now to move to the next file, we can use
$ :n
or we can also use
$ :e filename
Run external commands inside the editor
We can run external Linux/Unix commands from inside the vi editor, i.e. without exiting the
editor. To issue a command from the editor, go back to Command Mode if in Insert mode & use
the BANG, i.e. '!', followed by the command to be run. The syntax for running a command
is,
$ :! command
An example for this would be
$ :! df -H
Searching for a pattern
To search for a word or pattern in the text file, we use the following two commands in command
mode,
command '/' searches for the pattern in the forward direction
command '?' searches for the pattern in the backward direction
Both of these commands are used for the same purpose, the only difference being the direction
in which they search. An example would be,
$ :/ search pattern (if at the beginning of the file)
$ :? search pattern (if at the end of the file)
Searching & replacing a
pattern
We might be required to search & replace a word or a pattern from our text files. So
rather than finding the occurrence of word from whole text file & replace it, we can issue
a command from the command mode to replace the word automatically. Syntax for using search
& replacement is,
$ :s/pattern_to_be_found/New_pattern/g
Suppose we want to find word "alpha" & replace it with word "beta", the command would
be
$ :s/alpha/beta/g
If we want to only replace the first occurrence of word "alpha", then the command would
be
$ :s/alpha/beta/
Using Set commands
We can also customize the behaviour and the look and feel of the vi/vim editor by using the set
command. Here is a list of some options that can be used with the set command to modify the
behaviour of the vi/vim editor,
$ :set ic ignores case while searching
$ :set smartcase makes the search case sensitive when the pattern contains uppercase letters
$ :set nu displays line numbers at the beginning of each line
$ :set hlsearch highlights the matching words
$ :set ro changes the file to read only
$ :set term prints the terminal type
$ :set ai sets auto-indent
$ :set noai unsets the auto-indent
Some other commands to modify the vi editor are,
$ :colorscheme is used to change the color scheme of the editor (for VIM editor only)
$ :syntax on will turn on color syntax highlighting for .xml, .html files etc. (for VIM editor
only)
This completes our tutorial; do mention your queries/questions or suggestions in the comment
box below.
"... If you can, freeze changes in the weeks leading up to your vacation. Try to encourage other teams to push off any major changes until after you get back. ..."
"... Check for any systems about to hit a disk warning threshold and clear out space. ..."
"... Make sure all of your backup scripts are working and all of your backups are up to date. ..."
If you do need to take your computer, I highly recommend making a full backup before the
trip. Your computer is more likely to be lost, stolen or broken while traveling than when
sitting safely at the office, so I always take a backup of my work machine before a trip. Even
better than taking a backup, leave your expensive work computer behind and use a cheaper more
disposable machine for travel and just restore your important files and settings for work on it
before you leave and wipe it when you return. If you decide to go the disposable computer
route, I recommend working one or two full work days on this computer before the vacation to
make sure all of your files and settings are in place.
Documentation
Good documentation is the best way to reduce or eliminate how much you have to step in when
you aren't on call, whether you're on vacation or not. Everything from routine procedures to
emergency response should be documented and kept up to date. Honestly, this falls under
standard best practices as a sysadmin, so it's something you should have whether or not you are
about to go on vacation.
First, all routine procedures from how you deploy code and configuration changes, how you
manage tickets, how you perform security patches, how you add and remove users, and how the
overall environment is structured should be documented in a clear step-by-step way. If you
use automation tools for routine procedures, whether it's as simple as a few scripts or as
complex as full orchestration tools, you should make sure you document not only how to use
the automation tools, but also how to perform the same tasks manually should the automation
tools fail.
If you are on call, that means you have a monitoring system in place that scans your
infrastructure for problems and pages you when it finds any. Every single system check in
your monitoring tool should have a corresponding playbook that a sysadmin can follow to
troubleshoot and fix the problem. If your monitoring tool allows you to customize the alerts
it sends, create corresponding wiki entries for each alert name, and then customize the alert
so that it provides a direct link to the playbook in the wiki.
If you happen to be the subject-matter expert on a particular system, make sure that
documentation in particular is well fleshed out and understandable. These are the systems
that will pull you out of your vacation, so look through those documents for any assumptions
you may have made when writing them that a junior member of the team might not understand.
Have other members of the team review the documentation and ask you questions.
One saying about documentation is that if something is documented in two places, one of them
will be out of date. Even if you document something only in one place, there's a good chance it
is out of date unless you perform routine maintenance. It's a good practice to review your
documentation from time to time and update it where necessary and before a vacation is a
particularly good time to do it. If you are the only person that knows about the new way to
perform a procedure, you should make sure your documentation covers it.
Finally, have your team maintain a page to capture anything that happens while you are gone
that they want to tell you about when you get back. If you are the main maintainer of a
particular system, but they had to perform some emergency maintenance of it while you were
gone, that's the kind of thing you'd like to know about when you get back. If there's a central
place for the team to capture these notes, they will be more likely to write things down as
they happen and less likely to forget about things when you get back.
Stable State
The more stable your infrastructure is before you leave and the more stable it stays while
you are gone, the less likely you'll be disturbed on your vacation. Right before a vacation is
a terrible time to make a major change to critical systems. If you can, freeze changes in
the weeks leading up to your vacation. Try to encourage other teams to push off any major
changes until after you get back.
Before a vacation is also a great time to perform any preventative maintenance on your
systems. Check for any systems about to hit a disk warning threshold and clear out
space. In general, if you collect trending data, skim through it for any resources that
are trending upward that might go past thresholds while you are gone. If you have any tasks
that might add extra load to your systems while you are gone, pause or postpone them if you
can. Make sure all of your backup scripts are working and all of your backups are up to
date.
Emergency Contact Methods
Although it would be great to unplug completely while on vacation, there's a chance that
someone from work might want to reach you in an emergency. Depending on where you plan to
travel, some contact options may work better than others. For instance, some cell-phone plans
that work while traveling might charge high rates for calls, but text messages and data bill at
the same rates as at home.
... ... ... Kyle Rankin is senior security and infrastructure architect, the author of
many books including Linux Hardening in Hostile Networks, DevOps Troubleshooting and The
Official Ubuntu Server Book, and a columnist for Linux Journal. Follow him
@kylerankin
If you pass -1 as the process ID argument to either the
kill shell command or the
kill C function , then the signal is sent to all the processes it can reach, which
in practice means all the processes of the user running the kill command or syscall.
pkill - ... signal processes based on name and other attributes
-u, --euid euid,...
Only match processes whose effective user ID is listed.
Either the numerical or symbolical value may be used.
-U, --uid uid,...
Only match processes whose real user ID is listed. Either the
numerical or symbolical value may be used.
-u, --user
Kill only processes the specified user owns. Command names
are optional.
I think any utility used to find processes in a Linux/Solaris-style /proc (procfs) will use
the full list of processes (doing some readdir of /proc ). They will iterate over the numeric
subfolders of /proc and check every process found for a match.
To get list of users, use getpwent
(it will get one user per call).
skill (procps & procps-ng)
and killall (psmisc)
tools both uses getpwnam library call
to parse argument of -u option, and only username will be parsed.
pkill (procps & procps-ng)
uses both atol and getpwnam to parse -u / -U argument and allow
both numeric and textual user specifier.
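A small sketch of the difference in practice (the user name and process names below are illustrative, not taken from the quoted man pages):
# kill with -1 signals every process the invoking user can reach -- including your own shell
kill -TERM -1
# pkill matches by name and attributes instead; -u matches on the effective user
pkill -u alice sshd       # SIGTERM to alice's processes named sshd
pkill -U 1000 -x rsync    # real UID 1000, exact command-name match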
pkill is not obsolete. It may be unportable outside Linux, but the question was about Linux
specifically. – Lars Wirzenius
Aug 4 '11 at 10:11
Script execution
Your perfect Bash script executes with syntax errors
If you write Bash scripts with Bash-specific syntax and features, run them with Bash, and run
them with Bash in native mode (see the sketch after this list).
Wrong:
no shebang
the interpreter used depends on the OS implementation and the current shell
can be run by calling bash with the script name as an argument, e.g. bash myscript
#!/bin/sh shebang
depends on what /bin/sh actually is; for Bash it means compatibility mode, not native mode
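As promised above, a minimal sketch of the difference (the script body is my own illustration):
#!/bin/bash
# Bash-only features such as [[ ]] and brace expansion need native Bash mode
[[ -n "$BASH_VERSION" ]] && echo "running under bash $BASH_VERSION"
echo file{1..3}    # bash prints "file1 file2 file3"; a plain POSIX sh may print the braces literally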
Your script named "test" doesn't execute Give it another name. The executable test already exists.
In Bash it's a builtin. With other shells, it might be an executable file. Either way, it's bad name choice!
Workaround: You can call it using the pathname:
/home/user/bin/test
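A quick way to see the name clash (a small sketch; the path is the one from the workaround above):
type test                # "test is a shell builtin" -- this is what a bare `test` invokes
/home/user/bin/test      # an explicit path (or ./test from its directory) runs your own script instead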
Globbing Brace expansion is not globbing The following command line is not related to globbing (filename expansion):
# YOU EXPECT
# -i1.vob -i2.vob -i3.vob ....
echo -i{*.vob,}
# YOU GET
# -i*.vob -i
Why? The brace expansion is simple text substitution. All possible text formed by the prefix, the postfix and the braces themselves
are generated. In the example, these are only two: -i*.vob and -i . The filename expansion happens after
that, so there is a chance that -i*.vob is expanded to a filename - if you have files like -ihello.vob
. But it definitely doesn't do what you expected.
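If the intent really was one -i per existing .vob file, here is a small sketch of one way to build that argument list with a loop (the ffmpeg call is only an illustrative consumer, not from the original text):
# collect "-i <file>" pairs from the .vob files that actually exist
args=()
for f in *.vob; do
    args+=( -i "$f" )
done
echo ffmpeg "${args[@]}"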
Variables Setting variables The Dollar-Sign There is no $ (dollar-sign) when you reference the
name of a variable! Bash is not PHP!
# THIS IS WRONG!
$myvar="Hello world!"
A variable name preceded with a dollar-sign always means that the variable gets expanded. In the example above, it might expand
to nothing (because it wasn't set), effectively resulting in
="Hello world!"
which definitely is wrong!
When you need the name of a variable, you write only the name , for example
(as shown above) to set variables: picture=/usr/share/images/foo.png
to name variables to be used by the read builtin command: read picture
to name variables to be unset: unset picture
When you need the content of a variable, you prefix its name with a dollar-sign , like
echo "The used picture is: $picture"
Whitespace Putting spaces on either or both sides of the equal-sign ( = ) when assigning a value to a variable
will fail.
# INCORRECT 1
example = Hello
# INCORRECT 2
example= Hello
# INCORRECT 3
example =Hello
The only valid form is no spaces between the variable name and assigned value
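For contrast, the valid form looks like this (a trivial sketch):
# CORRECT: no spaces around the equal sign
example=Hello
example="Hello world"    # quote the value if it contains spaces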
Expanding (using) variables A typical beginner's trap is quoting.
As noted above, when you want to expand a variable i.e. "get the content", the variable name needs to be prefixed with a dollar-sign.
But, since Bash knows various ways to quote and does word-splitting, the result isn't always the same.
Let's define an example variable containing text with spaces:
example="Hello world"
Used form      result         number of words
$example       Hello world    2
"$example"     Hello world    1
\$example      $example       1
'$example'     $example       1
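A small sketch that makes the word-splitting visible (printf prints one bracketed argument per line; the variable is the one defined above):
example="Hello world"
printf '[%s]\n' $example      # unquoted: word splitting gives two arguments -> [Hello] [world]
printf '[%s]\n' "$example"    # quoted: a single argument -> [Hello world]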
If you use parameter expansion, you must use the name ( PATH ) of the referenced variables/parameters. i.e. not (
$PATH ):
# WRONG!
echo "The first character of PATH is ${$PATH:0:1}"
# CORRECT
echo "The first character of PATH is ${PATH:0:1}"
Note that if you are using variables in arithmetic expressions
, then the bare name is allowed:
((a=$a+7)) # Add 7 to a
((a = a + 7)) # Add 7 to a. Identical to the previous command.
((a += 7)) # Add 7 to a. Identical to the previous command.
a=$((a+7)) # POSIX-compatible version of previous code.
Exporting
Exporting a variable means giving newly created (child) processes a copy of that variable, not
copying a variable created in a child process back to the parent process. The following example
does not work, since the variable hello is set in a child process (the process you execute to
start that script, ./script.sh ).
Exporting is one-way. The direction is parent process to child process, not the reverse. The
above example will work when you don't execute the script, but include ("source") it:
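The scripts themselves are not reproduced in this excerpt, so here is a minimal sketch of the idea (the script name child.sh and the variable are my own illustration):
cat > child.sh <<'EOF'
hello="set in the child"
EOF

bash ./child.sh            # runs in a child process ...
echo "${hello-<unset>}"    # ... so the parent still prints: <unset>

. ./child.sh               # "source" it into the current shell instead
echo "$hello"              # now prints: set in the child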
Exit codes Reacting to exit codes If you just want to react to an exit code, regardless of its specific value, you
don't need to use $? in a test command like this:
grep ^root: /etc/passwd >/dev/null 2>&1
if [ "$?" -ne 0 ]; then
  echo "root was not found - check the pub at the corner"
fi
This can be simplified to:
if ! grep ^root: /etc/passwd >/dev/null 2>&1; then
  echo "root was not found - check the pub at the corner"
fi
Or, simpler yet:
grep ^root: /etc/passwd >/dev/null 2>&1 || echo "root was not found - check the pub at the corner"
If you need the specific value of $? , there's no other choice. But if you need only a "true/false" exit indication,
there's no need for $? .
Output vs. Return Value It's important to remember the different ways to run a child command, and whether you want the output,
the return value, or neither.
When you want to run a command (or a pipeline) and save (or print) the output , whether as a string or an array, you use Bash's
$(command) syntax:
$(ls -l /tmp)
newvariable=$(printf "foo")
When you want to use the return value of a command, just use the command, or add ( ) to run a command or pipeline in a subshell:
if grep someuser /etc/passwd ; then
# do something
fi
if ( w | grep someuser | grep sqlplus ) ; then
# someuser is logged in and running sqlplus
fi
Make sure you're using the form you intended:
# WRONG!
if $(grep ERROR /var/log/messages) ; then
# send alerts
fi
If you are looking for an even better command line utility for taking screenshots, then you
must give Scrot a try. This tool has some extra features that are currently not available in
gnome-screenshot. In this tutorial, we will explain Scrot using easy to understand examples.
Scrot (SCReenshOT) is a screenshot capturing utility that uses the imlib2 library to acquire and
save images. Developed by Tom Gilbert, it's written in the C programming language and is
licensed under the BSD License.
It would be interesting to see how long they will last (in active maintenance of the package).
The package is written in shell (old-style coding, like $(aaa) for variables). Pretty large package.
A tarball is available from the site. The RPM can be tricky to install on some distributions as it has dependencies;
just downloading it is not enough.
Software packages are available via https://packages.cisofy.com. Requirements: shell and basic utilities.
For CentOS, RHEL and similar flavors an RPM is available from EPEL: download.fedora.redhat.com/pub/fedora/epel/6/x86_64/lynis-2.4.0-1.el6.noarch.rpm
sudo lynis
[ Lynis 2.4.0 ]
################################################################################
Lynis comes with ABSOLUTELY NO WARRANTY. This is free software, and you are
welcome to redistribute it under the terms of the GNU General Public License.
See the LICENSE file for details about using this software.
2007-2016, CISOfy - https://cisofy.com/lynis/
Enterprise support available (compliance, plugins, interface and tools)
################################################################################
[+] Initializing program
------------------------------------
Usage: lynis command [options]
Command:
audit
audit system : Perform local security scan
audit system remote : Remote security scan
audit dockerfile : Analyze Dockerfile
show
show : Show all commands
show version : Show Lynis version
show help : Show help
update
update info : Show update details
update release : Update Lynis release
Options:
--no-log : Don't create a log file
--pentest : Non-privileged scan (useful for pentest)
--profile : Scan the system with the given profile file
--quick (-Q) : Quick mode, don't wait for user input
Layout options
--no-colors : Don't use colors in output
--quiet (-q) : No output
--reverse-colors : Optimize color display for light backgrounds
Misc options
--debug : Debug logging to screen
--view-manpage (--man) : View man page
--verbose : Show more details on screen
--version (-V) : Display version number and quit
Enterprise options
--plugin-dir "" : Define path of available plugins
--upload : Upload data to central node
More options available. Run '/usr/sbin/lynis show options', or use the man page.
No command provided. Exiting..
To change the hostname on your CentOS or Ubuntu machine you
should run the following command:
# hostnamectl set-hostname virtual.server.com
For more command options you can add the
--help
flag at the end.
# hostnamectl --help
hostnamectl [OPTIONS...] COMMAND ...
Query or change system hostname.
-h --help Show this help
--version Show package version
--no-ask-password Do not prompt for password
-H --host=[USER@]HOST Operate on remote host
-M --machine=CONTAINER Operate on local container
--transient Only set transient hostname
--static Only set static hostname
--pretty Only set pretty hostname
Commands:
status Show current hostname settings
set-hostname NAME Set system hostname
set-icon-name NAME Set icon name for host
set-chassis NAME Set chassis type for host
set-deployment NAME Set deployment environment for host
set-location NAME Set location for host
Synkron is an application that helps you keep your files and folders always updated. You can
easily sync your documents, music or pictures to have their latest versions everywhere.
Synkron provides an easy-to-use interface and a lot of features. Moreover, it is free and
cross-platform.
Features
Sync multiple folders. With Synkron you can sync multiple folders at once
Analyse. Analyse folders to see what is going to be done in sync.
Blacklist. Exclude files from sync. Apply wildcards to sync only the files you want.
Restore. Restore files that were overwritten or deleted in previous syncs.
Options. Synkron lets you configure your synchronisations in detail.
Runs everywhere. Synkron is a cross-platform application that runs on Windows, Mac OS X
and Linux.
Documentation. Have a look at the documentation to learn about all the features of Synkron.
, optimized for programmers. This tool isn't aimed to "search all text
files". It is specifically created to search source code trees, not trees of text
files. It searches entire trees by default while ignoring Subversion, Git and other
VCS directories and other files that aren't your source code.
Linux on the desktop is making great progress. However, the real beauty of Linux and Unix like
operating system lies beneath the surface at the command prompt. nixCraft picks his best open source
terminal applications of 2012.
Most of the following tools are packaged by all major Linux distributions and can be installed on
*BSD or Apple OS X. #3: ngrep – Network grep
Fig.02: ngrep in action
Ngrep is a network packet analyzer. It follows most of GNU grep's common features, applying them
to the network layer. Ngrep is not related to tcpdump. It is just an easy to use tool. You can run
queries such as:
## grep all HTTP GET or POST requests from network traffic on eth0 interface ##
sudo ngrep -l -q -d eth0 "^GET |^POST " tcp and port 80
I often use this tool to find out security-related problems and to track down other network and
server related problems.
dtrx is an acronym for "Do The Right Extraction." It's a tool for Unix-like systems that takes all
the hassle out of extracting archives. As a sysadmin, I download source code and tarballs. This
tool saves lots of time.
You only need to remember one simple command to extract tar, zip, cpio, deb, rpm, gem, 7z,
cab, lzh, rar, gz, bz2, lzma, xz, and many kinds of exe files, including Microsoft Cabinet archives,
InstallShield archives, and self-extracting zip files. If they have any extra compression, like
tar.bz2 files, dtrx will take care of that for you, too.
dtrx will make sure that archives are extracted into their own dedicated directories.
dtrx makes sure you can read and write all the files you just extracted, while leaving the
rest of the permissions intact.
Recursive extraction: dtrx can find archives inside the archive and extract those too.
Fig.05: dstat in action
As a sysadmin, I depend heavily upon tools such as
vmstat, iostat and friends for troubleshooting server issues. Dstat overcomes some of the limitations
of vmstat and friends. It adds some extra features. It allows me to view all of my system
resources instantly. I can compare disk usage in combination with interrupts from hard disk controller,
or compare the network bandwidth numbers directly with the disk throughput and much more.
#8:mtr – Traceroute+ping in a single network diagnostic tool
Fig.07: mtr in action
The mtr command combines the functionality of the traceroute and ping programs in a single network
diagnostic tool. Use mtr to monitor outgoing bandwidth, latency and jitter in your network. A great
little app to solve network problems. A sudden increase in packet loss or response time
is often an indication of a bad or simply overloaded link.
Fig.08: multitail in action (image credit – official project)
MultiTail is a program for monitoring multiple log files, in the fashion of the original tail
program. This program lets you view one or multiple files like the original tail program. The difference
is that it creates multiple windows on your console (with ncurses). I often use this tool when I
am monitoring logs on my server.
Fig.10: nc server and telnet client in action
Netcat or nc is a simple Linux or Unix command which reads and writes data across network connections,
using TCP or UDP protocol. I often use this tool to open up a network pipe to test network connectivity,
make backups, bind to sockets to handle incoming / outgoing requests and much more. In this example,
I tell nc to listen to a port # 3005 and execute /usr/bin/w command when client connects and send
data back to the client: $ nc -l -p 3005 -e /usr/bin/w
From a different system try to connect to port # 3005: $ telnet server1.cyberciti.biz.lan 3005
elinks or lynx – I use these to browse remotely when some sites (such as RHN or Novell or Sun/Oracle)
require registration/login before making downloads.
wget – Best
download tool ever. I use wget all the time, even with Gnome desktop.
mplayer –
Best console mp3 player that can play any audio file format.
newsbeuter – Text mode rss feed reader with podcast support.
parallel – Build and execute shell command lines from standard input in parallel.
iftop – Display bandwidth usage on network interface by host.
iotop – Find out what's stressing and increasing load on your hard disks.
Conclusion
This is my personal FOSS terminal apps list and it is not absolutely definitive, so if you've
got your own terminal apps, share in the comments below.
GuentherHugo July 16, 2014, 8:27 am have a look at cluster-ssh
Whattteva August 23, 2013, 8:00 pm This is not quite a terminal program, but Terminator is
one of the best terminal emulators I know of out there. It makes multi-tasking in the terminal
100 times better, IMHO.
Boy nux January 8, 2013, 3:23 am lsblk
watch
Brendon December 30, 2012, 7:05 pm This is a great list – some of these utilities I've only
recently discovered and others I know will be super useful.
Another one that hasn't been mentioned here is iperf. From the Debian package description:
Iperf is a modern alternative for measuring TCP and UDP bandwidth performance, allowing the
tuning of various parameters and characteristics.
Features:
* Measure bandwidth, packet loss, delay jitter
* Report MSS/MTU size and observed read sizes.
* Support for TCP window size via socket buffers.
* Multi-threaded. Client and server can have multiple simultaneous connections.
* Client can create UDP streams of specified bandwidth.
* Multicast and IPv6 capable.
* Options can be specified with K (kilo-) and M (mega-) suffices.
* Can run for specified time, rather than a set amount of data to transfer.
* Picks the best units for the size of data being reported.
* Server handles multiple connections.
* Print periodic, intermediate bandwidth, jitter, and loss reports at specified
intervals.
* Server can be run as a daemon.
* Use representative streams to test out how link layer compression affects
vidir – edit directories (part of the 'moreutils' package)
@yjmbo December 12, 2012, 2:16 am htop, for sure. Thanks for dtrx, I'd not heard of that one.
mitmproxy ( http://mitmproxy.org/ ) might
be a nice complement for nc/nmap/openssl it's a curses-based HTTP/HTTPS proxy that lets you
examine, edit and replay the conversations your browser is having with the rest of the world
phusss December 12, 2012, 12:48 am socat > netcat
openssh > *
:)
Here are 4 commands i use for checking out disk usages.
#Grabs the disk usage in the current directory
alias usage='du -ch | grep total'
#Gets the total disk usage on your machine
alias totalusage='df -hl --total | grep total'
#Shows the individual partition usages without the temporary memory values
alias partusage='df -hlT --exclude-type=tmpfs --exclude-type=devtmpfs'
#Gives you what is using the most space. Both directories and files. Varies on
#current directory
alias most='du -hsx * | sort -rh | head -10'
shadowbq
December
17, 2012, 2:08 pm
usage is better written as
alias usage='du -ch 2> /dev/null | tail -1'
Mark
January 12,
2013, 6:08 pm
Thank you all for your aliases.
I found this one long time ago and it proved to be useful.
# shoot the fat ducks
in your current dir and sub dirs
alias ducks='du -ck | sort -nr | head'
Karsten
July 17,
2013, 9:30 pm
While it would still work, the problem with usage='du -ch | grep total' is that
you will also get directory names that happen to also have the word 'total' in
them.
A better way to do this might be: 'du -ch | tail -1'
James C. Woodburn
June
12, 2012, 11:45 am
I always create a ps2 command that I can easily pass a string to and look for it in
the process table. I even have it remove the grep of the current line.
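A hedged guess at what such a ps2 helper usually looks like (the name comes from the comment above; the body is the common idiom, not the commenter's actual code):
# search the process table for a string, filtering out the grep process itself
ps2() {
    ps aux | grep -v grep | grep -i -- "$1"
}
# usage: ps2 httpd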
The variable CDPATH defines the search path for the cd command's directory lookup, so it serves much
like a "home for directories". The danger is in creating too complex a CDPATH; often a single
directory works best. For example, export CDPATH=/srv/www/public_html
. Now, instead of typing cd /srv/www/public_html/CSS I can simply type: cd CSS
Use CDPATH to access frequent directories in bash
Mar 21, '05 10:01:00AM • Contributed by:
jonbauman
I often find myself wanting to cd to the various directories beneath my home directory (i.e. ~/Library, ~/Music, etc.),
but being lazy, I find it painful to have to type the ~/ if I'm not in my home directory already. Enter CDPATH
, as described in man bash:
The search path for the cd command. This is a colon-separated list of directories in which the shell looks for destination
directories specified by the cd command. A sample value is ".:~:/usr".
Personally, I use the following command (either on the command line for use in just that session, or in .bash_profile
for permanent use):
CDPATH=".:~:~/Library"
This way, no matter where I am in the directory tree, I can just cd dirname , and it will take me to the directory that
is a subdirectory of any of the ones in the list. For example:
$ cd
$ cd Documents
/Users/baumanj/Documents
$ cd Pictures
/Users/username/Pictures
$ cd Preferences
/Users/username/Library/Preferences
etc...
[ robg adds: No, this isn't some deeply buried treasure of OS X, but I'd never heard of the CDPATH variable, so
I'm assuming it will be of interest to some other readers as well.]
cdable_vars is also nice
Authored by: clh on Mar 21, '05 08:16:26PM
Check out the bash command shopt -s cdable_vars
From the man bash page:
cdable_vars
If set, an argument to the cd builtin command that is not a directory is assumed to be the name of a variable whose value
is the directory to change to.
With this set, if I give the following bash command:
export d="/Users/chap/Desktop"
I can then simply type
cd d
to change to my Desktop directory.
I put the shopt command and the various export commands in my .bashrc file.
Find "unit" – that's the new name for "init script name" to us oldtimers:
systemctl list-units --type=service
# this one is way more verbose
systemctl list-units
Enable or disable a service:
systemctl enable ossec
systemctl disable ossec
Start, stop, restart, reload, status:
systemctl start sshd
systemctl stop sshd
systemctl restart sshd
systemctl reload sshd
# status, gives some log output too
systemctl status sshd
Check ALL the logs, follow the logs, get a log for a service:
journalctl -l
journalctl -f
journalctl -u sshd
Install a systemd service:
(This is what a systemd service description looks like)
cat > ossec.service << EOF
[Unit]
Description=OSSEC Host-based Intrusion Detection System
[Service]
Type=forking
ExecStart=/var/ossec/bin/ossec-control start
ExecStop=/var/ossec/bin/ossec-control stop
[Install]
WantedBy=basic.target
EOF
# now copy that file into the magic place, /etc/init.d in the old days
install -Dm0644 ossec.service /usr/lib/systemd/system/ossec.service
# now make systemd pick up the changes
systemctl daemon-reload
Remote logging
OK so you now know your way around this beast.
Now you want remote logging.
According to the Arch wiki [#], systemd doesn't actually do remote logging (yet. what else
doesn't it do?) but it will helpfully spew its logs onto the socket /run/systemd/journal/syslog
if you knock twice, gently.
To convince systemd to write to this socket, go to /etc/systemd/journald.conf
and set
ForwardToSyslog=yes
then issue a journald restart
systemctl restart systemd-journald
You can install syslog-ng and it should pick up the logs. Test it now by making a log entry
(for example with the logger command).
If it doesn't, syslog-ng.conf's source src { system(); }; isn't picking up the socket
file. Fix this by adding the socket explicitly, changing the source in /etc/syslog-ng/syslog-ng.conf
like so:
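(A sketch of the kind of change meant here, using the journal socket path from above; the source name matches the src shown earlier.)
source src {
    unix-dgram("/run/systemd/journal/syslog");
    internal();
};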
This command displays a clock on your terminal which updates the time every second. Press Ctrl-C
to exit.
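The command in question is presumably a watch/figlet pipeline along these lines (an assumption based on the variants below; it requires figlet to be installed):
watch -t -n1 "date +%T|figlet"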
A couple of variants:
A little bit bigger text:
watch -t -n1 "date +%T|figlet -f big"
You can try other figlet fonts, too.
Big sideways characters:
watch -n 1 -t '/usr/games/banner -w 30 $(date +%M:%S)'
This requires a particular version of banner and a 40-line terminal, or you can adjust the width ("30" here).
8) Remove duplicate entries in a file without sorting.
awk '!x[$0]++' <file>
Using awk, you can find and remove duplicate lines in a file without sorting it (sorting would reorder the contents).
awk keeps the lines in their original order, and the deduplicated output can be redirected into another file.
This example produces output such as "Fri Feb 13 15:26:30 EST 2009".
13) Job Control
^Z $bg $disown
You're running a script, command, whatever.. You don't expect it to take long, now 5pm has rolled
around and you're ready to go home… Wait, it's still running… You forgot to nohup it before running
it… Suspend it, send it to the background, then disown it… The output won't go anywhere, but at least
the command will still run…
Watch is a very useful command for periodically running another command – in this case using
mysqladmin to display the processlist. This is useful for monitoring which queries are causing your
server to clog up.
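A typical invocation of that kind (the user and password are placeholders):
watch -n 1 mysqladmin --user=<user> --password=<password> processlist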
25) sshfs name@server:/path/to/folder /path/to/mount/point
Mount folder/filesystem through SSH
Install SSHFS from http://fuse.sourceforge.net/sshfs.html
Will allow you to mount a folder securely over a network.
24) !!:gs/foo/bar
Runs previous command replacing foo by bar every time that foo appears
Very useful for rerunning a long command changing some arguments globally.
As opposed to ^foo^bar, which only replaces the first occurrence of foo, this one changes every
occurrence.
23) mount | column -t
currently mounted filesystems in nice layout
Particularly useful if you're mounting different drives, using the following command will allow
you to see all the filesystems currently mounted on your computer and their respective specs
with the added benefit of nice formatting.
22) <space>command
Execute a command without saving it in the history
A command prefixed with one or more spaces won't be saved in history.
Useful for pr0n or passwords on the commandline.
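Note that in bash this behavior depends on the HISTCONTROL variable; a setting such as the following (often already the default) enables it:
export HISTCONTROL=ignoreboth    # ignore space-prefixed commands and duplicates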
21) ssh user@host cat /path/to/remotefile | diff /path/to/localfile -
Compare a remote file with a local file
Useful for checking if there are differences between local and remote files.
20) mount -t tmpfs tmpfs /mnt -o size=1024m
Mount a temporary ram partition
Makes a partition in ram which is useful if you need a temporary working space as read/write
access is fast.
Be aware that anything saved in this partition will be gone after your computer is turned off.
19) dig +short txt <keyword>.wp.dg.cx
Query Wikipedia via console over DNS
Query Wikipedia by issuing a DNS query for a TXT record. The TXT record will also include a
short URL to the complete corresponding Wikipedia entry.
18) netstat -tlnp
Lists all listening ports together with the PID of the associated process
The PID will only be printed if you're holding a root equivalent ID.
17) dd if=/dev/dsp | ssh -c arcfour -C username@host dd of=/dev/dsp
output your microphone to a remote computer's speaker
This will output the sound from your microphone port to the ssh target computer's speaker port.
The sound quality is very bad, so you will hear a lot of hissing.
16) echo "ls -l" | at midnight
Execute a command at a given time
This is an alternative to cron which allows a one-off task to be scheduled for a certain time.
15) curl -u user:pass -d status="Tweeting from the shell"
http://twitter.com/statuses/update.xml
Update twitter via curl
14) ssh -N -L2001:localhost:80 somemachine
start a tunnel from some machine's port 80 to your local port 2001
now you can access the website by going to http://localhost:2001/
13) reset
Salvage a borked terminal
If you bork your terminal by sending binary data to STDOUT or similar, you can get your terminal
back using this command rather than killing and restarting the session. Note that you often
won't be able to see the characters as you type them.
12) ffmpeg -f x11grab -s wxga -r 25 -i :0.0 -sameq /tmp/out.mpg
Capture video of a linux desktop
11) > file.txt
Empty a file
For when you want to flush all content from a file without removing it (hat-tip to Marc Kilgus).
10) $ ssh-copy-id user@host
Copy ssh keys to user@host to enable password-less ssh logins.
To generate the keys use the command ssh-keygen
9) ctrl-x e
Rapidly invoke an editor to write a long, complex, or tricky command
Next time you are using your shell, try typing ctrl-x e (that is holding control key press x and
then e). The shell will take what you've written on the command line thus far and paste it into
the editor specified by $EDITOR. Then you can edit at leisure using all the powerful macros and
commands of vi, emacs, nano, or whatever.
8 ) !whatever:p
Check command history, but avoid running it
!whatever will search your command history and execute the first command that matches
'whatever'. If you don't feel safe doing this put :p on the end to print without executing.
Recommended when running as superuser.
7) mtr google.com
mtr, better than traceroute and ping combined
mtr combines the functionality of the traceroute and ping programs in a single network
diagnostic tool.
As mtr starts, it investigates the network connection between the host mtr runs on and HOSTNAME
by sending packets with purposely low TTLs. It continues to send packets with low TTL, noting the
response time of the intervening routers. This allows mtr to print the response percentage and
response times of the internet route to HOSTNAME. A sudden increase in packet loss or response
time is often an indication of a bad (or simply overloaded) link.
6 ) cp filename{,.bak}
quickly backup or copy a file with bash
5) ^foo^bar
Runs the previous command, replacing the first occurrence of foo with bar
Really useful for when you have a typo in a previous command. Also, arguments default to empty
so if you accidentally run:
echo "no typozs"
you can correct it with
^z
4) cd -
change to the previous working directory
3) :w !sudo tee %
Save a file you edited in vim without the needed permissions
I often forget to sudo before editing a file I don't have write permissions on. When you come to
save that file and get the infamous "E212: Can't open file for writing", just issue that vim
command in order to save the file without the need to save it to a temp file and then copy it
back again.
2) python -m SimpleHTTPServer
Serve current directory tree at http://$HOSTNAME:8000/
1) sudo !!
Run the last command as root
Useful when you forget to use sudo for a command. "!!" grabs the last run command.
Monitoring Processes with pgrep By Sandra Henry-Stocker
This week, we're going to look at a simple bash script for monitoring
processes that we want to ensure are running all the time. We'll use a
couple cute scripting "tricks" to facilitate this process and make it as
useful as possible.
The basic command we're going to use is pgrep. For those of you
unfamiliar with pgrep, it's a very nice Solaris command that looks in
the process queue to see whether a process by a particular name is
running. If it finds the requested process, it returns the process id.
For example:
% pgrep httpd
1345
1346
1347
1348
This output tells us that there are four httpd processes running on our
system. These processes might look like this if we were to execute a ps
-ef command:
% ps -ef | grep httpd
output
The pgrep command, therefore, accomplishes what many of us used to do
with strings of Unix commands of this variety:
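(Presumably a pipeline of this shape, with httpd standing in for the process of interest.)
ps -ef | grep httpd | grep -v grep | awk '{print $2}'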
In this command, we ran the ps command, narrowed the output down to only
those lines containing the word "httpd", removed the grep command
itself, and then printed out the second column of the output, the
process id. With pgrep, extracting the process ids for the processes
that we want to track is faster and "cleaner". Let's look at a couple
code segments. First, the old way:
for PROC in proc1 proc2 proc3 proc4 proc5
do
  RUNNING=`ps -ef | grep $PROC | grep -v grep | wc -l`
  if [ $RUNNING -ge 1 ]; then
    echo $PROC is running
  else
    echo $PROC is down
  fi
done
For each process, we generate a count of the number of instances we
detect in the ps output and, if this number is one or more, we issue the
"running" output. Otherwise, we display a message saying the process is
down.
Now, here's our replacement code using pgrep:
for PROC in proc1 proc2 proc3 proc4 proc5
do
  if [ "`pgrep $PROC`" ]; then
    echo $PROC is running
  else
    echo $PROC is down
  fi
done
In this case, we've simplified our code in a couple of ways. First, we
rely on pgrep to give us output (process ids) if the process is running and
nothing if it isn't. Second, because we're not using ps and grep, we
don't have to remove the output that isn't relevant to our task. We
don't have to remove the ps output relating to the other running
processes and to the process generated by our grep command.
The process for killing a set of processes would be quite similar. In
fact, we could use both pgrep and a "sister" command, pkill in a similar
manner.
for PROC in proc1 proc2 proc3 proc4 proc5
do
  if [ "`pgrep $PROC`" ]; then
    pkill $PROC
  else
    echo $PROC is not running
  fi
done
The pgrep command is more predictable because we know we're going to get
only the process id and that we won't be matching on other strings that
just happen to appear in the ps output (e.g., if someone were editing
the httpd.conf file).
The pgrep, pkill, and related commands are not only easier to use; the
resulting code is easier to read and understand. One of the reasons for using
sequences of commands such as this:
ps -ef | grep $PROC | grep -v grep | wc -l
was to ensure that we knew what our answer would have to look like. If
we left off the final "wc -l", we might get one or a number of pieces of
output and have to deal with this fact when we went to check it. In
addition, we could use similar logic when the number of processes,
rather than just some or none, was important. We would just check the
number against what we expected to see.
Even so, anyone reading this script a year later would have to stop and
think through this command. This is not true for pgrep. The command
"pgrep httpd" is easy and quick to interpret as "if httpd is running".
The "if [ `pgrep $PROC` ]" is especially efficient as well. This
statement tests whether there is output from the command and is compact
and readable. Much as I love Unix for the way it allows me to pipe
output from one command to the other, I love it even more when I don't
have to.
sh -x
By S. Lee Henry
Whenever you enter a command in a Unix shell, whether interactively or
through a script, the shell expands your commands and passes them on to
the Unix kernel for execution. Normally, the shell does its work
invisibly. In fact, it so unobtrusively processes your commands that
you can easily forget that it's actually doing something for you. As we
saw last week, presenting the shell with a command like "rm *" can, on
rare occasion, result in a complaint. When the shell balks, producing
an error indicating that the argument list is too long, it suddenly
reminds us of its presence and that it is subject to resource
limitations just like everything else.
Invoking the shell with an option to display commands as it processes
them is another way to become acquainted with the shell's method of
intercepting and interpreting your commands. The Bourne family shells
use the option -x. If you enter the shell using a -x, then commands
will be displayed for you before execution. For example:
boson% /bin/ksh -x
$ date
+ date
Mon Jun 4 07:11:01 EDT 2001
You can also see file expansion as the shell provides it for you:
$ ls oops*
+ ls oops1 oops2 oops3 oops4 oops5 oopsies
oops1 oops2 oops3 oops4 oops5 oopsies
This is all very exciting, of course, but of limited utility once you
get a solid appreciation of how hard the shell is working for you
command line after command line. The sh -x "trick" can be very useful
when you are debugging a script though. Instead of inserting lines of
code like "echo at end of loop" to help determine your code is failing,
you can change your "shebang" line to include the -x option:
#!/bin/sh -x
Afterwards, when you run the script, each line of code will display as
it is processed so you can easily see which of the commands are working
and where your breakdown is occurring. This is far more useful than
looking at no output or little output and wondering where processing is
hanging up -- especially true for a complex script where execution
follows numerous paths. Being able to watch the executed commands and
the order in which they are executed while the script is running can be
an invaluable debugging aid -- particularly for complex scripts that
don't write much output to the screen while running.
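A related trick worth noting (a small sketch, not from the column itself): if you only care about one section of a script, you can turn tracing on and off around it with set -x and set +x instead of changing the shebang line:
#!/bin/sh
echo "this line is not traced"
set -x                      # start echoing commands as they are executed
cp "$1" "$1.bak"
set +x                      # stop tracing
echo "this line is not traced either"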
How Many is Too Many?
By Sandra Henry-Stocker
I surprised myself recently when I issued a command to remove all the
files in an old, and clearly unimportant, directory and received this
response:
/bin/rm: arg list too long
I seldom encounter this response when cleaning up server directories
that I manage, so seeing it surprised me. When I began listing the
directory's contents, I wasn't surprised that my command had failed.
The directory contained more than 200,000 small, old, and meaningless
files, which would take a long time to list, consume quite a bit of
directory file space, and would comprise a very long command line if
the shell were to try to manage it. Even if every file name had only
eight characters, a line containing all of their names (with blank
characters separating the names) would be nearly 1.8 million bytes
long. Not surprisingly, my shell balked at the task.
Situations like this remind us that, even though Unix is flexible,
powerful, and fun, each of the commands has built in limits. My shell
could not allocate adequate space to "expand" the asterisk that I
presented in my "rm *" command to a list of all 200,000+ files.
Of course, Unix offers several ways to solve every problem and running
out of space to expand a command merely invites one to solve the
problem differently. In my case, the easiest solution was to remove the
directory along with its contents. The rm -r command, since it doesn't
require any argument expansion, is "happy" to comply with such a
request. Had I not wanted to remove every file in the directory, I
would have gone through a little more trouble. I could have removed
subsets of the files, using commands like "rm a*" or "rm *5" until I
had removed all of the unwanted files.
A third approach would have been appropriate for preserving only a
small number of the directory's files -- especially files that are
easily described by a substring or date. I would have tarred up the
interesting files using tar and a wildcard, or a find command to create
an include file.
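Another common workaround for the same situation is to hand the names to rm a few at a time with find, which never builds one giant command line (the '*.tmp' pattern here is just an example):
find . -name '*.tmp' -type f -exec rm {} \;
Note that unlike "rm *", find descends into subdirectories unless you restrict it (GNU find's -maxdepth 1, for instance).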
You will not often encounter situations where the shell will be unable
to expand your file names into a workable command. Few directories
house as many files as the one that I was cleaning up, and the Unix
shells allocate enough buffer space for most commands that you might
enter. Even so, limits exist and you might happen to bump into one of
them every few years.
So you're new to Linux and wondering how this virtual
terminal stuff works. Well you can work in six different terminals at
a time. To move around from one to another:
To change to Terminal 1 - Alt + F1
To change to Terminal 2 - Alt + F2
...
To change to Terminal 6 - Alt + F6
That's cool. But I just did locate on something and a lot of stuff scrolled up. How do I scroll up to see what flew by?
Shift + PgUp - Scroll Up
Shift + PgDn - Scroll Down
Note: If you switch away from a console and switch back to it,
you will lose what has already scrolled by.
If you had X running and wanted to change from X to
text based and vice versa
To change to text based from X - Ctrl + Alt + F(n) where n = 1..6
To change to X from text based - Alt + F7
Something unexpected happened and I want to shut down
my X server.
Just press:
Ctrl + Alt + Backspace
What do you do when you need to see what a program
is doing, but it's not one that you'd normally run from the command
line? Perhaps it's one that is called as a network daemon from inetd,
is called from inside another shell script or application, or is even
called from cron. Is it actually being called? What command line parameters
is it being handed? Why is it dying?
Let's assume the app in question is /the/path/to/myapp
. Here's what you do. Make sure you have the "strace" program installed.
Download "apptrace" from
ftp://ftp.stearns.org/pub/apptrace/
and place it in your path, mode 755. Then type:
apptrace /the/path/to/myapp
When that program is called in the future, apptrace
will record the last time myapp ran (see the timestamp on myapp-last-run),
the command line parameters used (see myapp-parameters), and the strace
output from running myapp (see myapp.pid.trace) in either $HOME/apptrace
or /tmp/apptrace if $HOME is not set.
Note that if the original application is setuid-root,
strace will not honor that flag and it will run with the permissions
of the user running it like any other non-setuid-root app. See the man
page for strace for more information on why.
When you've found out what you need to know and wish
to stop monitoring the application, type:
mv -f /the/path/to/myapp.orig /the/path/to/myapp
Many thanks to David S. Miller
, kernel hacker extraordinaire, for the right to publish
his idea. His original version was:
It's actually pretty easy if you can get a shell on the machine
before the event, once you know the program in question:
mv /path/to/${PROGRAM} /path/to/${PROGRAM}.ORIG
edit /path/to/${PROGRAM}
#!/bin/sh
strace -f -o /tmp/${PROGRAM}.trace /path/to/${PROGRAM}.ORIG $*
I do it all the time to debug network services started from
inetd for example.
Ever wonder what ports are open on your Linux machine?
Did you ever
want to know who was connecting to your machine and what services they were
connecting to? Netstat does just that.
To take a look at all TCP ports that are open on your system, use the command below. The '-n' option will give you numerical addresses instead of
resolving hostnames, which speeds up the output. The '-l'
option only shows connections which are in "LISTEN" mode, and '-t' only
shows the TCP connections.
netstat -nlt
[user@mymachine /home/user]# netstat -ntl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
The above output shows that I have 3 open ports (80, 3306, 22) on my system,
all waiting for connections on all of the interfaces. The three ports
are 80 => apache, 3306 => mysql, 22 => ssh.
Let's take a look at the active connections to this machine. For this
you don't use the '-l' option but instead use the '-a' option. The '-a'
stands for, yup, you guessed it, show all.
netstat -nat
[user@mymachine /user]# netstat -nat
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 206.112.62.102:80 204.210.35.27:3467 ESTABLISHED
tcp 0 0 206.112.62.102:80 208.229.189.4:2582 FIN_WAIT2
tcp 0 7605 206.112.62.102:80 208.243.30.195:36957 CLOSING
tcp 0 0 206.112.62.102:22 10.60.1.18:3150 ESTABLISHED
tcp 0 0 206.112.62.102:22 10.60.1.18:3149 ESTABLISHED
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
The above output shows I have 3 web requests that are currently being
made or are about to finish up. It also shows I have 2 SSH connections established.
Now I know which IP addresses are making web requests or have SSH connections
open. For more info on the different states, i.e. "FIN_WAIT2" and "CLOSING"
please consult your local man pages.
Well that was a quick tip on how to use netstat to see what TCP ports
are open on your machine and who is connecting to them. Hope it was helpful.
Share the knowledge !
RPM - Installing, Querying, Deleting your packages.
RPM (Redhat Package Manager) is an excellent package manager. RPM, created
by Red Hat, can be used
for building, installing, querying, updating, verifying, and removing software
packages. This brief article will show you some of the usage of the
rpm tool.
So you have an rpm package that you wish to install. But you want to
find out more information about the package, like who built it, and when
was it built. Or you want to find out a short description about the package.
The following command will show you such information.
rpm -qpi packagename.rpm
Now that you know more about the package, you're ready to install it.
But before you install it you want to get a list of files and find out where
will these files be installed. The following command will show you exactly
that.
rpm -qpl packagename.rpm
To actually install the package, use:
rpm -i packagename.rpm
But what if I have an older version of the rpm already installed? Then
you want to upgrade the package. The following command will remove any older
version of the package and install the newer version.
rpm -Uvh packagename.rpm
How do I check all the packages installed on my system? The following
will list their names and version numbers.
rpm -qa
and to see all the packages installed, with the most recently installed ones on top:
rpm -qa --last
And if you want to see what package a file belongs to, if any, you can
do the following. This command will show the rpm name or tell you that the
file does not belong to any packages.
rpm -qf file
And if you wanted to uninstall the package, you can do the following.
rpm -e packagename
and to uninstall it even if other packages depend on it. Note: This
is dangerous; this should only be done if you are absolutely sure the dependency
does not apply in your case.
rpm -e packagename --nodeps
There are a lot more commands to help you manage your packages better.
But this will cover the needs of most users. If you want to learn more
about rpms type man rpm at your prompt or visit
www.rpm.org. In particular,
see the RPM-HOWTO at www.rpm.org.
"Some snippets of helpful advice were lying around my hard drive, so
I thought it a good time to unload it. There's no theme to any of it, really,
but I think that themes are sometimes overrated, don't you?"
"My favorite mail reader, pine 4.21, does not lag behind when it comes
to modern features. For example, it supports rule-based filtering just like
those graphical clients that get all the press these days. Just head to
Main menu -> Setup -> Rules -> Filters -> Add. Voila!"
"Red Hat 6.2 ships with the ability to display TrueType fonts with the
XFree86 X server. Oddly, the freetype package doesn't include any TrueType
fonts, nor does it provide clear instructions on how to add them to your
system."
Linux lets you use "virtual consoles"
to log on to multiple sessions simultaneously, so you can do more than
one operation or log on as another user. Logging on to another virtual
console is like sitting down and logging in at a different physical
terminal, except you are actually at one terminal, switching between
login sessions."
"Temporarily use a different shell.
Every user account has a
shell associated with it. The default Linux shell is bash; a popular
alternative is tcsh. The last field of the password table (/etc/passwd)
entry for an account contains the login shell information. You can get
the information by checking the password table, or you can use the finger
command."
"Print a man page.
Here are a few useful tips for viewing or
printing manpages:
To print a manpage, run the command:
man <command-name> | col -b | lpr
The col -b command removes any backspace or other characters that
would make the printed manpage difficult to read."
From the SGI Admin Guide - last I checked the CPU
spends most of its time waiting for something to do
Table 5-3: Indications of an I/O-Bound System
Field                                                        Value                             sar Option
%busy   (% time disk is busy)                                >85                               sar -d
%rcache (reads in buffer cache)                              low, <85                          sar -b
%wcache (writes in buffer cache)                             low, <60%                         sar -b
%wio    (idle CPU waiting for disk I/O)                      dev. system >30, fileserver >80   sar -u
Table 5-5: Indications of Excessive Swapping/Paging
Field                                                        Value                             sar Option
bswot/s (transfers from memory to disk swap area)            >200                              sar -w
bswin/s (transfers to memory)                                >200                              sar -w
%swpocc (time swap queue is occupied)                        >10                               sar -q
rflt/s  (page reference fault)                               >0                                sar -t
freemem (average pages for user processes)                   <100                              sar -r
Indications of a CPU-Bound System
Field                                                        Value                             sar Option
%idle   (% of time CPU has no work to do)                    <5                                sar -u
runq-sz (processes in memory waiting for CPU)                >2                                sar -q
%runocc (% run queue occupied and processes not executing)   >90                               sar -q
hypermail /usr/local/src/src/hypermail - mailing list to web page
converter; grep hypermail /etc/aliases shows which lists use hypermail
pwck, grpck should be run weekly to make sure ok; grpck produces
a ton of errors
can use local man pages - text only - see Ch3 User Services
put in /usr/local/manl (try /usr/man/local/manl) suffix .l
long ones pack -> pack program.1;mv program.1.z /usr/man/local/mannl/program.z
Wed, 17 May 2000 08:38:09 +0200
From: Sebastian Schleussner
[email protected]
I have been trying to set command line editing (vi mode) as part of
my bash shell environment and have been unsuccessful so far. You might
think this is trivial - well so did I.
I am using Red Hat Linux 6.1 and wanted to use "set -o vi" in my
start up scripts. I have tried all possible combinations but it JUST DOES
NOT WORK. I inserted the line in /etc/profile, in my .bash_profile, in
my .bashrc etc but I cannot get it to work. How can I get this done? This
used to be a breeze in the korn shell. Where am I going wrong?
Hi!
I recently learned from the SuSE help that you have to put the line
set keymap vi
into your /etc/inputrc or ~/.inputrc file, in addition to what you
did
('set -o vi' in ~/.bashrc or /etc/profile)!
I hope that will do the trick for you.
Funny thing; I was just about to post this tip when
I read Matt Willis' "HOWTO searching script" in LG45. Still, this script
is a good bit more flexible (allows diving into subdirectories, actually
displays the HOWTO or the document whether .gz or .html or whatever
format, etc.), uses the Bash shell instead of csh (well, _I_ see it
as an advantage
...), and reads the entire /usr/doc hierarchy - perfect for those
times when the man page isn't quite enough. I find myself using it about
as often as I do the 'man' command.
You will need the Midnight Commander on your system
to take advantage of this (in my opinion, one of the top three apps
ever written for the Linux console). I also find that it is at its best
when used under X-windows, as this allows the use of GhostView, xdvi,
and all the other nifty tools that aren't available on the console.
Give the script the first few letters of the topic you want - 'doc xl', say - and press Enter. The script will respond with a menu
of all the /usr/doc subdirs beginning with 'xl' prefixed by menu numbers;
simply select the number for the directory that you want, and the script
will switch to that directory and present you with another menu. Whenever
your selection is an actual file, MC will open it in the appropriate
manner - and when you exit that view of it, you'll be presented with
the menu again. To quit the script, press 'Ctrl-C'.
A couple of built-in minor features (read: 'bugs')
- if given a nonsense number as a selection, 'doc' will drop you into
your home directory. Simply 'Ctrl-C' to get out and try again. Also,
for at least one directory in '/usr/doc' (the 'gimp-manual/html') there
is simply not enough scroll-back buffer to see all the menu-items (526
of them!). I'm afraid that you'll simply have to switch there and look
around; fortunately, MC makes that relatively easy!
Oh, one more MC tip. If you define the 'CDPATH' variable
in your .bash_profile and make '/usr/doc' one of the entries in it,
you'll be able to switch to any directory in that hierarchy by simply
typing 'cd <first_few_letters_of_dir_name>' and pressing the Tab key
for completion. Just like using 'doc', in some ways...
If you need to move your Linux installation to a different hard drive or partition
(and keep it working) and your distro uses grub this tech tip is what you need.
To start, get a live CD and boot into it. I prefer Ubuntu for things like
this. It has Gparted. Now follow the steps outlined below.
Copying
Mount both your source and destination partitions.
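The copy itself can be done several ways; one common approach is an archive-mode copy that stays on one filesystem (the mount points below are placeholders -- adjust them to wherever you mounted the source and destination):
sudo cp -ax /media/source/. /media/destination/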
After the command finishes copying, shut down, remove the source drive,
and boot the live CD again.
Configuration
Mount your destination drive (or partition).
Run the command "gksu gedit" (or use nano or vi).
Edit the file /etc/fstab. Change the UUID or device entry with
the mount point / (the root partition) to your new drive. You can
find your new drive's (or partition's) UUID with this command:
$ ls -l /dev/disk/by-uuid/
Edit the file /boot/grub/menu.lst. Change the UUID of the appropriate
entries at the bottom of the file to the new one.
Install Grub
Run sudo grub.
At the Grub prompt, type:
find /boot/grub/menu.lst
This will tell you what your new drive and partition's number is. (Something
like (hd0,0))
Type:
root (hd0,0)
but replace "(hd0,0)" with your partition's number from above.
Type:
setup (hd0)
but replace "(hd0)" with your drive's number from above. (Omit the comma
and the number after it).
That's it! You should now have a bootable working copy of your source drive
on your destination drive! You can use this to move to a different drive, partition,
or filesystem.
Using UNIX in a day-to-day office setting doesn't have to be clumsy. Learn
some of the many ways, both simple and complex, to use the power of the
UNIX shell and available system tools to greatly increase your productivity
in the office.
The language of the UNIX® command line is notoriously versatile: With a panorama
of small tools and utilities and a shell to combine and execute them, you can
specify many precise and complex tasks.
But when used in an office setting, these same tools can become a powerful
ally toward increasing your productivity. Many techniques unique to UNIX can
be applied to the issue of workplace efficiency.
This article gives several suggestions and techniques for bolstering office
productivity at the command-line level: how to review your current system habits,
how to time your work, secrets for manipulating dates, a quick and simple method
of sending yourself a reminder, and a way to automate repetitive interactions.
The first step toward increasing your office productivity using the UNIX
command line is to take a close look at your current day-to-day habits. The
tools and applications you regularly use and the files you access and modify
can give you an idea of what routines are taking up a lot of your time -- and
what you might be avoiding.
You'll want to see what tools and applications you're using regularly. You
can easily ascertain your daily work habits on the system with the shell's
history built-in, which outputs an enumerated listing of the input
lines you've sent to the shell in the current and past sessions. See
Listing 1 for a typical example.
$ history
1 who
2 ls
3 cd /usr/local/proj
4 ls
5 cd websphere
6 ls
7 ls -l
$
The actual history is usually kept in a file so that it can be kept through
future sessions; for example, the Korn shell keeps its command history hidden
in the .sh_history file in the user's home directory, and the Bash shell uses
.bash_history. These files are usually overwritten when they reach a certain
length, but many shells have variables to set the maximum length of the history;
the Korn and Bash shells have the HISTSIZE and HISTFILESIZE variables,
which you can set in your shell startup file.
It can be useful to run
history through sort to get a list of the most popular
commands. Then, use awk to strip out the command name minus options
and arguments, and pass the sorted list to uniq to give an enumerated
list. Finally, call sort again to resort the list in reverse order
(highest first) by the first column, which is the enumeration itself.
Listing 2 shows an example of this in
action.
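A pipeline of that kind might look like this (a sketch of the idea; the article's Listing 2 may differ in detail):
$ history | awk '{print $2}' | sort | uniq -c | sort -rn | head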
Use the same principle to review the files that you've modified or accessed.
To do this, use the find utility to locate and review all files
you've accessed or changed during a certain time period -- today, yesterday,
or at any date or segment of time in the past.
You generally can't find out who last accessed or modified a file,
because this information isn't easily available under UNIX, but you can review
your personal files by limiting the search to only files contained in your home
directory tree. You can also limit the search to only files in the directory
of a particular project that you're monitoring or otherwise working on.
The find utility has several flags that aid in locating files
by time, as listed in Table 1. Directories
aren't regular files but are accessed every time you list them or make them
the current working directory, so exclude them in the search using a negation
and the -type flag.
Flag       Description
-atime     The time the file was last accessed -- in number of days.
-ctime     The time the file's status last changed -- in number of days.
-mtime     The time the file was last modified -- in number of days.
-amin      The time the file was last accessed -- in number of minutes. (Not available on all implementations.)
-cmin      The time the file's status last changed -- in number of minutes. (Not available on all implementations.)
-mmin      The time the file was last modified -- in number of minutes. (Not available on all implementations.)
-type      The type of file, such as d for directories.
-user X    Files belonging to user X.
-group X   Files belonging to group X.
-newer X   Files that are newer than file X.
Here's how to list all the files in your home directory tree that were modified
exactly one hour ago:
$ find ~ -mmin 60 \! -type d
Giving a negative value for a flag means to match that number or sooner.
For example, here's how to list all the files in your home directory tree that
were modified exactly one hour ago or any time since:
$ find ~ -mmin -60 \! -type d
Not all implementations of find support the min flags.
If yours doesn't, you can make a workaround by using touch to create
a dummy file whose timestamp is older than what you're looking for, and then
search for files newer than it with the -newer flag:
$ date
Mon Oct 23 09:42:42 EDT 2006
$ touch -t 10230842 temp
$ ls -l temp
-rw-r--r-- 1 joe joe 0 Oct 23 08:42 temp
$ find ~ -newer temp \! -type d
The special -daystart flag, when used in conjunction with any of
the day options, measures days from the beginning of the current day instead
of from 24 hours previous to when the command is executed. Try listing all of
your files, existing anywhere on the system, that have been accessed any time
from the beginning of the day today up until right now:
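(A sketch, assuming GNU find; whoami restricts the match to your own files, and 2>/dev/null hides permission errors from directories you can't read.)
$ find / -user `whoami` -daystart -atime -1 \! -type d 2>/dev/null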
Similarly, you can list all the files in your home directory tree that were
modified at any time today:
$ find ~ -daystart -mtime -1 \! -type d
Give different values for the various time flags to change the search times.
You can also combine flags. For instance, you can list all the files in your
home directory tree that were both accessed and modified between now
and seven days ago:
$ find ~ -daystart -atime -7 -mtime -7 \! -type d
You can also find files based on a specific date or a range of time, measured
in either days or minutes. The general way to do this is to use touch
to make a dummy file or files, as described earlier.
When you want to find
files that match a certain range, make two dummy files whose timestamps delineate
the range. Then, use the -newer flag with the older file, and use
"\! -newer" on the second file.
For example, to find all the files in the /usr/share directory tree that
were accessed in August, 2006, try the following:
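(A sketch of the approach with two dummy files bracketing the month. Since the goal is access time, this uses -anewer, GNU find's access-time counterpart to the -newer flag described above; the /tmp filenames are arbitrary.)
$ touch -t 200608010000 /tmp/aug_start
$ touch -t 200609010000 /tmp/sep_start
$ find /usr/share -anewer /tmp/aug_start \! -anewer /tmp/sep_start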
Finally, it's sometimes helpful when listing the contents of a directory to
view the files sorted by their time of last modification. Some versions of the
ls tool have the -c option, which uses the time of the file's last status
change rather than the modification time. In conjunction
with the -l (long-listing) and -t (sort by
time) options, you can peruse a directory listing with the most recently
changed files first; the long listing then shows the status-change time instead
of the default modification time.
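One such invocation (an assumption about the exact flags):
$ ls -ltc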
Another
useful means of increasing office productivity using UNIX is to time commands
that you regularly execute. Then, you can evaluate the results and determine
whether you're spending too much time waiting for a particular process to finish.
Is the system slowing you down? How long are you waiting at the shell, doing
nothing, while a particular command is being executed? How long does it take
you to run through your usual morning routine?
You can get concrete answers to these questions when you use the
date,
sleep, and echo commands to time your work.
To do this, type a long input line that first contains a date
statement to output the time and date in the desired format (usually hours and
minutes suffice). Then, run the command input line -- this can be several lines
strung together with shell directives -- and finally, get the date again on
the same input line. If the commands you're testing produce a lot of output,
redirect it so that you can read both start and stop dates. Calculate the difference
between the two dates:
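(A sketch of such an input line; here du -s ~ stands in for whatever command you're measuring.)
$ date +"%H:%M:%S"; du -s ~ > /dev/null; date +"%H:%M:%S"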
You can use these same principles to test your typing speed:
$ date;cat|wc -w;date
This command works best if you give a long typing sample that lasts at least
a minute, but ideally three minutes or more. Take the difference in minutes
between the two dates and divide by the number of words you typed (which is
output by the middle command) to get the average number of words per minute
you type.
You can automate this by setting variables for the start and stop
dates and for the command that outputs the number of words. But to do this right,
you must be careful to avoid a common error in calculation when subtracting
times. A GNU extension to the date command, the %s
format option, avoids such errors -- it outputs the number of seconds since
the UNIX epoch, which is defined as midnight UTC on January 1, 1970.
Then, you can calculate the time based on seconds alone.
Assign a variable, SPEED, as the output of an echo
command to set up the right equation to pipe to a calculator tool, such as
bc. Then, output a new echo statement that outputs
a message with the speed:
$ START=`date +%s`; WORDS=`cat|wc -w`; STOP=`date +%s`; SPEED=\
> `echo "$WORDS / ( ( $STOP - $START ) / 60 )"|bc`; echo \
> "You have a typing speed of $SPEED words per minute."
You can put this in a script and then change the permissions to make it executable
by all users, so that others on the system can use it, too, as in
Listing 3.
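A minimal sketch of what such a script might look like, assuming GNU date and bc are available (the article's Listing 3 may differ):
#!/bin/sh
# typespeed -- estimate typing speed in words per minute
START=`date +%s`
WORDS=`cat | wc -w`        # type your sample here, then press Ctrl-D
STOP=`date +%s`
SPEED=`echo "scale=8; $WORDS / ( ( $STOP - $START ) / 60 )" | bc`
echo "You have a typing speed of $SPEED words per minute."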
$ typespeed
The quick brown fox jumped over the lazy dog. The quick brown dog--
...
--jumped over the lazy fox.
^D
You have a typing speed of 82.33333333 words per minute.
$
The
date tool can do much more than just print the current system date.
You can use it to get the day of the week on which a given date falls and to
get dates relative to the current date.
Another GNU extension to the date command, the -d
option, comes in handy when you don't have a desk calendar nearby -- and what
UNIX person bothers with one? With this powerful option, you can quickly find
out what day of the week a particular date falls on by giving the date as a
quoted argument:
$ date -d "nov 22"
Wed Nov 22 00:00:00 EST 2006
$
In this example, you see that November 22 of this year is on a Wednesday.
So, when it's suggested that the big meeting be held on November 22, you'll
know right away that it falls on a Wednesday -- which is the day you're out
in the field office.
The -d option can also tell you what the date will be relative
to the current date -- either a number of days or weeks from now, or before
now (ago). Do this by quoting this relative offset as an argument to
the -d option.
Suppose, for example, that you need to know the date two weeks hence. If
you're at a shell prompt, you can get the answer immediately:
$ date -d '2 weeks'
There are other important ways to use this command. With the next
directive, you can get the day of the week for a coming day:
$ date -d 'next monday'
With the ago directive, you can get dates in the past:
$ date -d '30 days ago'
And you can use negative numbers to get dates in reverse:
$ date -d 'dec 14 -2 weeks'
This technique is useful to give yourself a reminder based on a coming date,
perhaps in a script or shell startup file, like so:
DAY=`date -d '2 weeks' +"%b %d"`
if test "`echo $DAY`" = "Aug 16"; then echo 'Product launch is now two weeks away!'; fi
Use the tools at your disposal to leave reminders for yourself on the system
-- they take up less space than notes on paper, and you'll see them from anywhere
you happen to be logged in.
When you're working on the system, it's easy to get distracted. The
leave tool, common on the IBM AIX® operating system and Berkeley Software
Distribution (BSD) systems (see Resources)
can help.
Give leave the time when you have to leave, using a 24-hour
format: HHMM. It runs in the background, and five minutes before that
given time, it outputs on your terminal a reminder for you to leave. It does
this again one minute before the given time if you're still logged in, and then
at the time itself -- and from then on, it keeps sending reminders every minute
until you log out (or kill the leave process). See
Listing 4 for an example. When you log
out, the leave process is killed.
$ leave
When do you have to leave? 1830
Alarm set for Fri Aug 4 18:30. (pid 1735)
$ date +"Time now: %l:%M%p"
Time now: 6:20PM
$
<system bell rings>
You have to leave in 5 minutes.
$ date +"Time now: %l:%M%p"
Time now: 6:25PM
$
<system bell rings>
Just one more minute!
$ date +"Time now: %l:%M%p"
Time now: 6:29PM
$
Time to leave!
$ date +"Time now: %l:%M%p"
Time now: 6:30PM
$
<system bell rings>
Time to leave!
$ date +"Time now: %l:%M%p"
Time now: 6:31PM
$ kill 1735
$ sleep 120; date +"Time now: %l:%M%p"
Time now: 6:33PM
$
You can give relative times. If you want to leave a certain amount of time from
now, precede the time argument with a +. So, to be reminded
to leave in two hours, type the following:
$ leave +0200
To give a time amount in minutes, make the hours field 0. For example, if
you know you have only 10 more minutes before you absolutely have to go, type:
$ leave +0010
You can also specify the time to leave as an argument, which makes
leave a useful command to put in scripts -- particularly in shell startup
files. For instance, if you're normally scheduled to work until 5 p.m., but
on Fridays you have to be out of the building at 4 p.m., you can set a weekly
reminder in your shell startup file:
if test "`date +%a`" = "Fri"; then leave 1600; fi
You can put a plain leave statement, with no arguments, in a
startup script. Every time you start a login shell, you can enter a time to
be reminded when to leave; if you press the Enter key, giving no
value, then leave exits without setting a reminder.
You can also send yourself a reminder using a text message. Sometimes it's
useful to make a reminder that you'll see either later in your current login
session or the next time you log in.
At one time, the old elm mail agent came bundled with a tool
that enabled you to send memorandums using e-mail; it was basically a script
that prompted for the sender, the subject, and the body text. This is easily
replicated by the time-honored method of sending mail to yourself with the command-line
mailx tool. (On some UNIX systems, mail is used instead
of mailx.)
Give as an argument your e-mail address (or your username on the local system,
if that's where you read mail); then, you can type the reminder on the Subject
line when prompted, if it's short enough, as in
Listing 5. If the reminder won't fit
on the Subject line, type it in the body of the message. A ^D on
a line by itself exits mailx and sends the mail.
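A non-interactive variant of the same idea uses the -s flag to put the reminder on the Subject line (the recipient here is just your local username):
$ echo "Bring the signed contract to the 10 a.m. meeting" | mailx -s "Reminder" $USER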
The Expect language (an extension of Tcl/Tk, but other variations are also
available) is used to write scripts that run sessions with interactive programs,
as if the script were a user interacting directly with the program.
Expect scripts can save you a great deal of time, particularly when you find
yourself engaging in repetitive tasks. Expect can interact with multiple programs
including shells and text-based Web browsers, start remote sessions, and run
over the network.
For example, if you frequently connect to a system on your local intranet
to run particular program -- the test-servers command, for instance
-- you can automate it with an Expect script named servmaint, whose
contents appear in Listing 6.
Now, instead of going through the entire process of running telnet to connect to the remote
system, logging in with your username and password, running the command(s) on that system, and then
logging out, you just run the servmaint script as given in Listing 6; everything
else is done for you. Of course, if you give a password or other proprietary
information in such a script, there is a security consideration; at minimum,
you should change the file's permissions so that you're the only user (besides
the superuser) who can read it.
Any repetitive task involving system interaction can be programmed in Expect
-- it's capable of branching, conditionals, and all other features of a high-level
language so that the response and direction of the interaction with the program(s)
can be completely automated.
In an office
setting, UNIX systems can handle many of the tasks that are normally handled
by standalone computers running other operating systems -- and with their rich
supply of command-line tools, they're capable of productivity boosters that
can't be found anywhere else.
This article introduced several techniques and concepts to increase your
office productivity using UNIX command-line tools and applications. You should
be able to apply these ideas to your own office situations and, with a little
command-line ingenuity, come up with even more ways to save time and be more
productive.
"Expect
exceeds expectations" (developerWorks, April 2002): For a concise introduction
to the Expect language, read Cameron Laird's article.
"Hone
your regexp pattern-building skills" (developerWorks, July 2006): This
article gives examples of powerful, real-world regular expressions that
can increase your office productivity.
"Tcl
your desktop" (developerWorks, June 2006): Make your work desktop uncluttered
and efficient by following the steps outlined in this article.
"Text
processing with UNIX" (developerWorks, August 2006): This article demonstrates
the power of UNIX command-line tools for processing text.
"Working
smarter, not harder" (developerWorks, August 2006): Part 2 of the "Speaking
UNIX" series contains a primer on the history built-in.
Get products and technologies
leave utility: If your UNIX system doesn't come with the
leave
utility, you can download a free copy from netbsd.org.
date
tool: The GNU Project's implementation of the date tool
contains an extension for outputting the number of seconds since the UNIX
epoch. Download a free copy from the GNU Project Web site.
Michael Stutz is author of The Linux Cookbook,
which he also designed and typeset using only open source software.
His research interests include digital publishing and the future of
the book. He has used various UNIX operating systems for 20 years. You
can reach him at
[email protected].
Sat, 11 Mar 2000 07:08:15 +0100 (CET)
From: Hans Zoebelein <[email protected]>
Everybody who is running a software project needs a FAQ to
clarify questions about the project and to enlighten newbies how to run the
software. Writing FAQs can be a time consuming process without much fun.
Now here comes a little Perl script which transforms simple
ASCII input into HTML output which is perfect for FAQs (Frequently Asked Questions).
I'm using this script on a daily basis and it is really nice and spares a lot
of time. Check out http://leb.net/blinux/blinux-faq.html for results.
Attachment faq_builder.txt is the ASCII input to produce faq_builder.html
using faq_builder.pl script.
When I browse through the 2 cent tips, I see a lot of general
Sysadmin/bash questions that could be answered by a book called
"An Introduction
to Linux Systems Administration" - written by David Jones and Bruce Jamieson.
If you can't or don't want to use auto-mounting, and are tired
of typing out all those 'mount' and 'umount' commands, here's a script called
'fd' that will do "the right thing at the right time" - and is easily modified
for other devices:
#!/bin/bash
# 'mount' prints nothing on success, so any output (e.g. an "already mounted"
# error) means the device is mounted -- in that case, unmount it instead.
d="/mnt/fd0"
if [ -n "$(mount $d 2>&1)" ]; then umount $d; fi
It's a fine example of "obfuscated Bash scripting",
but it works well - I use it and its relatives 'cdr', 'dvd', and 'fdl' (Linux-ext2
floppy) every day.
Ben Okopnik
2 Cent Tips
Wed, 08 Mar 2000 16:13:59 -0500 From: Bolen Coogler <[email protected]>
How to set vi edit mode in bash for Mandrake 7.0
If, like me, you prefer vi-style command line editing in bash, here's how
to get it working in Mandrake 7.0.
When I wiped out Redhat 5.2 on my PC and installed Mandrake 7.0, I found
vi command line editing no longer worked, even after issuing the "set -o vi"
command. After much hair pulling and gnashing of teeth, I finally found the
problem is with the /etc/inputrc file. I still don't know which line in this
file caused the problem. If you have this same problem in Mandrake or some other
distribution, my suggestion for a fix is:
1. su to root.
2. Save a copy of the original /etc/inputrc file (you may want it back).
3. Replace the contents of /etc/inputrc with the following:
set convert-meta off
set input-meta on
set output-meta on
set keymap vi
set editing-mode vi
The next time you start a terminal session, vi editing will be functional.
Funny thing; I was just about to post this tip when I read
Matt Willis' "HOWTO searching script" in LG45. Still, this script is a good
bit more flexible (allows diving into subdirectories, actually displays the
HOWTO or the document whether .gz or .html or whatever format, etc.), uses the
Bash shell instead of csh (well, _I_ see it as an advantage
...), and reads the entire /usr/doc hierarchy - perfect for those times
when the man page isn't quite enough. I find myself using it about as often
as I do the 'man' command.
You will need the Midnight Commander on your system to take
advantage of this (in my opinion, one of the top three apps ever written for
the Linux console). I also find that it is at its best when used under X-windows,
as this allows the use of GhostView, xdvi, and all the other nifty tools that
aren't available on the console.
Type 'doc' followed by the first few letters of the topic you want - say, 'doc xl' - and
press Enter. The script will respond with a menu of all
the /usr/doc subdirs beginning with 'xl' prefixed by menu numbers; simply select
the number for the directory that you want, and the script will switch to that
directory and present you with another menu. Whenever your selection is an actual
file, MC will open it in the appropriate manner - and when you exit that view
of it, you'll be presented with the menu again. To quit the script, press 'Ctrl-C'.
A couple of built-in minor features (read: 'bugs') - if given
a nonsense number as a selection, 'doc' will drop you into your home directory.
Simply 'Ctrl-C' to get out and try again. Also, for at least one directory in
'/usr/doc' (the 'gimp-manual/html') there is simply not enough scroll-back buffer
to see all the menu-items (526 of them!). I'm afraid that you'll simply have
to switch there and look around; fortunately, MC makes that relatively easy!
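The original script isn't shown here, but a rough sketch of the same idea - built around
bash's 'select' and MC's viewer (assuming mc accepts -v/--view to open a file) - could look
something like this:
#!/bin/bash
# Rough sketch of a 'doc'-style browser (not the original script):
# walk /usr/doc by numbered menus, open files with Midnight Commander's viewer.
cd /usr/doc || exit 1
prefix="$1"
while true; do
    select entry in ${prefix}*; do
        if [ -z "$entry" ]; then cd "$HOME"; break; fi   # nonsense number: drop to $HOME
        if [ -d "$entry" ]; then
            cd "$entry" && prefix=""                     # descend and show everything there
        else
            mc -v "$entry"                               # let MC pick an appropriate viewer
        fi
        break
    done
done
Press Ctrl-C to leave the loop, just as described above.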
Oh, one more MC tip. If you define the 'CDPATH' variable in
your .bash_profile and make '/usr/doc' one of the entries in it, you'll be able
to switch to any directory in that hierarchy by simply typing 'cd <first_few_letters_of_dir_name>'
and pressing the Tab key for completion. Just like using 'doc', in some ways...
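For example, a line like this in ~/.bash_profile (keeping "." first so plain cd keeps
working as usual):
export CDPATH=.:/usr/doc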
I am not familiar with Norton Ghost; however, I have been successfully dual-booting NT 4
and versions of Linux (currently Redhat 6.0) for the past year.
First let me refer you to the excellent article on multibooting by Tom de Blende in issue
47 of LG. Note step 17: "The tricky part is configuring Lilo. You must keep Lilo OUT OF THE
MBR! The mbr is reserved for NT. If you'd install Lilo in your mbr, NT won't boot anymore."
As your requirements are quite modest, they can easily be accomplished without any
third-party software such as "Bootpart".
If NT is on a Fat partition then install MSdos and use the
NT loader floppy disks to repair the startup environment. If NT is on an NTFS
partition then you will need a Fat partition to load MSdos. Either way you should
get to a stage where you can use NT's boot manager to select between NT and
MSdos.
Boot into DOS and, from the DOS prompt, run "copy bootsect.dos *.lux" (this creates
bootsect.lux as a copy of bootsect.dos).
Use attrib to remove the attributes from boot.ini ("attrib -s -h -r boot.ini") and edit
the boot.ini file; after a line similar to C:\bootsect.dos="MS-DOS v6.22", add the line
C:\bootsect.lux="Redhat Linux".
Save the edited file and replace the attributes.
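For illustration, the [operating systems] section of boot.ini might end up looking
something like this - the multi(...) ARC paths are whatever your existing NT entries
already contain, not values to copy verbatim:
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINNT="Windows NT Workstation Version 4.00"
multi(0)disk(0)rdisk(0)partition(1)\WINNT="Windows NT Workstation Version 4.00 [VGA mode]" /basevideo /sos
C:\bootsect.dos="MS-DOS v6.22"
C:\bootsect.lux="Redhat Linux"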
At the boot menu you should now have four options: two for
NT (normal and vga mode) and one each for msdos and Linux. To get the linux
option to work you will have to use redhat's boot disk to boot into Linux and
configure Lilo. Log on as root and use your favorite text editor to edit /etc/lilo.conf.
Here is a copy of mine:
It can be quite minimal as it only has one operating system
to boot; there is no requirement for a prompt and the timeout is reduced to
1 so that it boots almost immediately without further user intervention. If
your linux root partition is not /dev/hda5 then the root line will require amendment.
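A minimal sketch along those lines, assuming the kernel image is /boot/vmlinuz and, as
above, the Linux root is /dev/hda5; the boot= line here writes LILO's boot sector straight
into the file that NT chain-loads (some setups instead point boot= at the Linux partition
and copy its boot sector over to bootsect.lux with dd):
boot=/c/bootsect.lux
map=/boot/map
install=/boot/boot.b
timeout=1
image=/boot/vmlinuz
    label=linux
    root=/dev/hda5
    read-only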
I mount my MSdos C: drive as /c under Linux. I am sure this will make some Unix purists
cringe, but I find C: to /c easy to type and easy to remember. If you are happy with that,
then all that is required is to create the mount point ("mkdir /c") and mount the C: drive.
"mount -t msdos /dev/hda1 /c" will do for now, but you may want to include /dev/hda1 in
/etc/fstab so that it will be mounted automatically in the future; this is useful for
exporting files to make them available to NT.
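A hedged example of such an /etc/fstab line (adjust the device and options to your own layout):
/dev/hda1   /c   msdos   defaults   0 0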
Check that /c/bootsect.lux is visible to Linux:
ls /c/bootsect*
/c/bootsect.dos  /c/bootsect.lux
Then run "lilo"; it should report:
Added linux *
Following an orderly shutdown and reboot you can now select
Redhat Linux at NT's boot prompt and boot into Linux. I hope you find the above
useful.
The Last but not Least
Technology is dominated by two types of people: those who understand what they do not
manage and those who manage what they do not understand. ~ Archibald Putt, Ph.D.