This is a classic article written by Jack Wallen from the Linux.com archives. For more great SysAdmin tips and techniques check out our Essentials of Linux System Administration course!

Back in the mid 90s and early 00s, Linux, being a fledgling operating system, suffered from a severe lack of useful applications. This issue was especially critical in the world of business ─ where Windows desktop applications could make or break productivity. To overcome this weakness, a compatibility layer called WINE was created. The name originally stood for Wine Is Not an Emulator (because everyone mistook the tool for a Windows emulator). The name is now simply Wine.

Effectively, what Wine did was to allow Windows applications to run on the Linux platform. It wasn’t perfect, and the supported apps were limited. If you wanted Notepad, Calculator, or Solitaire…you were good to go.

But then something interesting happened. Over time more and more applications were supported until Wine became a must-have tool for many users and businesses (and especially Linux gamers). To date there are thousands of fully supported applications that now run on Wine (check out the application database for a full list) and that list is ever growing. Granted most of the Wine work is focused on games, but you’ll still find a healthy list of productivity apps available.

You might think, because of the complexity of bringing such a tool to life, that Wine would be complicated to install and use. That assumption would be incorrect. In fact, the developers of Wine have gone out of their way to make the compatibility layer as user-friendly as possible. What exactly does that mean? To make this easier, let’s walk through the process of installing Wine and then installing and running a Windows application with the tool.

I will demonstrate the process on Elementary OS Freya and install the latest version of Wine.

Installation

If you are running an Ubuntu derivative, you’ll find Wine located in the Software Center. Chances are, however, that version is outdated. Because of that, we want to avoid installing the “out of the box” version offered. To do this, we must add the official Wine repository. This can be done one of two ways, via command line or GUI. Since our goal is running Windows applications, let’s use the GUI method.

Here’s how:

  1. Click on the Applications menu

  2. Type software

  3. Click Software & Updates

  4. Click on the Other Software tab

  5. Click Add

  6. Enter ppa:ubuntu-wine/ppa in the APT line section (Figure 2)

  7. Click Add Source

  8. Enter your sudo password

  9. Click Authenticate

  10. Click Close

  11. When prompted, click Reload

  12. Open the Software Center

  13. Search for Wine

  14. Click the Wine entry and then click Install

  15. Allow the installation to complete.

That’s it. Wine is now ready to help you install and run Windows applications. Remember, however, that not every application will work. Most will, but if you’re looking to get your in-house, proprietary solution up and running, you might hit a few snags.
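If you'd rather work from the command line, the same result can be achieved with a few commands (a minimal sketch, assuming the ppa:ubuntu-wine/ppa archive mentioned below is still available for your release; the package name may be wine, wine-stable, or similar depending on the version):

sudo add-apt-repository ppa:ubuntu-wine/ppa
sudo apt-get update
sudo apt-get install wine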

Installing and running an app

Let's install a very popular programmer's notepad, Notepad++. You'll want to download the file from a location that doesn't include third-party app install options (which can cause the application installation to fail). To be safe, download the Notepad++ installer from Filehippo. You will find the .exe file for Notepad++ in your Downloads directory. Right-click that file and select Open in Wine Windows Program Loader (Figure 3).

Upon first run, the Wine configuration for ~/.wine will be updated. This can, depending upon the speed of your machine, take a bit of time. Allow this to finish and then the all-too-familiar Windows installation wizard will start up and walk you through the installation of Notepad++.

Click Next and walk through the installation process. When the second screen pops up (Figure 4), you will notice a rather un-Linux Folder path.

Linux doesn't contain a C drive as Windows does. Is this wrong? No. If you look in the ~/.wine folder, you will notice a folder called drive_c. Within that folder lie three familiar sub-folders:

  • Program Files

  • users

  • windows.

As you might expect, this is your C drive. All of that is to say, leave the Folder path as-is during installation.

You will eventually come to the Choose Components section of the installation (Figure 5). Here you can select options for the installation. If your particular desktop environment allows desktop icons (and that is your preference for launching apps), you might want to select Create Shortcut on Desktop (to make the launching of the newly installed app easier—more on this in a moment).

The installation will complete and present you with the Finish screen. Leave the Run Notepad box checked and click Finish. Notepad++ will run (Figure 1).

What happens when you want to run the software again, if you didn't add the app icon to your desktop? This is one issue that can easily trip users up. Remember that Program Files sub-directory? If you venture into that folder, you'll see a folder for Notepad++ which contains the notepad++.exe file. Guess what? Right-click that file, select Open in Wine Windows Program Loader, and Notepad++ will run.
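You can also launch the executable straight from a terminal with the wine command. The path below assumes the default install location described above; adjust it if you installed elsewhere:

wine "$HOME/.wine/drive_c/Program Files/Notepad++/notepad++.exe"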

Notepad++ is a simple example of how Wine works. When you dive into more complicated applications, your results may vary. The best thing to do is to go back to the Wine application database, locate the app you want to install, click on it, and check the current app status. You will find every app lists the version of Wine tested, if it installs, if it runs, and gives it a rating. There are:

  • Platinum: Applications which run flawlessly out of the box.

  • Gold: Applications which run with some modifications necessary.

  • Silver: Applications which run with minor issues that do not affect usage.

You will also find some apps listed as Garbage, which means they won’t install and/or run.

If you have a Windows app that simply doesn’t have a Linux equivalent, never fear ─ Wine is here to assist you. Even though not every Windows app will run under Wine, the collection of apps that do is seriously impressive. And considering most everything we do nowadays is handled within a web browser, with a little help from Wine, you should be covered from every angle.

Ready to continue your Linux journey? Check out our Essentials of Linux System Administration course!

This is a classic article written by Istimsak Abdulbasir from the Linux.com archives. For more great SysAdmin tips and techniques check out our Essentials of Linux System Administration course!

Preamble
Have you ever wondered why you have to type "sudo" or "su" in a Linux terminal to make any system-wide changes? Well, "sudo" means "super user do" and "su" means "super user". These commands indicate that you want to be granted super user/root privileges. Linux then checks a special file to see if you are allowed to be granted root privileges, similar to a VIP club: if your name is not on the list, no rights. You can still gain root privileges by logging in as root, but this is not a very safe thing to do. The reason is that if you are root, all the doors in your system are open to everything, which leaves your system vulnerable. What "sudo" and "su" do is grant you the rights to run a particular program that you specify, savvy?

In some distros, the maintenance user account is already set up in that special file. All you do is type:

# sudo command

and enter the password of your user account, or:

# su -l root

and enter the root password and then the command. I have realized that not every distro allows this easy transaction, and that you may have to manually add your username to the sudoers file.  Well, we just snatched the VIP list from the sleeping guard and will show you how to put your name on it.

SUDOERS

The sudoers file is a file Linux and Unix administrators use to allocate system rights to system users. This allows the administrator to control who does what. Remember, Linux is built with security in mind. When you want to run a command that requires root rights, Linux checks your username against the sudoers file. This happens when you type the command "sudo". If it determines that your username is not on the list, you cannot run the command/program as that user.

What you will have to do is log in as "root" by using the command "su -l". The "-l" tells su to give you a normal login shell, and the default user for the su command is root. You will then enter the password for the root account, which gives you a shell prompt where you can run any command as root. Again, this is not safe. Once you are logged in as root, the system is open to vulnerabilities. It is better to grant the non-root user rights for the sole purpose of running a desired command/program. However, your username must be in the sudoers file.

You can find the sudoers file in “/etc/sudoers”. Use the “ls -l /etc/” command to get a list of everything in the directory. Using -l after ls will give you a long and detailed listing.

SUDOERS FILE

Here is a layout of the sudoers file in Ubuntu. Your sudoers file may differ depending on the type of system you are using, but it should be structured much the same.
# /etc/sudoers
#
# This file MUST be edited with the ‘visudo’ command as root.
#
# See the man page for details on how to write a sudoers file.
#

Defaults    env_reset

# Host alias specification

# User alias specification

# Cmnd alias specification

# User privilege specification
root    ALL=(ALL) ALL

# Allow members of group sudo to execute any command after they have
# provided their password
# (Note that later entries override this, so you might need to move
# it further down)
%sudo ALL=(ALL) ALL
#
#includedir /etc/sudoers.d

# Members of the admin group may gain root privileges

Let's skip all the way down to the section that says "# User privilege specification". Under that comment, the user "root" is given system privileges. The line root ALL=(ALL) ALL reads like this: on ALL hosts, root may run commands as (ALL) users, and may run ALL commands. If you want to add another user, like yourself, under the line root ALL=(ALL) ALL, type:

username ALL=(ALL) ALL

substituting username with your account name. Now your user account has sudo rights, or you are finally on that VIP list.
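You are not limited to granting everything. A sudoers entry can restrict a user to specific commands; for example, a hypothetical line allowing a user to do nothing more than restart the web server might look like this (the username and command path are placeholders, not part of the original article):

username ALL=(root) /usr/sbin/service apache2 restart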

Look further down till you see,
%sudo ALL=(ALL) ALL

In Ubuntu, there is a group called sudo, and users added to it are granted system rights after they have supplied their password; this line is what grants the group those rights. So, if you were wise enough to add your username to the sudo group, you're good money. The same goes for the admin group. Take notice of the "%" right before the name: it indicates that admin and sudo are groups rather than individual user accounts.
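If you would rather take the group route, on Ubuntu you can add an account to the sudo group with usermod (substitute your own account name for username):

sudo usermod -aG sudo username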

Once you have all your settings in place, you can write out and exit the file by pressing the ESC key followed by ":wq", assuming visudo opened the file in vi, and assuming you used visudo to edit the sudoers file like you are supposed to.

You do not strictly need visudo's default editor, either. You can use the program "nano", which lets you view and modify text files in a terminal; this is my preferred method. Just be aware that editing /etc/sudoers with a plain editor skips visudo's syntax checking, and a typo in this file can lock you out of sudo entirely. To write out and exit the sudoers file with nano, press Ctrl+X.
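If you want the safety of visudo's syntax checking but the comfort of nano, you can usually point visudo at nano instead of its default editor; on Debian/Ubuntu systems something like this typically works:

sudo EDITOR=nano visudo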

As I said before, the sudoers file will differ depending on the system you're using. I am using Fedora 14, a somewhat fragile system, which has no sudo group. Ubuntu, from which the file above was taken, does have one. Either way, the steps described here will work on most other Linux distros.

Now, enjoy the VIP club.

Ready to continue your Linux journey? Check out our Essentials of Linux System Administration course!

This is a classic article written by Surendra Anne from the Linux.com archives. For more great SysAdmin tips and techniques check out our Essentials of Linux System Administration course!

Many people know about the cat command, which is useful for displaying the entire contents of a file. But in some cases we have to print only part of a file. In today's post we will be talking about the head and tail commands, which are very useful when you want to view a certain part at the beginning or at the end of a file, especially when you are sure you want to ignore the rest of the file's content.

Let's start with the tail command, explore all of the features this handy command provides, and see how to use it best to suit your needs. After that, we will show what you can and cannot do with the head command.

Linux Tail Command Syntax

tail [OPTION]... [FILE]...

Tail is a command which prints the last few lines (10 by default) of a file, then terminates.
Example 1: By default “tail” prints the last 10 lines of a file, then exits.

tail /path/to/file

Example :

# tail /var/log/messages
Mar 20 12:42:22 hameda1d1c dhclient[4334]: DHCPREQUEST on eth0 to 255.255.255.255 port 67 (xid=0x280436dd)
Mar 20 12:42:24 hameda1d1c avahi-daemon[2027]: Registering new address record for fe80::4639:c4ff:fe53:4908 on eth0.*.
Mar 20 12:42:28 hameda1d1c dhclient[4334]: DHCPREQUEST on eth0 to 255.255.255.255 port 67 (xid=0x280436dd)
Mar 20 12:42:28 hameda1d1c dhclient[4334]: DHCPACK from 10.76.198.1 (xid=0x280436dd)
Mar 20 12:42:30 hameda1d1c avahi-daemon[2027]: Joining mDNS multicast group on interface eth0.IPv4 with address 10.76.199.87.
Mar 20 12:42:30 hameda1d1c avahi-daemon[2027]: New relevant interface eth0.IPv4 for mDNS.
Mar 20 12:42:30 hameda1d1c avahi-daemon[2027]: Registering new address record for 10.76.199.87 on eth0.IPv4.
Mar 20 12:42:30 hameda1d1c NET[4385]: /sbin/dhclient-script : updated /etc/resolv.conf
Mar 20 12:42:30 hameda1d1c dhclient[4334]: bound to 10.76.199.87 -- renewal in 74685 seconds.
Mar 20 12:45:39 hameda1d1c kernel: usb 3-7: USB disconnect, device number 2

As you can see, this prints the last 10 lines of /var/log/messages.

Example 2: What if you are interested in just the last 3 lines of a file, or maybe the last 15? This is when the -n option comes in handy, letting you choose a specific number of lines instead of the default 10.

tail -n <number_of_lines> /path/to/file

Example :

# tail -n 4 /etc/group
vboxusers:x:491:
avahi:x:70:
mailnull:x:47:
smmsp:x:51:

Example 3: We can even view multiple files with a single tail command, without needing to execute tail once per file. Suppose you want to see the last two lines of several files:

tail -n <number of lines> <file1> <file2> <file3>

Example:

surendra@sanne-taggle:~/code/sh$ tail -n 2 99abc.txt startup_script.sh wifiactivate.sh
==> 99abc.txt <==

==> startup_script.sh <==
sed -i 's/^.*PermitRootLogin.*$/PermitRootLogin yes/g' /etc/ssh/sshd_config
service sshd reload

==> wifiactivate.sh <==
modprobe -rv iwlwifi
modprobe -v iwlwifi 11n_disable=8

Example 4: Now this might be by far the most useful and commonly used option for the tail command. Unlike the default behaviour, which is to exit after printing a certain number of lines, the -f option (which stands for "follow") keeps the stream going: it prints any new lines appended to the file after it has been opened. The command keeps the file open and displays updates on the console until the user breaks out of it.

tail -f /path/to/file

Example :

#service crond start ; tail -f /var/log/cron
Starting crond: [ OK ]
Mar 20 13:35:01 hameda1d1c CROND[5338]: (root) CMD (/root/mail/mailscript.sh)
Mar 20 13:35:33 hameda1d1c crond[2354]: (CRON) INFO (Shutting down)
Mar 20 13:35:50 hameda1d1c crond[5385]: (CRON) STARTUP (1.4.4)
Mar 20 13:35:50 hameda1d1c crond[5385]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 64% if used.)
Mar 20 13:35:51 hameda1d1c crond[5385]: (CRON) INFO (running with inotify support)
Mar 20 13:35:51 hameda1d1c crond[5385]: (CRON) INFO (@reboot jobs will be run at computer's startup.)
Mar 20 13:36:01 hameda1d1c CROND[5390]: (root) CMD (/root/mail/mailscript.sh)
Mar 20 13:36:12 hameda1d1c crond[5385]: (CRON) INFO (Shutting down)
Mar 20 13:36:25 hameda1d1c crond[5436]: (CRON) STARTUP (1.4.4)
Mar 20 13:36:25 hameda1d1c crond[5436]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 85% if used.)
^C

As you can see in this example, I wanted to start the crond service and then watch the /var/log/cron log file as the service starts. I used ";", a form of command chaining in Linux, to execute two commands on a single line. Here I am not interested in printing just a few lines and exiting; rather, I want to keep watching the whole log file as the service starts, and then break out of it with CTRL+C.
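A related trick, not shown in the original example but worth knowing: you can pipe the followed output through grep to watch only for lines that match a pattern, for instance errors:

tail -f /var/log/messages | grep -i error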

Example 5: The behaviour of tail -f can be replicated with the less command as well. First, open the file with less:

less /path/to/filename

Once the file is open, press Shift+F to start following it.

Example:

Mar 21 08:25:01 sanne-taggle CRON[5553]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
Mar 21 08:27:24 sanne-taggle wpa_supplicant[807]: wlan0: WPA: Group rekeying completed with da:3c:69:04:b1:21 [GTK=CCMP]
Mar 21 08:35:01 sanne-taggle CRON[5982]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
Waiting for data... (interrupt to abort)

To come out of follow mode in less, press Ctrl+C and then press q to quit.

Example 6: Another option, -s, which should always be used together with -f, determines the sleep interval. By default, tail -f checks the file for new data roughly once per second; if you wish to control that refresh rate, use the -s ("sleep") option and specify the interval in seconds:

tail -f -s <sleep interval in seconds> /path/to/file

Example :

# tail -f -s 5 /var/log/secure
Mar 20 12:43:27 sa su: pam_unix(su:session): session opened for user rabbitmq by (uid=0)
Mar 20 12:43:27 sa su: pam_unix(su:session): session closed for user rabbitmq
Mar 20 12:43:27 sa su: pam_unix(su:session): session opened for user rabbitmq by (uid=0)
Mar 20 12:43:28 sa su: pam_unix(su:session): session closed for user rabbitmq
Mar 20 12:43:28 sa su: pam_unix(su:session): session opened for user rabbitmq by (uid=0)

Example 7: As we saw in Example 3, we can open multiple files with the tail command. We can even watch two files grow at the same time by adding the -f option. Tail also prints a header showing which file each block of output comes from; the header lines begin with "==>".

tail -f /path/to/file1 /path/to/file2

Example:

# tail -f /var/log/secure /var/log/cron
==> /var/log/secure <==
Mar 20 13:13:19 sa su: pam_unix(su:session): session opened for user rabbitmq by (uid=0)
Mar 20 13:13:20 sa su: pam_unix(su:session): session closed for user rabbitmq
Mar 20 13:13:25 sa su: pam_unix(su:session): session opened for user rabbitmq by (uid=0)
Mar 20 13:13:26 sa su: pam_unix(su:session): session closed for user rabbitmq
Mar 20 13:13:26 sa su: pam_unix(su:session): session opened for user rabbitmq by (uid=0)
Mar 20 13:13:26 sa su: pam_unix(su:session): session closed for user rabbitmq
Mar 20 13:13:26 sa su: pam_unix(su:session): session opened for user rabbitmq by (uid=0)
Mar 20 13:13:27 sa su: pam_unix(su:session): session closed for user rabbitmq
Mar 20 13:13:27 sa su: pam_unix(su:session): session opened for user rabbitmq by (uid=0)
Mar 20 13:13:27 sa su: pam_unix(su:session): session closed for user rabbitmq
==> /var/log/cron <==
Mar 20 13:00:02 sa CROND[19837]: (root) CMD (/usr/lib64/sa/sa1 1 1)
Mar 20 13:01:01 sa CROND[20705]: (root) CMD (run-parts /etc/cron.hourly)
Mar 20 13:01:01 sa run-parts(/etc/cron.hourly)[20705]: starting 0anacron
Mar 20 13:01:01 sa run-parts(/etc/cron.hourly)[20714]: finished 0anacron
Mar 20 13:01:01 sa run-parts(/etc/cron.hourly)[20705]: starting logrotate
Mar 20 13:01:01 sa run-parts(/etc/cron.hourly)[20721]: finished logrotate
Mar 20 13:02:01 sa CROND[21587]: (root) CMD (/etc/puppet/scripts/update-federation-links.py)
Mar 20 13:10:01 sa CROND[28672]: (root) CMD (/usr/lib64/sa/sa1 1 1)
Mar 20 13:13:00 sa crontab[31759]: (root) LIST (root)
Mar 20 13:13:01 sa CROND[31817]: (root) CMD (/etc/puppet/scripts/update-federation-links.py)
^C

Example 8: If you want to remove this header, use the -q option for quiet mode.

Example :

# tail -fq /var/log/secure /var/log/cron
Mar 20 13:13:19 sa su: pam_unix(su:session): session opened for user rabbitmq by (uid=0)
Mar 20 13:13:20 sa su: pam_unix(su:session): session closed for user rabbitmq
Mar 20 13:13:25 sa su: pam_unix(su:session): session opened for user rabbitmq by (uid=0)
Mar 20 13:13:26 sa su: pam_unix(su:session): session closed for user rabbitmq
Mar 20 13:13:26 sa su: pam_unix(su:session): session opened for user rabbitmq by (uid=0)
Mar 20 13:13:26 sa su: pam_unix(su:session): session closed for user rabbitmq
Mar 20 13:13:26 sa su: pam_unix(su:session): session opened for user rabbitmq by (uid=0)
Mar 20 13:13:27 sa su: pam_unix(su:session): session closed for user rabbitmq
Mar 20 13:13:27 sa su: pam_unix(su:session): session opened for user rabbitmq by (uid=0)
Mar 20 13:13:27 sa su: pam_unix(su:session): session closed for user rabbitmq
Mar 20 13:01:01 sa CROND[20705]: (root) CMD (run-parts /etc/cron.hourly)
Mar 20 13:01:01 sa run-parts(/etc/cron.hourly)[20705]: starting 0anacron
Mar 20 13:01:01 sa run-parts(/etc/cron.hourly)[20714]: finished 0anacron
Mar 20 13:01:01 sa run-parts(/etc/cron.hourly)[20705]: starting logrotate
Mar 20 13:01:01 sa run-parts(/etc/cron.hourly)[20721]: finished logrotate
Mar 20 13:02:01 sa CROND[21587]: (root) CMD (/etc/puppet/scripts/update-federation-links.py)
Mar 20 13:10:01 sa CROND[28672]: (root) CMD (/usr/lib64/sa/sa1 1 1)
Mar 20 13:13:00 sa crontab[31759]: (root) LIST (root)
Mar 20 13:13:01 sa CROND[31817]: (root) CMD (/etc/puppet/scripts/update-federation-links.py)
Mar 20 13:20:01 sa CROND[8700]: (root) CMD (/usr/lib64/sa/sa1 1 1)

Example 9: Now what if I have a very large /var/log/messages and am only interested in the last certain number of bytes of data? The -c option does this easily. Observe the example below, where I view only the last 500 bytes of data from /var/log/messages:

tail -c <number of bytes> /path/to/file

Example :

# tail -c 500 /var/log/messages
 failed to connect to device: failed to connect to nws://admin@localhost:56025/?group=Administrators&;cert=%2Fvar%2Flib%2Fpuppet%2Fssl%2Fcerts%2Fc569b530-6b4f-4d72-8417-48a556bf55a5.pem&key=%2Fvar%2Flib%2Fpuppet%2Fssl%2Fprivate_keys%2Fc569b530-6b4f-4d72-8417-48a556bf55a5.pem
Mar 20 13:27:07 sa collectd[2249]: nwservice.py: Upstart service in status (status:) does not match nwipdbextractor
Mar 20 13:27:07 sa collectd[2249]: nwservice.py: Upstart service in status (status:) does not match nwbroker

Now, since we have been talking about tail for a while, let's talk about the head command.

Head command in Linux

The head command, in contrast to tail, prints the first 10 lines of a file. Up to this point in the post, head behaves much the same as tail did in all the previous examples, with the exception of the -f option: there is no -f in head, which is natural, since files always grow from the bottom.

Head Command Syntax In Linux

head [OPTION]... [FILE]...

Example 10: As mentioned earlier, head prints the first 10 lines of a file by default.

# head /etc/passwd
root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/sbin/nologin
daemon:x:2:2:daemon:/sbin:/sbin/nologin
adm:x:3:4:adm:/var/adm:/sbin/nologin
lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin
sync:x:5:0:sync:/sbin:/bin/sync
shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown
halt:x:7:0:halt:/sbin:/sbin/halt
mail:x:8:12:mail:/var/spool/mail:/sbin/nologin
uucp:x:10:14:uucp:/var/spool/uucp:/sbin/nologin

Example 11: Print first two lines of a file.

# head -n 2 /etc/passwd
root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/sbin/nologin

Example 12: When the number of lines is prefixed with a minus sign, head prints all lines of the file except the last <number of lines>, unlike Example 11, which shows you the first <number of lines> you provide.

head -n <number of lines preceded with "-"> /path/to/file

Example :

# head -n -27 /etc/passwd
root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/sbin/nologin
daemon:x:2:2:daemon:/sbin:/sbin/nologin
adm:x:3:4:adm:/var/adm:/sbin/nologin
lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin
sync:x:5:0:sync:/sbin:/bin/sync

As you can see, in this example it printed everything except the last 27 lines of /etc/passwd (in this case, the first six lines of the file).

Combine Head And Tail Command In Linux

Example 13: As the tail and head commands print different parts of files in an effective way, we can combine the two for more advanced filtering of file content. To print lines 16 through 20 of the /etc/passwd file (the last 5 of the first 20 lines), use the example below.

head -n 20 /etc/passwd | tail -n 5

Output:

syslog:x:101:104::/home/syslog:/bin/false
messagebus:x:102:106::/var/run/dbus:/bin/false
usbmux:x:103:46:usbmux daemon,,,:/home/usbmux:/bin/false
dnsmasq:x:104:65534:dnsmasq,,,:/var/lib/misc:/bin/false
kernoops:x:106:65534:Kernel Oops Tracking Daemon,,,:/:/bin/false

Example 14: Many people do not suggest the above method for printing a range of lines; the example is simply meant to show how the two commands can be combined. If you really want to print a particular line, use the sed command as shown below.

Example:

$ sed -n '5p' /etc/passwd
sync:x:4:65534:sync:/bin:/bin/sync
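If you want a range of lines rather than a single one, sed can print that directly as well; for instance, lines 16 through 20 (equivalent to the head/tail combination above):

sed -n '16,20p' /etc/passwd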

Ready to continue your Linux journey? Check out our Essentials of Linux System Administration course!

This is a classic article written by Jack Wallen from the Linux.com archives. For more great SysAdmin tips and techniques check out our free intro to Linux course.

Many years ago, when I first began with Linux, installing applications and keeping a system up to date was not an easy feat. In fact, if you wanted to tackle either task you were bound for the command line. For some new users this left their machines outdated or without the applications they needed. Of course, at the time, most everyone trying their hand at Linux knew they were getting into something that would require some work. That was simply the way it was. Fortunately, times and Linux have changed. Linux is now exponentially more user friendly, to the point where so much is automatic and point-and-click, and today's Linux hardly resembles yesterday's Linux.

But even though Linux has evolved into the user-friendly operating system it is, there are still some systems that are fundamentally different from their Windows counterparts. So it is always best to understand those systems in order to use them properly. Within the confines of this article you will learn how to keep your Linux system up to date. In the process you might also learn how to install an application or two.

There is one thing to understand about updating Linux: Not every distribution handles this process in the same fashion. In fact, some distributions are distinctly different, right down to the package file types they use for package management.

  • Ubuntu and Debian use .deb
  • Fedora, SuSE, and Mandriva use .rpm
  • Slackware uses .tgz archives which contain pre-built binaries
  • And of course there is also installing from source or pre-compiled .bin or .package files.
As you can see, there are a number of possible systems (and the above list is not even close to being all-inclusive). So, to make the task of covering this topic less epic, I will cover the Ubuntu and Fedora systems. I will touch on both the GUI as well as the command line tools for handling system updates.

Ubuntu Linux

Ubuntu Linux has become one of the most popular of all the Linux distributions. And through the process of updating a system, you should be able to tell exactly why this is the case. Ubuntu is very user friendly. Ubuntu uses two different tools for system update:
  • apt-get: Command line tool.
  • Update Manager: GUI tool.
Ubuntu Update Manager

The Update Manager is a nearly 100% automatic tool. With this tool you will not have to routinely check to see if there are updates available. Instead, you will know updates are available because the Update Manager will open on your desktop (see Figure 1) as soon as updates are released, with the check frequency depending upon their type:
  • Security updates: Daily
  • Non-security updates: Weekly

If you want to manually check for updates, you can do this by clicking the Administration sub-menu of the System menu and then selecting the Update Manager entry. When the Update Manager opens click the Check button to see if there are updates available.

Figure 1 shows a listing of updates for an Ubuntu 9.10 installation. As you can see, there are both Important Security Updates as well as Recommended Updates. If you want to get information about a particular update, you can select the update and then click on the Description of update dropdown.
In order to update the packages follow these steps:
  1. Check the updates you want to install. By default all updates are selected.
  2. Click the Install Updates button.
  3. Enter your user (sudo) password.
  4. Click OK.

The updates will proceed and you can continue on with your work. Some updates may require you either to log out of your desktop and log back in, or to reboot the machine. There is a new tool in development (Ksplice) that allows even a kernel update without requiring a reboot.
Once all of the updates are complete, the Update Manager main window will return, reporting that Your system is up to date.

Updating via command line

Now let's take a look at the command line tools for updating your system. The Ubuntu package management system is called apt. Apt is a very powerful tool that can completely manage your system's packages via the command line. Using the command line tool has one drawback: in order to check to see if you have updates, you have to run it manually. Let's take a look at how to update your system with the help of apt. Follow these steps (a combined one-line variant is shown after the list):

  1. Open up a terminal window.
  2. Issue the command sudo apt-get update to refresh the package lists, then sudo apt-get upgrade.
  3. Enter your user’s password.
  4. Look over the list of available updates (see Figure 2) and decide if you want to go through with the entire upgrade.
  5. To accept all updates press the 'y' key (no quotes) and hit Enter.
  6. Watch as the update happens.
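
If you prefer to do it in one shot, the two commands can be chained on a single line; the -y flag answers the confirmation prompt automatically (a minimal sketch):

sudo apt-get update && sudo apt-get upgrade -y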

That’s it. Your system is now up to date. Let’s take a look at how the same process happens on Fedora (Fedora 12 to be exact).

Fedora Linux

Fedora is a direct descendant of Red Hat Linux, so it is the beneficiary of the Red Hat Package Management system (rpm). Like Ubuntu, Fedora can be upgraded by:
  • yum: Command line tool.
  • GNOME (or KDE) PackageKit: GUI tool.

GNOME PackageKit

Depending upon your desktop, you will either use the GNOME or the KDE front-end for PackageKit. In order to open up this tool, you simply go to the Administration sub-menu of the System menu and select the Software Update entry. When the tool opens (see Figure 3), you will see the list of updates. To get information about a particular update, all you need to do is select a specific package and the information will be displayed in the bottom pane.

To go ahead with the update click the Install Updates button. As the process happens a progress bar will indicate where GNOME (or KDE) PackageKit is in the steps. The steps are:

  1. Resolving dependencies.
  2. Downloading packages.
  3. Testing changes.
  4. Installing updates.

When the process is complete, GNOME (or KDE) PackageKit will report that your system is up to date. Click the OK button when prompted.

Now let’s take a look at upgrading Fedora via the command line. As stated earlier, this is done with the help of the yum command. In order to take care of this, follow these steps:

Updating with the help of yum

  1. Open up a terminal window (Do this by going to the System Tools sub-menu of the Applications menu and select Terminal).
  2. Enter the su command to change to the super user.
  3. Type your super user password and hit Enter.
  4. Issue the command yum update and yum will check to see what packages are available for update.
  5. Look through the listing of updates (see Figure 4).
  6. If you want to go through with the update enter ‘y’ (no quotes) and hit Enter.
  7. Sit back and watch the updates happen.
  8. Exit out of the root user command prompt by typing “exit” (no quotes) and hitting Enter.
  9. Close the terminal when complete.

Your Fedora system is now up to date.
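Note that on current Fedora releases yum has been superseded by dnf, which follows the same pattern; a minimal equivalent of the update step would be:

sudo dnf upgrade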

Final Thoughts

Granted only two distributions were touched on here, but this should illustrate how easily a Linux installation is updated. Although the tools might not be universal, the concepts are. Whether you are using Ubuntu, OpenSuSE, Slackware, Fedora, Mandriva, or anything in-between, the above illustrations should help you through updating just about any Linux distribution. And hopefully this tutorial helps to show you just how user-friendly the Linux operating system has become.

Ready to continue your Linux journey? Check out our free intro to Linux course!

This is a classic article written by Joe “Zonker” Brockmeier from the Linux.com archives. For more great SysAdmin tips and techniques check out our free intro to Linux course.

The first step is often the hardest, but don’t let that stop you. If you’ve ever wanted to learn how to write a shell script but didn’t know where to start, this is your lucky day.

If this is your first time writing a script, don’t worry — shell scripting is not that complicated. That is, you can do some complicated things with shell scripts, but you can get there over time. If you know how to run commands at the command line, you can learn to write simple scripts in just 10 minutes. All you need is a text editor and an idea of what you want to do. Start small and use scripts to automate small tasks. Over time you can build on what you know and wind up doing more and more with scripts.

Starting Off

Each script starts with a “shebang” and the path to the shell that you want the script to use, like so:

#!/bin/bash

The "#!" combo is called a shebang by most Unix geeks. It tells the system which interpreter should be used to run the rest of the script; the interpreter itself ignores the line, since it begins with the # comment character. Confused? Scripts can be written for all kinds of interpreters: bash, tcsh, zsh, or other shells, or for Perl, Python, and so on. You could even omit that line if you wanted to run the script by sourcing it at the shell, but let's save ourselves some trouble and add it to allow scripts to be run non-interactively.

What’s next? You might want to include a comment or two about what the script is for. Preface comments with the hash (#) character:

#!/bin/bash
# A simple script

Let’s say you want to run an rsync command from the script, rather than typing it each time. Just add the rsync command to the script that you want to use:

#!/bin/bash
# rsync script
rsync -avh --exclude="*.bak" /home/user/Documents/ /media/diskid/user_backup/Documents/

Save your file, and then make sure that it’s set executable. You can do this using the chmod utility, which changes a file’s mode. To set it so that a script is executable by you and not the rest of the users on a system, use “chmod 700 scriptname” — this will let you read, write, and execute (run) the script — but only your user. To see the results, run ls -lh scriptname and you’ll see something like this:

-rwx------ 1 jzb jzb   21 2010-02-01 03:08 echo

The first column of rights, rwx, shows that the owner of the file (jzb) has read, write, and execute permissions. The other columns with a dash show that other users have no rights for that file at all.

Variables

The above script is useful, but it has hard-coded paths. That might not be a problem, but if you want to write longer scripts that reference paths often, you probably want to utilize variables. Here’s a quick sample:

#!/bin/bash
# rsync using variables

SOURCEDIR=/home/user/Documents/
DESTDIR=/media/diskid/user_backup/Documents/

rsync -avh --exclude="*.bak" $SOURCEDIR $DESTDIR

There’s not a lot of benefit if you only reference the directories once, but if they’re used multiple times, it’s much easier to change them in one location than changing them throughout a script.

Taking Input

Non-interactive scripts are useful, but what if you need to give the script new information each time it’s run? For instance, what if you want to write a script to modify a file? One thing you can do is take an argument from the command line. So, for instance, when you run “script foo” the script will take the name of the first argument (foo):

#!/bin/bash

echo $1

Here bash will read the command line and echo (print) the first argument — that is, the first string after the command itself.
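For example, if you saved the script above as echo-arg.sh (a hypothetical name) and made it executable, running it with an argument would look like this:

$ ./echo-arg.sh foo
foo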

You can also use read to accept user input. Let’s say you want to prompt a user for input:

#!/bin/bash

echo -e "Please enter your name: "
read name
echo "Nice to meet you $name"

That script will wait for the user to type in their name (or any other input, for that matter) and use it as the variable $name. Pretty simple, yeah? Let’s put all this together into a script that might be useful. Let’s say you want to have a script that will back up a directory you specify on the command line to a remote host:

#!/bin/bash

echo -e "What directory would you like to back up?" 
read directory

# NOTE: user@remotehost.example.com is a placeholder; use your own SSH user and host
DESTDIR="user@remotehost.example.com:$directory/"

rsync --progress -avze ssh --exclude="*.iso" $directory $DESTDIR

That script will read in the input from the command line and substitute it as the destination directory at the target system, as well as the local directory that will be synced. It might look a bit complex as a final script, but each of the bits that you need to know to put it together are pretty simple. A little trial and error and you’ll be creating useful scripts of your own.

Of course, this is just scraping the surface of bash scripting. If you’re interested in learning more, be sure to check out the Bash Guide for Beginners.

Ready to continue your Linux journey? Check out our free intro to Linux course!

This is a classic article written by Jack Wallen from the Linux.com archives. For more great SysAdmin tips and techniques check out our free intro to Linux course.

At some point in your career as a Linux administrator, you are going to have to view log files. After all, they are there for one very important reason…to help you troubleshoot an issue. In fact, every seasoned administrator will immediately tell you that the first thing to be done, when a problem arises, is to view the logs.

And there are plenty of logs to be found: logs for the system, logs for the kernel, for package managers, for Xorg, for the boot process, for Apache, for MySQL… For nearly anything you can think of, there is a log file.

Most log files can be found in one convenient location: /var/log. These are all system and service logs, those which you will lean on heavily when there is an issue with your operating system or one of the major services. For desktop app-specific issues, log files will be written to different locations (e.g., Thunderbird writes crash reports to ‘~/.thunderbird/Crash Reports’). Where a desktop application will write logs will depend upon the developer and if the app allows for custom log configuration.

We are going to focus on system logs, as that is where the heart of Linux troubleshooting lies. And the key issue here is, how do you view those log files?

Fortunately there are numerous ways in which you can view your system logs, all quite simply executed from the command line.

/var/log

This is such a crucial folder on your Linux systems. Open up a terminal window and issue the command cd /var/log. Now issue the command ls and you will see the logs housed within this directory (Figure 1).

Figure 1: A listing of log files found in /var/log/.

Now, let’s take a peek into one of those logs.

Viewing logs with less

One of the most important logs contained within /var/log is syslog. This particular log file logs everything except auth-related messages. Say you want to view the contents of that particular log file. To do that, you could quickly issue the command less /var/log/syslog. This command will open the syslog log file to the top. You can then use the arrow keys to scroll down one line at a time, the spacebar to scroll down one page at a time, or the mouse wheel to easily scroll through the file.

The one problem with this method is that syslog can grow fairly large; and, considering what you're looking for will most likely be at or near the bottom, you might not want to spend the time scrolling line by line or page by page to reach the end. With syslog open in the less command, you can also hit the [Shift]+[g] combination to jump immediately to the end of the log file. The end will be denoted by (END). You can then scroll up with the arrow keys or the scroll wheel to find exactly what you want.

This, of course, isn’t terribly efficient.
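If you already know you want to start at the bottom, you can also have less jump straight to the end of the file as it opens it:

less +G /var/log/syslog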

Viewing logs with dmesg

The dmesg command prints the kernel ring buffer. By default, the command will display all messages from the kernel ring buffer. From the terminal window, issue the command dmesg and the entire kernel ring buffer will print out (Figure 2).

Figure 2: A USB external drive displaying an issue that may need to be explored.

Fortunately, there is a built-in control mechanism that allows you to print out only certain facilities (such as daemon).

Say you want to view log entries for the user facility. To do this, issue the command dmesg --facility=user. If anything has been logged to that facility, it will print out.

Unlike the less command, issuing dmesg will display the full contents of the log and send you to the end of the file. You can always use your scroll wheel to browse through the buffer of your terminal window (if applicable). Instead, you’ll want to pipe the output of dmesg to the less command like so:

dmesg | less

The above command will print out the contents of dmesg and allow you to scroll through the output just as you did viewing a standard log with the less command.
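If your dmesg comes from util-linux (as it does on most modern distributions), you can also filter by severity level, which pairs nicely with less:

dmesg --level=err,warn | less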

Viewing logs with tail

The tail command is probably one of the single most handy tools you have at your disposal for the viewing of log files. What tail does is output the last part of files. So, if you issue the command tail /var/log/syslog, it will print out only the last few lines of the syslog file.

But wait, the fun doesn’t end there. The tail command has a very important trick up its sleeve, by way of the -f option. When you issue the command tail -f /var/log/syslog, tail will continue watching the log file and print out the next line written to the file. This means you can follow what is written to syslog, as it happens, within your terminal window (Figure 3).
Figure 3: Following /var/log/syslog using the tail command.

Using tail in this manner is invaluable for troubleshooting issues.

To escape the tail command (when following a file), hit the [Ctrl]+[c] combination.

You can also instruct tail to follow only a specific number of lines. Say you only want to view the last five lines written to syslog; for that you could issue the command:

tail -f -n 5 /var/log/syslog

The above command would follow input to syslog and only print out the most recent five lines. As soon as a new line is written to syslog, it would remove the oldest from the top. This is a great way to make the process of following a log file even easier. I strongly recommend not using this to view anything less than four or five lines, as you’ll wind up getting input cut off and won’t get the full details of the entry.

There are other tools

You’ll find plenty of other commands (and even a few decent GUI tools) to enable the viewing of log files. Look to more, grep, head, cat, multitail, and System Log Viewer to aid you in your quest to troubleshooting systems via log files.

Advance your career with Linux system administration skills. Check out the Essentials of System Administration course from The Linux Foundation.

This is a classic article written by Carla Schroder from the Linux.com archives. For more great SysAdmin tips and techniques check out our free intro to Linux course.

grub command shell

Once upon a time we had legacy GRUB, the Grand Unified Linux Bootloader version 0.97. Legacy GRUB had many virtues, but it became old and its developers did yearn for more functionality, and thus did GRUB 2 come into the world.

GRUB 2 is a major rewrite with several significant differences. It boots removable media, and can be configured with an option to enter your system BIOS. It's more complicated to configure with all kinds of scripts to wade through, and instead of having a nice fairly simple /boot/grub/menu.lst file with all configurations in one place, the default is /boot/grub/grub.cfg. Which you don't edit directly, oh no, for this is not for mere humans to touch, but only other scripts. We lowly humans may edit /etc/default/grub, which controls mainly the appearance of the GRUB menu. We may also edit the scripts in /etc/grub.d/. These are the scripts that boot your operating systems, control external applications such as memtest and os_prober, and handle theming. /boot/grub/grub.cfg is built from /etc/default/grub and /etc/grub.d/* when you run the update-grub command, which you must run every time you make changes.

The good news is that the update-grub script is reliable for finding kernels, boot files, and adding all operating systems to your GRUB boot menu, so you don’t have to do it manually.

We’re going to learn how to fix two of the more common failures. When you boot up your system and it stops at the grub> prompt, that is the full GRUB 2 command shell. That means GRUB 2 started normally and loaded the normal.mod module (and other modules which are located in /boot/grub/[arch]/), but it didn’t find your grub.cfg file. If you see grub rescue> that means it couldn’t find normal.mod, so it probably couldn’t find any of your boot files.

How does this happen? The kernel might have changed drive assignments or you moved your hard drives, you changed some partitions, or installed a new operating system and moved things around. In these scenarios your boot files are still there, but GRUB can’t find them. So you can look for your boot files at the GRUB prompt, set their locations, and then boot your system and fix your GRUB configuration.

GRUB 2 Command Shell

The GRUB 2 command shell is just as powerful as the shell in legacy GRUB. You can use it to discover boot images, kernels, and root filesystems. In fact, it gives you complete access to all filesystems on the local machine regardless of permissions or other protections. Which some might consider a security hole, but you know the old Unix dictum: whoever has physical access to the machine owns it.

When you’re at the grub> prompt, you have a lot of functionality similar to any command shell such as history and tab-completion. The grub rescue> mode is more limited, with no history and no tab-completion.

If you are practicing on a functioning system, press C when your GRUB boot menu appears to open the GRUB command shell. You can stop the bootup countdown by scrolling up and down your menu entries with the arrow keys. It is safe to experiment at the GRUB command line because nothing you do there is permanent. If you are already staring at the grub> or grub rescue> prompt then you're ready to rock.

The next few commands work with both grub> and grub rescue>. The first command you should run invokes the pager, for paging long command outputs:

grub> set pager=1

There must be no spaces on either side of the equals sign. Now let’s do a little exploring. Type ls to list all partitions that GRUB sees:

grub> ls
(hd0) (hd0,msdos2) (hd0,msdos1)

What's all this msdos stuff? That means this system has the old-style MS-DOS partition table, rather than the shiny new Globally Unique Identifiers partition table (GPT). (See Using the New GUID Partition Table in Linux (Goodbye Ancient MBR).) If you're running GPT it will say (hd0,gpt1). Now let's snoop. Use the ls command to see what files are on your system:

grub> ls (hd0,1)/
lost+found/ bin/ boot/ cdrom/ dev/ etc/ home/  lib/
lib64/ media/ mnt/ opt/ proc/ root/ run/ sbin/ 
srv/ sys/ tmp/ usr/ var/ vmlinuz vmlinuz.old 
initrd.img initrd.img.old

Hurrah, we have found the root filesystem. You can omit the msdos and gpt labels. If you leave off the slash it will print information about the partition. You can read any file on the system with the cat command:

grub> cat (hd0,1)/etc/issue
Ubuntu 14.04 LTS \n \l

Reading /etc/issue could be useful on a multi-boot system for identifying your various Linuxes.
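If you are not yet sure which partition holds your boot files, keep probing with ls. For example, assuming the layout above, listing the grub directory should show grub.cfg and the module directory:

grub> ls (hd0,1)/boot/grub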

Booting From grub>

This is how to set the boot files and boot the system from the grub> prompt. We know from running the ls command that there is a Linux root filesystem on (hd0,1), and you can keep searching until you verify where /boot/grub is. Then run these commands, using your own root partition, kernel, and initrd image:

grub> set root=(hd0,1)
grub> linux /boot/vmlinuz-3.13.0-29-generic root=/dev/sda1
grub> initrd /boot/initrd.img-3.13.0-29-generic
grub> boot

The first line sets the partition that the root filesystem is on. The second line tells GRUB the location of the kernel you want to use. Start typing /boot/vmli, and then use tab-completion to fill in the rest. Type root=/dev/sdX to set the location of the root filesystem. Yes, this seems redundant, but if you leave this out you’ll get a kernel panic. How do you know the correct partition? hd0,1 = /dev/sda1. hd1,1 = /dev/sdb1. hd3,2 = /dev/sdd2. I think you can extrapolate the rest.

The third line sets the initrd file, which must be the same version number as the kernel.

The fourth line boots your system.

On some Linux systems the current kernels and initrds are symlinked into the top level of the root filesystem:

$ ls -l /
vmlinuz -> boot/vmlinuz-3.13.0-29-generic
initrd.img -> boot/initrd.img-3.13.0-29-generic

So you could boot from grub> like this:

grub> set root=(hd0,1)
grub> linux /vmlinuz root=/dev/sda1
grub> initrd /initrd.img
grub> boot

Booting From grub rescue>

If you're in the GRUB rescue shell, the commands are different, and you have to load the normal.mod and linux.mod modules:

grub rescue> set prefix=(hd0,1)/boot/grub
grub rescue> set root=(hd0,1)
grub rescue> insmod normal
grub rescue> normal
grub rescue> insmod linux
grub rescue> linux /boot/vmlinuz-3.13.0-29-generic root=/dev/sda1
grub rescue> initrd /boot/initrd.img-3.13.0-29-generic
grub rescue> boot

Tab-completion should start working after you load both modules.

Making Permanent Repairs

When you have successfully booted your system, run these commands to fix GRUB permanently:

# update-grub
Generating grub configuration file ...
Found background: /usr/share/images/grub/Apollo_17_The_Last_Moon_Shot_Edit1.tga
Found background image: /usr/share/images/grub/Apollo_17_The_Last_Moon_Shot_Edit1.tga
Found linux image: /boot/vmlinuz-3.13.0-29-generic
Found initrd image: /boot/initrd.img-3.13.0-29-generic
Found linux image: /boot/vmlinuz-3.13.0-27-generic
Found initrd image: /boot/initrd.img-3.13.0-27-generic
Found linux image: /boot/vmlinuz-3.13.0-24-generic
Found initrd image: /boot/initrd.img-3.13.0-24-generic
Found memtest86+ image: /boot/memtest86+.elf
Found memtest86+ image: /boot/memtest86+.bin
done
# grub-install /dev/sda
Installing for i386-pc platform.
Installation finished. No error reported.

When you run grub-install remember you’re installing it to the boot sector of your hard drive and not to a partition, so do not use a partition number like /dev/sda1.

But It Still Doesn’t Work

If your system is so messed up that none of this works, try the Super GRUB2 live rescue disk. The official GNU GRUB Manual 2.00 should also be helpful.

Advance your career with Linux system administration skills. Check out the Essentials of System Administration course from The Linux Foundation.

This is a classic article written by Swapnil Bhartiya from the Linux.com archives. For more great SysAdmin tips and techniques check out our free intro to Linux course.

If you run a live or home server, moving files between local machines or two remote machines is a basic requirement. There are many ways to achieve that. In this article, we talk about scp (secure copy command) that encrypts the transferred file and password so no one can snoop. With scp you don’t have to start an FTP session or log into the system.

The scp tool relies on SSH (Secure Shell) to transfer files, so all you need is the username and password for the source and target systems. Another advantage is that with scp you can move files between two remote servers from your local machine, in addition to transferring data between local and remote machines. In that case you need usernames and passwords for both servers. Unlike rsync, you don't have to log into either of the servers to transfer data from one machine to another.

This tutorial is aimed at new Linux users, so I will keep things as simple as possible. Let’s get started.

Copy a single file from the local machine to a remote machine:

The scp command needs a source and destination to copy files from one location to another location. This is the pattern that we use:

scp localmachine/path_to_the_file username@server_ip:/path_to_remote_directory

In the following example I am copying a local file from my macOS system to my Linux server (macOS, being a UNIX operating system, has native support for all UNIX/Linux tools).

scp /Volumes/MacDrive/Distros/fedora.iso swapnil@10.0.0.75:/media/prim_5/media_server/

Here, ‘swapnil’ is the user on the server and 10.0.0.75 is the server IP. It will ask you to provide the password for that user, and then copy the file securely.

I can do the same from my local Linux machine:

scp /home/swapnil/Downloads/fedora.iso swapnil@10.0.0.75:/media/prim_5/media_server/

If you are running Windows 10, then you can use Ubuntu bash on Windows to copy files from the Windows system to Linux server:

scp /mnt/c/Users/swapnil/Downloads/fedora.iso swapnil@10.0.0.75:/media/prim_5/media_server/

Copy a local directory to a remote server:

If you want to copy the entire local directory to the server, then you can add the -r flag to the command:

scp -r localmachine/path_to_the_directory username@server_ip:/path_to_remote_directory/

Make sure that the source directory doesn't have a trailing forward slash at the end of its path; the destination path, on the other hand, *must* have a trailing forward slash.

Copy all files in a local directory to a remote directory

What if you only want to copy all the files inside a local directory to a remote directory? It's simple: just add a forward slash and * at the end of the source directory, and give the path of the destination directory. Don't forget to add the -r flag to the command:

scp -r localmachine/path_to_the_directory/* username@server_ip:/path_to_remote_directory/

Copying files from remote server to local machine

If you want to make a copy of a single file, a directory, or all files on the server to the local machine, just follow the same examples above, but exchange the source and destination.

Copy a single file:

scp username@server_ip:/path_to_remote_directory local_machine/path_to_the_file

Copy a remote directory to a local machine:

scp -r username@server_ip:/path_to_remote_directory local-machine/path_to_the_directory/

Make sure that the source directory doesn't have a trailing forward slash at the end of its path; the destination path, on the other hand, *must* have a trailing forward slash.

Copy all files in a remote directory to a local directory:

scp -r username@server_ip:/path_to_remote_directory/* local-machine/path_to_the_directory/

Copy files from one directory of the same server to another directory securely from local machine

Usually I ssh into that machine and then use rsync command to perform the job, but with SCP, I can do it easily without having to log into the remote server.

Copy a single file:

scp username@server_ip:/path_to_the_remote_file username@server_ip:/path_to_destination_directory/

Copy a directory from one location on a remote server to a different location on the same server:

scp -r username@server_ip:/path_to_the_remote_directory username@server_ip:/path_to_destination_directory/

Copy all files in a remote directory to another directory on the same server:

scp -r username@server_ip:/path_to_source_directory/* username@server_ip:/path_to_the_destination_directory/

Copy files from one remote server to another remote server from a local machine

Normally I would have to ssh into one server in order to use the rsync command to copy files to another server. With the scp command, I can move files between two remote servers directly from my local machine:

Copy a single file:

scp username@server1_ip:/path_to_the_remote_file username@server2_ip:/path_to_destination_directory/

Copy a directory from one remote server to a different location on another remote server:

scp username@server1_ip:/path_to_the_remote_file username@server2_ip:/
  path_to_destination_directory/

Copy all files in a directory on one remote server to a directory on the other server:

scp -r username@server1_ip:/path_to_source_directory/* username@server2_ip:/
  path_to_the_destination_directory/
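One thing to keep in mind: by default, scp tries to move the data directly from the first server to the second, which only works if the two servers can reach and authenticate with each other. If that’s not the case, adding the -3 flag (a standard scp option) routes the transfer through your local machine instead:

scp -3 username@server1_ip:/path_to_the_remote_file username@server2_ip:/
  path_to_destination_directory/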

Conclusion

As you can see, once you understand how things work, it’s quite easy to move your files around. That’s what Linux is all about: invest a little time in understanding the basics, and the rest is a breeze!

Ready to continue your Linux journey? Check out our free intro to Linux course!

This is a classic article written by Jack Wallen from the Linux.com archives. For more great SysAdmin tips and techniques check out our free intro to Linux course.

The Linux operating system includes a plethora of tools, all of which are ready to help you administer your systems. From simple file and directory tools to very complex security commands, there’s not much you can’t do on Linux. And, although regular desktop users may not need to become familiar with these tools at the command line, they’re mandatory for Linux admins. Why? First, you will have to work with a GUI-less Linux server at some point. Second, command-line tools often offer far more power and flexibility than their GUI alternative.

Determining memory usage is a skill you might need should a particular app go rogue and commandeer system memory. When that happens, it’s handy to know you have a variety of tools available to help you troubleshoot. Or, maybe you need to gather information about a Linux swap partition or detailed information about your installed RAM? There are commands for that as well. Let’s dig into the various Linux command-line tools to help you check into system memory usage. These tools aren’t terribly hard to use, and in this article, I’ll show you five different ways to approach the problem.

I’ll be demonstrating on the Ubuntu Server 18.04 platform. You should, however, find all of these commands available on your distribution of choice. Even better, you shouldn’t need to install a single thing (as most of these tools are included).

With that said, let’s get to work.

top

I want to start out with the most obvious tool. The top command provides a dynamic, real-time view of a running system. Included in that system summary is the ability to check memory usage on a per-process basis. That’s very important, as you could easily have multiple iterations of the same command consuming different amounts of memory. Although you won’t find a web browser running on a headless server, say you’ve opened Chrome on your desktop and noticed your system slowing down. Issue the top command to see that Chrome has numerous processes running (one per tab – Figure 1).

Chrome isn’t the only app to show multiple processes. You see the Firefox entry in Figure 1? That’s the primary process for Firefox, whereas the Web Content processes are the open tabs. At the top of the output, you’ll see the system statistics. On my machine (a System76 Leopard Extreme), I have a total of 16GB of RAM available, of which just over 10GB is in use. You can then comb through the list and see what percentage of memory each process is using.

One of the things top is very good for is discovering Process ID (PID) numbers of services that might have gotten out of hand. With those PIDs, you can then set about to troubleshoot (or kill) the offending tasks.
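For example, once top has revealed the PID of a runaway process, you can end it with the kill command (the PID below is just a placeholder):

# Ask the process to exit cleanly (replace 12345 with the offending PID)
kill 12345

# If it refuses, force it
kill -9 12345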

If you want to make top a bit more memory-friendly, issue the command top -o %MEM, which will cause top to sort all processes by memory used (Figure 2).

The top command also gives you a real-time update on how much of your swap space is being used.

free

Sometimes, however, top can be a bit much for your needs. You may only need to see the amount of free and used memory on your system. For that, there is the free command. The free command displays:

  • Total amount of free and used physical memory
  • Total amount of swap memory in the system
  • Buffers and caches used by the kernel

From your terminal window, issue the command free. The output of this command is not in real time. Instead, what you’ll get is an instant snapshot of the free and used memory in that moment (Figure 3).
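If you’d rather have that snapshot refresh on its own, you can wrap free in the watch utility (part of the procps package on most distributions), which re-runs the command at a fixed interval:

# Re-run free every two seconds; press Ctrl+C to stop
watch -n 2 free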

You can, of course, make free a bit more user-friendly by adding the -m option, like so: free -m. This will report the memory usage in MB (Figure 4).

Of course, if your system is even remotely modern, you’ll want to use the -g option (gigabytes), as in free -g.

If you need memory totals, you can add the -t option like so: free -mt. This will simply total the amount of memory in the columns (Figure 5).
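Most current versions of free also accept the -h (human-readable) option, which picks a sensible unit for each column for you:

free -h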

vmstat

Another very handy tool to have at your disposal is vmstat. This particular command is a one-trick pony that reports virtual memory statistics. The vmstat command will report stats on:

  • Processes
  • Memory
  • Paging
  • Block IO
  • Traps
  • Disks
  • CPU

The best way to issue vmstat is by using the -s switch, like vmstat -s. This will report your stats in a single column (which is so much easier to read than the default report). The vmstat command will give you more information than you need (Figure 6), but more is always better (in such cases).
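Because the -s switch prints one statistic per line, it also pairs nicely with grep when you only care about a handful of values; for example:

# Show only the memory-related lines from the single-column report
vmstat -s | grep -i memory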

dmidecode

What if you want to find out detailed information about your installed system RAM? For that, you could use the dmidecode command. This particular tool is the DMI table decoder, which dumps a system’s DMI table contents into a human-readable format. If you’re unsure as to what the DMI table is, it’s a means to describe what a system is made of (as well as possible evolutions for a system).

To run the dmidecode command, you do need sudo privileges. So issue the command sudo dmidecode -t 17. The output of the command (Figure 7) can be lengthy, as it displays information for all memory-type devices. So if you don’t have the ability to scroll, you might want to send the output of that command to a file, like so: sudo dmidecode -t 17 > dmi_info, or pipe it to the less command, as in sudo dmidecode -t 17 | less.
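If the numeric table types are hard to remember, dmidecode also accepts keywords; the memory keyword covers the memory-device table (type 17) along with a few related memory tables, so the output is a superset of the command above:

sudo dmidecode -t memory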

/proc/meminfo

You might be asking yourself, “Where do these commands get this information from?”. In some cases, they get it from the /proc/meminfo file. Guess what? You can read that file directly with the command less /proc/meminfo. By using the less command, you can scroll up and down through that lengthy output to find exactly what you need (Figure 8).

One thing you should know about /proc/meminfo: This is not a real file. Instead, /proc/meminfo is a virtual file that contains real-time, dynamic information about the system. In particular, you’ll want to check the values for:

  • MemTotal
  • MemFree
  • MemAvailable
  • Buffers
  • Cached
  • SwapCached
  • SwapTotal
  • SwapFree

If you want to get fancy with /proc/meminfo, you can use it in conjunction with the egrep command like so: egrep --color 'Mem|Cache|Swap' /proc/meminfo. This will produce an easy-to-read listing of all entries that contain Mem, Cache, and Swap … with a splash of color (Figure 9).

Keep learning

One of the first things you should do is read the manual pages for each of these commands (so man top, man free, man vmstat, man dmidecode). Starting with the man pages for commands is always a great way to learn so much more about how a tool works on Linux.

Ready to continue your Linux journey? Check out our free intro to Linux course!

This is a classic article written by Paul Brown from the Linux.com archives. For more great SysAdmin tips and techniques check out our free intro to Linux course.

Back in 1996 I learned how to install software on my spanking new Linux before really understanding the topography of the filesystem. This turned out to be a problem, not so much for programs, because they would just magically work even though I hadn’t a clue of where the actual executable files landed. The problem was the documentation.

You see, back then, Linux was not the intuitive, user-friendly system it is today. You had to read a lot. You had to know things about the frequency rate of your CRT monitor and the ins and outs of your noisy dial-up modem, among hundreds of other things.  I soon realized I would need to spend some time getting a handle on how the directories were organized and what all their exotic names like /etc (not for miscellaneous files), /usr (not for user files), and /bin (not a trash can) meant.

This tutorial will help you get up to speed faster than I did.

Structure

It makes sense to explore the Linux filesystem from a terminal window, not because the author is a grumpy old man and resents new kids and their pretty graphical tools — although there is some truth to that — but because a terminal, despite being text-only, has better tools to show the map of Linux’s directory tree.

In fact, that is the name of the first tool you’ll install to help you on the way: tree. If you are using Ubuntu or Debian, you can do:

sudo apt install tree

On Red Hat or Fedora, do:

sudo dnf install tree

For SUSE/openSUSE use zypper:

sudo zypper install tree

For Arch-like distros (Manjaro, Antergos, etc.) use:

sudo pacman -S tree

… and so on.

Once installed, stay in your terminal window and run tree like this:

tree /

The / in the instruction above refers to the root directory. The root directory is the one from which all other directories branch off. When you run tree and tell it to start with /, you will see the whole directory tree, all directories and all the subdirectories in the whole system, with all their files, fly by.

If you have been using your system for some time, this may take a while, because, even if you haven’t generated many files yourself, a Linux system and its apps are always logging, caching, and storing temporary files. The number of entries in the file system can grow quite quickly.

Don’t feel overwhelmed, though. Instead, try this:

tree -L 1 /

And you should see what is shown in Figure 1.

The instruction above can be translated as “show me only the 1st Level of the directory tree starting at / (root)“. The -L option tells tree how many levels down you want to see.

Most Linux distributions will show you the same or a very similar layout to what you can see in the image above. This means that even if you feel confused now, master this, and you will have a handle on most, if not all, Linux installations in the whole wide world.

To get you started on the road to mastery, let’s look at what each directory is used for. While we go through each, you can peek at their contents using ls.

Directories

From top to bottom, the directories you are seeing are as follows.

/bin

/bin is the directory that contains binaries, that is, some of the applications and programs you can run. You will find the ls program mentioned above in this directory, as well as other basic tools for making and removing files and directories, moving them around, and so on. There are more bin directories in other parts of the file system tree, but we’ll be talking about those in a minute.

/boot

The /boot directory contains files required for starting your system. Do I have to say this? Okay, I’ll say it: DO NOT TOUCH!. If you mess up one of the files in here, you may not be able to run your Linux and it is a pain to repair. On the other hand, don’t worry too much about destroying your system by accident: you have to have superuser privileges to do that.

/dev

/dev contains device files. Many of these are generated at boot time or even on the fly. For example, if you plug in a new webcam or a USB pendrive into your machine, a new device entry will automagically pop up here.

/etc

/etc is the directory where names start to get confusing. /etc gets its name from the earliest Unixes and it was literally “et cetera” because it was the dumping ground for system files administrators were not sure where else to put.

Nowadays, it would be more appropriate to say that etc stands for “Everything to configure,” as it contains most, if not all system-wide configuration files. For example, the files that contain the name of your system, the users and their passwords, the names of machines on your network and when and where the partitions on your hard disks should be mounted are all in here. Again, if you are new to Linux, it may be best if you don’t touch too much in here until you have a better understanding of how things work.
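Looking, however, is perfectly safe. For example, on most modern distributions these two files exist and are harmless to read:

# The name of your system
cat /etc/hostname

# Where and when your partitions get mounted
less /etc/fstab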

/home

/home is where you will find your users’ personal directories. In my case, under /home there are two directories: /home/paul, which contains all my stuff; and /home/guest, in case anybody needs to borrow my computer.

/lib

/lib is where libraries live. Libraries are files containing code that your applications can use. They contain snippets of code that applications use to draw windows on your desktop, control peripherals, or send files to your hard disk.

There are more lib directories scattered around the file system, but this one, the one hanging directly off of / is special in that, among other things, it contains the all-important kernel modules. The kernel modules are drivers that make things like your video card, sound card, WiFi, printer, and so on, work.
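If you are curious which kernel modules are loaded right now, the lsmod command will list them, no superuser powers required:

# List the currently loaded kernel modules (pipe through less, the list is long)
lsmod | less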

/media

The /media directory is where external storage will be automatically mounted when you plug it in and try to access it. As opposed to most of the other items on this list, /media does not hail back to the 1970s, mainly because inserting and detecting storage (pendrives, USB hard disks, SD cards, external SSDs, etc.) on the fly, while a computer is running, is a relatively new thing.

/mnt

The /mnt directory, however, is a bit of remnant from days gone by. This is where you would manually mount storage devices or partitions. It is not used very often nowadays.

/opt

The /opt directory is often where software you compile yourself (that is, software you build from source code rather than install from your distribution’s repositories) ends up. Applications land in the /opt/bin directory and libraries in the /opt/lib directory.

A slight digression: another place where applications and libraries end up is /usr/local. When software gets installed here, there will also be /usr/local/bin and /usr/local/lib directories. What determines which software goes where is how the developers have configured the files that control the compilation and installation process.

/proc

/proc, like /dev is a virtual directory. It contains information about your computer, such as information about your CPU and the kernel your Linux system is running. As with /dev, the files and directories are generated when your computer starts, or on the fly, as your system is running and things change.
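Because everything in /proc is generated by the kernel on demand, you can interrogate the running system simply by reading its files; for example:

# Details about your CPU(s)
less /proc/cpuinfo

# The version of the kernel you are running
cat /proc/version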

/root

/root is the home directory of the superuser (also known as the “Administrator”) of the system. It is separate from the rest of the users’ home directories BECAUSE YOU ARE NOT MEANT TO TOUCH IT. Keep your own stuff in your own directories, people.

/run

/run is another new directory. System processes use it to store temporary data for their own nefarious reasons. This is another one of those DO NOT TOUCH folders.

/sbin

/sbin is similar to /bin, but it contains applications that only the superuser (hence the initial s) will need. You can use these applications with the sudo command that temporarily concedes you superuser powers on many distributions. /sbin typically contains tools that can install stuff, delete stuff and format stuff. As you can imagine, some of these instructions are lethal if you use them improperly, so handle with care.
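For example, fdisk, which lives in an sbin directory on most distributions, can create and destroy partitions; run with -l, though, it only lists your partition tables, which is a safe way to see it in action:

# List partition tables without changing anything
sudo fdisk -l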

/usr

The /usr directory was where users’ home directories were originally kept back in the early days of UNIX. However, /home is now where users keep their stuff, as we saw above. These days, /usr contains a mish-mash of directories which in turn contain applications, libraries, documentation, wallpapers, icons, and a long list of other stuff that needs to be shared by applications and services.

You will also find bin, sbin, and lib directories in /usr. What is the difference with their root-hanging cousins? Not much nowadays. Originally, the /bin directory (hanging off of root) would contain very basic commands, like ls, mv, and rm; the kind of commands that would come pre-installed in all UNIX/Linux installations, the bare minimum to run and maintain a system. /usr/bin on the other hand would contain stuff the users would install and run to use the system as a workstation, things like word processors, web browsers, and other apps.

But many modern Linux distributions just put everything into /usr/bin and have /bin point to /usr/bin, just in case erasing it completely would break something. So, while Debian, Ubuntu, and Mint still keep /bin and /usr/bin (and /sbin and /usr/sbin) separate, others, like Arch and its derivatives, have just one “real” directory for binaries, /usr/bin, and the rest of the *bin directories are “fake” directories that point to /usr/bin.
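A quick way to see which camp your distribution falls into is to look at /bin itself; on a merged system, it shows up as a symbolic link to usr/bin:

# If the output ends in "-> usr/bin", /bin is just a pointer to /usr/bin
ls -ld /bin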

/srv

The /srv directory contains data for servers. If you are running a web server from your Linux box, your HTML files for your sites would go into /srv/http (or /srv/www). If you were running an FTP server, your files would go into /srv/ftp.

/sys

/sys is another virtual directory like /proc and /dev and also contains information from devices connected to your computer.

In some cases you can also manipulate those devices. I can, for example, change the brightness of the screen of my laptop by modifying the value stored in the /sys/devices/pci0000:00/0000:00:02.0/drm/card1/card1-eDP-1/intel_backlight/brightness file (on your machine you will probably have a different file). But to do that you have to become superuser. The reason for that is, as with so many other virtual directories, messing with the contents and files in /sys can be dangerous and you can trash your system. DO NOT TOUCH until you are sure you know what you are doing.
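To give you an idea without risking anything, reading a value is harmless; on many laptops the backlight file lives somewhere under /sys/class/backlight/ (the exact path will differ from machine to machine):

# Read the current backlight value (writing to it requires superuser privileges)
cat /sys/class/backlight/*/brightness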

/tmp

/tmp contains temporary files, usually placed there by applications that you are running. The files and directories often (not always) contain data that an application doesn’t need right now, but may need later on.

You can also use /tmp to store your own temporary files — /tmp is one of the few directories hanging off / that you can actually interact with without becoming superuser.
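If you do create your own scratch files there, the mktemp utility (part of GNU coreutils) will make one with a unique name so you don’t trample on anything else:

# Create a uniquely named temporary file in /tmp and print its path
mktemp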

/var

/var was originally given its name because its contents were deemed variable, in that they changed frequently. Today it is a bit of a misnomer because there are many other directories that also contain data that changes frequently, especially the virtual directories we saw above.

Be that as it may, /var contains things like logs in the /var/log subdirectories. Logs are files that register events that happen on the system. If something fails in the kernel, it will be logged in a file in /var/log; if someone tries to break into your computer from outside, your firewall will also log the attempt here. It also contains spools for tasks. These “tasks” can be the jobs you send to a shared printer when you have to wait because another user is printing a long document, or mail that is waiting to be delivered to users on the system.
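A quick way to get a feel for /var/log is to list it and follow one of the files as new entries arrive; the file names vary by distribution (Debian and Ubuntu use syslog, for example, while other distributions use messages):

# See which logs your system keeps
ls /var/log

# Watch new entries arrive in real time (Ctrl+C to stop)
sudo tail -f /var/log/syslog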

Your system may have some more directories we haven’t mentioned above. In the screenshot, for example, there is a /snap directory. That’s because the shot was captured on an Ubuntu system. Ubuntu has recently incorporated snap packages as a way of distributing software. The /snap directory contains all the files and the software installed from snaps.

Digging Deeper

That is the root directory covered, but many of the subdirectories lead to their own set of files and subdirectories. Figure 2 gives you an overall idea of what the basic file system tree looks like (the image is kindly supplied under a CC By-SA license by Paul Gardner), and Wikipedia has a breakdown with a summary of what each directory is used for.

To explore the filesystem yourself, use the cd command:

cd <directory_name>

will take you to the directory of your choice (cd stands for change directory).

If you get confused,

pwd

will always tell you where you are (pwd stands for print working directory). Also,

cd

with no options or parameters, will take you back to your own home directory, where things are safe and cosy.

Finally,

cd ..

will take you up one level, getting you one level closer to the / root directory. If you are in /usr/share/wallpapers and run cd .., you will move up to /usr/share.

To see what a directory contains, use

ls <directory_name>

or simply

ls

to list the contents of the directory you are in right now.

And, of course, you always have tree to get an overview of what lays within a directory. Try it on /usr/share — there is a lot of interesting stuff in there.
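For example, limiting the depth keeps the output manageable:

# Show two levels of what lives under /usr/share
tree -L 2 /usr/share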

Conclusion

Although there are minor differences between Linux distributions, the layout of their filesystems is mercifully similar. So much so that you could say: once you know one, you know them all. And the best way to know the filesystem is to explore it. So go forth with tree, ls, and cd into uncharted territory.

You cannot damage your filesystem just by looking at it, so move from one directory to another and take a look around. Soon you’ll discover that the Linux filesystem and how it is laid out really makes a lot of sense, and you will intuitively know where to find apps, documentation, and other resources.

Ready to continue your Linux journey? Check out our free intro to Linux course!