In an effort to install various apps in Ubuntu or other Linux distributions, we often add several PPAs. Most of the time these PPAs are managed by a single developer who might have created an app for personal amusement or as a hobby. Over time, these PPAs may stop being updated for the latest version of the operating system, which can cause trouble when you try to update your OS. You may have other reasons as well for deleting or removing a PPA from the sources list.


There are several ways to remove a PPA in Ubuntu: from the Software Sources list, by removing the source files from the directory, or, the simplest way, by using apt. Each has its own advantages. We'll look at all the methods to delete a PPA in detail.


This method is apt for those of you who prefer a GUI over the command line. While you are using Linux, I highly recommend that you use the command line, but let's see how to remove a PPA graphically.


STEP 1:

Go to the Unity Dash (by pressing the Super or Windows key) and search for Software Sources:

Ubuntu Software Center Unity Dash

STEP 2:

In the Software Sources window, go to the Other Software tab and choose the desired PPA from the list. Then click Remove to remove it:

Delete a PPA from Software Source


Mostly, you add a PPA using add-apt-repository. We'll use the same add-apt-repository command to remove a PPA. Use the following command in a terminal:

sudo add-apt-repository --remove ppa:PPA_Name/ppa

In the above command, replace PPA_Name with the desired PPA name. By now you may have realized that you need to know the exact PPA name to use this otherwise straightforward method.
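If you don't remember the exact name, every PPA added to the system leaves a `.list` file whose URL encodes the owner/name pair that add-apt-repository expects. Here is a minimal sketch of extracting it; the `webupd8team/java` PPA is purely a hypothetical example:

```shell
# A line from a file in /etc/apt/sources.list.d looks like this
# (hypothetical PPA used for illustration):
line="deb http://ppa.launchpad.net/webupd8team/java/ubuntu xenial main"

# Pull out the owner/name pair in the ppa:owner/name form
# that add-apt-repository --remove expects:
ppa=$(echo "$line" | sed -E 's|.*ppa\.launchpad\.net/([^/]+/[^/]+)/.*|ppa:\1|')
echo "$ppa"    # ppa:webupd8team/java

# On a real system you would then run:
# sudo add-apt-repository --remove "$ppa"
```

The sed expression simply captures the two path components after the Launchpad hostname, which is exactly the name the next section's commands need.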


Alternatively, you can remove the PPA from the sources list directory where it is stored. PPAs are stored in the form of PPA_Name.list files. Use the following command to see all the PPAs added to your system:

sudo ls /etc/apt/sources.list.d

Look for your desired PPA here and then remove it using the following command. Afterwards, run sudo apt update so the package index no longer references the removed PPA:

sudo rm -i /etc/apt/sources.list.d/PPA_Name.list


You might have noticed that in all three methods above we only talked about deleting or removing a PPA. What about the applications installed using these PPAs? Will they be removed as a result of removing the PPA? The answer is no. This is where PPA Purge comes into the picture: it not only removes the PPA but also uninstalls the programs installed from it (or downgrades them to the versions in the official repositories, when available).

Install ppa-purge by using the following command:

sudo apt-get install ppa-purge

Now use it in the following manner to purge the PPA:

sudo ppa-purge ppa-url

The URL of the PPA can be found in the Software Sources list.

I hope you’ll find at least one good method to delete or remove a PPA and uninstall the corresponding applications.

Source https://itsfoss.com


6 Best Linux/Unix Command Cheat Sheets

Linux has become so user-friendly nowadays that there is less and less need to use the command line. However, commands and shell scripts remain powerful tools that let advanced users do complicated tasks quickly and efficiently.

With the vast number of commands available, you would have to know loads of commands and how to use them effectively. But there is really no need to memorize everything, since there are many cheat sheets available on the internet and in books. Here is a collection of 6 Linux/Unix cheat sheets that can greatly help you when you are stuck and need a reference.

1. Linux Command Reference

In this cheat sheet you will find a bunch of the most common Linux commands that you’re likely to use on a regular basis.


Source: Linux Command Reference


2. Unix/Linux Command Reference

A great command reference published by FOSSWire.


Source: Unix/Linux Command Reference


3. One Page Linux Manual

An incredible one page reference to the most popular Linux commands. It is a summary of the most useful Linux commands.

Source: The One Page Linux Manual

4. Linux Commands Cheat Sheet

Keep this Linux cheat sheet printed on your desk; I am sure you will learn the commands quickly and become a Linux expert very soon.


Source: Linux Commands Cheat Sheet


5. Linux Command Line Cheat Sheet

This cheat sheet originates from a Danish blog, which has several useful cheat sheets that can double as wallpapers. It's definitely useful to have one easily accessible from your desktop at any time.


Source: Linux Command Line Cheat Sheet


6. Linux Command Line Cheat Sheet

A cheat sheet with the most used commands for Linux.


Source: http://www.cheatography.com



Open-source Linux is a popular alternative to Microsoft Windows, and if you choose to use this low-cost or free operating system, you need to know some basic Linux commands to make your system run smoothly. The most common Linux commands are shown in this table.

Command Description
cat [filename] Display a file's contents to the standard output device (usually your monitor).
cd /directorypath Change to the specified directory.
chmod [options] mode filename Change a file's permissions.
chown [options] filename Change who owns a file.
clear Clear a command line screen/window for a fresh start.
cp [options] source destination Copy files and directories.
date [options] Display or set the system date and time.
df [options] Display used and available disk space.
du [options] Show how much space each file takes up.
file [options] filename Determine what type of data is within a file.
find [pathname] [expression] Search for files matching a provided pattern.
grep [options] pattern [filenames] Search files or output for a particular pattern.
kill [options] pid Stop a process. If the process refuses to stop, use kill -9 pid.
less [options] [filename] View the contents of a file one page at a time.
ln [options] source [destination] Create a shortcut (link) to a file.
locate filename Search a copy of your filesystem for the specified file name.
lpr [options] Send a print job.
ls [options] List directory contents.
man [command] Display the help information for the specified command.
mkdir [options] directory Create a new directory.
mv [options] source destination Rename or move file(s) or directories.
passwd [name [password]] Change the password, or (for the system administrator) change any user's password.
ps [options] Display a snapshot of the currently running processes.
pwd Display the pathname of the current directory.
rm [options] directory Remove (delete) file(s) and/or directories.
rmdir [options] directory Delete empty directories.
ssh [options] user@machine Remotely log in to another Linux machine over the network. Leave an ssh session by typing exit.
su [options] [user [arguments]] Switch to another user account.
tail [options] [filename] Display the last n lines of a file (the default is 10).
tar [options] filename Store and extract files from a tarfile (.tar) or tarball (.tar.gz or .tgz).
top Display the resources being used on your system. Press q to exit.
touch filename Create an empty file with the specified name.
who [options] Display who is logged on.
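A few of the file commands from the table can be tried safely together in a throwaway directory; this short session (the file names are made up) shows touch, cp, mv, ls, and rm working together:

```shell
# Experiment in a temporary directory so nothing real is touched:
d=$(mktemp -d) && cd "$d"

touch notes.txt            # create an empty file
cp notes.txt copy.txt      # copy it
mv copy.txt renamed.txt    # rename the copy
ls                         # lists: notes.txt  renamed.txt

rm notes.txt renamed.txt   # clean up the files
cd / && rmdir "$d"         # remove the now-empty directory
```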

Source http://www.dummies.com


What’s the Difference Between Ubuntu and Linux Mint?

Ubuntu and Linux Mint are two of the most popular desktop Linux distributions at the moment. If you're looking to take the dive into Linux – or you've already used Ubuntu or Mint – you may wonder how they're different.

Linux Mint and Ubuntu are closely related — Mint is based on Ubuntu. Although they were very similar at first, Ubuntu and Linux Mint have become increasingly different Linux distributions with different philosophies over time.


Ubuntu and other Linux distributions contain open-source software, so anyone can modify it, remix it, and roll their own versions. Linux Mint's first stable version, "Barbara," was released in 2006. Barbara was a lightly customized Ubuntu system with a different theme and slightly different default software. Its major differentiating feature was its inclusion of proprietary software like Flash and Java, in addition to patent-encumbered codecs for playing MP3s and other types of multimedia. This software is included in Ubuntu's repositories, but isn't included on the Ubuntu disc. Many users liked Mint for the convenience of having this software installed by default, in contrast to Ubuntu's more idealistic approach.

Over time, Mint differentiated itself from Ubuntu further, customizing the desktop and including a custom main menu and their own configuration tools. Mint is still based on Ubuntu – with the exception of Mint’s Debian Edition, which is based on Debian (Ubuntu itself is actually based on Debian).

With Ubuntu’s launch of the Unity desktop, Mint picked up additional steam. Instead of rolling the Unity desktop into Mint, Mint’s developers listened to their users and saw an opportunity to provide a different desktop experience from Ubuntu.

The Desktop

Ubuntu includes the Unity desktop by default, although you can install a wide variety of additional desktop environments from Ubuntu’s repositories and third-party personal package archives (PPAs).


Mint’s latest release comes in two versions, each with a different desktop: Cinnamon and MATE. Cinnamon is a more forward-looking desktop that builds on new technologies without throwing out standard desktop elements – for example, Cinnamon actually has a taskbar and an applications menu that doesn’t take over your entire screen.


MATE is a fork of the old GNOME 2 desktop that Ubuntu and Linux Mint previously used, and it works similarly. It uses MATE’s custom menu.


You’ll also notice that Mint has a more toned-down, lighter color scheme. Its window buttons are also on the right side of the window title bar instead of the left.

Which desktop environment you prefer ultimately comes down to personal choice. Ubuntu’s Unity is more jarring for users of the older Linux desktop environments, while Mint’s desktop environments are less of a drastic change. However, some people do prefer Unity, and Unity has improved somewhat in recent versions.

Proprietary Software & Codecs

Mint still includes proprietary software (like Flash) and codecs out-of-the-box, but this has become less of a differentiating feature. The latest versions of Ubuntu allow you to enable a single check box during installation and Ubuntu will automatically grab the proprietary software and codecs you need, without any additional work required.



These days, Mint seems to offer more configurability than Ubuntu out-of-the-box. Whereas Ubuntu’s Unity only includes a few options in the latest version of Ubuntu, there’s an entire settings application for configuring the Cinnamon desktop.


The latest version of Mint, “Maya,” also includes the MDM display manager, which is based on the old GNOME Display Manager. Whereas Ubuntu doesn’t ship with any graphical configuration tools for tweaking its login screen, Mint ships with an administration panel that can customize the Login Screen.


While Ubuntu is still based on Linux and is configurable under-the-hood, many pieces of Ubuntu software aren’t very configurable. For example, Ubuntu’s Unity desktop has very few options.

Ubuntu’s latest versions are more of a break from the past, dispensing with the more traditional desktop environment and large amount of configuration options. Mint retains these, and feels more familiar.

Source  http://www.howtogeek.com


Linux Commands Cheat Sheet in Black & White

Keep this Linux cheat sheet printed on your desk; I am sure you will learn the commands quickly and become a Linux expert very soon. We have added both PDF and image (PNG) formats of the cheat sheet. Please keep us posted if you need us to add more commands. Commands are categorized into 13 sections according to usage. We have designed the cheat sheet with white text on a black background, as we often see on a Linux shell, and added a bit of color for attraction :-).

We have grouped commands in the below categories

a) System
b) Hardware
c) Users
d) File Commands
e) Process Related
f) File Permission
g) Network
h) Compression / Archives
i) Install Packages
j) Install Source
k) Search

l) Login
m) File Transfer
n) Directory Traversal

Download linux cheat sheet in pdf format and refer each command in detail.

Source http://linoxide.com/linux-command

7 Creative Ways to Use Live Website Chat Software


With customers increasingly demanding immediate attention, investing in live chat software has become an unavoidable part of business. Though most businesses have gradually understood the importance of live chat software and have already installed it, some companies, particularly in the SMB space, are still rigid about it. They still focus on building relationships and try to crack deals on the basis of their relations with clients.

Even the companies that have already installed the software are not able to leverage its power to the fullest. They just consider it a customer support tool and look at it only when a customer comes with a query. If you look at it from a sales and conversion perspective, the widget can work as a strong sales agent, boosting your business.

Businesses that have live chat support on their websites still use the same age-old techniques, like personalized email, ad campaigns, and A/B testing, to understand customer requirements. This is because they are unaware of the power of live chat. By using it creatively, you not only offer a quick and effective solution to customers but also talk to them at their peak point of interest, gaining more knowledge about what they want.

If you are still confused about how to use the software, here are seven creative ways to leverage a live chat widget in your business. They will help you generate more business by leaving a lasting impression on online customers.

These are the top 7 ways to use live chat:

Onboarding client assistance

Here, you can use proactive chat invitations to start a conversation with your top-tier clients. The moment clients sign up for a new account is the right time to approach them and win their trust. Proactive chat doesn't limit reps to offering assistance only when customers ask for it; rather, it gives them an opportunity to provide instant, personalized, and reliable solutions.

Multilingual Chat Support

Online ecommerce stores can get customers from any part of the world, including people who might not be comfortable with English or the language used on your website. With live chat software on the website, you can offer multilingual chat support to immediately connect customers with the brand.

Facebook Fan Page Chat


Only a few live chat software companies, like REVE Chat, offer a Facebook Fan Page Chat widget. With nearly 1.10 billion users and 77% of all social logins, Facebook is one of the largest platforms for connecting with existing and potential customers. By adding a ‘Chat with us’ tab on your Facebook page, you allow visitors to get in touch with your team in real time, which also serves as a lead generation channel for your marketing team.

Easy pre-chat form

This is nothing but a technique to gather basic information about the visitors to connect them with the right live chat agent. While using the pre-chat form in your live chat widget, just make sure it is short and simple with only a couple of fields. It is a win-win situation for both the visitors and the companies, as visitors don’t have to follow a long path to engage with the right agent and companies don’t have to keep the customers on hold to find an appropriate team for them.

Proactive chat on 404 page

One of the most frustrating things when browsing a website is to come across a 404 page. Visitors generally exit the website and never come back. Instead of leaving your visitors confused and irritated, give them a list of other helpful links they can visit on your website. Some live chat widgets immediately send a proactive chat invitation when a customer lands on a 404 page to help them find the information they are looking for.

Real-time online counseling

If you think that live chat support on the website is just to guide the customers, you are still unaware of the power of the widget. If you have a medical website or any other business where visitors look for counseling, live chat software on the website can be a great tool for you. Make their every moment beneficial with professional and real-time online counseling.

Text-to-chat customer support

You can allow your clients to start a chat session with your support team through a simple text message. All you need is to show your ‘text-to-chat’ phone number on your website.

These are some creative ways you can use live chat widget on your website. If you get live chat software with higher pricing but with impressive features, you shouldn’t think twice. Just install it and get ready for the business growth.


The Best UI/UX Design Books & Resources for Designers

Want to be an excellent designer? Looking for the best UI/UX books and resources? Not sure where to find the right and effective channel for becoming an outstanding UI/UX designer? Just follow me: I have compiled a list of high-profile UI/UX books, recommended by major professional websites and blogs. The topics mainly cover UI design, UX design, and web design. I hope it is helpful and useful to you. If there is any resource you think is worth including, please feel free to leave a message in the comment area or simply drop me a line on LinkedIn.

UX Books for Entry-level Designers (Books with PDF link)

1. The Design of Everyday Things – By Donald A. Norman

The cover of the book shows a teapot with the spout and the handle on the same side; if you pour tea from it, you are likely to burn yourself. What Norman wants to tell you is that life is hard, and often "bad design" is to blame. To learn interaction design well, you must first understand people's design requirements. As Steve Jobs said, "Design is not just what it looks like and feels like. Design is how it works." The ultimate purpose of the designer is to make useful products, not just good-looking ones.

2. Don’t Make Me Think – By Steve Krug

“The first thing about this book is that it is short and pithy: about 200 pages, not wordy at all. You could devour it in an afternoon, before going to sleep, on a plane, or on your way to work.” It stresses the three laws of web usability, the first of which is: don't make me think.

3. The Non-Designer’s Design Book – By Robin Williams

Interaction designers must learn the basics of typography; without an aesthetic sense, you cannot be a good designer. In this era of ubiquitous creativity, you have to make yourself a designer.

In the eyes of Robin Williams, design is quite simple. The book covers the four graphic design principles of C.R.A.P. (Contrast, Repetition, Alignment, and Proximity), and with concise, humorous, and vivid language recounts the changes and visible benefits brought by using these principles flexibly. In addition, it introduces some basic knowledge about color and fonts, making the content more complete.

UX Books for Advanced Designers (Books with PDF link)

1. The Elements of User Experience – By Jesse James Garrett

If all you need is a book to teach you "how to design", there are many, many books that discuss how to build a website, but this one is not what you want. If all you need is a book about technology, you will not find a line of code here. On the contrary, this book teaches you "how to ask the right questions."

This book tells you what you need to know before you read other books. It is for you if you need the big-picture concepts, and if you need to understand the environment in which user experience designers make decisions.

2. A Project Guide to UX Design: For user experience designers in the field or in the making (2nd Edition) – By Russ Unger & Carolyn Chandler

“If you are a young designer entering or contemplating entering the UX field, this is a canonical book. If you are an organization that really needs to start grokking UX, this book is also for you.” – Chris Bernard

3. Communicating Design: Developing Web Site Documentation for Design and Planning (2nd Edition) – By Dan M. Brown

A successful web design team relies on good communication between developers and customers, as well as communication among the development team members. In this book, Dan Brown teaches you how wireframes, site maps, flow charts, and other design artifacts establish a common language. Through it, designers and project teams can capture ideas, track progress, and keep stakeholders up to date on the project.

4. About Face: The Essentials of Interaction Design– By Alan Cooper, Robert Reimann, David Cronin, Christopher Noessel

As one of the must-read classics, the About Face series is worth the time to read, and each edition is very valuable. As one of the instrumental books in the field, About Face brought interaction design into the daily language of product design and development. It is a comprehensive guide to interface and interaction design for the web and mobile devices. The book covers best practices for project progress, goal-oriented design, and persona development. For beginners without project experience it may be difficult, but it is still worth reading. We recommend a cursory first read, then studying the rest carefully when needed, because it involves many details.

“If Norman is an old man telling stories and Krug is a crash-course expert who lets you get into design simply, then Cooper is a scholar, researcher, and designer.”

UX Websites & Blogs

1. Lukew

Lukew is the site of a senior UX expert and digital product leader who has designed and built software used by more than one billion people worldwide, and who has founded several companies.

2. Smashing Magazine

A comprehensive website that provides high-quality articles for UX practitioners on design, coding, mobile, WordPress, etc.

3. UXbooth

A professional UX website. The difference between it and Smashing Magazine is that UXbooth focuses more on the aspect of user experience design.

4. Mockplus Blog

It’s a new blog with a very simple and clean interface, with no distraction from advertisements or anything else. The articles all revolve around design tools, UI/UX design, web design, and mobile app design. A good design resource to follow.

Books Recommended by readers:

Mental Models (by Indi Young)

Practical Empathy (by Indi Young)

The Inmates Are Running the Asylum ( by Alan Cooper)

Rocket Surgery Made Easy ( by Steve Krug)

Designing for the Digital Age (by Kim Goodwin)

Start with Why (by Simon Sinek)

The UX Book: Process and Guidelines for Ensuring a Quality User Experience (by Rex Hartson, Pardha Pyla)

Interaction Design: Beyond Human – Computer Interaction (by Preece, Sharp, and Rogers)

Measuring the user experience (by Tullis and Albert) (If you want to know more about the quantitative side)

Research Methods in Human-computer Interaction (by Jonathan Lazar and Jinjuan Heidi Feng)

Handbook of Usability Testing 2nd Edition (by Rubin and Chisnell) (If you want to dig deeper into how to conduct usability tests; it may be a bit much for interviews.)


Here Are 7 Brilliant Cheat Sheets For Linux/Unix

There’s nothing better than a cheat sheet when you are stuck and need a reference. So here we bring you 7 brilliant free cheat sheets.
1. Unix Tool Box: An incredibly exhaustive reference for all things Linux. This document is a collection of Unix/Linux/BSD commands and tasks which are useful for IT work or for advanced users.

2. One page Linux Manual: Great one page reference to the most popular Linux commands, it is a summary of useful Linux commands.

3. Linux Reference Card: One great reference published by FOSSwire.

4. Linux Command Line Cheat Sheet: This is an interestingly sorted and helpful cheat sheet by cheatography.

5. Linux Command Line Tips: This is a linux command line reference for common operations. Cleanly sorted and well described.

6. Treebeard’s Unix Cheat Sheet: A great reference that shows command comparisons with DOS. So if you are someone who was a DOS user and has switched to Linux, this is the best one to have!

7. Linux Shortcuts and Commands: This is a practical selection of the commands we use most often.

Source Linoxide.com


Become a Linux Terminal Power User With These 8 Tricks

There’s more to using the Linux terminal than just typing commands into it. Learn these basic tricks and you’ll be well on your way to mastering the Bash shell, used by default on most Linux distributions.

This one’s for the less experienced users – I’m sure that many of you advanced users out there already know all these tricks. Still, take a look – maybe there’s something you missed along the way.

Tab Completion

Tab completion is an essential trick. It’s a great time saver and it’s also useful if you’re not sure of a file or command’s exact name.

For example, let’s say you have a file named “really long file name” in the current directory and you want to delete it. You could type the entire file name, but you’d have to escape the space characters properly (in other words, add the \ character before each space) and might make a mistake. If you type rm r and press Tab, Bash will automatically fill the file’s name in for you.

Of course, if you have multiple files in the current directory that begin with the letter r, Bash won’t know which one you want. Let’s say you have another file named “really very long file name” in the current directory. When you hit Tab, Bash will fill in the “really\ “ part, since the files both begin with that. After it does, press Tab again and you’ll see a list of matching file names.

tab completion

Continue typing your desired file name and press Tab. In this case, we can type an “l” and press Tab again and Bash will fill in our desired file name.

This also works with commands. Not sure what command you want, but know it begins with “gnome”? Type “gnome” and press Tab to see a list.


Pipes

Pipes allow you to send the output of one command to another command. In the UNIX philosophy, each program is a small utility that does one thing well. For example, the ls command lists the files in the current directory and the grep command searches its input for a specified term.

Combine these with pipes (the | character) and you can search for a file in the current directory. The following command searches the file listing for the word “word”:

ls | grep word
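Because grep just reads whatever arrives on its input, you can make the example self-contained by generating the listing with printf instead of ls; this sketch adds a third stage to the pipeline to count the matches:

```shell
# Simulate a directory listing, then filter and count it.
# Two pipes chain three small tools together:
printf 'notes.txt\nreport.conf\narchive.conf\n' | grep conf | wc -l
# prints 2: only the two .conf entries survive the grep stage
```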


Wild Cards

The * character – that is, the asterisk – is a wild card that can match anything. For example, if we wanted to delete both “really long file name” and “really very long file name” from the current directory, we could run the following command:

rm really*name

This command deletes all files with file names beginning with “really” and ending with “name.” If you ran rm * instead, you’d delete every file in the current directory, so be careful.

wild card
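Because a stray rm * can be destructive, it's worth rehearsing a wildcard match in a temporary directory first. This sketch recreates the two file names from the example above and shows that only they match:

```shell
# Rehearse the wildcard in a scratch directory before trusting it:
d=$(mktemp -d) && cd "$d"
touch "really long file name" "really very long file name" "other.txt"

ls really*name     # matches both "really...name" files, not other.txt
rm really*name     # deletes only those two matches

ls                 # other.txt is still here
cd / && rm -r "$d"
```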

Output Redirection

The > character redirects a command’s output to a file instead of another command. For example, the following line runs the ls command to list the files in the current directory and, instead of printing that list to the terminal, it prints the list to a file named “file1” in the current directory:

ls > file1
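One detail worth knowing alongside > is its sibling >>, which appends instead of truncating; the difference is easy to see in a temporary file:

```shell
# > truncates the file each time; >> appends to it:
f=$(mktemp)
echo "first"  > "$f"    # file contains: first
echo "second" >> "$f"   # appended: first, second
echo "third"  > "$f"    # truncated again: only third remains
cat "$f"                # prints: third
rm "$f"
```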


Command History

Bash remembers a history of the commands you type into it. You can use the up and down arrow keys to scroll through commands you’ve recently used. The history command prints a list of these commands, so you can pipe it to grep to search for commands you’ve used recently. There are many other tricks you can use with Bash history, too.


~, . & ..

The ~ character – also known as the tilde – represents the current user’s home directory. So, instead of typing cd /home/name to go to your home directory, you can type cd ~ instead. This also works with relative paths – cd ~/Desktop would switch to the current user’s desktop.

Similarly, the . represents the current directory and the .. represents the directory above the current directory. So, cd .. goes up a directory. These also work with relative paths – if you’re in your Desktop folder and want to go to the Documents folder, which is in the same directory as the Desktop folder, you can use the cd ../Documents command.
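The Desktop-to-Documents hop described above can be reproduced anywhere; this sketch builds two sibling directories in a temporary location (so it does not depend on your real home layout) and walks between them with ..:

```shell
# Two sibling directories standing in for Desktop and Documents:
base=$(mktemp -d)
mkdir -p "$base/Desktop" "$base/Documents"

cd "$base/Desktop"
cd ../Documents     # .. climbs one level, then descends into Documents
pwd                 # the path ends in /Documents

cd / && rm -r "$base"
```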


Run a Command in the Background

By default, Bash executes every command you run in the current terminal. That’s normally fine, but what if you want to launch an application and continue using the terminal? If you type firefox to launch Firefox, Firefox will take over your terminal and display error messages and other output until you close it. Add the & operator to the end of the command to have Bash execute the program in the background:

firefox &

background process
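A GUI application isn't required to see this in action; sleep works just as well, and the jobs and wait builtins show how Bash tracks what is running behind the prompt:

```shell
# Start a long-running command in the background:
sleep 2 &
jobs        # lists the background job, e.g. [1]+ Running  sleep 2 &
wait        # blocks until all background jobs have finished
echo "background job done"
```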

Conditional Execution

You can also have Bash run two commands, one after another. The second command will only execute if the first command completed successfully. To do this, put both commands on the same line, separated by a &&, or double ampersand.

For example, the sleep command takes a value in seconds, counts down, and completes successfully. It’s useless alone, but you can use it to run another command after a delay. The following command will wait five seconds, then launch the gnome-screenshot tool:

sleep 5 && gnome-screenshot
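You can see the success-only behavior without waiting on a screenshot tool by pairing && with commands whose exit status you control (true always succeeds, false always fails):

```shell
# The right-hand command runs only when the left-hand one succeeds:
true  && echo "this prints"
false && echo "this never prints"

# The same idea as the screenshot example, with a shorter delay:
sleep 1 && echo "ran after a 1-second delay"
```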

Do you have any more tricks to share? Leave a comment and help your fellow readers!

Source www.howtogeek.com


5 Latest Web Development Tools You Should Know

If you have been following our monthly post series on Fresh Resources for Designers and Developers, you can see that every month there are plenty of new tools introduced; the list is seemingly infinite. Back in 2008, when I was just beginning to learn HTML and CSS, most of these tools did not exist yet.

Today, the Web is exponentially growing. It is also getting more complex than ever before. We need more tools that can help lift some of the weight of website development. So, in this post, we have put together a set of trendy tools that will help in web development.

Hopefully this list will help introduce you to the right web development tools, particularly those of you who are just getting started.

1. CSS Pre-processors

CSS is very easy to write. The syntax is straightforward and easy to understand. But as your project grows larger, you may have to manage multiple stylesheets containing thousands of lines of CSS, and if you know CSS, you know it becomes mighty hard to maintain in that situation.

This is where CSS pre-processors become really useful. We have covered CSS pre-processors several times in the past, so I guess you are already quite familiar with them. For those who are new: in a nutshell, a CSS pre-processor allows us to write CSS in a programming fashion, with variables and functions, which is then compiled into browser-compliant CSS. We can also reuse CSS properties with special rules such as @extend and @include.

There are a number of CSS Pre-processors: Sass, LESS, Stylus, and Myth.

2. Template Engine

Creating a single static HTML page is similarly simple. However, if you have multiple HTML pages to handle in your project, things can get out of hand. Most of these pages may share the same components, such as a header, sidebar, and footer.

Using a Template Engine sounds better for this situation. There are now a number of Template Engines that we can use, such as Kit, Jade, and Handlebars. Each has its own writing conventions. Kit, for instance, comes only with variables and an import capability, which are declared with a simple HTML comment tag, like so:

// example of importing a separate template
<!-- @include inc/partial  -->
// this is a variable
<!-- @var: The Variable Value -->

Jade and Handlebars come with a lot of robust features to cater to more complex projects. We will discuss them in more detail in a separate post (stay tuned!). The point is that if you want to build a scalable static website, you should take advantage of a Template Engine.

3. Task Runner

The process to build a website is considerably repetitive. Minification, Compilation, Unit Testing, Linting, Concatenating Files and Browser Refreshing, to name a few, are the things that we would most likely do often in projects. Luckily, they can be automated using a Task Runner, such as Grunt and Gulp.

You can tell Grunt to run a set of tasks specified in Gruntfile.js. There are now plenty of plugins to automate almost anything with Grunt, so you rarely need to write your own Grunt tasks.

Say you want to compile your LESS files into CSS: you can install grunt-contrib-less. In a previous post, we also used Grunt to remove unnecessary modules from jQuery.
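
To see why this gets tedious, here is the sort of manual "build" a task runner replaces: concatenating source files and crudely stripping comments and blank lines as a stand-in for minification. The file names are invented for the demo; plugins such as grunt-contrib-concat and grunt-contrib-uglify automate the real versions of these steps:

```shell
# Manual build chore: concatenate scripts, strip // comments and blank lines.
mkdir -p /tmp/build-demo && cd /tmp/build-demo
printf '%s\n' '// utility functions' 'function add(a, b) { return a + b; }' > utils.js
printf '%s\n' '// application entry' 'console.log(add(1, 2));' > app.js

# Concatenate, then drop comment-only and empty lines
cat utils.js app.js | grep -v '^//' | grep -v '^$' > bundle.js
cat bundle.js
```

Run by hand after every edit, this is exactly the kind of repetition a watch task eliminates.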

If your project is tiring you out, it is time to utilize a Task Runner to streamline your workflow.

4. Synchronized Testing Tool

Here is one indispensable tool if you are building a mobile-optimized website. If you have a lot of devices to test your website on, you definitely need Synchronized Testing, which allows you to test your website on multiple devices simultaneously.

Browser reloading as well as the interactions like clicking and scrolling are reflected across all tested devices at the same time, saving you from repetitive action.

There are two tools you can try to do this: A Grunt plugin called BrowserSync, and a GUI application called Ghostlab.

5. Development Toolkit

A Development Toolkit puts together a number of tools in one application. If you are not comfortable with Grunt's text-based configuration, a GUI application would probably be a better tool for you.

CodeKit pioneered this kind of application; its feature list includes LESS, Sass, Kit, Jade, Slim, Uglify, Bower, and a lot more.


How to Find Out What Version of Linux You Are Running

There are several ways of finding out which version of Linux you are running on your machine, as well as your distribution name and kernel version, plus some extra information that you may want to have in mind or at your fingertips.

Therefore, in this simple yet important guide for new Linux users, I will show you how to do just that. This may seem a relatively easy task; however, good knowledge of your system is always recommended practice, for a good number of reasons, including installing and running packages appropriate for your Linux version and reporting bugs accurately.

Find Out Linux Kernel Version

We will use the uname command, which prints Linux system information such as the kernel version and release name, network hostname, machine hardware name, processor architecture, hardware platform, and operating system.

To find out which version of Linux kernel you are running, type:

$ uname -or

Shows Current Linux Kernel Version Running on System


In the preceding command, the option -o prints the operating system name and -r prints the kernel release version.

You can also use the -a option with the uname command to print all system information, as shown:

$ uname -a

Shows Linux System Information


Next, we will use the /proc file system, a virtual file system that stores information about processes and other system information; it is mounted at /proc at boot time.

Simply type the command below to display some of your system information including the Linux kernel version:

$ cat /proc/version

Shows Linux System Information


From the image above, you have the following information:

  1. Version of the Linux kernel you are running: Linux version 4.5.5-300.fc24.x86_64
  2. Name of the user who compiled your kernel: mockbuild@bkernel01.phx2.fedoraproject.org
  3. Version of the GCC compiler used to build the kernel: gcc version 6.1.1 20160510
  4. Type of the kernel: #1 SMP (Symmetric MultiProcessing kernel); it supports systems with multiple CPUs or multiple CPU cores.
  5. Date and time when the kernel was built: Thu May 19 13:05:32 UTC 2016
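
Since the release string is the third whitespace-separated field of /proc/version, a script can pull it out directly; it matches what uname -r reports:

```shell
# Extract just the kernel release from /proc/version (field 3)
kernel_release=$(cut -d' ' -f3 /proc/version)
echo "Kernel release: $kernel_release"
uname -r   # reports the same release string
```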

Find Out Linux Distribution Name and Release Version

The best way to determine a Linux distribution name and release version is the cat /etc/os-release command, which works on almost all modern Linux systems. Alternatively, each distribution family ships its own release file:

---------- On Red Hat Linux ---------- 
$ cat /etc/redhat-release
---------- On CentOS Linux ---------- 
$ cat /etc/centos-release
---------- On Fedora Linux ---------- 
$ cat /etc/fedora-release
---------- On Debian Linux ---------- 
$ cat /etc/debian_version
---------- On Ubuntu and Linux Mint ---------- 
$ cat /etc/lsb-release
---------- On Gentoo Linux ---------- 
$ cat /etc/gentoo-release
---------- On SuSE Linux ---------- 
$ cat /etc/SuSE-release
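
Because /etc/os-release is a plain KEY=value file, it can also be sourced directly from a shell script when you need the distribution name programmatically (on any distribution that ships the file):

```shell
# /etc/os-release consists of shell-sourceable KEY="value" lines
. /etc/os-release
echo "Distribution: $NAME"
echo "Release: ${VERSION_ID:-unknown}"
```

VERSION_ID is optional in the os-release format (rolling-release distributions may omit it), hence the fallback.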

Find Linux Distribution Name and Release Version


In this article, we walked through a brief and simple guide intended to help new Linux users find out the Linux version they are running and get to know their Linux distribution name and version from the shell prompt.

Perhaps it can also be useful to advanced users on one or two occasions. Lastly, to reach us for any assistance or suggestions you wish to offer, make use of the feedback form below.

Source http://www.tecmint.com


20 Brilliant Tools for Web Design and Development

If you’ve had a quiet time of it these last 12 months, then well done you, because the rest of us were sweating just to keep up with the base rate of change online. HTML5 has reached critical mass, responsive development continued to barrel along at full tilt, then there’s audio APIs and WebGL…

Thankfully, the degree of change correlates positively with the problem-solving efforts of developers and designers everywhere, dug into their respective specialities.

Niche tools

As a result, along with the larger corporate-backed applications, we have a huge host of small tools and libraries, each designed to solve a particular problem or preserve a certain set of possibilities. A couple of these projects have become institutions: Modernizr, keeping the technical playing field level, and PhoneGap, holding the mobile market open for web types.

Most encouragingly, there’s room for some ‘just for the hell of it’ type experimentation. And even a bit of self-congratulation, evidenced by the fact that Google felt confident enough about some tools to package and prink them into the Yeoman project.

Indeed, this is a handsome list, with good representation for most slices of the development pie, from full-scale IDEs to small, exotic libraries with beautiful aesthetics. But what gives this year its character is the poise that these tools exhibit. Within its niche, each one shows that we are beginning to outdistance the problems, freeing ourselves up to give more thought to the creative possibilities of the web. How's that for joyous tidings? Happy Holidays!

  • Read all our design tool-related features here

01. Bugherd

Price: Free-$99/month for 25 members

Contrary to popular belief, the launch of a new site is not the end of a dev team's work. If anything, it's the point at which the sweat really starts to build. As clients begin to receive feedback, garbled and conflicting requests filter back as emails, which then get batted around before finally becoming a bone of contention.

BugHerd provides a neat, well organised way to handle feedback, bug fixes and feature requests – without the email overhead. A simple .js include and visitors to the site get a feedback button. Guests to the project get to file bugs and requests, members get to administer the whole shebang from a friendly, intuitive interface. Progressing bugs from report to action to completion is much preferable to the alternative situation: a gradual build-up which will eventually overwhelm.

Adding tasks is super-easy with BugHerd

02. Fontello

Price: Free

Why is it so hard to find a set of icons that covers all the bases with a consistent look and feel? One of life’s great mysteries perhaps. Well, wonder no more because Fontello not only has all the icons you need but you can pick and choose the glyphs you need and compile these into your own minimalist set.

You can, of course, download the entire set of icons from the GitHub repository (actually it's several sets), but the fontello.com interface makes customising your font so easy it's the only sensible approach. The project is open source but, as always, donations would be appreciated.

Fontello allows you to pick and choose your icon sets from its collections

03. Proto.io

Price: Free – $49/Month

A good prototyping tool should allow you to get up and running fast but also provide enough depth that you can refine your ideas to the point where they don’t need you leaning over a user’s shoulder saying things like “Just ignore that bit for now”. Proto.io does just this.

It also handles all the touch gestures you might want, tackles animations and provides for sharing and commenting. It’s smooth to use and thankfully, there’s a free plan too.

Thanks to Proto.io that game is going to be a smash, probably

04. Foundation 3

Price: Free

Responsive design seems to have gone from zero to about a thousand miles an hour in no time flat. And things are still changing fast enough that small development shops are hard-pushed to stay up to date, let alone conduct their own R&D. That’s where Foundation 3 comes in.

Developed by ZURB, an agency with the resources and experience available to throw at the responsive problem, Foundation 3 can act as a blueprint for your own projects, a rapid prototyping tool or even an object lesson in how to address some of the web's most current issues.

The latest release introduces a simplified grid structure and makes the jump to SASS/Compass, allowing for a more readily flexible approach to styling. Though it makes sense to work with SASS if you are planning to have a look at Foundation 3, the customisable download is conceived to allow a straight CSS version too.

Foundation 3 makes great claims and even lives up to some of them

05. Dreamweaver CS6

Price: From £344.32

Fluid layouts, CSS3 transitions and enhanced PhoneGap support lead the charge in the latest update to Adobe’s web design all-rounder. There’s no denying that Dreamweaver CS6 hits the ground running.

The problem which Dreamweaver has always had is the difficulty of balancing its across-the-board functionality with the need to keep out of the user's way. CS6 actually manages this pretty well.

The new fluid layouts are handy but in fact are the least convincing new feature. That accolade probably goes to CSS3 transitions which are, with Dreamweaver’s help, fun to explore.

Even with all the bells and whistles Dreamweaver CS6 has a certain poise

06. Cloud9 IDE

Price: Free/$12 per month Premium

This year the browser-based IDE finally came of age, with a number of promising projects offering fully-featured apps that make collaborating from anywhere possible, even on large-scale projects. Among these, Cloud9 has the edge.

The code editor is very usable: code completion, smart drag-and-drop document trees, FTP integration and all that, but it's the connectivity which makes Cloud9. If a team is hacking on the same file, each user is identified by their own coloured cursor. A chat module closes the feedback loop.

Integrated with the likes of GitHub, capable of working offline, and generally intuitive to use. If you want a ‘code anywhere’ solution, look at this one first.

Cloud9 provides you and your cohorts with a unified, code-anywhere environment

07. Sencha Touch 2

Price: Free

There’s no denying that the mobile/touch device has changed web development for good. It’s a broader, more heterogeneous world out there and everyone wants a piece of the action. Sencha Touch 2 aims to put that dream within reach of HTML5 developers.

An improved API, stronger docs and training materials as well as firmed-up native integration with many leading devices all make Sencha Touch 2 a serious contender for the mobile development framework of choice. There is a learning curve but, since Sencha aims to be an end-to-end package, at least there’s only one slope to climb.

Can't live without my radio: Sencha's radio app

08. Adobe Edge Inspect

Price: Free

A great little app for mobile developers, formerly known as Adobe Shadow, which cuts a huge amount of hassle from the design process. Just pair your devices (Android and iOS) with your main machine. Then the sites you browse to are echoed direct to every connected device.

If you’ve got conditional code or responsive templates then these should work fine. And if you want to tinker with the code, just hit the angle brackets next to your paired device (in Chrome) and away you go.

Edge Inspect just takes a couple of clicks to set up once you have all the downloads - browser, desktop and mobile

09. Brackets

Price: Free

You’d think by now that the concept of the code editor would be pretty mature. There are so many out there, and they’re all so similar, that it’s easy to imagine the final blueprint has been found. Brackets shows that even at this level there are plenty of possibilities left to explore.

The central goal for Brackets seems to be a removal of all the repetitive little tasks we fold into the development process. Browser reloading, editing an element’s CSS, function searching. Full credit to those involved because, even at beta stage, Brackets is refreshingly good to use. Check out their YouTube channel.

And if you’d like an augmented experience, now you can sign up for Adobe’s creative cloud and get Edge Code. Built on Brackets, Edge Code adds some excellent features for typography and PhoneGap.

HTML/CSS/JS: Brackets makes them feel newly detailed and responsive

10. Modernizr 2.6

Price: Free

Leading with improved geolocation, WebGL and a host of community contributed detections, the latest update to Modernizr delivers some important new detects for the progressive enhancement cabal to get their teeth into.

Version 2.6 of the popular browser capability detection tool updates a couple of dependencies too, but the largest volume of new detects comes from the community. The list itself makes interesting reading: css-backgroundposition-xy, css-subpixelfont, svg-filters, vibration…

If you’re keen to make use of the latest features in a responsible fashion then this is one library you need to keep up to the minute.

Front end development tool with a serious pedigree

11. Trello

Price: Free

There are a surprisingly large number of project management tools out there. They clock time and assign teams, but very few of them have the kind of natural appeal which Trello exhibits in spades.

The visual metaphor is the board. A simple concept but Trello makes it into something capable of displaying all the ins and outs of a project at one glance. What’s to do, being done, complete. Commenting, sharing, attaching files, prioritising. Trello makes all this fun, and in so doing, helps to shake your sorry self into something resembling ship shape. Great stuff.

Make sense of the big picture with Trello's boards

12. TypeCast beta

Price: Free while in beta

Thanks to the great leap forward in web fonts, typography is becoming an increasingly important element of the web designer’s job. But the simple fact that there are now thousands of fonts to choose from doesn’t actually make the job any easier. For that, you’ll want TypeCast.

TypeCast allows you to choose your fonts from suppliers Fonts.com, TypeKit, FontDeck and Google. But it also allows you to actually create comps, style them and compare them side-by-side with alternative designs. Then just publish a URL when you want feedback. No need to actually buy the fonts till you’ve decided on a final signed-off solution.

The interface is so intuitive you’ll be knocking out new page concepts in no time. There’s no doubt this is a great way to experiment with type on the web, and the results speak for themselves. Whether this is actually time-saving is debatable – for the typographically challenged it could be a huge (but enjoyable) rabbit hole down which to disappear.

Typesetting the web as it should be done

13. Gridset

Price: $12 per set/$18 per month

The grid is becoming as much of a focal point for web design as it has traditionally been for the print world, but with some added complications. Obviously these grids need to be flexible and responsive. But how do you ‘play’ with this kind of thing when the calculations keep getting in the way? Answer: Gridset.

Created by the folk at Mark Boulton Design, Gridset allows you to explore the possibilities of grids, adding columns, defining ratios and setting gutters, all without worrying about the underlying arithmetic. The generated code is generous in size for the simple reason that it attempts to cover all the bases. You also get PNGs, a .js overlay and CSS/HTML files. It opens up the grid for exploration.

Generate your grids visually, let gridset take care of the mathematics

14. Yeoman

Price: Free

Modern web development seems to be coalescing around a number of small, open source projects and tools. The likes of Bootstrap, Compass and PhantomJS. Each package contributing a single aspect to a new job – could be testing, CSS frameworks or code compilation.

Yeoman is Google’s attempt to pull together the best of these apps under a single, customisable banner. Scaffolding new web apps, keeping packages up to date, auto-compiling your code, optimising your images. Yeoman has got your back. As they say.

Once Yeoman, and its dependencies are installed, the magic all happens on the command line so you’ll ideally be comfortable issuing commands there. If you’re not, don’t worry, the system is very well documented, even to the point of making this a good place to learn the basics of the stuff which Yeoman packages up.

Yeoman provides a friendly interface to the most up to date development techniques

15. Emmet

Price: Free

Formerly known as Zen Coding, the Emmet project delivers a CSS-like shorthand for your manually coded pages. It’s a simple concept which builds on the idea of the ‘snippet’, providing you with a tool to cut through the most repetitive coding tasks like a knife through butter.

Apply the simple syntax: ‘p.class_name’ gives you ‘<p class=”class_name”> </p>’. Nesting is simple: ‘div>p.class_name’ becomes ‘<div><p class=”class_name”> </p></div>’. You get the picture. You can even embed lorem ipsum text, generate lists and, once you get a feel for things, create your own shortcuts and snippet structures.

All in all, this is a really powerful extension, available for a broad selection of popular editors. If you’re hand-coding it’s going to save you a lot of time. Don’t overdo it though, as the strings rapidly become undecipherable.

All that from this: p.outer>ul>li.item$*5>p.inner{inner}

16. Sublime Text 2

Price: $59

Getting your coding environment right can be difficult. Do you want lots of buttons, menus and highlighting? Or do you favour a minimal interface and prefer to handle dependencies yourself? The questions are endless once you start asking them.

Sublime Text 2 favours the minimalist approach. A beautifully simple text editor, it focuses on making the writing experience silky smooth, so you can have your code look and handle just the way you want.

Many languages are supported, including the likes of C++, Clojure and Markdown; editing features like indents and code collapsing are handled nicely, and Minimap gives you fast page navigation.

Minimalism at its best: the Sublime Text 2 interface

17. Microsoft WebMatrix 2

Price: Free

At one time or another just about everyone who works on the web will have cursed Microsoft. Let’s just say they have a few black marks against their name. WebMatrix 2 shows that it doesn’t have to be that way. Far from it.

WebMatrix 2 is a great IDE to work with, regardless of the language you prefer or the framework you operate within. It’s clean to look at, fast to use, helpful yet unobtrusive. And if you happen to be a .NET devotee, then it’s going to make your life a lot easier.

Plugging into databases or installing a base CMS, even hacking around with Node.js. WebMatrix handles things with efficiency. Where previous MS products have suffered with over-clutter as a result of the need to shout about their accomplishments, this application treats you like an adult. If you’re an adult that works in C#, all the better.

WebMatrix has excellent manners, helping when needed, or just staying out of the way

18. PhoneGap 2.0

Price: Free

Everyone seems to agree that mobile is the new black. And since that’s the case it’s a major pain in the proverbial that this ‘new’ platform managed to recreate so many of the problems faced by the desktop: different APIs, languages, browsers, file formats… Thankfully, PhoneGap has gone a long way towards smoothing out those difficulties.

PhoneGap 2, the first release since Adobe took the reins, is a significant advance for at least two reasons. The first is the simple increase in platforms reached, Windows 8 phone included. The second is the availability of PhoneGap Build, giving developers a single compile point capable of reaching every platform. No wonder there’s so many apps.

HTML5 + CSS3 + Javascript = native mobile apps

19. Firefox 18

Price: Free

The web is moving pretty fast at the moment – even by its own standards. Consequently, it’s been a busy year for Firefox. Something the rest of us should be very grateful for.

Introducing a slew of new technologies from Web Sockets to IndexedDB and useful tools like the Web Console and Javascript Scratchpad, Firefox has given developers the chance to explore new possibilities and the tools to make doing so considerably more profitable.

Mozilla’s constant effort to make brand-new technologies available and old ones more efficient is impressive, to say the least.

Firefox makes sense of the tangled web

20. Photon

Price: Free

This is a fun project. Photon creates a simple lighting effect for DOM elements rendered in 3D space. This is made possible by the newfangled CSS transform property, or, as it’s more properly known, the ‘[vendor prefix]-transform’ property.

It’s clever stuff, if somewhat tricky to get your head around, and a bit heavy on the old CPU. But Photon’s creator, Tom Giannattasio, makes a good case for its use in the form of an origami crane. Fire up a decent browser and take it for a spin.

For sheer handsomeness, Photon is hard to beat

Source http://www.creativebloq.com


4 open source tools for Linux system monitoring

Information is the key to resolving any computer problem, including problems with or relating to Linux and the hardware on which it runs. There are many tools available for and included with most distributions even though they are not all installed by default. These tools can be used to obtain huge amounts of information.

This article discusses some of the interactive command line interface (CLI) tools that are provided with or which can be easily installed on Red Hat related distributions including Red Hat Enterprise Linux, Fedora, CentOS, and other derivative distributions. Although there are GUI tools available and they offer good information, the CLI tools provide all of the same information and they are always usable because many servers do not have a GUI interface but all Linux systems have a command line interface.

This article concentrates on the tools that I typically use. If I did not cover your favorite tool, please forgive me and let us all know what tools you use and why in the comments section.

My go-to tools for problem determination in a Linux environment are almost always the system monitoring tools. For me, these are top, atop, htop, and glances.

All of these tools monitor CPU and memory usage, and most of them list information about running processes at the very least. Some monitor other aspects of a Linux system as well. All provide near real-time views of system activity.

Load averages

Before I go on to discuss the monitoring tools, it is important to discuss load averages in more detail.

Load average is an important criterion for measuring CPU usage, but what does it really mean when I say that the 1 (or 5 or 15) minute load average is 4.04, for example? Load average can be considered a measure of demand for the CPU; it is a number that represents the average number of processes waiting for CPU time. So this is a true measure of CPU demand, unlike the standard “CPU percentage,” which includes I/O wait times during which the CPU is not really working.

For example, a fully utilized single-processor system would have a load average of 1. This means that the CPU is keeping up exactly with the demand; in other words, it has perfect utilization. A load average of less than 1 means that the CPU is underutilized, and a load average of greater than 1 means that the CPU is overutilized and there is pent-up, unsatisfied demand. For example, a load average of 1.5 in a single-CPU system indicates that one-third of the processes are forced to wait to be executed until the one preceding them has completed.

This is also true for multiple processors. If a 4-CPU system has a load average of 4, then it has perfect utilization. If it has a load average of 3.24, for example, then three of its processors are fully utilized and one is utilized at about 24%. In the example above, a 4-CPU system has a 1-minute load average of 4.04, meaning that there is no remaining capacity among the 4 CPUs and a few processes are forced to wait. A perfectly utilized 4-CPU system would show a load average of exactly 4.00, so the system in the example is fully loaded but only very slightly overloaded.

The optimum condition is for the load average to equal the total number of CPUs in a system. That would mean that every CPU is fully utilized and yet no process is forced to wait. The longer-term load averages provide an indication of the overall utilization trend.
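
The kernel exposes the same three numbers in /proc/loadavg, so the rule of thumb above (1-minute load divided by the number of CPUs) is easy to check from a script:

```shell
# Compare the 1-minute load average with the number of CPUs.
# A per-CPU value above 1.00 indicates pent-up demand.
load=$(cut -d' ' -f1 /proc/loadavg)
cpus=$(nproc)
awk -v l="$load" -v c="$cpus" \
    'BEGIN { printf "load %.2f over %d CPUs = %.2f per CPU\n", l, c, l / c }'
```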

Linux Journal has an excellent article describing load averages, the theory and the math behind them, and how to interpret them in the December 1, 2006 issue.


Signals

All of the monitors discussed here allow you to send signals to running processes. Each of these signals has a specific function, though some of them can be redefined by the receiving program using signal handlers.

The separate kill command can also be used to send signals to processes outside of the monitors. The kill -l command lists all possible signals that can be sent. Three of these signals can be used to kill a process.

  • SIGTERM (15): Signal 15, SIGTERM is the default signal sent by top and the other monitors when the k key is pressed. It may also be the least effective because the program must have a signal handler built into it. The program’s signal handler must intercept incoming signals and act accordingly. So for scripts, most of which do not have signal handlers, SIGTERM is ignored. The idea behind SIGTERM is that by simply telling the program that you want it to terminate itself, it will take advantage of that and clean up things like open files and then terminate itself in a controlled and nice manner.
  • SIGKILL (9): Signal 9, SIGKILL provides a means of killing even the most recalcitrant programs, including scripts and other programs that have no signal handlers. For scripts and other programs with no signal handler, however, it not only kills the running script but it also kills the shell session in which the script is running; this may not be the behavior that you want. If you want to kill a process and you don’t care about being nice, this is the signal you want. This signal cannot be intercepted by a signal handler in the program code.
  • SIGINT (2): Signal 2, SIGINT can be used when SIGTERM does not work and you want the program to die a little more nicely, for example, without killing the shell session in which it is running. SIGINT sends an interrupt to the session in which the program is running. This is equivalent to terminating a running program, particularly a script, with the Ctrl-C key combination.
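
You can watch the difference outside of the monitors, too. A quick shell experiment, using a harmless background sleep as the victim, shows the conventional exit status of 128 plus the signal number:

```shell
# Kill a throwaway background process with SIGTERM (signal 15)
sleep 30 &
pid=$!
kill -TERM "$pid"
wait "$pid" || status=$?
echo "exit status: $status"   # 128 + 15 = 143 for a SIGTERM death
```

Substituting kill -KILL (9) or kill -INT (2) gives 137 and 130, respectively.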

To experiment with this, open a terminal session, create a file in /tmp named cpuHog, and make it executable with the permissions rwxr-xr-x. Add the following content to the file.

#!/bin/bash
# This little program is a CPU hog
X=0; while true; do echo $X; X=$((X+1)); done

Open another terminal session in a different window, position them adjacent to each other so you can watch the results, and run top in the new session. Run the cpuHog program with the following command:

/tmp/cpuHog
This program simply counts up by one and prints the current value of X to STDOUT. And it sucks up CPU cycles. The terminal session in which cpuHog is running should show a very high CPU usage in top. Observe the effect this has on system performance in top. CPU usage should immediately go way up and the load averages should also start to increase over time. If you want, you can open additional terminal sessions and start the cpuHog program in them so that you have multiple instances running.

Determine the PID of the cpuHog program you want to kill. Press the k key and look at the message under the Swap line at the bottom of the summary section. Top asks for the PID of the process you want to kill. Enter that PID and press Enter. Now top asks for the signal number and displays the default of 15. Try each of the signals described here and observe the results.

top


One of the first tools I use when performing problem determination is top. I like it because it has been around since forever and is always available while the other tools may not be installed.

The top program is a very powerful utility that provides a great deal of information about your running system. This includes data about memory usage, CPU loads, and a list of running processes, including the amount of CPU time and memory being utilized by each process. Top displays system information in near real-time, updating (by default) every three seconds. Fractional seconds are allowed by top, although very small values can place a significant load on the system. It is also interactive, and the data columns to be displayed and the sort column can be modified.

A sample output from the top program is shown in Figure 1 below. The output is divided into two sections: the “summary” section, the upper portion of the output, and the “process” section, the lower portion. I will use this terminology for top, atop, htop, and glances in the interest of consistency.

The top program has a number of useful interactive commands you can use to manage the display of data and to manipulate individual processes. Use the h command to view a brief help page for the various interactive commands. Be sure to press h twice to see both pages of the help. Use the q command to quit.

Summary section

The summary section of the output from top is an overview of the system status. The first line shows the system uptime and the 1, 5, and 15 minute load averages. In the example below, the load averages are 4.04, 4.17, and 4.06 respectively.

The second line shows the number of processes currently active and the status of each.

The lines containing CPU statistics are shown next. There can be a single line which combines the statistics for all CPUs present in the system, as in the example below, or one line for each CPU; in the case of the computer used for the example, this is a single quad core CPU. Press the 1 key to toggle between the consolidated display of CPU usage and the display of the individual CPUs. The data in these lines is displayed as percentages of the total CPU time available.

These and the other fields for CPU data are described below.

  • us: userspace – Applications and other programs running in user space, i.e., not in the kernel.
  • sy: system calls – Kernel level functions. This does not include CPU time taken by the kernel itself, just the kernel system calls.
  • ni: nice – Processes that are running at a positive nice level.
  • id: idle – Idle time, i.e., time not used by any running process.
  • wa: wait – CPU cycles that are spent waiting for I/O to occur. This is wasted CPU time.
  • hi: hardware interrupts – CPU cycles that are spent dealing with hardware interrupts.
  • si: software interrupts – CPU cycles spent dealing with software-created interrupts such as system calls.
  • st: steal time – The percentage of CPU cycles that a virtual CPU waits for a real CPU while the hypervisor is servicing another virtual processor.

The last two lines in the summary section show memory usage, including both RAM and swap space.

Figure 1: The top command showing a fully utilized 4-core CPU.

You can use the 1 command to display CPU statistics as a single, global number as shown in Figure 1, above, or by individual CPU. The l command turns load averages on and off. The t and m commands rotate the process/CPU and memory lines of the summary section, respectively, through off, text only, and a couple types of bar graph formats.

Process section

The process section of the output from top is a listing of the running processes in the system—at least for the number of processes for which there is room on the terminal display. The default columns displayed by top are described below. Several other columns are available and each can usually be added with a single keystroke. Refer to the top man page for details.

  • PID – The Process ID.
  • USER – The username of the process owner.
  • PR – The priority of the process.
  • NI – The nice number of the process.
  • VIRT – The total amount of virtual memory allocated to the process.
  • RES – Resident size (in kb unless otherwise noted) of non-swapped physical memory consumed by a process.
  • SHR – The amount of shared memory in kb used by the process.
  • S – The status of the process. This can be R for running, S for sleeping, and Z for zombie. Less frequently seen statuses can be T for traced or stopped, and D for uninterruptable sleep.
  • %CPU – The percentage of CPU cycles, or time used by this process during the last measured time period.
  • %MEM – The percentage of physical system memory used by the process.
  • TIME+ – Total CPU time to 100ths of a second consumed by the process since the process was started.
  • COMMAND – This is the command that was used to launch the process.

Use the Page Up and Page Down keys to scroll through the list of running processes. The d or s commands are interchangeable and can be used to set the delay interval between updates. The default is three seconds, but I prefer a one second interval. Interval granularity can be as low as one-tenth (0.1) of a second but this will consume more of the CPU cycles you are trying to measure.
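The delay can also be set when starting top. As a quick, non-interactive way to experiment, batch mode (-b) prints snapshots to stdout instead of taking over the terminal; this is a sketch, and the head filter just trims the output:

```shell
# Two snapshots, one second apart, printed non-interactively.
# Batch mode is also useful for logging or piping top output to other tools.
top -b -d 1 -n 2 | head -n 15
```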

You can use the < and > keys to sequence the sort column to the left or right.

The k command is used to kill a process or the r command to renice it. You have to know the process ID (PID) of the process you want to kill or renice and that information is displayed in the process section of the top display. When killing a process, top asks first for the PID and then for the signal number to use in killing the process. Type them in and press the enter key after each. Start with signal 15, SIGTERM, and if that does not kill the process, use 9, SIGKILL.
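The same escalation from SIGTERM to SIGKILL can be practiced outside of top with the kill command; here a throwaway sleep process stands in for a misbehaving program:

```shell
# Start a disposable background process to practice on
sleep 300 &
pid=$!

kill -15 "$pid"                       # SIGTERM: ask the process to exit
sleep 1
if kill -0 "$pid" 2>/dev/null; then   # signal 0 just tests for existence
    kill -9 "$pid"                    # SIGKILL: cannot be caught or ignored
fi
wait "$pid" 2>/dev/null || true       # reap the child
```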


If you alter the top display, you can use the W (in uppercase) command to write the changes to the configuration file, ~/.toprc in your home directory.


I also like atop. It is an excellent monitor to use when you need more details about I/O activity. The default refresh interval is 10 seconds, but this can be changed using the i (interval) command to whatever is appropriate for what you are trying to do. atop cannot refresh at sub-second intervals like top can.

Use the h command to display help. Be sure to notice that there are multiple pages of help and you can use the space bar to scroll down to see the rest.

One nice feature of atop is that it can save raw performance data to a file and then play it back later for close inspection. This is handy for tracking down intermittent problems, especially ones that occur during times when you cannot directly monitor the system. The atopsar program is used to play back the data in the saved file.
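A sketch of the record-and-replay workflow (the path and sample counts are illustrative, and the commands require the atop package to be installed):

```shell
# Skip gracefully if atop is not installed
command -v atop >/dev/null 2>&1 || { echo "atop not installed"; exit 0; }

atop -w /tmp/atop.raw 1 2        # record: 1-second interval, 2 samples
atopsar -r /tmp/atop.raw -c      # play back CPU statistics from the file
```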

Figure 2: The atop system monitor provides information about disk and network activity in addition to CPU and process data.

Summary section

atop contains much of the same information as top but also displays information about network, raw disk, and logical volume activity. Figure 2, above, shows these additional data in the columns at the top of the display. Note that if you have the horizontal screen real-estate to support a wider display, additional columns will be displayed. Conversely, if you have less horizontal width, fewer columns are displayed. I also like that atop displays the current CPU frequency and scaling factor—something I have not seen on any other of these monitors—on the second line in the rightmost two columns in Figure 2.

Process section

The atop process display includes some of the same columns as that for top, but it also includes disk I/O information and thread count for each process as well as virtual and real memory growth statistics for each process. As with the summary section, additional columns will display if there is sufficient horizontal screen real-estate. For example, in Figure 2, the RUID (Real User ID) of the process owner is displayed. Expanding the display will also show the EUID (Effective User ID) which might be important when programs run SUID (Set User ID).

atop can also provide detailed information about disk, memory, network, and scheduling information for each process. Just press the d, m, n or s keys respectively to view that data. The g key returns the display to the generic process display.

Sorting can be accomplished easily by using C to sort by CPU usage, M for memory usage, D for disk usage, N for network usage and A for automatic sorting. Automatic sorting usually sorts processes by the most busy resource. The network usage can only be sorted if the netatop kernel module is installed and loaded.

You can use the k key to kill a process but there is no option to renice a process.

By default, network and disk devices for which no activity occurs during a given time interval are not displayed. This can lead to mistaken assumptions about the hardware configuration of the host. The f command can be used to force atop to display the idle resources.


The atop man page refers to global and user-level configuration files, but none can be found in my own Fedora or CentOS installations. There is also no command to save a modified configuration, and a save does not take place automatically when the program is terminated. So, there appears to be no way to make configuration changes permanent.


The htop program is much like top but on steroids. It does look a lot like top, but it also provides some capabilities that top does not. Unlike atop, however, it does not provide any disk, network, or I/O information of any type.

Figure 3: htop has nice bar charts to indicate resource usage and it can show the process tree.

Summary section

The summary section of htop is displayed in two columns. It is very flexible and can be configured with several different types of information in pretty much any order you like. While the CPU usage sections of top and atop can be toggled between a combined display and a display that shows one bar graph for each CPU, htop cannot be toggled on the fly. Instead, it has a number of different options for the CPU display, including a single combined bar, a bar for each CPU, and various combinations in which specific CPUs can be grouped together into a single bar.

I think this is a cleaner summary display than some of the other system monitors and it is easier to read. The drawback to this summary section is that some information is not available in htop that is available in the other monitors, such as CPU percentages by user, idle, and system time.

The F2 (Setup) key is used to configure the summary section of htop. A list of available data displays is shown and you can use function keys to add them to the left or right column and to move them up and down within the selected column.

Process section

The process section of htop is very similar to that of top. As with the other monitors, processes can be sorted by any of several factors, including CPU or memory usage, user, or PID. Note that sorting is not possible when the tree view is selected.

The F6 key allows you to select the sort column; it displays a list of the columns available for sorting and you select the column you want and press the Enter key.

You can use the up and down arrow keys to select a process. To kill a process, select the target process and press the k key. A list of signals to send the process is displayed, with 15 (SIGTERM) selected. You can specify a different signal to use if SIGTERM is not what you want. You can also use the F7 and F8 keys to renice the selected process.

One command I especially like is F5, which displays the running processes in a tree format, making it easy to determine the parent/child relationships of running processes.


Each user has their own configuration file, ~/.config/htop/htoprc, and changes to the htop configuration are stored there automatically. There is no global configuration file for htop.


I have just recently learned about glances, which can display more information about your computer than any of the other monitors I am currently familiar with. This includes disk and network I/O, thermal readouts that can display CPU and other hardware temperatures as well as fan speeds, and disk usage by hardware device and logical volume.

The drawback to having all of this information is that glances uses a significant amount of CPU resources itself. On my systems I find that it can use from about 10% to 18% of CPU cycles. That is a lot, so you should consider that impact when you choose your monitor.

Summary section

The summary section of glances contains most of the same information as the summary sections of the other monitors. If you have enough horizontal screen real estate it can show CPU usage with both a bar graph and a numeric indicator, otherwise it will show only the number.


Figure 4: The glances interface with network, disk, filesystem, and sensor information.

I like this summary section better than those of the other monitors; I think it provides the right information in an easily understandable format. As with atop and htop, you can press the 1 key to toggle between a display of the individual CPU cores or a global one with all of the CPU cores as a single average as shown in Figure 4, above.

Process section

The process section displays the standard information about each of the running processes. Processes can be sorted automatically (a), or by CPU (c), memory (m), name (p), user (u), I/O rate (i), or time (t). When sorted automatically, processes are first sorted by the most used resource.

Glances also shows warnings and critical alerts at the very bottom of the screen, including the time and duration of the event. This can be helpful when attempting to diagnose problems when you cannot stare at the screen for hours at a time. These alert logs can be toggled on or off with the l command; warnings can be cleared with the w command, while alerts and warnings can all be cleared with x.

It is interesting that glances is the only one of these monitors that cannot be used to either kill or renice a process. It is intended strictly as a monitor. You can use the external kill and renice commands to manipulate processes.


Glances has a very nice sidebar that displays information that is not available in top or htop. Atop does display some of this data, but glances is the only monitor that displays the sensors data. Sometimes it is nice to see the temperatures inside your computer. The individual modules (disk, filesystem, network, and sensors) can be toggled on and off using the d, f, n, and s commands, respectively. The entire sidebar can be toggled using 2.

Docker stats can be displayed with D.


Glances does not require a configuration file to work properly. If you choose to have one, the system-wide instance of the configuration file would be located in /etc/glances/glances.conf. Individual users can have a local instance at ~/.config/glances/glances.conf which will override the global configuration. The primary purpose of these configuration files is to set thresholds for warnings and critical alerts. There is no way I can find to make other configuration changes—such as sidebar modules or the CPU displays—permanent. It appears that you must reconfigure those items every time you start glances.
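For illustration, a minimal glances.conf that sets some of those thresholds might look like this (section and key names follow the glances documentation; the numbers are just examples):

```ini
[cpu]
user_careful=50
user_warning=70
user_critical=90

[mem]
careful=50
warning=70
critical=90
```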

There is a document, /usr/share/doc/glances/glances-doc.html, that provides a great deal of information about using glances, and it explicitly states that you can use the configuration file to configure which modules are displayed. However, neither the information given nor the examples describe just how to do that.


Be sure to read the man pages for each of these monitors because there is a large amount of information about configuring and interacting with them. Also use the h key for help in interactive mode. This help can provide you with information about selecting and sorting the columns of data, setting the update interval and much more.

These programs can tell you a great deal when you are looking for the cause of a problem. They can tell you which process is sucking up CPU time, whether there is enough free memory, whether processes are stalled while waiting for I/O such as disk or network access to complete, and much more.

I strongly recommend that you spend time watching these monitoring programs while they run on a system that is functioning normally so you will be able to differentiate those things that may be abnormal while you are looking for the cause of a problem.

You should also be aware that the act of using these monitoring tools alters the system’s use of resources including memory and CPU time. top and most of these monitors use perhaps 2% or 3% of a system’s CPU time. glances has much more impact than the others and can use between 10% and 20% of CPU time. Be sure to consider this when choosing your tools.

I had originally intended to include SAR (System Activity Reporter) in this article but as this article grew longer it also became clear to me that SAR is significantly different from these monitoring tools and deserves to have a separate article. So with that in mind, I plan to write an article on SAR and the /proc filesystem, and a third article on how to use all of these tools to locate and resolve problems.

Source https://opensource.com


The 5 Most Common Mistakes HTML5 Developers Make: A Beginner’s Guide

It’s been over 20 years since Tim Berners-Lee and Robert Cailliau specified HTML, which became the standard markup language used to build the Internet. Ever since then, the HTML development community has begged for improvements to this language, but this cry was mostly heard by web browser developers who tried to ease the HTML issues of their colleagues. Unfortunately, this led to even more problems causing many cross-browser compatibility issues and duplication of development work. Over these 20 years, HTML was upgraded 4 times, while most of the browsers got double-digit numbers of major updates plus numerous small patches.

HTML5 was supposed to finally solve our problems and become the one standard to rule them all (browsers). This was probably one of the most anticipated technologies since the creation of the World Wide Web. Has it happened? Did we finally get one markup language that is fully cross-browser compatible and works on all desktop and mobile platforms, giving us all of those features we were asking for? I don't know! Just a few days ago (September 16, 2014) we received one more call for review from the W3C, so the HTML5 specification is still incomplete.

Despite HTML5 issues and mistakes, is it the one language ring to rule them all?

Hopefully, when the specification is one day finalized, browsers will not already have significant obsolete code, and they will easily and properly implement great features like Web Workers, Multiple synchronized audio and video elements, and other HTML5 components that we’ve needed for a long time.

We do, however, have thousands of companies that have based their businesses on HTML5 and are doing great. There are also many great HTML5-based applications and games used by millions of people, so success is obviously possible and HTML5 is, and will continue to be, used regardless of the status of its specification.

However, the recipe I mentioned can easily blow up in our faces, and thus I’ve emphasized some of the most basic HTML5 mistakes that can be avoided. Most of the mistakes listed below are a consequence of incomplete or missing implementation of certain HTML5 elements in different browsers, and we should hope that in the near future they will become obsolete. Until this happens, I suggest reading the list and keeping it in mind when building your next HTML5 application, whether you’re an HTML5 beginner or an experienced vet.

Common mistake #1: Trusting local storage

Let them eat cake! This approach has been a burden on developers for ages. Due to a reasonably sensible fear of security breaches and the need to protect users’ computers, in the “dark ages” when the Internet was feared by many, web applications were allowed to leave only unreasonably small amounts of data on computers. True, there were things like the user data that the “great browser masters from Microsoft(r)” gave us, or things like Local Shared Objects in Flash, but these were far from perfect.

It is therefore reasonable that one of the first basic HTML5 features adopted by developers was Web Storage. However, be alert that Web Storage is not secure. It is better than using cookies because it will not be transmitted over the wire, but it is not encrypted. You should definitely never store security tokens there. Your security policy should never rely on data stored in Web Storage as a malicious user can easily modify her localStorage and sessionStorage values at any time.

To get more insight on Web Storage and how to use it, I suggest reading this post.

Common mistake #2: Expecting compatibility among browsers

HTML5 is much more than a simple markup language. It has matured enough to combine behavior together with layout, and you should consider HTML5 as extended HTML with advanced JavaScript on top. If you look at all the trouble we had before just to make a static combination of HTML+CSS look identical on all browsers, it is fair to assume that more complexity will just mean more effort assuring cross-browser compatibility.

HTML5 interpretation on different browsers is far from identical, just as was the case with JavaScript. All major players in the browser wars lent a hand in the HTML5 spec, so it’s fair to assume that deviations between browsers should shrink over time. But even now some browsers have decided to fully ignore certain HTML5 elements, making it very difficult to rely on a baseline, common set of features. If we add more internet-connected devices and platforms to the equation, the situation gets even more complicated, causing problems with HTML5.

For example, Web Animations are a great feature supported only by Chrome and Opera, while the Web Notifications feature, which allows alerting the user outside the context of a web page of an occurrence such as the delivery of email, is fully ignored by Internet Explorer.

To learn more about HTML5 features and support by different browsers check out the HTML5 guide at www.caniuse.com.

So the fact remains that even though HTML5 is new and well specified, we should expect a great deal of cross-browser compatibility issues and plan for them in advance. There are just too many gaps that browsers need to fill in, and it is fair to expect that they cannot overcome all of the differences between platforms well.

Common mistake #3: Assuming high performance

Regardless of the fact that HTML5 is still evolving, it is a very powerful technology that has many advantages over alternate platforms used before its existence. But, with great power comes great responsibility, especially for HTML5 beginners. HTML5 has been adopted by all major browsers on desktop and mobile platforms. Having this in mind, many development teams pick HTML5 as their preferred platform, hoping that their applications will run equally on all browsers. However, assuming sensible performance on both desktop and mobile platforms just because HTML5 specification says so, is not sensible. Lots of companies (khm! Facebook khm!) placed their bets on HTML5 for their mobile platform and suffered from HTML5 not working out as they planned.

Again, however, there are some great companies that rely heavily on HTML5. Just look at the numerous online game development studios that are doing amazing stuff while pushing HTML5 and browsers to their limits. Obviously, as long as you are aware of the performance issues and are working around these, you can be in a great place making amazing applications.

Common mistake #4: Limited accessibility

The web has become a very important part of our lives. Making applications accessible to people who rely upon assistive technology is an important topic that is often put aside in software development. HTML5 tries to overcome this by implementing some advanced accessibility features. More than a few developers accepted this to be sufficient and haven’t really spent any time implementing additional accessibility options in their applications. Unfortunately, at this stage HTML5 has issues that prevent it from making your applications available to everyone, and you should expect to invest additional time if you want to include a broader range of users.

You can check this place for more information about accessibility in HTML5 and the current state of the most common accessibility features.

Common mistake #5: Not using HTML5 enhancements

HTML5 has extended the standard HTML/XHTML set of tags significantly. In addition to new tags, we got quite a few new rules and behaviors. Too many developers picked up just a few enhancements and have skipped some of the very cool new features of HTML5.

One of the coolest things in HTML5 is client-side validation. This feature was probably one of the earliest elements of HTML5 that web browsers picked up. But, unfortunately, you can find more than a few developers who add the novalidate attribute to their forms by default. The reasons for doing this are understandable, and I’m quite sure we will have a debate about this one. For backward compatibility, many applications implemented custom JavaScript validators, and having out-of-the-box validation done by browsers on top of that is inconvenient. However, it is not too difficult to ensure that the two validation methods will not collide, and standardizing user validation will ensure a common experience while helping to resolve the accessibility issues that I mentioned earlier.

Another great feature is related to the way user input is handled in HTML5. Before HTML5 came, we had to keep all of our form fields contained inside the <form></form> tag. New form attributes make it perfectly valid to do something like this:

<form action="demo_form.asp" id="form1">
  First name: <input type="text" name="fname"><br>
  <input type="submit" value="Submit">
</form>

Last name: <input type="text" name="lname" form="form1">

Even if lname is not inside the form, it will be posted together with fname.

For more information about new form attributes and enhancements, you can check the Mozilla Developer Network.

Wrap up

I understand that web developers are collateral damage in the browser wars as many of the above mistakes are a direct consequence of problematic HTML5 implementation in different browsers. However, it is still crucial that we avoid common HTML5 issues and spend some time understanding the new features of HTML5. Once we have it all under control, our applications will utilize great new enhancements and take the web to the next level.

Source https://www.toptal.com


7 Chmod Command Examples for Beginners

In this article, let us review how to use symbolic representation with chmod.

Following are the symbolic representations of the three different roles:

  • u is for user,
  • g is for group,
  • o is for others,
  • and a is for all (user, group, and others).

Following are the symbolic representations of the three different permissions:

  • r is for read permission,
  • w is for write permission,
  • x is for execute permission.

Following are a few examples on how to use the symbolic representation with chmod.

1. Add single permission to a file/directory

The + symbol means adding a permission to the existing set. For example, do the following to give execute permission to the user, irrespective of anything else:

$ chmod u+x filename
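You can see the effect with stat from GNU coreutils (demo.txt is just a scratch file):

```shell
touch demo.txt
chmod 644 demo.txt        # start from rw-r--r--
chmod u+x demo.txt        # add execute for the user only
stat -c '%A' demo.txt     # prints: -rwxr--r--
```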

2. Add multiple permission to a file/directory

Use a comma to separate the multiple permission sets as shown below.

$ chmod u+r,g+x filename
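A quick check of the combined form, starting from a write-only file (the file name is illustrative):

```shell
touch combo.txt
chmod 200 combo.txt        # --w-------
chmod u+r,g+x combo.txt    # add read for the user, execute for the group
stat -c '%A' combo.txt     # prints: -rw---x---
```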

3. Remove permission from a file/directory

Following example removes read and execute permission for the user.

$ chmod u-rx filename
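Verifying that u-rx drops exactly read and execute for the user (scratch file again):

```shell
touch scratch.txt
chmod 744 scratch.txt      # -rwxr--r--
chmod u-rx scratch.txt     # remove read and execute for the user
stat -c '%A' scratch.txt   # prints: --w-r--r--
```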

4. Change permission for all roles on a file/directory

Following example assigns execute privilege to user, group and others (basically anybody can execute this file).



$ chmod a+x filename

5. Make permission for a file same as another file (using reference)

If you want to change a file permission same as another file, use the reference option as shown below. In this example, file2’s permission will be set exactly same as file1’s permission.

$ chmod --reference=file1 file2
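A small demonstration (file1 and file2 are scratch files):

```shell
touch file1 file2
chmod 751 file1
chmod 600 file2
chmod --reference=file1 file2
stat -c '%a' file2         # prints: 751
```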

6. Apply the permission to all the files under a directory recursively

Use the -R option to change the permission recursively as shown below.

$ chmod -R 755 directory-name/
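A sketch showing that -R reaches files in nested directories (the directory layout is illustrative):

```shell
mkdir -p project/docs
touch project/README project/docs/note
chmod -R 755 project
stat -c '%a' project/docs/note   # prints: 755
```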

7. Change execute permission only on the directories (files are not affected)

If a particular directory has multiple sub-directories and files, the following command will assign execute permission only to the sub-directories in the current directory (not to the files in the current directory).

$ chmod u+X *

Note: If a file already has execute permission for either the group or others, the above command will assign execute permission to the user for that file as well.
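A demonstration of both behaviours, using a directory, a plain file, and a file whose group already has execute permission (all names are illustrative):

```shell
mkdir -p subdir
touch plain withgx
chmod 644 subdir plain
chmod 654 withgx            # group already has execute on this one
chmod u+X subdir plain withgx
stat -c '%A' subdir         # prints: drwxr--r--  (directories always gain it)
stat -c '%A' plain          # prints: -rw-r--r--  (unchanged)
stat -c '%A' withgx         # prints: -rwxr-xr--  (user gains it too)
```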

Source http://www.thegeekstuff.com


9 Essential Responsive Web Design Tools

These days, every new site I visit features responsive web design. It feels like this flexible approach to building for a multi-device web has become the norm rather than the exception. To this end, if you consider it your job to build a more future-friendly web, then all of these tools will help you accomplish that goal. Some of them are more specific to the kinds of challenges we face in responsive work. Some are less directly related, but still quite relevant.

Finally, these tools are not ordered by priority. Instead, I’ve tried to select tools that span all the tasks we face, from design to development. So comparing them is difficult and, frankly, not very useful. Right, let’s jump in …

01. Content Priority Guide

An example guide from our recent redesign. You can see a working version at netm.ag/redesign-268

When I first started making websites, I was happily oblivious to the importance of content in building for the web. I quickly learned that I was missing a critical part of the process – I was like a sculptor with no clay. If you find yourself struggling to incorporate content strategy techniques into your process, this simple tool is the perfect way to introduce some content-first thinking into your workflow.

The Content Priority Guide was created by Emily Gray as a kind of hybrid tool to use in situations where it’s not possible to get all the content in, but you need to begin some form of UX design.

“As a project manager, I needed a solid place to start UX design. But as a content strategist, I didn’t have all the content, only the ideas behind the content. I developed the guide to fill in those gaps,” explains Gray. The concept mixes ideas from content modelling with simple wireframing.

In order to use this guide, you need to walk through each unique page in your system, identifying the applicable content types and breaking them down to their smallest parts. Additionally, Gray creates these as a Google Doc and shares them with her clients. This encourages a more collaborative approach when it comes to establishing content types, as well as starting the process of prioritisation. In responsive work, understanding the hierarchy of content and functionality, independent of viewport width, is critical. This tool helps you establish that early on in the process.

Content priority guides help you identify reusable content types, which leads to a solid understanding of how to structure your CMS. And, if you’re using modern frontend development techniques (see tool 2), it also gives you a framework for how to modularise your markup.

02. Pattern Lab

The demo Pattern Lab, created by Brad Frost and Dave Olsen, is available online for you to play with

Dave Olsen and Brad Frost have developed a beautiful tool for facilitating pattern-driven design. Pattern Lab is based on the idea that you should break your design down to the smallest parts (‘atoms’) and then use those to construct larger, reusable components (‘molecules’ and ‘organisms’). These three core building blocks can then be used to create templates, and injected with real content to create pages.

At its core, Pattern Lab is a static site generator. In practice, it’s much more. The structure that this way of building provides results in a much more modular system – one that’s easier to integrate with any content management system.

03. Sketch

The Sketch interface was built from the ground up for web design, not photo editing or print design

When it comes to design, a lot of folks are talking about Sketch. Where tools like Photoshop and InDesign have their roots in design for other mediums, Sketch was built from the ground up with the web in mind.

Allowing for multiple pages per file and multiple artboards per page makes the task of organising your design significantly simpler. Features such as mirroring allow you to easily view your work on connected iOS devices, taking the element of guesswork out of designing for small screens.

Sketch has tools for both vector and raster editing, but seems to be set up in favour of vectors. Couple this with its simple and powerful exporting features and you’re ready to support the variety of high-pixel-density screens out there now, and (fingers crossed) whatever tomorrow brings.

04. Sass

A screenshot of my desktop while editing my personal site. You can see the use of the sb-media mixin (replacing the @media keyword) and style nesting in the code to the right

I remember being sceptical of CSS preprocessing when it first started to show up. I’m a bit old school – I’ve been writing CSS for a long time now – so the idea that my CSS was going to be generated by some ‘preprocessing event’ made the hairs on the back of my neck stand up. It also didn’t help that I was fairly disappointed with the generated CSS from the first few iterations of Less and Sass.

But, like most of you, I’ve since seen the light. Sass has given us things we’ve always dreamed of: variables, nesting and maths, to name a few. But it’s also opened up whole new ways to work, with features like mixins, media query bubbling and the ability to create partial and aggregate files (find out more about all these here). I can honestly say that Sass has completely changed the way I work with style on the web. I create more modular and maintainable code, and much of that is possible (or at least, much easier) because of Sass.

05. PostCSS

If you think CSS preprocessing is cool, you should check out what’s being done with PostCSS. The idea is simple: it’s a JavaScript-based tool for ‘transforming’ your CSS. The most popular PostCSS plugin seems to be Autoprefixer, which parses your CSS and adds vendor prefixes to the necessary rules by checking Can I Use for support. This makes a large portion of frameworks such as Compass unnecessary (although Compass does do some other pretty amazing things, like creating your sprites for you).

What I love about the concept of PostCSS is that we can polyfill any of the coming CSS features based on their not-yet-implemented specifications. This is incredibly exciting and you can do it today using cssnext – another PostCSS plugin.

06. Chrome DevTools

Chrome’s device mode reveals an amazing set of features

I will never forget the first time I used Firebug to debug my CSS. The ability to modify, in real time, the HTML and CSS on my page completely blew my mind. It also revolutionised how I worked on the web. Since those days, I’ve made the shift to Chrome and the Chrome DevTools.

Just as most modern browsers are constantly being developed, so are the developer tools that ship with them. Recently, Chrome has added a few features which I’ve found very useful in responsive web design. Here, I want to highlight three specific features which I think are worth looking at.

These are most easily accessed by toggling ‘device mode’ in the Chrome DevTools. You get there by inspecting the page and clicking the little ‘phone’ icon in the top-left of the inspector, next to the search icon.

Selecting from the network throttling drop-down will allow you to simulate various network speeds

Network Simulator

Poor performance has been a huge criticism of responsive web design over the past few years. Some of the reasons for this are outlined in a post by Guy Podjarny. I find myself more aligned with Tim Kadlec, who suggests the real fault lies in the implementation, not the technique; still, our implementations obviously need some work (see ‘Performance Budgets’ boxout). Part of the conversation needs to be around tooling that helps bring performance into the conversation earlier. Chrome’s network simulator does just this.

With the network simulator, you can – you guessed it – simulate various network speeds. The idea itself isn’t new; we’ve been able to do this for a while now with other tools. However, having this ability right in Chrome DevTools brings it immediately to the forefront of the development process – where it should be.

This enables you to emulate the screen of a specific device

Screen Emulator

The screen emulator does more than just change the size of the viewport to match the size of a specific device – it also specifies the user agent string and pixel ratio. There are a handful of very useful presets, and you can also create your own and save them for later use. For more information on how Chrome’s device mode works, check the Chrome developer site.

Media Query Inspector

Much of our work in responsive web design involves the serving of style adjustments to specific subsets of media types and features. You and I call those media queries, and the media query inspector in Chrome’s DevTools makes it very easy to dig into those queries.

You can bring this feature on-screen by clicking the ‘media query’ icon. Assuming the site you’re inspecting has media queries, you’ll see a series of coloured lines across the top of the inspector. Blue represents max-width queries and orange represents min-width queries, while green represents query widths within a range.

Clicking these coloured bars resizes the mock-viewport and shows you the width at which those styles apply. You can also right-click the coloured bars to reveal where the queries they represent live in the source. All quite useful!

07. BrowserSync

A screenshot of my desktop while testing my personal site using BrowserSync. Chrome, Safari, and Firefox are all syncing scroll position, clicks and more

One big challenge these days is how we accommodate the growing population of devices on the market. Even if you only focus on the top six or eight devices, testing and debugging can feel like a serious hindrance to your productivity.

Enter BrowserSync: a free, open source tool for testing and debugging sites across all the devices in your testing lab. Not only does it sync the URL of any browser, it also syncs scroll position, clicks, form inputs, toggles and submits, and refreshes or injects CSS as you make changes. It comes with a slick UI (available by default at localhost:3001) and includes the open source project weinre, which enables you to remote debug any of the devices connected to your localhost.

BrowserSync has an official Grunt plugin and works seamlessly with Gulp. It’s also got an API for those looking to develop other integrations. When you’re dealing with today’s web, BrowserSync is seriously useful. [For more on BrowserSync, check out our tutorial in the next issue].

08. Skitch

Skitch by Evernote is a fantastic screen capture and annotation tool that found its way into my workflow in a matter of minutes

Skitch is a native app for Mac, iPad and iPhone (sorry, all you non-Mac users), which makes grabbing and annotating screenshots amazingly simple. It’s built by the folks at Evernote, but you can use it even if you’re not an Evernote user.

I’ve included it here because of how easily it allows you to provide feedback on a prototype. Plus, you can drag straight from Skitch into GitHub issues to attach your annotated screen capture. We use Skitch relentlessly when we’re testing. Providing helpful, detailed feedback has never been easier.

09. RICG

The Responsive Issues Community Group (formerly, the Responsive Images Community Group) has forged a new path in bringing our community together to create change

My final tool is not actually a tool at all. The folks who participate in this group are largely responsible for the new picture element, along with the new srcset and sizes syntax. What has me really excited is that the next project they intend to tackle is a solution for element queries.

Element queries will enable us to selectively apply styles based on the width of the containing element, not just the width of the viewport or device. Combine this with the ideas of pattern-driven design and new specs like Web Components and the future is looking pretty bright!

Wrapping it up

It’s a great time to be working on the web. The landscape is challenging, but we’re finding good solutions. Our tooling is evolving as fast as the problems we’re trying to solve.

In the end, though, tools alone are not enough. Remember that most projects fail not for technical reasons, but because of the people involved. Now, go make the web!

This article originally appeared in issue 268 of net magazine.


Top 25 Best Linux Performance Monitoring and Debugging Tools

I’ve compiled 25 performance monitoring and debugging tools that will be helpful when you are working in a Linux environment. This list is not comprehensive or authoritative by any means.

However, this list has enough tools for you to play around with and pick the one that suits your specific debugging and monitoring scenario.

1. SAR

Using the sar utility you can do two things: 1) monitor system performance in real time (CPU, memory, I/O, etc.) 2) collect performance data in the background on an ongoing basis, and analyze the historical data to identify bottlenecks.

Sar is part of the sysstat package. The following are some of the things you can do using the sar utility.

  • Collective CPU usage
  • Individual CPU statistics
  • Memory used and available
  • Swap space used and available
  • Overall I/O activities of the system
  • Individual device I/O activities
  • Context switch statistics
  • Run queue and load average data
  • Network statistics
  • Report sar data from a specific time
  • and a lot more.
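
The background collection mentioned above is usually driven by cron entries that the sysstat package installs automatically; the exact binary paths and schedules vary by distribution. A sketch of what they typically look like (RHEL-style paths shown):

```shell
# /etc/cron.d/sysstat -- typical entries installed by the sysstat package
# (binary paths vary by distribution):

# Collect system activity data every 10 minutes
*/10 * * * * root /usr/lib64/sa/sa1 1 1

# Generate a daily summary of the collected data at 23:53
53 23 * * * root /usr/lib64/sa/sa2 -A
```

Once data is being collected, historical reports can be read back from the daily data files, for example `sar -u -f /var/log/sa/sa26` for CPU usage on the 26th of the month.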

The following sar command displays the system CPU statistics 3 times (with a 1 second interval):

$ sar -u 1 3

The following “sar -b” command reports I/O statistics. “1 3” indicates that sar -b will be executed every 1 second, for a total of 3 times.

$ sar -b 1 3
Linux 2.6.18-194.el5PAE (dev-db)        03/26/2011      _i686_  (8 CPU)

01:56:28 PM       tps      rtps      wtps   bread/s   bwrtn/s
01:56:29 PM    346.00    264.00     82.00   2208.00    768.00
01:56:30 PM    100.00     36.00     64.00    304.00    816.00
01:56:31 PM    282.83     32.32    250.51    258.59   2537.37
Average:       242.81    111.04    131.77    925.75   1369.90




2. Tcpdump

tcpdump is a network packet analyzer. Using tcpdump you can capture packets and analyze them for any performance bottlenecks.

The following tcpdump command example displays captured packets in ASCII.

$ tcpdump -A -i eth0
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes
14:34:50.913995 IP valh4.lell.net.ssh > yy.domain.innetbcp.net.11006: P 1457239478:1457239594(116) ack 1561461262 win 63652
14:34:51.423640 IP valh4.lell.net.ssh > yy.domain.innetbcp.net.11006: P 116:232(116) ack 1 win 63652

Using tcpdump you can capture packets based on several custom conditions. For example, capture packets that flow through a particular port, capture tcp communication between two specific hosts, capture packets that belongs to a specific protocol type, etc.

3. Nagios

Nagios is an open source monitoring solution that can monitor pretty much anything in your IT infrastructure. For example, when a server goes down it can send a notification to your sysadmin team, when a database goes down it can page your DBA team, and when a web server goes down it can notify the appropriate team.

You can also set warning and critical threshold levels for various services to help you proactively address issues. For example, it can notify the sysadmin team when a disk partition becomes 80% full, which gives the team enough time to add more space before the issue becomes critical.

Nagios also has a very good user interface from where you can monitor the health of your overall IT infrastructure.

The following are some of the things you can monitor using Nagios:

  • Any hardware (servers, switches, routers, etc)
  • Linux servers and Windows servers
  • Databases (Oracle, MySQL, PostgreSQL, etc)
  • Various services running on your OS (sendmail, nis, nfs, ldap, etc)
  • Web servers
  • Your custom application
  • etc.

4. Iostat

iostat reports CPU, disk I/O, and NFS statistics. The following are some iostat command examples.

iostat without any arguments displays information about CPU usage, and I/O statistics for all the partitions on the system, as shown below.

$ iostat
Linux 2.6.32-100.28.5.el6.x86_64 (dev-db)       07/09/2011

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           5.68    0.00    0.52    2.03    0.00   91.76

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             194.72      1096.66      1598.70 2719068704 3963827344
sda1            178.20       773.45      1329.09 1917686794 3295354888
sda2             16.51       323.19       269.61  801326686  668472456
sdb             371.31       945.97      1073.33 2345452365 2661206408
sdb1            371.31       945.95      1073.33 2345396901 2661206408
sdc             408.03       207.05       972.42  513364213 2411023092
sdc1            408.03       207.03       972.42  513308749 2411023092

By default iostat displays I/O data for all the disks available in the system. To view statistics for a specific device (for example, /dev/sda), use the -p option as shown below.

$ iostat -p sda
Linux 2.6.32-100.28.5.el6.x86_64 (dev-db)       07/09/2011

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           5.68    0.00    0.52    2.03    0.00   91.76

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             194.69      1096.51      1598.48 2719069928 3963829584
sda2            336.38        27.17        54.00   67365064  133905080
sda1            821.89         0.69       243.53    1720833  603892838

5. Mpstat

mpstat reports processor statistics. The following are some mpstat command examples.

Option -A displays all the information the mpstat command can display, as shown below. It is equivalent to the “mpstat -I ALL -u -P ALL” command.

$ mpstat -A
Linux 2.6.32-100.28.5.el6.x86_64 (dev-db)       07/09/2011      _x86_64_        (4 CPU)

10:26:34 PM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
10:26:34 PM  all    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00   99.99
10:26:34 PM    0    0.01    0.00    0.01    0.01    0.00    0.00    0.00    0.00   99.98
10:26:34 PM    1    0.00    0.00    0.01    0.00    0.00    0.00    0.00    0.00   99.98
10:26:34 PM    2    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
10:26:34 PM    3    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00

10:26:34 PM  CPU    intr/s
10:26:34 PM  all     36.51
10:26:34 PM    0      0.00
10:26:34 PM    1      0.00
10:26:34 PM    2      0.04
10:26:34 PM    3      0.00

10:26:34 PM  CPU     0/s     1/s     8/s     9/s    12/s    14/s    15/s    16/s    19/s    20/s    21/s    33/s   NMI/s   LOC/s   SPU/s   PMI/s   PND/s   RES/s   CAL/s   TLB/s   TRM/s   THR/s   MCE/s   MCP/s   ERR/s   MIS/s
10:26:34 PM    0    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    7.47    0.00    0.00    0.00    0.00    0.02    0.00    0.00    0.00    0.00    0.00    0.00    0.00
10:26:34 PM    1    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    4.90    0.00    0.00    0.00    0.00    0.03    0.00    0.00    0.00    0.00    0.00    0.00    0.00
10:26:34 PM    2    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.04    0.00    0.00    0.00    0.00    0.00    3.32    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00
10:26:34 PM    3    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.

mpstat option -P ALL displays all the individual CPUs (or cores) along with their statistics, as shown below.

$ mpstat -P ALL
Linux 2.6.32-100.28.5.el6.x86_64 (dev-db)       07/09/2011      _x86_64_        (4 CPU)

10:28:04 PM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
10:28:04 PM  all    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00   99.99
10:28:04 PM    0    0.01    0.00    0.01    0.01    0.00    0.00    0.00    0.00   99.98
10:28:04 PM    1    0.00    0.00    0.01    0.00    0.00    0.00    0.00    0.00   99.98
10:28:04 PM    2    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
10:28:04 PM    3    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00

6. Vmstat

vmstat reports virtual memory statistics. The following are some vmstat command examples.

By default, vmstat displays the memory usage (including swap) as shown below.

$ vmstat
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 0  0 305416 260688  29160 2356920    2    2     4     1    0    0  6  1 92  2  0

To execute vmstat every 2 seconds, 10 times, do the following. After executing 10 times, it will stop automatically.

$ vmstat 2 10
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0      0 537144 182736 6789320    0    0     0     0    1    1  0  0 100  0  0
 0  0      0 537004 182736 6789320    0    0     0     0   50   32  0  0 100  0  0

iostat and vmstat are part of the sysstat package; you should install the sysstat package to get iostat and vmstat working.

7. PS Command

A process is a running instance of a program. Linux is a multitasking operating system, which means that more than one process can be active at once. Use the ps command to find out what processes are running on your system.

The ps command also gives you a lot of additional information about running processes, which will help you identify any performance bottlenecks on your system.

The following are a few ps command examples.

Use the -u option to display the processes that belong to a specific username. When you have multiple usernames, separate them using a comma. The example below displays all the processes owned by user wwwrun or postfix.

$ ps -f -u wwwrun,postfix
postfix   7457  7435  0 Mar09 ?        00:00:00 qmgr -l -t fifo -u
wwwrun    7495  7491  0 Mar09 ?        00:00:00 /usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf
wwwrun    7496  7491  0 Mar09 ?        00:00:00 /usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf
wwwrun    7497  7491  0 Mar09 ?        00:00:00 /usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf
wwwrun    7498  7491  0 Mar09 ?        00:00:00 /usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf
wwwrun    7499  7491  0 Mar09 ?        00:00:00 /usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf
wwwrun   10078  7491  0 Mar09 ?        00:00:00 /usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf
wwwrun   10082  7491  0 Mar09 ?        00:00:00 /usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf
postfix  15677  7435  0 22:23 ?        00:00:00 pickup -l -t fifo -u

The example below displays the process IDs and commands in a hierarchy. --forest is an argument to the ps command which displays ASCII art of the process tree. From this tree, we can identify which is the parent process and which child processes it forked, recursively.

$ ps -e -o pid,args --forest
  468  \_ sshd: root@pts/7
  514  |   \_ -bash
17484  \_ sshd: root@pts/11
17513  |   \_ -bash
24004  |       \_ vi ./790310__11117/journal
15513  \_ sshd: root@pts/1
15522  |   \_ -bash
 4280  \_ sshd: root@pts/5
 4302  |   \_ -bash
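
For performance hunting specifically, it often helps to sort the process list by resource usage. The --sort option (supported by the procps version of ps) makes this a one-liner; for example, to spot the biggest memory consumers:

```shell
# Show the five processes using the most memory, highest first.
ps aux --sort=-%mem | head -6
```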

8. Free

The free command displays information about the physical (RAM) and swap memory of your system.

In the example below, the total physical memory on this system is 1GB. The values displayed are in KB.

# free
       total   used    free   shared  buffers  cached
Mem: 1034624   1006696 27928  0       174136   615892
-/+ buffers/cache:     216668      817956
Swap:    2031608       0    2031608

The following example will display the total memory on your system including RAM and Swap.

In the following command:

  • option m displays the values in MB
  • option t displays the “Total” line, which is the sum of the physical and swap memory values
  • option o hides the buffers/cache line shown in the previous example

# free -mto
             total       used       free     shared    buffers     cached
Mem:          1010        983         27          0        170        601
Swap:         1983          0       1983
Total:        2994        983       2011
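
Under the hood, free reads its numbers from /proc/meminfo, so if a script only needs one or two values you can grab them directly. A minimal sketch:

```shell
# Pull the total and free physical memory, plus total swap (values in kB),
# straight from /proc/meminfo -- the same source free itself parses.
grep -E '^(MemTotal|MemFree|SwapTotal):' /proc/meminfo
```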

9. TOP

The top command displays all the running processes in the system, ordered by certain columns, and refreshes the information in real time.

You can kill a process without exiting top. Once you’ve located a process that needs to be killed, press “k”, which will ask for the process ID and the signal to send. If you have the privilege to kill that particular PID, it will be killed successfully.

PID to kill: 1309
Kill PID 1309 with signal [15]:
 1309 geek   23   0 2483m 1.7g  27m S    0 21.8  45:31.32 gagent
 1882 geek   25   0 2485m 1.7g  26m S    0 21.7  22:38.97 gagent
 5136 root    16   0 38040  14m 9836 S    0  0.2   0:00.39 nautilus

Use top -u to display only a specific user’s processes in the top command output.

$ top -u geek

While top is running, press u, which will ask for a username as shown below.

Which user (blank for all): geek
 1309 geek   23   0 2483m 1.7g  27m S    0 21.8  45:31.32 gagent
 1882 geek   25   0 2485m 1.7g  26m S    0 21.7  22:38.97 gagent
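
top is also scriptable: batch mode (-b) with a fixed number of iterations (-n) dumps plain-text snapshots to stdout, which is useful for logging or cron jobs. For example:

```shell
# One non-interactive snapshot of top; head keeps just the summary
# header and the first few processes.
top -b -n 1 | head -15
```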

10. Pmap

The pmap command displays the memory map of a given process. You need to pass the PID as an argument to the pmap command.

The following example displays the memory map of the current bash shell. In this example, 5732 is the PID of the bash shell.

$ pmap 5732
5732:   -bash
00393000    104K r-x--  /lib/ld-2.5.so
003b1000   1272K r-x--  /lib/libc-2.5.so
00520000      8K r-x--  /lib/libdl-2.5.so
0053f000     12K r-x--  /lib/libtermcap.so.2.0.8
0084d000     76K r-x--  /lib/libnsl-2.5.so
00c57000     32K r-x--  /lib/libnss_nis-2.5.so
00c8d000     36K r-x--  /lib/libnss_files-2.5.so
b7d6c000   2048K r----  /usr/lib/locale/locale-archive
bfd10000     84K rw---    [ stack ]
 total     4796K

pmap -x gives some additional information about the memory maps.

$  pmap -x 5732
5732:   -bash
Address   Kbytes     RSS    Anon  Locked Mode   Mapping
00393000     104       -       -       - r-x--  ld-2.5.so
003b1000    1272       -       -       - r-x--  libc-2.5.so
00520000       8       -       -       - r-x--  libdl-2.5.so
0053f000      12       -       -       - r-x--  libtermcap.so.2.0.8
0084d000      76       -       -       - r-x--  libnsl-2.5.so
00c57000      32       -       -       - r-x--  libnss_nis-2.5.so
00c8d000      36       -       -       - r-x--  libnss_files-2.5.so
b7d6c000    2048       -       -       - r----  locale-archive
bfd10000      84       -       -       - rw---    [ stack ]
-------- ------- ------- ------- -------
total kB    4796       -       -       -

To display the device information of the process maps, use ‘pmap -d pid’.
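
pmap gets all of this data from /proc/PID/maps, which you can read directly when pmap isn’t installed. A small sketch inspecting the current shell’s own map:

```shell
# First few memory mappings of the current shell ($$) ...
head -3 /proc/$$/maps

# ... and the distinct files backing its mappings (6th field, if present).
awk 'NF >= 6 { print $6 }' /proc/$$/maps | sort -u
```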

11. Netstat

The netstat command displays various network-related information such as network connections, routing tables, interface statistics, masquerade connections, multicast memberships, etc.

The following are some netstat command examples.

List all ports (both listening and non-listening) using netstat -a as shown below.

# netstat -a | more
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 localhost:30037         *:*                     LISTEN
udp        0      0 *:bootpc                *:*                                

Active UNIX domain sockets (servers and established)
Proto RefCnt Flags       Type       State         I-Node   Path
unix  2      [ ACC ]     STREAM     LISTENING     6135     /tmp/.X11-unix/X0
unix  2      [ ACC ]     STREAM     LISTENING     5140     /var/run/acpid.socket

Use the following netstat command to find out on which port a program is running.

# netstat -ap | grep ssh
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
tcp        1      0 dev-db:ssh         CLOSE_WAIT  -
tcp        1      0 dev-db:ssh         CLOSE_WAIT  -

Use the following netstat command to check which connections are using a particular port (add the -p option to also see the owning process).

# netstat -an | grep ':80'
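
netstat itself ships in the net-tools package, which some minimal installs omit. The raw data behind it lives in /proc/net/tcp, where local addresses are stored as HEXIP:HEXPORT and state 0A means LISTEN; a rough sketch that decodes the listening TCP ports:

```shell
# Print the port number of every listening TCP socket by parsing
# /proc/net/tcp directly (field 2 = local address, field 4 = state).
awk 'NR > 1 && $4 == "0A" { split($2, a, ":"); print a[2] }' /proc/net/tcp |
while read -r hexport; do
    printf 'listening on TCP port %d\n' "0x$hexport"
done
```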

12. IPTraf

IPTraf is an IP network monitoring tool. The following are some of the main features of IPTraf:

  • It is a console-based (text-based) utility.
  • It displays the IP traffic crossing your network, including TCP flags, packet and byte counts, ICMP details, OSPF packet types, etc.
  • Displays extended interface statistics (including IP, TCP, UDP, ICMP, packet size and count, checksum errors, etc.)
  • LAN module discovers hosts automatically and displays their activities
  • Protocol display filters to view selective protocol traffic
  • Advanced logging features
  • Apart from the Ethernet interface, it also supports FDDI, ISDN, SLIP, PPP, and loopback
  • You can also run the utility in full-screen mode. It also has a text-based menu.

13. Strace

Strace is used for debugging and troubleshooting the execution of an executable in a Linux environment. It displays the system calls made by a process and the signals received by it.

Strace monitors the system calls and signals of a specific program. It is helpful when you do not have the source code and would like to debug the execution of a program. strace gives you the execution sequence of a binary from start to end.

Trace Specific System Calls in an Executable Using Option -e

By default, strace displays all system calls for the given executable. The following example shows the output of strace for the Linux ls command.

$ strace ls
execve("/bin/ls", ["ls"], [/* 21 vars */]) = 0
brk(0)                                  = 0x8c31000
access("/etc/ld.so.nohwcap", F_OK)      = -1 ENOENT (No such file or directory)
mmap2(NULL, 8192, PROT_READ, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb78c7000
access("/etc/ld.so.preload", R_OK)      = -1 ENOENT (No such file or directory)
open("/etc/ld.so.cache", O_RDONLY)      = 3
fstat64(3, {st_mode=S_IFREG|0644, st_size=65354, ...}) = 0

To display only a specific system call, use the strace -e option as shown below.

$ strace -e open ls
open("/etc/ld.so.cache", O_RDONLY)      = 3
open("/lib/libselinux.so.1", O_RDONLY)  = 3
open("/lib/librt.so.1", O_RDONLY)       = 3
open("/lib/libacl.so.1", O_RDONLY)      = 3
open("/lib/libc.so.6", O_RDONLY)        = 3
open("/lib/libdl.so.2", O_RDONLY)       = 3
open("/lib/libpthread.so.0", O_RDONLY)  = 3
open("/lib/libattr.so.1", O_RDONLY)     = 3
open("/proc/filesystems", O_RDONLY|O_LARGEFILE) = 3
open("/usr/lib/locale/locale-archive", O_RDONLY|O_LARGEFILE) = 3

14. Lsof

Lsof stands for “list open files”; it lists all the open files in the system. Open files include network connections, devices and directories. The output of the lsof command has the following columns:

  • COMMAND – process name
  • PID – process ID
  • USER – username
  • FD – file descriptor
  • TYPE – node type of the file
  • DEVICE – device number
  • SIZE – file size
  • NODE – node number
  • NAME – full path of the file name

To view all open files of the system, execute the lsof command without any parameter as shown below.

# lsof | more
COMMAND     PID       USER   FD      TYPE     DEVICE      SIZE       NODE NAME
init          1       root  cwd       DIR        8,1      4096          2 /
init          1       root  rtd       DIR        8,1      4096          2 /
init          1       root  txt       REG        8,1     32684     983101 /sbin/init
init          1       root  mem       REG        8,1    106397     166798 /lib/ld-2.3.4.so
init          1       root  mem       REG        8,1   1454802     166799 /lib/tls/libc-2.3.4.so
init          1       root  mem       REG        8,1     53736     163964 /lib/libsepol.so.1
init          1       root  mem       REG        8,1     56328     166811 /lib/libselinux.so.1
init          1       root   10u     FIFO       0,13                  972 /dev/initctl
migration     2       root  cwd       DIR        8,1      4096          2 /

To view the files opened by a specific user, use the lsof -u option as shown below.

# lsof -u ramesh
vi      7190 ramesh  txt    REG        8,1   474608   475196 /bin/vi
sshd    7163 ramesh    3u  IPv6   15088263               TCP dev-db:ssh->abc-12-12-12-12.

To list users of a particular file, use lsof as shown below. In this example, it displays all users who are currently using vi.

# lsof /bin/vi
vi      7258  root   txt    REG    8,1 474608 475196 /bin/vi
vi      7300  ramesh txt    REG    8,1 474608 475196 /bin/vi

15. Ntop

Ntop is just like top, but for network traffic: ntop is a network traffic monitor that displays the network usage.

You can also access ntop from a browser to get the traffic information and network status.

The following are some of the key features of ntop:

  • Display network traffic broken down by protocols
  • Sort the network traffic output based on several criteria
  • Display network traffic statistics
  • Ability to store the network traffic statistics using RRD
  • Identify the users and the host OS
  • Ability to analyze and display IP traffic
  • Ability to work as NetFlow/sFlow collector for routers and switches
  • Displays network traffic statistics similar to RMON
  • Works on Linux, MacOS and Windows

16. GkrellM

GKrellM stands for GNU Krell Monitors, or GTK Krell Meters. It is a GTK+ toolkit based monitoring program that monitors various system resources. The UI is stackable, i.e. you can add as many monitoring objects as you want, one on top of another. Just like any other desktop monitoring tool, it can monitor CPU, memory, file system, network usage, etc. Using plugins, you can also monitor external applications.

17. w and uptime

While monitoring system performance, the w command will help you find out who is logged on to the system.

$ w
09:35:06 up 21 days, 23:28,  2 users,  load average: 0.00, 0.00, 0.00
USER     TTY      FROM          LOGIN@   IDLE   JCPU   PCPU WHAT
root     tty1     :0            24Oct11  21days 1:05   1:05 /usr/bin/Xorg :0 -nr -verbose
ramesh   pts/0  Mon14    0.00s  15.55s 0.26s sshd: localuser [priv]
john     pts/0  Mon07    0.00s  19.05s 0.20s sshd: localuser [priv]
jason    pts/0  Mon07    0.00s  21.15s 0.16s sshd: localuser [priv]

For each user who is logged on, it displays the following info:

  • Username
  • tty info
  • Remote host ip-address
  • Login time of the user
  • How long the user has been idle
  • JCPU and PCPU
  • The command of the current process the user is executing

Line 1 of the w command output is similar to the uptime command output. It displays the following:

  • Current time
  • How long the system has been up and running
  • Total number of users who are currently logged on to the system
  • Load average for the last 1, 5 and 15 minutes

If you want only the uptime information, use the uptime command.

$ uptime
 09:35:02 up 106 days, 28 min,  2 users,  load average: 0.08, 0.11, 0.05

Please note that both the w and uptime commands get their information from the /var/run/utmp data file.
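The load average and uptime figures themselves come from the kernel; on Linux you can read the raw values directly from /proc, as this quick sketch shows:

```shell
# Raw load averages (1, 5 and 15 minutes) straight from the kernel
cat /proc/loadavg
# Uptime in whole days, computed from the first field of /proc/uptime
awk '{printf "up %d days\n", $1/86400}' /proc/uptime
```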

18. /proc

/proc is a virtual file system. For example, if you do ls -l /proc/stat, you’ll notice that it has a size of 0 bytes, but if you do “cat /proc/stat”, you’ll see some content inside the file.

Do an ls -l /proc, and you’ll see a lot of directories with just numbers. These numbers represent process IDs; the files inside each numbered directory correspond to the process with that particular PID.

The following are the important files located under each numbered directory (for each process):

  • cmdline – command line used to start the process.
  • environ – environment variables.
  • fd – Contains the file descriptors, each linked to the appropriate file.
  • limits – Contains information about the resource limits of the process.
  • mounts – mount related information

The following are the important links under each numbered directory (for each process):

  • cwd – Link to current working directory of the process.
  • exe – Link to executable of the process.
  • root – Link to the root directory of the process.
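A quick way to inspect these entries for a live process is to use the shell’s own PID (the /proc/self shortcut works too):

```shell
# cmdline is NUL-separated; translate NULs to spaces for display
tr '\0' ' ' < /proc/$$/cmdline; echo
# cwd and exe are symlinks; ls -l shows where they point
ls -l /proc/$$/cwd /proc/$$/exe
# Every entry under fd/ is an open file descriptor of the process
ls /proc/$$/fd
```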

19. KDE System Guard

This is also called KSysGuard. On Linux desktops that run KDE, you can use this tool to monitor system resources. Apart from monitoring the local system, it can also monitor remote systems.

If you are running KDE desktop, go to Applications -> System -> System Monitor, which will launch the KSysGuard. You can also type ksysguard from the command line to launch it.

This tool displays the following two tabs:

  • Process Table – Displays all active processes. You can sort, kill, or change the priority of processes from here
  • System Load – Displays graphs for CPU, memory, and network usage. These graphs can be customized by right-clicking on any of them.

To connect to a remote host and monitor it, click on File menu -> Monitor Remote Machine -> specify the ip-address of the host, the connection method (for example, ssh). This will ask you for the username/password on the remote machine. Once connected, this will display the system usage of the remote machine in the Process Table and System Load tabs.

20. GNOME System Monitor

On Linux desktops that run GNOME, you can use this tool to monitor processes, system resources, and file systems from a graphical interface. Apart from monitoring, you can also use this UI tool to kill a process or change the priority of a process.

If you are running GNOME desktop, go to System -> Administration -> System Monitor, which will launch the GNOME System Monitor. You can also type gnome-system-monitor from the command line to launch it.

This tool has the following four tabs:

  • System – Displays the system information including Linux distribution version, system resources, and hardware information.
  • Processes – Displays all active processes that can be sorted based on various fields
  • Resources – Displays CPU, memory and network usages
  • File Systems – Displays information about currently mounted file systems

21. Conky

Conky is a system monitor for the X Window System. Conky displays information in the UI using what it calls objects. By default more than 250 objects are bundled with Conky, displaying various monitoring information (CPU, memory, network, disk, etc.). It also supports IMAP, POP3, and several audio players.

You can monitor and display any external application by creating your own objects using scripting. The monitoring information can be displayed in various formats: text, graphs, progress bars, etc. This utility is extremely configurable.

22. Cacti

Cacti is a PHP based UI frontend for the RRDTool. Cacti stores the data required to generate the graph in a MySQL database.

The following are some high-level features of Cacti:

  • Ability to perform the data gathering and store it in MySQL database (or round robin archives)
  • Several advanced graphing features are available (grouping of GPRINT graph items, auto-padding for graphs, manipulating graph data using CDEF math functions; all RRDTool graph items are supported)
  • The data source can gather local or remote data for the graph
  • Ability to fully customize Round robin archive (RRA) settings
  • User can define custom scripts to gather data
  • SNMP support (php-snmp, ucd-snmp, or net-snmp) for data gathering
  • Built-in poller helps to execute custom scripts, get SNMP data, update RRD files, etc.
  • Highly flexible graph template features
  • User friendly and customizable graph display options
  • Create different users with various permission sets to access the cacti frontend
  • Granular permission levels can be set for the individual user
  • and lots more…

23. Vnstat

vnstat is a command line utility that displays and logs the network traffic of the interfaces on your system. It depends on the network statistics provided by the kernel, so vnstat doesn’t add any additional load to your system for monitoring and logging the network traffic.

vnstat without any argument will give you a quick summary with the following info:

  • The last time the vnStat database located under /var/lib/vnstat/ was updated
  • From when it started collecting the statistics for a specific interface
  • The network statistic data (bytes transmitted, bytes received) for the last two months, and last two days.
# vnstat
Database updated: Sat Oct 15 11:54:00 2011

   eth0 since 10/01/11

          rx:  12.89 MiB      tx:  6.94 MiB      total:  19.82 MiB

                     rx      |     tx      |    total    |   avg. rate
       Sep '11     12.90 MiB |    6.90 MiB |   19.81 MiB |    0.14 kbit/s
       Oct '11     12.89 MiB |    6.94 MiB |   19.82 MiB |    0.15 kbit/s
     estimated        29 MiB |      14 MiB |      43 MiB |

                     rx      |     tx      |    total    |   avg. rate
     yesterday      4.30 MiB |    2.42 MiB |    6.72 MiB |    0.64 kbit/s
         today      2.03 MiB |    1.07 MiB |    3.10 MiB |    0.59 kbit/s
     estimated         4 MiB |       2 MiB |       6 MiB |

Use “vnstat -t” or “vnstat --top10” to display the all-time top 10 traffic days.

$ vnstat --top10

 eth0  /  top 10

    #      day          rx      |     tx      |    total    |   avg. rate
    1   10/12/11       4.30 MiB |    2.42 MiB |    6.72 MiB |    0.64 kbit/s
    2   10/11/11       4.07 MiB |    2.17 MiB |    6.24 MiB |    0.59 kbit/s
    3   10/10/11       2.48 MiB |    1.28 MiB |    3.76 MiB |    0.36 kbit/s

24. Htop

htop is an ncurses-based process viewer. It is similar to top, but more flexible and user friendly. You can interact with htop using the mouse. You can scroll vertically to view the full process list, and horizontally to view the full command line of each process.

htop output consists of three sections: 1) header, 2) body, and 3) footer.

The header displays the following three bars and a few vital pieces of system information. You can change any of these from the htop setup menu.

  • CPU Usage: Displays the % used as text at the end of the bar. The bar itself shows different colors: low-priority processes in blue, normal in green, kernel in red.
  • Memory Usage
  • Swap Usage

The body displays the list of processes sorted by %CPU usage. Use the arrow keys and the page up/page down keys to scroll through the processes.

The footer displays the htop menu commands.

25. Socket Statistics – SS

ss stands for socket statistics. It displays information similar to the netstat command.

To display all listening sockets, do ss -l as shown below.

$ ss -l
Recv-Q Send-Q   Local Address:Port     Peer Address:Port
0      100      :::8009                :::*
0      128      :::sunrpc              :::*
0      100      :::webcache            :::*
0      128      :::ssh                 :::*
0      64       :::nrpe                :::*

The following displays only the established connection.

$ ss -o state established
Recv-Q Send-Q   Local Address:Port   Peer Address:Port
0      52    timer:(on,414ms,0)

The following displays socket summary statistics, i.e. the total number of sockets broken down by type.

$ ss -s
Total: 688 (kernel 721)
TCP:   16 (estab 1, closed 0, orphaned 0, synrecv 0, timewait 0/0), ports 11

Transport Total     IP        IPv6
*         721       -         -
RAW       0         0         0
UDP       13        10        3
TCP       16        7         9
INET      29        17        12
FRAG      0         0         0

What tool do you use to monitor performance on your Linux environment?

Source http://www.thegeekstuff.com


10 Web Design Tools you can’t live without

This year has been a really exciting one for new design tools entering the market. Photoshop has become something of a dirty word, and although it’s still one of my weapons of choice, internet-makers are fighting back with a plethora of tools that aim to make your life as a designer that little bit better.

Static mockups are becoming less useful and the lines between designer and developer are becoming increasingly blurred. As we work more collaboratively with one another the tools we use are having to change.

The era of the HiDPI screen is here and dominating our devices, and designers are having to find new processes that help them produce designs that are more accessible across these devices.

Here, I’ve picked out the top 10 tools that I believe are changing the way we think about design, and are key to helping us establish these new processes…

01. Pixate

Pixate is an app that has been created for making intuitive, interactive prototypes for iOS and Android. What sets Pixate apart from other tools I’ve used is its drag-and-drop animation and interaction panel.

To make an element interactive, all you need to do is simply select the interaction you’d like to use, be that double-tap, drag, tap or one of the other native interactions it provides.

Drag this onto your element, set some variables, and voilà – using the Pixate iOS or Android app gives you a real-time working prototype on your device.

The app allows you to build up an asset library really quickly. To start creating your prototype, just drag and drop your assets into the app and you’ll find them in your layers palette ready to use.

What I really enjoyed about Pixate is how simple it is. Every part of the app is really clearly and thoughtfully designed which makes it easy to understand.

And if you’re not quite sure what something does, the live feed to your device allows you to experiment and see how it affects the design in real time.

02. Affinity

Affinity by Serif has been dubbed the ‘Photoshop killer’ by some, and it’s easy to see why. My first impressions are that the app is incredibly well designed and that it feels like it’s been made to be a dedicated web and graphic design tool.

There were a few features that I really enjoyed, including adjustable nondestructive layers – which essentially means you can adjust images or vectors without damaging them.

The 1,000,000 percent zoom was just bliss (very often Photoshop’s 32,000 feels like it’s just not enough). This is especially useful when working with vector art, as you can really get in close. The undo and history features are also really handy – Affinity allows you to go back over 8,000 steps!

When it comes to designing, the UI feels familiar. When moving from Photoshop, everyone seems to want to start over, which can pose a real challenge.

What Affinity has done is kept the layout familiar while tightening everything up and hiding distractions. I was easily able to jump straight in and get designing.

Overall, Affinity feels like it could be a real competitor to Photoshop, Illustrator and Sketch, and at just £34.99 it’s a real bargain!

03. Avocode

Avocode makes it extremely easy for a frontend developer to code websites or apps from Photoshop or Sketch designs. It’s built by the same team that brought us CSS Hat and PNG Hat, so it’s not surprising they’ve taken the exporting process one step further here.

Although previous apps have allowed you to export assets, what makes Avocode really special is that you can use its Photoshop plugin to sync your PSD into Avocode with just one click.

Avocode quickly and automatically analyses your PSD or Sketch file and brings everything into a beautifully designed UI. You then have full control over how you export assets, including SVG exporting as standard.

You can also click elements in the design, and copy and paste the code into a text editor of your choice. It looks like there’s going to be Atom integration soon, which will make this process even slicker.

“It gives users everything they need for coding – a preview of the design, and access to all layers and export assets,” says Avocode co-founder Vu Hoang Anh. “The best thing is that developers won’t need Photoshop or Sketch at all. The current workflow really sucks and that’s why we created Avocode.”

Overall, I’m not 100 percent sure any app can ever replicate a developer. However, I’d happily use this to turn PSDs and Sketch files into interactive designs that could form the foundations for the website build.

04. Antetype

Antetype is a new tool for creating responsive UIs for apps and websites. It feels like it’s been built to do just one job really well: to create high-fidelity prototypes, not production files. This is actually a good thing – the team is focused on exactly what it is creating, and it’s not trying to make an app that replaces developers.

“Frustrated using photo editing tools for UX design, we wanted a tool designed for UX design with a better understanding of content and layout,” says Tim Klauck, UX Director at Antetype. “Auto-layout and widgets allow us to rapidly iterate our designs while maintaining consistency.”

On download you’re given a fairly basic widget library, which you can use to quickly create prototypes and start designing. Antetype provides a library of devices and OS designs including iOS, Android and Windows to start with. There is also an active community section on the site, where you can download UI kits from other Antetype users.

The thing that sets Antetype apart from other prototyping tools I’ve used is that you can actually create responsive prototypes and add some neat interactions – ideal for presenting ideas to a client.

It feels a bit too ‘drag-and-drop’ for my liking, and the UI isn’t stunning but it is very efficient and quick to learn. Spend some time with it and you can create very effective prototypes, very rapidly.

05. Sketch

Sketch has gained a massive following since it launched in 2009. The speed at which Bohemian Coding (the creator of Sketch) is moving is very impressive. The latest version includes great new features such as improved exporting, symbols and simplified vector modes.

“When we set out to build Sketch, we envisioned an app for the modern digital designer,” says Pieter Omvlee, founder of Bohemian Coding.

“We have tried to do that with key improvements to basic functionality and radical new features. We’ve been humbled by the enthusiasm with which people have started using Sketch and the amazing work they have created already.”

I had used the last version of Sketch for a limited time but found it to be buggy and not to my liking. However, after using Sketch 3 for a few hours I was pleased to see those bugs had been ironed out and the app overall much improved. It’s great to see features such as the new grid tool, which makes it easy to create grids for your designs.

I also really like how Sketch has incorporated CSS logic into the app. This makes converting your designs into CSS much easier, as you have to use CSS logic when applying styles. Another feature which is really handy for speeding up the design/development crossover is Automatic Slicing.

Without having to manually add slices, Sketch can create assets using one-click export, which will be exported at 0.5x, 1x, 2x and 3x and in various formats such as PNG, JPG and TIFF. I’m looking forward to seeing what Sketch does next.

06. Form

Form is a prototyping tool like no other I’ve tried. It’s not a typical design tool in that there are no tools or layers palettes. Using the app feels like a mix of design and code.

While you can’t actually create graphics in the app, you can insert them and use what Form calls ‘patches’ to add gestures and interactions. The Mac app requires you to also use the iOS app so you can view your prototype in real time and interact with it.

“Form is an app design and prototyping tool with the goal of producing designs that are closer to what you get in production,” explains Relative Wave creative coder Max Weisel. “We want designers to work directly on the production side of an app, and at the same time free up engineers to focus on more complex problems.”

There are some great tutorials on how to use Form, but the process is rather complex if, like me, you are used to creating visuals in Photoshop. Moving an image to the centre of your device, for example, is achieved using Superview variables and Match Patches.

Once in place you use maths to divide the width and height and connect them to the X and Y positions in Image View. Group those together, rename the variables and adjust the X and Y anchor points. I found this process fairly complex.

However, once you get your head around the processes, you can create stunning prototypes. Having access to the device’s camera and other sensors means the prototypes you create are just as powerful as the coded app would be.

07. UXPin

UXPin is a wireframing and prototyping tool that has been designed to not be limited to just one thing. You can use it to create rapid low-fidelity wireframes just as easily as hi-fidelity interactive prototypes. Using UXPin is a really pleasant experience – you can tell it has been built by a UX Team!

I love that the team allows so much flexibility within the app – you can opt to start with an empty iPhone app or responsive website template, where you design everything in the app.

Or, if you have a design you want to prototype and add interactivity, there are tools to import a project from Photoshop or Sketch.

If you’re starting from scratch then you can use one of the many element libraries, which include UX patterns from frameworks such as Bootstrap, Foundation, iOS and many more. This is particularly handy if you want to make a rapid wireframe.

Making high-fidelity prototypes with interactions is also really easy and super-smart. Select an element such as a button, and you are then guided through a step-by-step process of adding the interaction. It’s really easy to do and very effective.

Finally, sharing and commenting on designs is really simple too. A point-and-click interface makes it easy for clients to add comments on certain elements and have them reviewed. I’d definitely recommend giving this a try for your next wireframe or prototyping project.

08. Macaw

Macaw was built with designers in mind, and without touching any code you can create responsive designs that look and work great across all devices. I was immediately blown away by the simplicity of its design, and the app feels nice and familiar.

After watching a few videos I got stuck into designing a simple page layout. Within 30 minutes I was able to create a responsive template that worked pretty well.

I really enjoyed how the app lets you bring up options to adjust the layout and reflects this in real time, enabling you to see how the changes actually affect the design.

This was even more useful when setting breakpoints. As the layout broke, I was able to tweak it to create a responsive layout without touching any code.

The code the app produces is actually really well constructed and semantic. This is usually where apps like these fall down, but Macaw has managed some kind of code magic and got it right! I’m very excited to see where this goes.

09. Marvel

“We wanted to lower the barrier to bringing your digital ideas to life, so we created Marvel, a ‘code-free’ prototyping tool that transforms images and sketches into interactive prototypes that look and feel like real apps and websites,” says Murat Mutlu, who co-founded the tool.

When you first open Marvel you’re asked to connect it with your Dropbox, which allows the app to grab the files it needs to create your project(s). If you don’t use Dropbox, there’s no other way to get your files into Marvel.

Once you have your PSDs in place, you can use Marvel’s (beautifully designed) UI to hotlink your pages together.

There are also some really handy features, including being able to create transitions between links/pages and quickly preview how these will look in the browser.

Another great feature is that you can choose the environment for your project. So, if you want to create an iOS app, you just select it from the settings and the preview is automatically adjusted.

I really like how simple the app is to use, and the ability to share it with staff or clients is really helpful. I’ll definitely be using this for my client work!

10. Webflow

Webflow is a web app that allows you to design production-ready websites without coding. It has an unobtrusive UI that allows you to focus on the design. There’s an option to view the design at popular breakpoints, and a preview mode, which gives you full control over the viewport size.

There are some familiar tools that allow you to design elements, but it’s all drag-and-drop (there’s no drawing tool) so you are limited with what you can create.

“Webflow was born out of the frustrating workflow between my brother Sergie (a designer) and I (a coder) while we were building sites for clients,” explains co-founder Vlad Magdalin. “It now helps web designers create and publish custom responsive websites much faster.”

This app takes code-free design to the next level. However, you do still have to understand how code works in order to create something that functions well. You couldn’t jump in as someone who knows nothing about design and just build a website, for example.

This is definitely a good thing – Webflow makes you think about the code, without having to worry about it. It also allows you to publish your site and will host it for you for a monthly fee. Definitely one to try, even if just for prototyping.

Source http://www.creativebloq.com


10 Free Tools for Creating Infographics

For all the importance we place on text, it’s an indisputable fact that images are processed in the brain faster than words. Hence the rise and rise of the infographic which, at its best, transforms complex information into graphics that are both easy to grasp and visually appealing. No wonder magazine readers and web visitors love the best infographics.

The only problem is, infographics that look like they were simple to make are often anything but. Creating something beautiful and instantly understandable in Photoshop CC is often beyond the limits that time allows. Which is why it’s occasionally useful to use a quick and dirty infographics tool to speed up the process.

We’ve selected our favourites here. They’re all free, or offer free versions. Let us know which ones you get on best with…

01. Canva Infographic Maker

Canva’s drag-and-drop interface is perfect for making infographics

Canva’s a powerful and easy-to-use online tool that’s suitable for all manner of design tasks, from brochures to presentations and much more besides, with a vast library of images, icons, fonts and features to choose from.

And it features a dedicated infographic maker that you can use for free, with hundreds of free design elements and fonts at your fingertips, and many more premium elements that you can buy for up to $1. You can either use it in the browser or download the Canva iPad app to design on the move.

02. Vizualize


This generator could be the start of how résumés will be portrayed in the future

After the success of our post on an infographic resume, it was only a matter of time before this infographic resume generator turned up. You can visualise your resume in one click and also take a look at previous examples. Enabling people to express their professional accomplishments in a simple yet compelling personal visualisation, we think this is the start of something big.

03. Google Developers


Display real live data with Google Developers

Google chart tools are powerful, simple to use, and free. You can choose from a variety of charts and configure an extensive set of options to perfectly match the look and feel of your website. By connecting your data in real time, Google Developers is the perfect infographic generator for your website.

04. Easel.ly

Free infographics tools: Easel.ly

Easel.ly offers a dozen free templates to start you off

This free web-based infographic tool offers you a dozen free templates to start you off, which are easily customisable.

You get access to a library of things like arrows, shapes and connector lines, and you can customize the text with range of fonts, colours, text styles and sizes. The tool also lets you upload your graphics and position them with one touch.

05. Piktochart

Free infographics tools: Piktochart

Piktochart’s customizable editor lets you modify colour schemes and fonts

Piktochart is an infographic and presentation tool enabling you to turn boring data into engaging infographics with just a few clicks. Piktochart’s customizable editor lets you do things like modify colour schemes and fonts, insert pre-loaded graphics and upload basic shapes and images. Its grid lined templates also make it easy to align graphical elements and resize images proportionally. There’s a free version offering three basic themes, while a pro account costs $29 per month or $169 for a year.

06. Infogr.am

Free infographics tools: Infogr.am

Customising the data that makes up the infographic takes place in an Excel style spreadsheet

Infogr.am is a great free tool which offers access to a wide variety of graphs, charts and maps as well as the ability to upload pictures and videos to create cool infographics.

Customising the data that makes up the infographic takes place in an Excel-style spreadsheet and can easily be edited, with the software automatically changing the look of the infographic to represent your data. When you’re happy with your infographic you can publish it to the Infogram website for all to enjoy, embed it into your own website, or share it via social media.

07. InFoto Free

Free infographics tools: Infoto Free

Build infographics based on your photo taking habits

This one’s a bit niche, but if you take a lot of photos with your Android phone it’s worth checking out. InFoto takes the EXIF data attached to your photos and builds nice-looking infographics from it. It’s got a great interface, and the paid-for version (which comes without ads) only costs 99 cents.

08. Venngage

infographic tools

Looking for an easy-to-use tool? Venngage is your best bet!

Venngage is a great tool for creating and publishing infographics because it’s so simple and easy to use. You can choose from templates, themes, and hundreds of charts and icons as well as uploading your own images and backgrounds, or customize a theme to suit your brand. You can animate them too!

09. Dipity

infographic tools

Upgrade to the Premium plan for custom branding

Dipity is a great way to create, share, embed, and collaborate on interactive, visually engaging timelines that integrate video, audio, images, text, links, social media, location, and time stamps. You can join for free but premium plans offer custom branding and backgrounds, analytics, and custom iPhone apps.

10. Get About

This free Windows app lets you monitor your social media activity and generate infographics that help you visualize how you connect and share with your network.

Source http://www.creativebloq.com


How to configure networking in Linux

Connecting your Linux computer to a network is pretty straightforward, except when it is not. In this article I discuss the main network configuration files for Red Hat-based Linux distributions, and take a look at the two network startup services: the venerable network startup, and the controversial NetworkManager.

Linux easily manages multiple network interface adapters. Laptops typically include both wired and wireless interfaces, and may also support WiMax interfaces for cellular networks. Linux desktop computers also support multiple network interfaces, and you can use your Linux computer as a multi-network client, or as a router for internal networks; such is the case with a couple of my own systems.

Interface configuration files

Every network interface has its own configuration file in the /etc/sysconfig/network-scripts directory, named ifcfg-<interface-name>. With the older ethX convention the interface name ends in a number, starting with zero; for example, /etc/sysconfig/network-scripts/ifcfg-eth0 is the first Ethernet interface.

Most of the other files in the /etc/sysconfig/network-scripts directory are scripts used to start, stop and perform various network configuration activities.

Each interface configuration file is bound to a specific physical network interface by the MAC address of the interface.

Network interface naming conventions

The naming conventions for network interfaces used to be simple, uncomplicated, and, I thought, easy. Using ethX made sense to me and was easy to type. It did not require extra steps to figure out what long and obscure name belonged to an interface. Unfortunately, adding a new interface often forced the renaming of network interfaces, which broke scripts and configurations. That has all changed, more than once.

After a short stint with some long and unintelligible network interface names, we now have a third set of naming conventions which seem only marginally better.  Names like ethX are still used by CentOS 6.x. The newest naming conventions, with names like eno1 and enp0s3 are used by RHEL 7, CentOS 7, and more recent releases of Fedora. The interface naming convention for RHEL 6 and CentOS 6 is described in Appendix A. Consistent Network Device Naming of the Red Hat Deployment Guide. RHEL 7, CentOS 7, and current versions of Fedora are described in Chapter 8. Consistent Network Device Naming of the Red Hat Networking Guide.

Configuration file examples

This example network interface configuration file, ifcfg-eth0, defines a static IP address configuration for a CentOS 6 server installation.

# Intel Corporation 82566DC-2 Gigabit Network Connection
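As a sketch, a static configuration matching the description below typically contains entries along these lines (all values here are illustrative, not taken from the original file):

```shell
DEVICE=eth0
HWADDR=00:16:76:02:BA:DB
TYPE=Ethernet
BOOTPROTO=none
IPADDR=192.168.0.10
NETMASK=255.255.255.0
GATEWAY=192.168.0.254
DOMAIN=example.com
DNS1=192.168.0.51
DNS2=192.168.0.52
ONBOOT=yes
USERCTL=no
```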

This file starts the interface on boot, assigns it a static IP address, defines a domain and network gateway, specifies two DNS servers, and does not allow non-root users to start and stop the interface.

The following interface configuration file, ifcfg-eno1, provides a DHCP configuration for a desktop workstation.
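A minimal DHCP configuration of this kind typically looks something like the following sketch (the MAC address and UUID here are placeholders, not values from the original):

```shell
TYPE=Ethernet
BOOTPROTO=dhcp
DEVICE=eno1
HWADDR=00:16:76:02:BA:DC
UUID=00000000-0000-0000-0000-000000000000
ONBOOT=yes
USERCTL=no
```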


In this second configuration file example, the DHCP entries, IP address, the search domain, and all other network information are not defined because they are supplied by the DHCP server. Configuration items like the DNS servers can be overridden in the interface configuration file by adding DNS1 and DNS2 lines, as in the previous static configuration example.

Note that this second example contains a UUID line. As far as I can determine, this line has no effect on the functionality of the configuration file. I usually comment it out or even delete it and have never experienced a detrimental effect on my network.

In both interface configuration files, the HWADDR line specifies the MAC address of the physical network interface. This binds the physical interface to the interface configuration file.  You must change the MAC address in the file if you replace the interface.

There are a couple of ways to locate the MAC address of a NIC. I usually use the ifconfig command, which shows all installed NICs, their MAC addresses, and various statistics. Many new NICs have their MAC address printed on the box or on a label on the NIC itself. However, most interface configuration files are generated automatically during installation, or when the NIC is first detected after being newly installed, and the MAC address is included as part of the new interface configuration file.
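The kernel also exposes each interface's MAC address through sysfs, so you can read it without any extra tools. This sketch uses lo, the loopback device, because it exists on every Linux host and always reports the all-zeros address; substitute a real NIC name such as eth0 or eno1:

```shell
# Print the MAC address of an interface by reading sysfs.
# "lo" (loopback) is used here only because it is always present;
# for a physical NIC, use its device name, e.g. /sys/class/net/eth0/address
cat /sys/class/net/lo/address
```

Listing the contents of /sys/class/net/ is also a quick way to see which interface names the system is currently using.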

The ONBOOT line specifies that the interface is to be activated at startup time. If this line is changed to "no", the interface must be activated either manually or by NetworkManager after a user logs in.

The USERCTL line specifies that non-privileged users cannot manage the interface; that is, they cannot turn it on and off. Setting this parameter to "yes" allows regular users to activate and deactivate the interface.

Notice that the lines in the interface configuration files are not sequence-sensitive, and work just fine in any order. By convention, the option names are in uppercase and the values are in lowercase. Option values can be enclosed in quotes, but that is not necessary unless the value is more than a single word or number.

Configuration options

There are many configuration options for the interface configuration files. These are some of the more common options:

  • DEVICE: The logical name of the device, such as eth0 or enp0s2.
  • HWADDR: The MAC address of the NIC that is bound to the file, such as 00:16:76:02:BA:DB
  • ONBOOT: Start the network on this device when the host boots. Options are yes/no. This is typically set to “no” and the network does not start until a user logs in to the desktop. If you need the network to start when no one is logged in, set this to “yes”.
  • IPADDR: The IP address assigned to this NIC, such as 192.168.0.10
  • BROADCAST: The broadcast address for this network, such as 192.168.0.255
  • NETMASK: The netmask for this subnet, such as the class C mask 255.255.255.0
  • NETWORK: The network ID for this subnet, such as the class C ID 192.168.0.0
  • SEARCH: The DNS domain name to search when doing lookups on unqualified hostnames such as “example.com”
  • BOOTPROTO: The boot protocol for this interface. Options are static, DHCP, bootp, none. The “none” option defaults to static.
  • GATEWAY: The network router or default gateway for this subnet, such as 192.168.0.254
  • ETHTOOL_OPTS: This option is used to set specific interface configuration items for the network interface, such as speed, duplex state, and autonegotiation state. Because this option has several independent values, the values should be enclosed in a single set of quotes, such as: “autoneg off speed 100 duplex full”.
  • DNS1: The primary DNS server, such as 192.168.0.52, a server on the local network. The DNS servers specified here are added to the /etc/resolv.conf file when using NetworkManager, or when the PEERDNS directive is set to yes; otherwise the DNS servers must be added to /etc/resolv.conf manually, and these entries are ignored.
  • DNS2: The secondary DNS server, for example 8.8.8.8, one of the free Google DNS servers. Note that a third DNS server is not supported in the interface configuration files, although one may be configured in a non-volatile resolv.conf file.
  • TYPE: Type of network, usually Ethernet. The only other value I have ever seen here was Token Ring but that is now mostly irrelevant.
  • PEERDNS: The yes option indicates that /etc/resolv.conf is to be modified by inserting the DNS server entries specified by DNS1 and DNS2 options in this file. “No” means do not alter the resolv.conf file. “Yes” is the default when DHCP is specified in the BOOTPROTO line.
  • USERCTL: Specifies whether non-privileged users may start and stop this interface. Options are yes/no.
  • IPV6INIT: Specifies whether IPV6 protocols are applied to this interface. Options are yes/no.

If the DHCP option is specified, most of the other options are ignored. The only required options are BOOTPROTO, ONBOOT, and HWADDR. Other options you might find useful, because they are not ignored, are the DNS and PEERDNS options, which let you override the DNS entries supplied by the DHCP server.

The deprecated network file

There is one old and now deprecated file you might encounter. The network file usually contains only a single comment line for current releases of Fedora, RHEL, and CentOS. It is located in /etc/sysconfig and was used in the past to enable or disable networking. It was also used to set the networking host name as shown in the following example:
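On those older releases, the file held little more than a networking switch and the hostname. A typical /etc/sysconfig/network looked like this; the hostname is illustrative:

```
NETWORKING=yes
HOSTNAME=myhost.example.com
```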


This file has been present but unused in Fedora since release 19. It is still used in RHEL/CentOS 6.x, but is no longer used in RHEL/CentOS 7.x. The network hostname is now set in the /etc/hostname file.

Other network files

The /etc/sysconfig/network-scripts directory contains many other files, all of which are usually executable Bash scripts rather than configuration files. This is, to me at least, one of the bothersome exceptions to the Linux Filesystem Hierarchy Standard (FHS), which explicitly states that only configuration files, not executable files, are to be located in the /etc tree.

One other network configuration file you might find in /etc/sysconfig/network-scripts is the route-<interface> file. If you want to set up static routes on a multi-homed system, create a route file for each interface. For example, the following file is named route-eth0, and defines routes for that specific interface to both networks and individual hosts. The addresses here are illustrative; substitute your own networks and gateways:

default via 192.168.0.254 dev eth0
10.10.10.0/24 via 192.168.0.1 dev eth0
172.16.10.0/24 via 192.168.0.2 dev eth0
10.10.10.125/32 via 192.168.0.1 dev eth0

The use of this file is uncommon, unless you are using the host as a router with some complex routing needs. See the Networking Guide for your version of Red Hat Enterprise Linux or CentOS for more details on configuring routing.

Network startup

With the advent of wireless networks and mobile devices, reconfiguring the network interfaces for each new wireless network became complicated and time-consuming. It may also create naming conflicts when adding a new network interface.


The old network service was used by default on Red Hat-based distributions until 2004 to manage network startup and shutdown tasks. A SystemV start script used static configuration files to start the wired or wireless network at boot time, or on demand with a simple command like service network start. This service is still available, although the commands are redirected through systemd.

The network service is unable to monitor pluggable devices or changing wireless networks, so it cannot be used easily on laptops or netbook-style computers.

There were also issues with configuring wired networks with laptops, and servers or desktops with multiple network interfaces. I encountered problems when replacing a defective interface in a host with multiple network interfaces, as the interface name often changed during system startup.


Red Hat introduced NetworkManager in 2004 to simplify and automate network configuration and connections, especially wireless connections. The intent is to prevent the user from having to manually configure each new wireless network. Although it is intended to make network management easier for non-technical users, it requires some adjustment by the Linux administrator because many of the familiar configuration functions are now handled by a new layer of configuration files and scripts. This means that the standard configuration files are subject to being overwritten by NetworkManager every time the network is restarted, including every reboot.

The udev daemon is a kernel device manager that is supposed to provide consistent and persistent device naming for all devices, including network devices and removable mass storage devices. It is also used to match network device names, such as eth0 or eno1, to the MAC address of the network interface.

How it works, sort of

The udev device manager detects when a new device has been added to the system, such as a new network interface, and creates a rule to identify and name it if one does not already exist. The details of how this works have changed in more recent versions of Fedora, CentOS and RHEL. The current device naming procedure is described in detail in the RHEL 7 Networking Guide, along with a description of how the names are derived.

The current convention is to use the contents of the interface configuration files to generate the rules. However, if an interface configuration file does not exist, plugging in a new device or connecting to a new wireless network causes udev to notify NetworkManager of the new device or wireless connection. NetworkManager then creates the new interface configuration file.
The udev daemon creates an entry for each network interface in the network rules file. NetworkManager uses these entries, along with information in the interface configuration files in the /etc/sysconfig/network-scripts/ directory to initialize each interface.
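On CentOS 6-era systems, those generated entries live in /etc/udev/rules.d/70-persistent-net.rules. A typical entry looks roughly like the following sketch; the MAC address and the driver named in the comment are illustrative assumptions:

```
# PCI device (e1000e) - illustrative entry binding a MAC address to the name eth0
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:16:76:02:ba:db", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
```

Because the rule matches on ATTR{address}, replacing a NIC means the old rule no longer matches, and the new card gets a new rule and a new name.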

Admins’ choice

It is possible to use the older network service and disable NetworkManager. There are plenty of howtos that describe how to disable NetworkManager and return to using the network service, including this one, so I won’t go into those details here.
I personally recommend against returning to the older network service because, once I understood how NetworkManager and udev work together to provide a consistent naming scheme and automatic configuration for both existing and new devices, it all made sense and managing network interfaces became easier. In my opinion, NetworkManager is the best of both worlds.

As always with Linux, the decision of which service to use is your choice.

Source https://opensource.com