
IoT is largely transitioning from hype to implementation with the growth of smart, connected devices spanning industries including building automation, energy, healthcare, and manufacturing. The automotive industry has provided some of the most tangible examples of both the promise and the risk of IoT, with Tesla’s ability to deploy over-the-air software updates a prime example of forward-thinking efficiency. On the other side, the Jeep Cherokee hack in July 2015 demonstrated the urgent need for security to be a top priority for embedded devices: several security lapses left the vehicle open to remote control by attackers, including the fact that firmware updates to the head unit (V850) lacked proper authenticity checks.

The growing number of embedded Linux devices coming online can impact the life and health of people, communities, and nations. And given the upward trajectory of security breaches coinciding with the increasing number of connected devices, the team at Mender decided to address this growing need.

Mender is an open source project to make it easier to deploy over-the-air (OTA) software updates for connected Linux devices (Internet of Things). Mender is end-to-end, providing both the backend management server for campaign management for controlled rollouts of software updates and the client on the device that checks for available updates. Both backend and client are licensed under the Apache License, Version 2.0.

Mender recently became a corporate member of the Linux Foundation. Here, we sit down with their team to learn more about their goals and open source commitment.

Linux.com: What does Mender do?

Thomas Ryd, CEO of Mender: Our mission is to secure the world’s connected devices. Our team is focusing the project on being an accessible and inexpensive way to secure connected devices. Our goal is to build a comprehensive security solution that is not only inexpensive but also easy to implement and use. That will naturally drive Mender to become the de facto standard for securing connected Linux devices.

Eystein Stenberg, CTO of Mender: Our first application is an over-the-air software updater for embedded Linux. Our first production-ready version will focus on an atomic, dual file system approach to ensure robustness: if an update fails because of power loss or poor network connectivity, the device automatically rolls back to the previous working state.
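To make the dual file system idea concrete, here is a minimal sketch of how an A/B rollback scheme is commonly wired together with U-Boot environment variables, using the standard fw_printenv/fw_setenv tools. The variable names are illustrative only, not Mender’s actual implementation. Check which root file system bank is active, point the boot loader at the bank holding the new image, and arm a one-shot boot of it:

$ fw_printenv boot_part

$ sudo fw_setenv boot_part 3

$ sudo fw_setenv upgrade_available 1

If the updated system never commits the upgrade, the boot loader’s boot counter expires and it flips boot_part back to the previous bank, restoring the last working state.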

Linux.com: How and why is open source important to Mender?

Ralph Nguyen, Head of Community Development: When we initially ventured into this problem, there were very few OTA solutions that were end-to-end open source. Some vendors placed restrictions on their backend, while others were simply incomplete, lacking either a backend or a client. There are many proprietary software products targeting the automotive industry, but none provided the level of openness we expected. And most of the embedded Linux folks we’ve spoken to had implemented a homegrown updater; it was quite common that they had a strong distaste for maintaining it! This recurring theme sealed our initial direction with OTA updates.

And the accessibility of our project for embedded Linux developers is important from a larger perspective: security is a major, tangible threat given recent events such as the Mirai botnet DDoS attack, and developers shouldn’t be faced with vendor lock-in when addressing these very real challenges.

Linux.com: Why did Mender join the Linux Foundation?

Ryd: The Linux Foundation supports a diverse and inclusive ecosystem of technologies and is helping to fix the internet’s most critical security problems. We felt it was only natural to join and become a member to solidify our commitment to open source. We hope it will be an arena for learning and collaboration for the Mender project.

Linux.com: What are some of the benefits of collaborative development for such projects and how does such collaboration benefit Mender’s customers or users?

Nguyen: Our team has a background in open source, and we understand that the more eyes there are on the code, the more its security and quality increase. A permissive open source license such as ours encourages a thriving open source community, which in turn provides a healthy peer-review mechanism that closed source or more restrictive licenses simply cannot compete with. We anticipate the Mender project will improve vastly from a thriving, collaborative community, which we hope to encourage and support properly.

Linux.com: What interesting or innovative trends are you witnessing and what role does Linux or open source play in them?

Stenberg: The core mechanisms required for almost any IoT deployment (for example, in smart homes, smart cities, smart energy grids, agriculture, manufacturing, and transportation) are to collect data from sensor networks, analyze the data in the cloud, and then manage systems based on it.

A simple use case from the home automation industry is unlocking your home from your smartphone. It typically requires the states of the locks in your home to be published to the cloud (data collection), the cloud to present the overall state, open or locked, to your smartphone (analysis), and the ability for you to change that state (management).

The capabilities of IoT devices vary; it can be a very heterogeneous environment. Generally, though, devices can be split into 1) low-energy sensors that run a small RTOS (Real-Time Operating System) firmware of tens or hundreds of kilobytes, and 2) local gateways that aggregate, control, and monitor these sensors, as well as provide internet connectivity.

Linux plays a large and increasingly important role in the more intelligent IoT devices, such as these local gateways. Historically, most device vendors developed their own proprietary operating systems for these devices, but this is changing due to increasing software complexity. For example, developing a Bluetooth or TCP/IP stack, a web server, or a cryptographic toolkit does not differentiate a product, yet it adds significant cost. This is an area where the promise of open source collaboration is working very well, as even competitors come together to design and implement the best solution for the community.

Cost and scale are two important keywords for the IoT. Embedded development has historically required a lot of customizations and consulting, but in the future we will see off-the-shelf products with very large deployments, both in terms of hardware and software.

Linux.com: Anything else important or upcoming that you’d like to share?

Ryd: We have been working on Mender for two years with a market-driven approach. Our team has engaged with over a hundred embedded Linux developers in various capacities, including many user tests, to ensure we were building a comprehensive solution for IoT software updates. What has become clear is that the state of the union is downright scary. There always have been, and always will be, bugs in software. Shipping connected products that can impact people’s lives and health without a secure and reliable way to update their software should soon be a thing of the past.

Start exploring Linux Security Fundamentals by downloading the free sample chapter today.

In this exercise, we learn about two of the most useful tools for troubleshooting networks: tcpdump and wireshark. These tools show what is happening as network traffic is transmitted and received.

These are passive tools; they simply listen to all traffic exposed to the system by the networking infrastructure.

A fair amount of network traffic is broadcast to all devices connected to the networking gear. Much of this traffic is simply ignored by individual systems because its destination does not match the system’s address. The tools tcpdump and wireshark can “see” all of the traffic on the connection and display it in a format that can be analyzed.

tcpdump is a command-line, low-level tool that is generally available as part of a Linux distribution’s default package installation. tcpdump has a filtering capability as described in the pcap-filter man page; both tcpdump and wireshark use the pcap libraries to capture and decipher traffic data.
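For example, a filter expression can restrict a capture to ICMP traffic involving a single host (the address shown is the alias we will add in the lab setup below):

$ sudo tcpdump -i any 'icmp and host 192.168.52.101'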

tcpdump lacks a graphical component as well as the ability to analyze the traffic it captures. For this reason, it is typically used to capture network traffic during an interesting session and then the resulting capture files are copied to a workstation for analysis using the wireshark utility.

Packet capture also requires placing the network interfaces into promiscuous mode, which requires root permissions.

Set up your system

Access to The Linux Foundation’s lab environment is only available to those enrolled in the course. However, we’ve created a standalone version of this lab for the tutorial series that runs on any single machine or virtual machine, with no lab setup required. The commands have been altered to suit the standalone environment.

To make this lab exercise standalone, let’s add a couple of IP aliases to the default adapter.

To add a temporary IP alias, determine the default adapter:

$ sudo ip a | grep "inet "

The result should be similar to:

   inet 127.0.0.1/8 scope host lo

   inet 192.168.0.16/24 brd 192.168.0.255 scope global dynamic enp0s3

   inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0

This system shows several adapters: “lo” is the loopback device; “enp0s3” is the adapter with the address assigned by the DHCP server and is the default adapter; and “virbr0” is a network bridge adapter used by the hypervisor, which we will not use.

To add IP aliases on adapter enp0s3:

$ sudo ip addr add 192.168.52.101 dev enp0s3
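To confirm that the alias is in place, list the adapter’s addresses again:

$ ip addr show dev enp0s3 | grep "inet "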

Then add the following to /etc/hosts:

192.168.52.101 main

This /etc/hosts entry should be removed after the exercise is completed.

On our testing system the commands looked like:

[Screenshot: the IP alias setup commands as run on our test system]

Start the exercise

Open a terminal and run the command:

$ sudo tcpdump -D

Notice that the “adapters” are shown by device name, not by IP address. We will be using the adapter to which we added the extra IP address; on our test system, “enp0s3” would be the logical choice. However, because we have a single system with IP aliases, we will use the interface “any” for our monitoring. If you had several interfaces, you could monitor traffic on any specific interface. Below is the output from our test system.

[Screenshot: output of tcpdump -D on the test system]

$ sudo tcpdump -i any 

This will print a brief summary of each packet the system sees on the interface, regardless of whether it is intended for the system “main”. Leave the process running and open a second terminal. In the second terminal, run ping, first pinging “main” and then pinging the broadcast address (the same network as your adapter but with a host number of 255, in our case 192.168.52.255):

$ ping -c4 main

$ ping -c4 -b 192.168.52.255

There may be extra packets displayed that are not related to our purpose. For example, the command “ping -c4 www.google.com” will generate traffic on the interface we are listening to with “-i any”. We can add a pcap filter to our tcpdump command to ignore packets that are not related to our subnet. The command would be:

$ sudo tcpdump -i any net 192.168.52.0/24

The tcpdump output from the “ping -c2 main” as captured by our test system is listed below:

[Screenshot: tcpdump output from “ping -c2 main” on the test system]

The tcpdump output from the “ping -c2 -b 192.168.52.255” as captured by our test system is listed below:

[Screenshot: tcpdump output from “ping -c2 -b 192.168.52.255” on the test system]

Notice that our system sees the broadcast ping coming in, but there is no reply; this is controlled by a system tunable. Broadcast pings could be used in a denial-of-service attack, so replies are disabled by default.
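The tunable in question is net.ipv4.icmp_echo_ignore_broadcasts. You can inspect it and, strictly for lab purposes, temporarily allow broadcast replies:

$ sysctl net.ipv4.icmp_echo_ignore_broadcasts

$ sudo sysctl -w net.ipv4.icmp_echo_ignore_broadcasts=0

Setting the value back to 1 restores the safer default.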

Next, explore the pcap-filter and tcpdump man pages. We are going to construct a tcpdump command that captures HTTP traffic on our interface and saves that traffic to a file.
Run the following commands:

For Fedora, RHEL, CentOS systems:

$ sudo yum install httpd elinks 

$ sudo systemctl start httpd

For Ubuntu and Debian systems:

$ sudo apt-get install apache2 elinks

$ sudo systemctl start apache2

For all distributions, create a test file:

$ sudo su -c 'echo "test page" > /var/www/html/file.html'

Note: If your system has the “firewalld” service running, you may need to open some ports.

To test if firewalld is running:

$ sudo systemctl status firewalld 

To open the http port:

$ sudo -i  

# firewall-cmd --zone=public --add-port=80/tcp --permanent

# firewall-cmd --reload
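You can confirm that the port is now open with:

# firewall-cmd --zone=public --list-ports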

Start tcpdump listening for traffic on port 80:

$ sudo tcpdump -i any port 80

We could be more specific and say:

$ sudo tcpdump -i any port 80 and host main

Now let’s generate some HTTP traffic to test, first with an HTTP GET of a missing page and then of a good page:

$ elinks -dump http://main/no-file.html

$ elinks -dump http://main/file.html

Observe the output of tcpdump, then terminate the tcpdump command with Ctrl-C.

[Screenshot: tcpdump output of the HTTP requests, including the 404 response]

Analyze with wireshark

First, let’s create some information to analyze. In one terminal session:

$ sudo tcpdump -i any port 80 -w http-dump.pcap 

And on another terminal session issue the following commands:

Generates a “404 not found” error:

$ elinks -dump http://main/no-file.html

Should return the text of the file we created earlier:

$ elinks -dump http://main/file.html

Terminate the tcpdump command and verify that the file “http-dump.pcap” exists and has bytes in it.
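A quick way to check the capture from the shell: ls should show a nonzero size, and file should identify it as tcpdump capture data.

$ ls -l http-dump.pcap

$ file http-dump.pcap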

Next, we will analyze the captured data with wireshark. Verify wireshark is installed:

$ which wireshark

If the previous command fails, you will have to install the utility.

On RHEL-based systems:

$ sudo yum install wireshark wireshark-gnome

On Debian-based systems:

$ sudo apt-get install wireshark-gtk wireshark-qt 

You can launch it by running /usr/sbin/wireshark or by finding it in the application menus on your desktop; for example, under the Applications -> Internet menu you may find the Wireshark Network Analyzer. If wireshark is launched from the GUI, go to the File -> Open dialog and browse to the capture file created above. Or launch wireshark with the capture file from the command line:

$ wireshark http-dump.pcap

[Screenshot: wireshark displaying the HTTP capture, including the 404 response]

Explore the wireshark output. Wireshark can be run in an interactive capture mode without tcpdump, but that requires a GUI. A text version of wireshark, called “tshark”, also exists. The process of capturing with tcpdump and analyzing with wireshark, possibly on a different machine, is handy for production systems without GUI or console access.
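For example, tshark can summarize the same capture file on a machine with no GUI at all:

$ tshark -r http-dump.pcap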

Cleanup

Please remember to remove the entries from /etc/hosts. A reboot will remove the IP alias we added, or you can remove both by hand as shown below.
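Assuming the same address and adapter as above, the manual cleanup looks like this (the alias was added without a prefix length, so the kernel treats it as a /32):

$ sudo ip addr del 192.168.52.101/32 dev enp0s3

$ sudo sed -i '/192.168.52.101 main/d' /etc/hosts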

Stay one step ahead of malicious hackers with The Linux Foundation’s Linux Security Fundamentals course. Download a sample chapter today!

Read the other articles in the series:

Linux Security Threats: The 7 Classes of Attackers

Linux Security Threats: Attack Sources and Types of Attacks

Linux Security Fundamentals Part 3: Risk Assessment / Trade-offs and Business Considerations

Linux Security Fundamentals: Estimating the Cost of a Cyber Attack

Linux Security Fundamentals Part 6: Introduction to nmap

Start exploring Linux Security Fundamentals by downloading the free sample chapter today.

Last week, we learned to begin a risk assessment by first evaluating the feasibility of a potential attack and the value of the assets you’re protecting. These are important steps to determining what and how much security will be required for your system.

You must also then weigh these considerations against the potential business impacts of a security compromise with the costs of protecting them.

Costs – How Much?

It is hard to calculate the Return on Investment that managers need in order to make decisions about how to mitigate a risk. How much value does a reputation have?

Estimating the cost of a cyber attack can be difficult, if not impossible. There is little data on how often various industries suffer from different types of intrusions. Until recent laws were passed, companies would often conceal attacks even from law enforcement.

These factors cause difficulties in making rational decisions about how to address the different risks. Security measures may result in the loss of usability, performance, and even functionality. Often, if usability concerns are not addressed in the design of a secure system, users respond by circumventing security mechanisms.

Still, you can get a good idea of the costs associated with a potential loss of business assets, as well as the costs involved in protecting them, to make an informed decision.

Business Impact

The following questions should be evaluated on a regular basis in order to ensure that the security position is optimal for the environment:

• What is the cost of system repair/replacement?

• Will there be lost business due to disruption?

• How much lost productivity will there be for employees?

• Will there be a loss of current customers?

• Will this cause a loss of future customers?

• Are business partners impacted?

• What is your legal liability?

Security Costs

There are many aspects to the costs associated with securing an IT environment. You should consider all of them carefully:

• Software

• Staff

• Training

• Time for implementation

• Impact to customers, users, workers

• Network, Compute, and Storage resources

• Support

• Insurance.

So far in this series, we’ve covered the types of hackers who might try to compromise your Linux system, where attacks might originate, the kinds of attacks to expect, and some of the business tradeoffs to consider around security. The final two parts of this series will cover how to install and use common security tools: tcpdump, wireshark, and nmap.

Stay one step ahead of malicious hackers with The Linux Foundation’s Linux Security Fundamentals course. Download a sample chapter today!

Read the other articles in the series:

Linux Security Threats: The 7 Classes of Attackers

Linux Security Threats: Attack Sources and Types of Attacks

Linux Security Fundamentals Part 3: Risk Assessment / Trade-offs and Business Considerations

Linux Security Fundamentals Part 5: Introduction to tcpdump and wireshark

Linux Security Fundamentals Part 6: Introduction to nmap

Start exploring Linux Security Fundamentals by downloading the free sample chapter today.

Earlier in this series, you learned the types of hackers who might try to compromise your Linux system, where attacks might originate, and the kinds of attacks to expect. The next step is to assess the security risks to your own system and the costs of both securing, and not securing, your assets in order to begin formulating a security plan.

Focusing on likely threats to the highest value assets is a reasonable place to start your risk assessment. A common method for determining likelihood is to create a use case from the point of view of a malicious actor attempting to cause harm to the system.

Next, calculating the value of the assets will help determine the amount of security that should be implemented to protect those assets. It may not always be cost-effective to protect everything. Many types of attacks can be mitigated by implementing minimal security. It is not likely possible to protect all assets, all of the time.

And finally, knowing the potential impact to business operations is also essential in determining the level of security required for any particular asset. If the business is severely impacted due to a compromise, then more resources should be dedicated to maintaining the security of the assets. Another business consideration is the impact of adding additional security to the environment, possibly creating a performance challenge.

Let’s look at each of these areas in turn and some important factors to consider and questions to ask as you’re evaluating the trade-offs.

Likelihood

Evaluating the feasibility of a potential attack is important. Is the threat real or theoretical? You can begin to assess the risk by asking:

• Method: Are the skills, knowledge, tools, etc. available?

• Opportunity: Is there time and access?

• Motive: Is it an intentional act or accidental damage?

Recently, researchers demonstrated that fingerprint scanners on smartphones can be fooled into thinking an authorized user has scanned their fingerprint. The researchers claimed that the attack was rather easy to accomplish. In reality, the attack required a fair number of specific things to happen in the proper order to succeed, which is rather unlikely.

Even if the methods are well-known, if the tools are difficult to acquire, only the most resource-wealthy will be able to perpetrate the attack. Access and opportunity are also areas that can be designed into a system, such that attacks can only be accomplished during certain windows. By limiting the opportunity to certain situations, time-based or access-based, security costs can be reduced outside of those situations.

Asset Value

A thorough inventory of business assets will be the basis for the valuation required when determining what and how much security will be required.

Most environments handle this process via an Asset Management System. The roles of each asset will also determine the importance of the asset in the business operations. Components that are not expensive and yet carry large responsibility for operations should be considered highly valuable. Estimating the impact of a service outage, damage to the infrastructure, or compromise will also be necessary in determining the value of the assets.

To determine asset value, you should:

• Identify network/system/service assets

• Determine asset roles and relationships

• Evaluate the impact of asset damage/failure/loss.

In part four, we’ll consider the difficulty of estimating the cost of a cyber attack and give you some questions to ask when weighing the cost of protecting your business assets against the business impact of a potential security compromise.

Stay one step ahead of malicious hackers with The Linux Foundation’s Linux Security Fundamentals course. Download a sample chapter today!

Read the other articles in the series:

Linux Security Threats: The 7 Classes of Attackers

Linux Security Threats: Attack Sources and Types of Attacks

Linux Security Fundamentals: Estimating the Cost of a Cyber Attack

Linux Security Fundamentals Part 5: Introduction to tcpdump and wireshark

Linux Security Fundamentals Part 6: Introduction to nmap

This week in open source and Linux news, open source industry leaders and executives speak out against President Trump’s immigration ban, the newly announced KDE laptop could cost you more than $1.3k, and more! Keep reading to stay on top of this busy news week.


1) Open source industry leaders, including Jim Zemlin, Jim Whitehurst, and Abby Kearns, are firing back at President Trump’s immigration ban with firm opposition.

Linux Leadership Stands Against Trump Immigration Ban– ZDNet

Trump’s Executive Order on Immigration: Open Source Leaders Respond– CIO

Linux, OpenStack, Citrix Add Their Voices in Opposition to Immigration Ban– SDxCentral

2) KDE announces new partnership with Slimbook to produce a laptop designed for running KDE Neon.

Would You Pay $800 For a Linux Laptop?– The Verge

3) The Linux Foundation has grown over the past 17 years to encompass much more than just Linux.

How The Linux Foundation Goes Beyond the Operating System to Create the Largest Shared Resource of Open-Source Technology– HostingAdvice.com

4) American Express to contribute code and engineers to Hyperledger as newest backer.

AmEx Joins JPMorgan, IBM in Hyperledger Effort– Bloomberg

This week in open source news, a study from Black Duck suggests attacks based on open source vulnerabilities will rise sharply in 2017, long-undetected Mac malware has been exposed, and more! Read our digest for the recent stories you need to hear:

1) The Linux Foundation and Amdocs are partnering up to accelerate adoption of the open source Enhanced Control, Orchestration, Management and Policy (ECOMP) platform from AT&T.

Amdocs, Linux Foundation to Accelerate Service Provider, Developer Adoption of Open Source ECOMP– FierceTelecom

2) Black Duck Software is predicting an increase in open source threats this year.

Report: Attacks Based on Open Source Vulnerabilities Will Rise 20 Percent This Year– CSO

3) “Microsoft is adding support for yet another Linux distribution on Azure.”

Clear Linux OS Now Available On Azure– ZDNet

4) “Apple issues MacOS update that automatically protects infected machines.”

Newly Discovered Mac Malware Found in the Wild Also Works Well On Linux -Ars Technica

5) “Starting today we are accepting applications from open source projects who would like to serve as mentor organizations for enthusiastic student developers,” says Google.

Open Source Organizations Can Now Apply For Google Summer of Code 2017– betanews

This week in Linux and OSS news, Steven J. Vaughan-Nichols explains why Linux is forcing Windows to up its gaming game, blockchain proves especially important in the current sociopolitical climate, and more! Read on to keep on top of the most important tech stories.

1)  Linux can take some credit for improving Windows gaming, writes Steven J. Vaughan-Nichols

Developer Claims Linux Forced Microsoft to Up Its Windows Game Support– ZDNet

2) The Depository Trust and Clearing Corporation to begin using The Linux Foundation’s Hyperledger Project.

Blockchain Will Secure Global Derivatives Trading– CyberScoop

3) It’s the “impact of blockchain on […] finance and business that’s fostering innovation and opportunities for her company and clients.”

Why IBM CEO Ginni Rometty Believes in Blockchain– brandchannel

4) A hacker has published an open source tool for helping admins strengthen their network security.

Hacker Publishes Open Source Tool For Finding Secret Keys On GitHub– FOSSbytes

5) “Microsoft’s Windows Subsystem for Linux is evolving into a credible alternative to running Linux inside Windows on VMs.”

Bash On Windows is Becoming Linux For Windows Users– InfoWorld

To help you better understand containers, container security, and the role they can play in your enterprise, The Linux Foundation recently produced a free webinar hosted by John Kinsella, Founder and CTO of Layered Insight. Kinsella covered several topics, including container orchestration, the security advantages and disadvantages of containers and microservices, and some common security concerns, such as image and host security, vulnerability management, and container isolation.

In case you missed the webinar, you can still watch it online. In this article, Kinsella answers some of the follow-up questions we received.

[Photo: John Kinsella, Founder and CTO of Layered Insight]

Question 1: If security is so important, why are some organizations moving to containers before having a security story in place?

Kinsella: Some groups are used to adopting technology earlier. In some cases, the application is low-risk and security isn’t a concern. Other organizations have strong information security practices and are comfortable evaluating the new tech, determining risks, and establishing controls on how to mitigate those risks.

In plain talk, they know their applications well enough that they understand what is sensitive. They studied the container environment to learn what risks an attacker might be able to leverage, and then they avoided those risks either through configuration, writing custom tools, or finding vendors to help them with the problem. Basically, they had that “security story” already.

Question 2: Are containers (whether Docker, LXC, or rkt) really ready for production today? If you had the choice, would you run all production now on containers or wait 12-18 months?

Kinsella: I personally know of companies who have been running Docker in production for over two years! Other container formats that have been around longer have also been used in production for many years. I think the container technology itself is stable. If I were adopting containers today, my concern would be around security, storage, and orchestration of containers. There’s a big difference between running Docker containers on a laptop versus running a containerized application in production. So, it comes down to an organization’s appetite for risk and early adoption. I’m sure there are companies out there still not using virtual machines…

We’re running containers in production, but not every company (definitely not every startup!) has people with 20 years of information security experience.

Question 3: We currently have five applications running across two Amazon availability zones, purely in EC2 instances. How should we go about moving those to containers?

Kinsella: The first step would be to consider if the applications should be “containerized.” Usually people consider the top benefits of containers to be quick deployment of new features into production, easy portability of applications between data centers/providers, and quick scalability of an application or microservice. If one or more of those seems beneficial to your application, then next would be to consider security. If the application processes highly sensitive information or your organization has a very low appetite for risk, it might be best to wait a while longer while early adopters forge ahead and learn the best ways to use the technology. What I’d suggest for the next 6 months is to have your developers work with containers in development and staging so they can start to get a feel for the technology while the organization builds out policies and procedures for using containers safely in production.

Early adopter? Then let’s get going! There are two views on how to adopt containers, depending on how swashbuckling you are. Some folks say to start with the easiest components to move to containers and learn as you migrate components over. The alternative is to figure out what would be most difficult to move, plan out that migration in detail, and then apply the lessons from that work to make all the other migrations easier. The latter is probably the best way, but it requires a larger investment of effort up front.

Question 4: What do you mean by anomaly detection for containers?

Kinsella: “Anomaly detection” is a phrase we throw around in the information security industry to refer to technology that has an expectation of what an application (or server) should be doing, and then responds somehow (alerting or taking action) when it determines something is amiss. When this is done at a network or OS level, there’s so many things happening simultaneously that it can be difficult to accurately determine what is legitimate versus malicious, resulting in what are called “false positives.”

One “best practice” for container computing is to run a single process within the container. From a security point of view, this is neat because the signal-to-noise ratio for anomaly detection is much better. What types of anomalies are monitored for? They could be network or file related, or even which actions or OS calls the process attempts to execute. We can focus specifically on what each container should be doing and keep it within a much narrower boundary of what we consider anomalous behavior.
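As one concrete illustration of narrowing a container’s boundary, and not a description of any particular vendor’s product, Docker can apply a seccomp profile that restricts which system calls a container may make; the profile path here is hypothetical:

$ docker run --security-opt seccomp=/path/to/narrow-profile.json nginx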

Question 5: How could one go and set up containers in a home lab? Any tips? Would like to have a simpler answer for some of my colleagues. I’m fairly new to it myself so I can’t give a simple answer.

Kinsella: Step one: Make sure your lab machines are running a patched, modern OS (released within the last 12 months).

Step two: Head over to http://training.docker.com/self-paced-training and follow their self-paced training. You’ll be running containers within the hour! I’m sure lxd, rkt, etc. have some form of training, but so far Docker has done the best job of making this technology easy for new users to adopt.
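Once Docker is installed, your first container is a single command away. For example, this drops you into a shell inside a small Alpine Linux container and cleans it up when you exit:

$ docker run -it --rm alpine sh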

Question 6: You mentioned using Alpine Linux. How does musl compare with glibc?

Kinsella: musl is pretty cool! I’ve glanced over the source — it’s so much cleaner than glibc! As a modern rewrite, it probably doesn’t have 100 percent compatibility with glibc, which has support for many CPU architectures and operating systems. I haven’t run into any troubles with it yet, personally, but my use is still minimal. Definitely looking to change that!

Question 7: Are you familiar with OpenVZ? If so, what would you think could be the biggest concern while running an environment with multiple nodes with hundreds of containers?

Kinsella: Definitely — OpenVZ has been around for quite a while. Historically, the question was “Which is more secure — Xen/KVM or OpenVZ?” and the answer was always Xen/KVM, as they provide each guest VM with hardware-virtualized resources. That said, there have been very few security vulnerabilities discovered in OpenVZ over its lifetime.

Compared to other forms of containers, I’d put OpenVZ at a similar level of risk. As it’s older, its codebase should be more mature, with fewer bugs. On the other hand, since Docker is so popular, more people will be trying to compromise it, so the chance of a vulnerability being found is higher. A little bit of security-through-obscurity there. In general, though, I’d go through a similar process of understanding the technology and what is exposed and susceptible to compromise. For both, the most common vector will probably be compromising an app in a container and then trying to burrow through the “walls” of the container. That means you’re really defending against local kernel-level exploits: keep up to date and be aware of new vulnerability announcements for the software that you use.

John Kinsella is the Founder and CTO of Layered Insight, a container security startup based in San Francisco, California. His nearly 20-year background includes security and network consulting, software development, and datacenter operations. John is on the board of directors of the Silicon Valley chapter of the Cloud Security Alliance and has long been active in open source projects, most recently as a contributor and a member of the PMC and security team for Apache CloudStack.

Check out all the upcoming webinars from The Linux Foundation.

This was an exciting year for webinars at The Linux Foundation! Our topics ranged from network hardware virtualization to Microsoft Azure to container security and open source automotive, and members of the community tuned in from almost every corner of the globe. The following are the top 5 Linux Foundation webinars of 2016:

  1. Getting Started with OpenStack

  2. No More Excuses: Why you Need to Get Certified Now

  3. Getting Started with Raspberry Pi

  4. Hyperledger: Blockchain Technologies for Business

  5. Security Top 5: How to keep hackers from eating your Linux machine

Curious to watch all the past webinars in our library? You can access all of our webinars for free by registering on our on-demand portal. On subsequent visits, click “Already Registered” and use your email address to access all of the on-demand sessions.


Getting Started with OpenStack

Original Air Date: February 25, 2016

Cloud computing software represents a change to the enterprise production environment from a collection of closed, proprietary software to open source software. OpenStack has become the leader in Cloud software supported and used by small and large companies alike. In this session, guest speaker Tim Serewicz addressed the most common OpenStack questions and concerns including:

  • I think I need it but where do I even start?

  • What are the problems that OpenStack solves?

  • History & Growth of OpenStack: Where’s it been and where is it going?

  • What are the hurdles?

  • What are the sore points?

  • Why is it worth the effort?

Watch Replay >>


No More Excuses: Why you Need to Get Certified Now

Original Air Date: June 9, 2016

According to the 2016 Open Source Jobs Report, 76% of open source professionals believe that certifications are useful for their careers. This webinar focused on tips, tactics, and practical advice to help professionals build the confidence to commit to, schedule, and pass their next certification exam. The session covered:

  • How certifications can help you reach your career goals

  • Which certification is right for you: Linux Foundation Certified SysAdmin or Certified Engineer?

  • Strategies to thoroughly prepare for the exam

  • How to avoid common exam mistakes

  • The ins and outs of the performance certification process to boost your exam confidence

  • And more…

Watch Replay >>


Getting Started with the Raspberry Pi

Original Air Date: December 14, 2016

Maybe you bought a Raspberry Pi a year or two ago and never got around to using it. Or you built something interesting once, but now there’s a new Pi and new add-ons, and you want to know whether they could make your project even better. The Raspberry Pi has grown from its original purpose as a teaching tool to become the tiny computer of choice for many makers, allowing those with varied Linux and hardware experience to have a fully functional computer the size of a credit card powering their ideas. Wherever you are in your Pi experience, this session with guest speaker Ruth Suehle offered some great tricks for getting the most out of the Raspberry Pi and showcased dozens of great projects to get you inspired.

Watch Replay >>


Hyperledger: Blockchain Technologies for Business

Original Air Date: December 1, 2016

Curious about the foundations of distributed ledger technologies, smart contracts, and other components of the modern blockchain technology stack? In this session, guest speaker Dan O’Prey from Digital Asset provided an overview of the Hyperledger Project at The Linux Foundation, the main use cases and requirements for the technology in commercial applications, the history of and projects under the Hyperledger umbrella, and how you can get involved.

Watch Replay >>


Security Top 5: How to keep hackers from eating your Linux machine

Original Air Date: November 15, 2016

There is nothing a hacker likes more than a tasty Linux machine available on the Internet. In this session, a professional pentester discussed the tactics, tools, and methods that hackers use to invade your space, the five easiest ways to keep them out, and how to know if they have made it in. The majority of the session focused on answering audience questions from both advanced security professionals and those just starting out in security.

Watch Replay >>

Don’t forget to view our upcoming webinar calendar to participate in our upcoming live webinars with top open source experts.

This week in Linux and OSS news, Microsoft joins The Linux Foundation as a Platinum Member, a powerful move that signifies its commitment to open source; 498 out of 500 of the world’s top supercomputers run Linux; and more! This was a big week for open source. Make sure you’re caught up with our weekly digest.

1) Microsoft has joined The Linux Foundation as a Platinum Member, ushering in a new era of open source community building.

Microsoft Goes Linux Platinum, Welcomes Google To .NET Foundation– Forbes

Microsoft—Yes, Microsoft—Joins The Linux Foundation– Ars Technica

2) “With 498 out of 500 supercomputers running Linux, it is evident that this operating system provides the capability and security such machines direly need.”

Nearly Every Top 500 Supercomputer Runs On Linux– The Merkle

3) The Core Infrastructure Initiative renews financial support for The Reproducible Builds Project, which ensures binaries produced from open source software projects are tamper-free.

Linux Foundation Doubles Down on Support for Tamper-Free Software– InfoWorld

4) Beyond its cost-effectiveness, officials around the world view open source as a means of speeding up innovation in the public sector.

Open Source in Government IT: It’s About Savings But That’s Not the Whole Story– ZDNet

5) Holding down the Enter key for 70 seconds reveals a major hole in Linux disk encryption security.

Press the Enter Key For 70 Seconds To Bypass Linux Disk Encryption Authentication– TechWorm