
OpenStack

The OpenStack Foundation team has been thinking about what “open” means for the project. Learn more.

In his keynote at OpenStack Summit in Australia, Jonathan Bryce (Executive Director of the OpenStack Foundation) stressed the meaning of both “Open” and “Stack” in the project’s name and focused on the importance of collaboration within the OpenStack ecosystem.

OpenStack has enjoyed unprecedented success since its early days. It has excited the IT industry about applications at scale and created new ways to consume cloud. The adoption rate of OpenStack and the growth of its community have outpaced even the biggest open source project on the planet, Linux: in its first six years, OpenStack achieved more than Linux did in a similar span.

So, why does OpenStack need to redefine the meaning of the project and stress collaboration? Why now?

“We have reached a point where the technology has proven itself,” said Mark Collier, the CTO of the OpenStack Foundation. “You have seen all the massive use cases of OpenStack all around the globe.”

Collier said that the OpenStack community is all about solving problems. Although they continue to refine compute, storage, and networking, they also look beyond that.

With big adoption and big growth, come new challenges. The OpenStack community and the OpenStack Foundation responded to those challenges and the project transformed along with changing market dynamics — evolving from integrated release to big tent to composability.

OpenStack community

One of the things that the Foundation team has been doing this year is thinking about what “open” means for the project. In the past five years, OpenStack has built a great community around it. There are more than 82,000 people from around the globe who are part of this huge community. The big question for the Foundation was, what’s next for the coming five years? The first thing that they looked at was what got them to this position.

When you put this all into context, Bryce’s stress on openness and collaboration makes sense. In an interview with The Linux Foundation, Bryce said, “We haven’t really talked a lot about our attitude around openness. I think that it’s a little bit overdue because when you look into the technology industry right now you see the term ‘open’ thrown around constantly. The word open gets attached to different products, it gets attached to different vendor conferences because who doesn’t want something that’s open.”

“One of the key things has been those four opens that we use as the pillars of our community:  how we write our code, how we design our systems, how we manage our development process, and how we interact as a community,” said Bryce.

When you look at the stack part of OpenStack, there is no single component that builds the OpenStack cloud; many different components come from different independent open source projects. These components are part of the stack. “We’re building a technology stack but it’s not a rigid stack and it’s not a single approach to doing things. It’s actually a flexible, programmable infrastructure technology stack,” Bryce said.

What’s really interesting about these different open source projects is that in most cases they work in silos. Whether it’s KVM or Open vSwitch or Kubernetes, they are developed independently of each other.

“And that’s not a bad thing, actually,” Bryce said, “because you want experts in a topic who are focused on that. This expertise gives you a really good container orchestration system, a really good distributed storage system, a software-defined networking system. But users don’t run those things independently. There isn’t a single OpenStack cloud on the planet that only runs software that we wrote in the OpenStack community.”

Staying in sync

One big problem that the OpenStack community saw was big gaps between these projects.

“There are issues keeping in sync between these different open source projects that have different release cadences,” said Bryce. “So far, we’ve left it to users to solve those problems, but we realized we can do better than that. And that’s where the focus is in terms of collaboration.”

The OpenStack community has been working with other communities from day one. Collaboration has always been at the core of the project. Bryce used the example of the KVM project, one of the many projects that OpenStack users rely on.

“When we started the OpenStack project, KVM was not widely considered a production-ready hypervisor,” said Bryce. “There were a lot of features that were new, unstable and totally unreliable. But OpenStack became a big driver for KVM usage. OpenStack developers contributed upstream to KVM and that combination ended up helping both Nova and KVM mature because we were jointly delivering real use cases.”

It’s happening all across the board now. For example, Bryce mentioned a report from 451 Research that said that companies that already have OpenStack were adopting containers three times faster than those that didn’t.

Yes, the collaboration has been happening, but there is huge potential in refining that collaboration. Collier said that the OpenStack community members who have been gluing these different projects together have gained expertise in doing so. The OpenStack Foundation plans to help members of the community share this expertise and experience with each other.

“The Open Source community loves to give back,” said Collier. “This collaboration is about sharing the playbook — both software and operational know-how — that allows you to take this innovation and put it into production.”

“Those are the missing links, the last mile of open infrastructure the users have had to do on their own. We’re bringing that into the community and that’s where I think the collaboration becomes critical,” added Collier.

“How do you deliver that collaboration?” said Bryce. “Writing software is hard, but it becomes less hard when you get people together. That’s something people forget in the open source community as we work remotely, collaborating online, from different parts of the world.”

Face to Face Collaboration

Physical events like OpenStack Summit, Open Source Summit, KubeCon, and many others bring these people together, face to face.

“Meeting each other in person is extremely valuable. It builds trust and when we go back to our remote location and collaborate online, that trust makes us even more productive,” said Bryce.

Going forward, the OpenStack Foundation plans to make its events inclusive of all the technologies that matter to OpenStack users. It has started events like OpenStack Days that include projects such as Ceph, Ansible, Kubernetes, Cloud Foundry, and more.

“When you meet people, spend time with them and work together, you naturally start to understand each other better and figure out how to work together,” said Bryce. “And that to me is a really important part of how you actually make collaboration happen.”

OpenStack Summit Sydney

OpenStack Summit Sydney offers 11+ session tracks and plenty of educational workshops, tutorials, and panels. Start planning your schedule now.

Going to OpenStack Summit Sydney? While you’re there, be sure to stop by The Linux Foundation training booth for fun giveaways and a chance to win a Raspberry Pi kit. The drawing for prizes will take place one week after the conference, on November 15.

Giveaways include The Linux Foundation projects’ stickers and free ebooks: The SysAdmin’s Essential Guide to Linux Workstation Security, Practical GPL Compliance, A Guide to Understanding OPNFV & NFV, and the Open Source Guide Volume 1.

With 11+ session tracks to choose from, and plenty of educational workshops, tutorials, and panels, start planning your schedule for OpenStack Summit in Sydney now.

Session tracks include:

  • Architecture & Operations
  • Birds of a Feather
  • Cloud & OpenStack 101
  • Community & Leadership
  • Containers & Cloud-Native Apps
  • Contribution & Upstream Development
  • Enterprise
  • Forum
  • Government
  • Hands-on Workshop
  • Open Source Days
  • And More.

View the full OpenStack Summit Sydney schedule here.

Cloud Native Computing Foundation and Cloud Foundry will also have a booth at OpenStack Summit Sydney. Get your pass to OpenStack and stop by to learn more!

This week in open source and Linux news, open source industry leaders and executives have been vocally opposing President Trump’s immigration ban, the newly announced KDE laptop could cost you more than $1,300, and more! Keep reading to stay on top of this busy news week.


Open source standpoint

Open source leaders such as Jim Zemlin and Abby Kearns voice objection to President Trump’s immigration ban in official organization statements.

1) Open source industry leaders, including Jim Zemlin, Jim Whitehurst, and Abby Kearns, are firing back at President Trump’s immigration ban with firm opposition.

Linux Leadership Stands Against Trump Immigration Ban – ZDNet

Trump’s Executive Order on Immigration: Open Source Leaders Respond – CIO

Linux, OpenStack, Citrix Add Their Voices in Opposition to Immigration Ban – SDxCentral

2) KDE announces new partnership with Slimbook to produce a laptop designed for running KDE Neon.

Would You Pay $800 For a Linux Laptop? – The Verge

3) The Linux Foundation has grown over the past 17 years to encompass much more than just Linux.

How The Linux Foundation Goes Beyond the Operating System to Create the Largest Shared Resource of Open-Source Technology – HostingAdvice.com

4) American Express to contribute code and engineers to Hyperledger as newest backer.

AmEx Joins JPMorgan, IBM in Hyperledger Effort – Bloomberg

Start exploring Essentials of OpenStack Administration by downloading the free sample chapter today. DOWNLOAD NOW

DevStack is a script-based deployment of OpenStack, provided by OpenStack.org, that allows for easy testing of new features. This tutorial, the last in our series from The Linux Foundation’s Essentials of OpenStack Administration course, will cover how to install and configure DevStack.

While DevStack is easy to deploy, it should not be considered for production use. It may include configuration choices and new or untested code that are useful to developers but not appropriate for production.

DevStack is meant for developers, and uses a bash shell installation script instead of a package-based installation. The stack.sh script runs as a non-root user. You can change the default values by creating a local.conf file.

Should you make a mistake or want to test a new feature, you can easily unstack, clean, and stack again quickly. This makes learning and experimenting easier than rebuilding the entire system.
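A sketch of that cycle, run from the devstack checkout (the three scripts are introduced later in this tutorial):

```shell
cd ~/devstack
./unstack.sh   # stop the OpenStack services started by stack.sh
./clean.sh     # remove installed state left behind by the deployment
./stack.sh     # redeploy using the settings in local.conf
```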

Setting up the Lab

One of the difficulties of learning OpenStack is that it’s tricky to install, configure and troubleshoot. And when you mess up your instance it’s usually painful to fix or reinstall it.

That’s why Linux Foundation Training introduced on-demand labs which offer a pre-configured virtual environment. Anyone enrolled in the course can click to open the exercise and then click to open a fully functional OpenStack server environment to run the exercise. If you mess it up, simply reset it. Each session is then available for up to 24 hours. It’s that easy.

Access to the lab environment is only possible for those enrolled in the course. However, you can still try this tutorial by first setting up your own AWS instance with the following specifications:

Deploy an Ubuntu Server 14.04 LTS (HVM), SSD Volume Type (ami-d732f0b7) with an m4.large instance type (2 vCPUs, 8 GiB RAM), increase the root disk to 20 GB, and open up all the network ports.

See Amazon’s EC2 documentation for more direction on how to set up an instance.
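If you use the AWS CLI instead of the console, the instance above could be launched with something like the following sketch (the key pair name and security group ID are placeholders for your own; the AMI and instance type come from the specifications above):

```shell
aws ec2 run-instances \
  --image-id ami-d732f0b7 \
  --instance-type m4.large \
  --key-name my-keypair \
  --security-group-ids sg-0123456789abcdef0 \
  --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":20}}]'
```

The `--block-device-mappings` option grows the root volume to the 20 GB the lab expects.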

Verify the System

Once you are able to log into the environment, verify some information:

1. To view and run some commands, we may need root privileges. Use sudo to become root:

  ubuntu@devstack-cc:~$ sudo -i

2. Verify the Ubuntu user has full sudo access in order to install the software:


    root@devstack-cc:~# grep ubuntu /etc/sudoers.d/*
    /etc/sudoers.d/90-cloud-init-users:# User rules for ubuntu
    /etc/sudoers.d/90-cloud-init-users:ubuntu ALL=(ALL) NOPASSWD:ALL

3. We are using a network attached to eth2 for our cloud connections. You will need the public IP, on eth0, to access the OpenStack administrative web page after installing DevStack. From the output, find the inet line and make note of the IP address. In the following example, the IP address to write down would be 166.78.151.57; your IP address will be different. If you restart the lab, the IP address may change.


    root@devstack-cc:~# ip addr show eth0
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        link/ether bc:76:4e:04:b5:9b brd ff:ff:ff:ff:ff:ff
        inet 166.78.151.57/24 brd 166.78.151.255 scope global eth0
           valid_lft forever preferred_lft forever
        inet6 2001:4800:7812:514:be76:4eff:fe04:b59b/64 scope global
           valid_lft forever preferred_lft forever
        inet6 fe80::be76:4eff:fe04:b59b/64 scope link
           valid_lft forever preferred_lft forever

    root@devstack-cc:~# ip addr show eth2
    4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        link/ether bc:76:4e:06:10:32 brd ff:ff:ff:ff:ff:ff
        inet 192.168.97.1/24 brd 192.168.97.255 scope global eth2
           valid_lft forever preferred_lft forever
        inet6 fe80::be76:4eff:fe06:1032/64 scope link
           valid_lft forever preferred_lft forever

Public IP: eth0
Internal IP: eth2
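If you prefer not to read the address out of the full listing, a small awk filter can extract it. Here the sample inet line from above is piped in so the result is visible; on the lab system you would pipe `ip -4 addr show eth0` instead:

```shell
# Extract just the IPv4 address from `ip addr show` output.
# On the lab box: ip -4 addr show eth0 | awk '/inet /{split($2,a,"/"); print a[1]}'
echo "    inet 166.78.151.57/24 brd 166.78.151.255 scope global eth0" \
  | awk '/inet /{split($2,a,"/"); print a[1]}'
# prints: 166.78.151.57
```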

4. When the previous command finishes, return to being the ubuntu user:


    root@devstack-cc:~# exit
    logout
    ubuntu@devstack-cc:~$

Install the git command and DevStack software

DevStack is not typically considered safe for production, but it is useful for testing and learning, and it is easy to configure and reconfigure. While other distributions may be more stable, they tend to be difficult to reconfigure, with a fresh installation often being the easiest option. DevStack can be rebuilt in place with just a few commands.

DevStack is under active development. What you download could differ from a download made just minutes later. While most updates are benign, there is a chance that a new version could render a system difficult or impossible to use. Never deploy DevStack on a production machine.

1. Before we can download the software, we need to update the package information and install the version control tool git.


    ubuntu@devstack-cc:~$ sudo apt-get update

    ubuntu@devstack-cc:~$ sudo apt-get install git
    ...
    After this operation, 21.6 MB of additional disk space will be used.
    Do you want to continue? [Y/n] y

    

2. Now to retrieve the DevStack software:


    ubuntu@devstack-cc:~$ pwd
    /home/ubuntu
    ubuntu@devstack-cc:~$ git clone https://git.openstack.org/openstack-dev/devstack -b stable/liberty
    Cloning into 'devstack'...

    

3. The newly installed software can be found in a new sub-directory named devstack. Installation is performed by a shell script called stack.sh. Take a look at the file:

    ubuntu@devstack-cc:~$ cd devstack

    ubuntu@devstack-cc:~/devstack$ less stack.sh

4. There are several files and scripts to investigate. If you have issues during installation and configuration, you can use the unstack.sh and clean.sh scripts to (usually) return the system to the starting point:

    ubuntu@devstack-cc:~/devstack$ less unstack.sh

    ubuntu@devstack-cc:~/devstack$ less clean.sh

5. We will need to create a configuration file for the installation script. A sample has been provided for review. Use the contents of the file to answer the following question.

    ubuntu@devstack-cc:~/devstack$ less samples/local.conf

6. What is the location of script output logs? _____________

7. There are several test and exercise scripts available, found in sub-directories of the same name. A good, general test is the run_tests.sh script.

Due to the constantly changing nature of DevStack, these tests are not always useful or consistent. You may see errors yet still be able to use OpenStack without issue; for example, missing software should be installed by the upcoming stack.sh script.

Keep the output of the tests, and refer back to it as a starting point for troubleshooting if you encounter an issue.

    ubuntu@devstack-cc:~/devstack$ ./run_tests.sh
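Since the results vary from run to run, one way to keep that output around is to tee it to a dated log file (the file name here is just a suggestion):

```shell
./run_tests.sh 2>&1 | tee ~/run_tests-$(date +%F).log
```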

While there are many possible options, we will do a simple OpenStack deployment. Create a ~/devstack/local.conf file; parameters not set in this file will use default values, prompt for input at the command line, or be assigned a random value.

Create a local.conf file

1. We will create a basic configuration file. In our labs, we use eth2 for inter-node traffic. Use eth2 and its IP address when you create the following file.


    ubuntu@devstack-cc:~/devstack$ vi local.conf
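As a sketch, a minimal single-node local.conf, modeled on the sample file reviewed earlier, might contain something like the following (the passwords are illustrative; choose your own, and use the eth2 address you noted above):

```
[[local|localrc]]
# Interface address used for inter-node traffic (from `ip addr show eth2`)
HOST_IP=192.168.97.1
# Illustrative passwords; stack.sh generates random values if these are unset
ADMIN_PASSWORD=openstack
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
# Send script output to a log file
LOGFILE=$DEST/logs/stack.sh.log
```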

    

3. Navigate to the System -> Hypervisors page. Use the Hypervisor and Compute Host sub-tabs to answer the following questions.

a. How many hypervisors are there?

b. How many VCPUs are used?

c. How many VCPUs total?

d. How many compute hosts are there?

e. What is its state?

4. Navigate to the System -> Instances page.

a. How many instances are there currently?

5. Navigate to the Identity -> Projects page.

a. How many projects exist currently?

6. Navigate through the other tabs and subtabs to become familiar with the BUI.

Solutions

Task 2

6. $DEST/logs/stack.sh.log

Task 5

3. a. 1 b. 0 c. 2 d. 1 e. up

4. a. 0

5. a. 6

The Essentials of OpenStack Administration course teaches you everything you need to know to create and manage private and public clouds with OpenStack. Download a sample chapter today!

Read the other articles in the series:

Essentials of OpenStack Administration Part 1: Cloud Fundamentals

Essentials of OpenStack Administration Part 2: The Problem With Conventional Data Centers

Essentials of OpenStack Administration Part 3: Existing Cloud Solutions

Essentials of OpenStack Administration Part 4: Cloud Design, Software-Defined Networking and Storage

Essentials of OpenStack Administration Part 5: OpenStack Releases and Use Cases


OpenStack has come a long way since 2010, when NASA approached Rackspace for a project. With 1,600 individual contributors and a six-month release cycle, there is a lot of change and progress, and it is not without drawbacks. In the Juno release, there were something like 10,000 bugs. In the next release, Kilo, there were 13,000 bugs. But as OpenStack is deployed in more environments and more people take an interest in it, the community grows in both users and developers.

In part 5 of our series from the Essentials of OpenStack Administration course sample chapter, we discuss the OpenStack project in more detail: its community of contributors, release cycle, and use cases. Download the full sample chapter now.

History of OpenStack

In 2010, engineers at NASA approached some friends at Rackspace to build an open cloud for NASA and, hopefully, other government organizations as part of an Open Government initiative. At that time, only proprietary and expensive offerings were available. Project Nebula was born. Rackspace was interested in moving its software toward open source and saw Nebula as a good place to begin.

Together they started working on something called Nova, known now as OpenStack Compute. At the time, Nova was the project that did everything: storage, networking, and virtual machines. Now, new projects have taken over some of those duties.

Since then, the number of projects has grown incredibly. If you go to the OpenStack.org website and look at the projects page, you’ll notice there are more than 35 different projects. Each project provides one or more services to the cloud, and each is developed separately.

Although NASA has stopped major work on OpenStack, a large and growing group of supporters still remains. Each component of OpenStack has a dedicated project. Each project has an official name, as well as a more well-known code-name. The project list has been growing with each release. Some projects are considered core, others are newer and in incubation stages. See a list of the current projects.

There are several distributions of OpenStack available as well, from large IT companies and start-ups alike. DevStack is a deployment of OpenStack available from the www.openstack.org website. It allows for easy testing of new features, but is not considered production-safe. Red Hat, Canonical, Mirantis and several other companies also provide their own deployment of OpenStack, similar to the many options to install Linux.

OpenStack Release Pattern

The first release of the project was code-named Austin, in October of 2010. Since then, a major release has been deployed every six months. There are code features and proposals that are evaluated every two months or so, as well as code sprints planned on a regular basis.

The quick release schedule and the large number of developers working on the code do not always lead to smooth transitions. The Kilo release was the first one to address an upgrade path, with its success yet to be known. In fact, there were notably more bugs in the Kilo release than in the earlier Juno release.

OpenStack Use Cases

The ability to deploy and redeploy various instances allows for software development at the speed of the developer, without downtime waiting for IT to handle a ticket.

Testing can easily be done in parallel with various flavors (system configurations) and operating system configurations. These choices are also within the reach of the end user, lessening interaction with the IT team.

Using either a Browser User Interface (BUI) or a command line, many common IT requests can be delegated to the users. The IT staff can then focus on higher-level functions and problems instead of routine requests.
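As a sketch of that self-service model, with the openstack command-line client installed and credentials loaded, a user could satisfy a typical request without a ticket (the flavor, image, and network names here are illustrative):

```shell
# List the available flavors (system configurations)
openstack flavor list

# Boot an instance with a chosen flavor and image on a private network
openstack server create --flavor m1.small --image cirros \
  --network private demo-vm
```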

The flexibility of OpenStack through various software-defined layers allows for more options, instead of fewer, as has happened with server consolidation.

The next, and final, article in this series is a tutorial on installing DevStack, a simple way for developers to test-drive OpenStack.

The Essentials of OpenStack Administration course teaches you everything you need to know to create and manage private and public clouds with OpenStack. Download a sample chapter today!

Read the other articles in the series:

Essentials of OpenStack Administration Part 1: Cloud Fundamentals

Essentials of OpenStack Administration Part 2: The Problem With Conventional Data Centers

Essentials of OpenStack Administration Part 3: Existing Cloud Solutions

Essentials of OpenStack Administration Part 4: Cloud Design, Software-Defined Networking and Storage

Start exploring Essentials of OpenStack Administration by downloading the free sample chapter today. DOWNLOAD NOW

There are a number of open source cloud solutions, such as Eucalyptus, OpenQRM, OpenNebula, and of course, OpenStack. These implementations typically share some design concepts and services, which we’ll cover in this article — part of our ongoing series from The Linux Foundation’s Essentials of OpenStack Administration course. Download the full sample chapter now.

Design Concepts

First, cloud platforms are expected to grow: platform providers must be able to add resources at any time, with little hassle and with no downtime.

Cloud platforms also have a special interest in providing open APIs (Application Program Interfaces): this brings third-party developers, which in turn bring more users. Publicly available and well-documented APIs make this easier by orders of magnitude.

Open APIs also ensure a basic level of flexibility and transparency, among other things making it easier for companies to decide for or against a specific platform.

RESTful interfaces are accessible via the ubiquitous HTTP protocol, making them readily scalable. It’s also easy to write software that communicates using them. Plus, many cloud platforms and providers use REST, so programmers developing for one will find it relatively easy to develop for another.
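To make that concrete, an OpenStack Identity (Keystone) v3 token request is just an HTTP POST with a JSON body, which any HTTP client can issue. This is a sketch; the endpoint hostname and credentials are placeholders for your own deployment:

```shell
# Request a token from Keystone v3 (endpoint and credentials are placeholders)
curl -s -X POST http://controller:5000/v3/auth/tokens \
  -H "Content-Type: application/json" \
  -d '{"auth": {"identity": {"methods": ["password"],
        "password": {"user": {"name": "admin",
          "domain": {"id": "default"}, "password": "secret"}}}}}'
```

The token comes back in the `X-Subject-Token` response header and is then passed to the other services' REST APIs.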

Software-Defined Networking

Historically, the networking infrastructure has been a relatively static component of data centers. Even simple things like IP address provisioning are typically manual, error-prone affairs. Modern DCs (data centers) rely on advanced functions like VLANs or trunking, but they still happen on the networking level and require manual switch configuration.

We have established that cloud platforms require end users to configure networking, such as IP address requests, private networks, and gateway access. The cloud requires this to be flexible and open, hence the term software-defined networking, or SDN.

Software-defined networking is an area of OpenStack receiving a lot of attention and change. The goal of SDN is to manage the network entirely from within OpenStack. There are two general approaches to deploying SDN. One is to use the existing switch architecture; the OpenStack software then uses proprietary code to make requests to the switch. The other is to replace the control plane of the switch with open software, which would make the communication open and transparent end to end. As well, there would be no vendor lock-in with a particular switch manufacturer.

A similar concept is network function virtualization (NFV). Where SDN is the virtualization of the network and the separation of the control and data planes, NFV is the virtualization of historic appliances such as routers, firewalls, load balancers, and accelerators. These become functions that exist in a particular virtual machine. Some customers, such as telephone companies, can then deploy these services as virtual machines, removing the need for multiple different proprietary implementations.

Software-Defined Storage

In conventional setups, storage is typically designed around SANs (storage area networks) or SAN-like software constructs. Like conventional networking, these are often difficult and expensive to scale, and, as such, are unsuited to cloud environments.

Storage is a central part of clouds, and (you guessed it!), it must be provided to the user in fully automated fashion. Once again, the best way to achieve this is to introduce an abstraction layer in the software, a layer that needs to be scalable and fully integrated with both the cloud platform itself and the underlying storage hardware.

Flexible storage is another area essential for a cloud provider. Historically, the solution was a SAN. A storage area network uses proprietary hardware and tends to be expensive. Cloud providers are looking toward Ceph, which allows for distributed access to commodity hardware across the network. Ceph uses standard network connections and allows for parallel access by thousands of clients. With no single point of failure, it is becoming the default choice for back-end storage.

In part 5 of this series, we’ll delve more into the OpenStack project: its open source community, release cycles, and use cases.

The Essentials of OpenStack Administration course teaches you everything you need to know to create and manage private and public clouds with OpenStack. Download a sample chapter today!

Read the other articles in the series:

Essentials of OpenStack Administration Part 1: Cloud Fundamentals

Essentials of OpenStack Administration Part 2: The Problem With Conventional Data Centers

Essentials of OpenStack Administration Part 3: Existing Cloud Solutions

Essentials of OpenStack Administration Part 5: OpenStack Releases and Use Cases


Infrastructure providers aim to deliver excellent customer service and provide a flexible and cost-efficient infrastructure, as we learned in part one of this series.

Cloud Computing, then, is driven by a very simple motivation from the infrastructure providers’ perspective: “Do as much work as possible only once and automate it afterwards.”

In cloud environments, the provider will simply provide infrastructure that allows customers to do most of the work on their own through a simple interface. After the initial setup, the provider’s main task is to ensure that the whole setup has enough resources. If the provider runs out of resources, they will simply add more capacity. Thus another advantage of automation is that it can facilitate flexibility.

In this article, we’ll contrast what we learned in part two about conventional, un-automated infrastructure offerings with what happens in the cloud.

The Fundamental Components of Clouds

From afar, clouds are automated virtualization and storage environments. But if you look closer, you’ll start seeing a lot more details. So let’s break the cloud down into its fundamental components.

First and foremost, a cloud must be easy to use. Starting and stopping virtual machines (VMs) and commissioning online storage is easy for professionals, but not for the Average Joe! Users must be able to start VMs by pointing and clicking. So any cloud software must provide a way for users to do just that, but without the learning curve.

Installing a fresh operating system on a newly created virtual machine is a tedious process, once again, hard to achieve for non-professionals. Thus, clouds need pre-made images, so that users do not have to install operating systems on their own.

Conventional data centers are heterogeneous environments which grow to meet the organic needs of an organization. While components may have some automation tools available, there is not a consistent framework to deploy resources. Various teams such as storage, networking, backup, and security, each bring their own infrastructure, which must be integrated by hand. A cloud deployment must integrate and automate all of these components.

Customer organizations typically have their own organizational hierarchy. A cloud environment must provide an authorization scheme that is flexible enough to match that hierarchy. For instance, there may be managers who are allowed to start and stop VMs or to add administrator accounts, while interns might only be allowed to browse them.
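In OpenStack, that hierarchy maps onto projects, users, and roles. As an illustrative sketch (the project and user names are invented), an administrator might grant different levels of access like this:

```shell
# Give a manager full member access to the demo project
openstack role add --project demo --user manager member

# Give an intern only read access (the reader role exists in newer releases)
openstack role add --project demo --user intern reader
```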

When a user starts a new VM, presumably from the aforementioned easy-to-use interface, it must be set up automatically. When the user terminates it, the VM itself must be deleted, also automatically.
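With the openstack CLI, that lifecycle is a pair of commands; everything in between (scheduling, image copying, network plumbing, cleanup) happens automatically. The flavor, image, and server names below are illustrative:

```shell
# Create the VM and wait until it is active
openstack server create --flavor m1.small --image cirros --wait demo-vm

# Delete it and wait until the cleanup completes
openstack server delete --wait demo-vm
```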

A bonus of the work to implement this particular kind of automation is that with a little more effort, usually involving the implementation of a component that knows which VMs are running on which servers, the cloud can provide automatic load-balancing.

Online storage is an important part of the cloud. As such, it must be fully automated and easy to use (like Dropbox or Google Drive).

There are a number of cloud solutions, such as Eucalyptus, OpenQRM, OpenNebula, and of course, OpenStack. Open source implementations typically share some design concepts, which we will discuss in part 4.

Various cloud solutions have been in existence since the mid-1960s. Mainframes provide virtualized resources but tend to be proprietary, expensive, and difficult to manage. Since then there have been midrange and PC architecture solutions. They also tend to be expensive and proprietary. These interim solutions also may not provide all of the resources now available through OpenStack.

The Essentials of OpenStack Administration course teaches you everything you need to know to create and manage private and public clouds with OpenStack. Download a sample chapter today!

Read the other articles in the series:

Essentials of OpenStack Administration Part 1: Cloud Fundamentals

Essentials of OpenStack Administration Part 2: The Problem With Conventional Data Centers

Essentials of OpenStack Administration Part 4: Cloud Design, Software-Defined Networking and Storage

Essentials of OpenStack Administration Part 5: OpenStack Releases and Use Cases

This was an exciting year for webinars at The Linux Foundation! Our topics ranged from network hardware virtualization to Microsoft Azure to container security and open source automotive, and members of the community tuned in from almost every corner of the globe. The following are the top 5 Linux Foundation webinars of 2016:

  1. Getting Started with OpenStack

  2. No More Excuses: Why you Need to Get Certified Now

  3. Getting Started with Raspberry Pi

  4. Hyperledger: Blockchain Technologies for Business

  5. Security Top 5: How to keep hackers from eating your Linux machine

Curious to watch all the past webinars in our library? You can access all of our webinars for free by registering on our on-demand portal. On subsequent visits, click “Already Registered” and use your email address to access all of the on-demand sessions.


Getting Started with OpenStack

Original Air Date: February 25, 2016

Cloud computing software represents a shift in the enterprise production environment from closed, proprietary software to open source. OpenStack has become the leader in cloud software, supported and used by small and large companies alike. In this session, guest speaker Tim Serewicz addressed the most common OpenStack questions and concerns, including:

  • I think I need it but where do I even start?

  • What are the problems that OpenStack solves?

  • History & Growth of OpenStack: Where’s it been and where is it going?

  • What are the hurdles?

  • What are the sore points?

  • Why is it worth the effort?

Watch Replay >>


No More Excuses: Why you Need to Get Certified Now

Original Air Date: June 9, 2016

According to the 2016 Open Source Jobs Report, 76% of open source professionals believe that certifications are useful for their careers. This webinar session focused on tips, tactics, and practical advice to help professionals build the confidence to take the leap to commit to, schedule, and pass their next certification exam. This session covered:

  • How certifications can help you reach your career goals

  • Which certification is right for you: Linux Foundation Certified SysAdmin or Certified Engineer?

  • Strategies to thoroughly prepare for the exam

  • How to avoid common exam mistakes

  • The ins and outs of the performance certification process to boost your exam confidence

  • And more…

Watch Replay >>


Getting Started with the Raspberry Pi

Original Air Date: December 14, 2016

Maybe you bought a Raspberry Pi a year or two ago and never got around to using it. Or you built something interesting once, but now there’s a new Pi and new add-ons, and you want to know if they could make your project even better? The Raspberry Pi has grown from its original purpose as a teaching tool to become the tiny computer of choice for many makers, giving those with varied Linux and hardware experience a fully functional computer the size of a credit card to power their ideas. Wherever you are in your Pi experience, this session with guest speaker Ruth Suehle offered great tricks for getting the most out of the Raspberry Pi and showcased dozens of great projects to get you inspired.

Watch Replay >>


Hyperledger: Blockchain Technologies for Business

Original Air Date: December 1, 2016

Curious about the foundations of distributed ledger technologies, smart contracts, and other components that comprise the modern blockchain technology stack? In this session, guest speaker Dan O’Prey from Digital Asset provided an overview of the Hyperledger Project at The Linux Foundation, the main use cases and requirements for the technology in commercial applications, the history of the projects under the Hyperledger umbrella, and how you can get involved.

Watch Replay >>


Security Top 5: How to keep hackers from eating your Linux machine

Original Air Date: November 15, 2016

There is nothing a hacker likes more than a tasty Linux machine available on the Internet. In this session, a professional pentester talked tactics, tools and methods that hackers use to invade your space. Learn the 5 easiest ways to keep them out, and know if they have made it in. The majority of the session focused on answering audience questions from both advanced security professionals and those just starting in security.

Watch Replay >>

Don’t forget to view our upcoming webinar calendar to participate in our upcoming live webinars with top open source experts.


In part 1 of this series, we defined cloud computing and discussed different cloud services models and the needs of users and platform providers. This time we’ll discuss some of the challenges that conventional data centers face and why automation and virtualization, alone, cannot fully address these challenges. Part 3 will cover the fundamental components of clouds and existing cloud solutions.

For more on the basic tenets of cloud computing and a high-level look at OpenStack architecture, download the full sample chapter from The Linux Foundation’s online Essentials of OpenStack Administration course.

Conventional Data Centers

Conventional data centers are known for having a lot of hardware that is, by current standards at least, grossly underutilized. In addition, all of that hardware (and the software that runs on it) is usually managed with relatively little automation.

Even though many things happen automatically these days (configuration deployment systems such as Puppet and Chef help here), the overall level of automation is typically not very high.

In conventional data centers it is very hard to find the right balance between capacity and utilization. This is complicated by the fact that many workloads do not fully utilize a modern server: some may use a lot of CPU but little memory, others a lot of disk IO but little CPU. Still, data centers want enough capacity to handle spikes in load, but don’t want the cost of idle hardware.
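This is why consolidation pays off: complementary workloads can often share a server without either resource running out. A back-of-the-envelope check, with all capacities and demands hypothetical:

```python
# Two complementary workloads on one server (capacities and demands
# are hypothetical, in arbitrary units).
server = {"cpu": 32, "mem": 128}
workloads = [
    {"name": "batch-job", "cpu": 24, "mem": 16},  # CPU-heavy, little memory
    {"name": "cache",     "cpu": 4,  "mem": 96},  # memory-heavy, little CPU
]

used_cpu = sum(w["cpu"] for w in workloads)
used_mem = sum(w["mem"] for w in workloads)
fits = used_cpu <= server["cpu"] and used_mem <= server["mem"]
print(f"CPU {used_cpu}/{server['cpu']}, MEM {used_mem}/{server['mem']}, fits={fits}")
```

Run separately on dedicated servers, each workload would leave most of one resource idle; together they use the machine far more evenly.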

Whatever the case, it is clear that modern data centers require a lot of physical space, power, and cooling. The more efficient they run, the better for all parties involved.


Figure 1: In a conventional data center some servers may use a lot of CPU but little memory (MEM), or a lot of disk IO but little CPU.

A conventional data center faces several challenges to efficiency. Often there are silos, or divisions of duties among teams: a systems team that handles the ongoing maintenance of operating systems, a hardware team that does the physical and plant maintenance, database and network teams, perhaps even storage and backup teams as well. While this allows for specialization in a particular area, the efficiency of producing a new instance to meet customer requirements is often low.

A conventional data center also tends to grow organically; changes are not always well thought out. If it’s 2 a.m. and something needs doing, a person from one team may make whatever changes they think are necessary. Without proper documentation, the other teams are unaware of those changes, and figuring them out later costs time, energy, and resources, which further lowers efficiency.

Manual Intervention

One of the problems arises when a data center needs to expand: new hardware is ordered, and, once it arrives, it’s installed and provisioned manually. Hardware is likely specialized, making it expensive. Provisioning processes are manual and, in turn, costly, slow, and inflexible.

“What is so bad about manual provisioning?” Think about it: network integration, monitoring, setting up high availability, billing… There is a lot to do, and some of it is not simple. These things are not hard to automate, but until recently this was hardly ever done.

Automation frameworks such as Puppet, Chef, Juju, Crowbar, or Ansible can take care of a fair amount of the work in modern data centers. However, even though the frameworks exist, there are many tasks in a data center that they cannot do, or do not do well.
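The common idea behind these frameworks is declarative, idempotent configuration: you describe the desired state, and the tool acts only where reality differs, so re-running it is harmless. A toy illustration of that idea in Python (this is not any real framework’s API):

```python
# Toy desired-state configuration: apply() changes only the keys where
# the actual state differs from the declared one, so repeated runs are
# harmless (idempotent).
def apply(desired: dict, actual: dict) -> list:
    changes = []
    for key, value in desired.items():
        if actual.get(key) != value:
            actual[key] = value
            changes.append(key)
    return changes

desired = {"ntp": "installed", "sshd": "running"}
actual = {"ntp": "installed"}
print(apply(desired, actual))  # only 'sshd' needs a change
print(apply(desired, actual))  # second run: nothing to do
```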

Virtualization

A platform provider needs automation, flexibility, efficiency, and speed, all at low cost. We have automation tools, so what is the missing piece? Virtualization!

Virtualization is not a new thing. It has been around for years, and many people have been using it extensively. Virtualization comes with the huge advantage of isolating the hardware from the software being used. Modern server hardware can be used much more efficiently when being combined with virtualization. Also, virtualization allows for a much higher level of automation than standard IT setups do.


Figure 2: Virtualization flexibility.

Virtualization and Automation

For instance, deploying a new system in a virtualized environment is fairly easy, because all it takes is creating a new virtual machine (VM). This helps us plan better when buying new hardware, preparing it, and integrating it into the platform provider’s data center. Typical virtualization environments such as VMware, KVM on Linux, or Microsoft Hyper-V are good examples.

Yet the situation is not ideal, because in standard virtualized environments many things still need to be done by hand.

Customers will typically not be able to create new VMs on their own; they need to wait for the provider to do it for them. The infrastructure provider must first create storage (such as a Ceph volume, a SAN device, or an iSCSI LUN), attach it to the VM, and then perform OS installation and basic configuration.

In other words, standard virtualization is not enough to fulfill either providers’ or their customers’ needs. Enter cloud computing!

In Part 3 of this series, we’ll contrast what we’ve learned about conventional, un-automated infrastructure offerings with what happens in the cloud.

Read the other articles in this series: 

Essentials of OpenStack Administration Part 4: Cloud Design, Software-Defined Networking and Storage

Essentials of OpenStack Administration Part 5: OpenStack Releases and Use Cases

The Essentials of OpenStack Administration course teaches you everything you need to know to create and manage private and public clouds with OpenStack. Download a sample chapter today!


OpenStack and cloud computing offer a way to automate and virtualize a traditional data center, allowing for a single point of control and a single view of the resources in use.

Cloud computing is an important part of today’s data center, and the skills to deploy, work with, and troubleshoot a cloud are essential for today’s sysadmins.

Some 51 percent of hiring managers say experience with or knowledge of OpenStack and CloudStack are driving open source hiring decisions, according to the Open Source Jobs Report from The Linux Foundation and Dice.

The Linux Foundation’s online Essentials of OpenStack Administration course teaches everything you need to know to create and manage private and public clouds with OpenStack. In this tutorial series, we’ll give you a sneak preview of the second session in the course on Cloud Fundamentals. Or you can download the entire chapter now.

The series covers the basic tenets of cloud computing and takes a high-level look at the architecture. You’ll also learn the history of OpenStack and compare cloud computing to a conventional data center.

By the end of the tutorial series, you should be able to:

• Understand the solutions OpenStack provides

• Differentiate between conventional and cloud data center deployments

• Explain the federated nature of OpenStack projects

In part 1, we’ll define cloud computing and discuss different cloud services models and the needs of users and platform providers.

What is cloud computing?

Cloud computing is a blanket term that can mean different things in different contexts. In science, for example, it often refers simply to distributed computing, where an application runs simultaneously on two or more connected computers. In common usage, it might refer to anything from the Internet itself to a certain class of services offered by a single company.

Users and platform providers typically mean different things when they discuss the cloud. Users think of a place on the Internet where they can upload things. For platform providers, clouds are infrastructure projects that allow data centers to be much more efficient than they were previously. The latter is the focus of the Essentials of OpenStack Administration class.

You may have also heard of the following terms:

• Infrastructure as a Service (IaaS)

• Platform as a Service (PaaS)

• Software as a Service (SaaS)

The three terms refer to three common service models offered by cloud vendors such as Amazon or Rackspace, where IaaS is the most basic but flexible one, and the others progressively mask the “dirty details” from the user, trading flexibility for ease-of-use.

Platform Services

Platform providers have goals when providing IT services, such as:

• Delivering excellent customer service.

• Providing a flexible and cost-efficient infrastructure.

If a provider fails to deliver excellent customer service, customers will look for alternatives. Cost-efficiency is always the bottom line. No one wants to spend millions on infrastructure that is static.

Infrastructure service customers will also have some requirements of their own:

• Stability, reliability, flexibility of the service…

• … for as little money as possible.

The phrase “wire once, deploy many” sums up the goal of an infrastructure provider. From the customer’s perspective, all of the various components are presented through an easy-to-use software interface. This interface lets the customer start new virtual machines, attach storage, attach network resources, and shut instances down, all without having to open a ticket, which gives the customer more flexibility. The infrastructure provider can then focus on providing good customer service, lowering costs through consolidation, and meeting the ongoing resource requirements of one or more customers.
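That self-service interface can be pictured as a single API that bundles the provisioning steps behind one call. The sketch below is a toy, with all names invented; in a real cloud these calls would go to OpenStack services such as Nova and Cinder:

```python
# Toy self-service interface: the customer drives provisioning through
# one API instead of filing tickets (all names are illustrative).
import itertools

class SelfServiceCloud:
    def __init__(self):
        self.instances = {}
        self._ids = itertools.count(1)

    def launch(self, image: str, disk_gb: int, network: str) -> int:
        # One call stands in for what used to involve several teams:
        # create storage, attach it, plug in the network, boot the image.
        vm_id = next(self._ids)
        self.instances[vm_id] = {"image": image, "disk_gb": disk_gb,
                                 "network": network, "state": "running"}
        return vm_id

    def shutdown(self, vm_id: int) -> None:
        self.instances[vm_id]["state"] = "stopped"

cloud = SelfServiceCloud()
vm = cloud.launch("ubuntu-22.04", disk_gb=20, network="private")
cloud.shutdown(vm)
print(vm, cloud.instances[vm]["state"])  # 1 stopped
```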

Catering to Both Providers and Customers

As you can see, platform providers and their customers have very similar requirements. The key to catering to both is automation: it facilitates both flexibility and cost-effectiveness. We will get into a lot more detail on this later.

In Part 2 of this series, we’ll see what conventional, un-automated infrastructure offerings look like, and Part 3 looks at existing cloud solutions. 

Read the other parts of this series: 

Essentials of OpenStack Administration Part 4: Cloud Design, Software-Defined Networking and Storage

Essentials of OpenStack Administration Part 5: OpenStack Releases and Use Cases

The Essentials of OpenStack Administration course teaches you everything you need to know to create and manage private and public clouds with OpenStack. Download a sample chapter today!