
The TARS Foundation: The Formation of a Microservices Ecosystem

Introduction

During the 1960s and 1970s, software developers typically built monolithic applications for mainframes and minicomputers, and no single application could satisfy the needs of most end users. Vertical industries used software with smaller code footprints and simpler interfaces to other applications, and scalability was not a priority at the time.

With the rise of the Internet, developers gradually separated the service layer from these monolithic architectures, moving first to RPC and then to client/server designs.

But existing architectures were unable to keep up with the needs of larger enterprises and exploding data traffic. Beginning in the middle of the 1990s, distributed architectures began to rise in popularity, with service-oriented architectures (known as SOA) becoming increasingly dominant.

In the mid-2000s, microservices began to appear, and a set of popular frameworks based on microservice architectures were developed, with TARS appearing in 2008. After being used at scale and enhanced for 10 years, TARS became a Linux Foundation project in 2018.


Figure 1.  Interest in microservices has grown exponentially, as demonstrated by search trends on Google.

Introducing the TARS Foundation

Today, on March 10th, 2020, The Linux Foundation is excited to announce that the TARS project has transitioned into the TARS Foundation. The TARS Foundation is an open source microservice foundation to support the rapid growth of contributions and membership for a community focused on building an open microservices platform.

A Neutral Home for Open Source Microservices Projects

The TARS Foundation is a nonprofit foundation that focuses on open source technology that helps businesses embrace microservices architecture as they innovate into new areas and scale their applications.

It will continue to support the TARS project by growing the community that has been operating under the Linux Foundation since 2018. The Linux Foundation offers a neutral home for infrastructure, open governance, and community engagement support, aiding open source microservices projects to empower any industry to turn ideas into applications at scale quickly.

The TARS Foundation is working to address the problems that can occur in using microservices, including reducing the difficulties of development and service governance. It seeks to solve interoperability across programming languages, data transfer issues, and consistency of data storage, while ensuring high performance under massive request volumes.

The TARS Foundation aims to accommodate a variety of bottom-up contributions to build a better microservices ecosystem. These will include, but not be limited to, infrastructure, storage, development frameworks, service governance, DevOps, and applications in any programming language.

It Begins With a Mature Microservice Framework

The modern enterprise needs a better microservices platform for its modern applications, one that supports development through DevOps best practices, comprehensive service governance, high-performance data transfer, storage scalability under massive data requests, and built-in cross-language interoperability (e.g., Golang, Java, C++, PHP, Node.js).

In support of these growing requirements, the TARS project, developed by Tencent (0700.HK), provides a mature, high-performance RPC framework that supports multiple programming languages. Since Tencent's initial open source contribution, many other organizations have made significant contributions that extend the platform's features and value.


Figure 2. The TARS Project Microservice Ecosystem.

TARS enables developers to build systems quickly and generates code automatically, balancing ease of use with high performance. At the same time, TARS supports multiple programming languages, including C++, Golang, Java, Node.js, PHP, and Python. TARS helps developers and enterprises quickly build stable, reliable distributed applications in a microservices style, letting them focus on business logic and improve operational efficiency.
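The framework's code generation starts from an interface definition. As an illustrative sketch (the module, service, and method names here are hypothetical), a Tars interface definition file, from which language-specific client and server stubs would be generated, might look like this:

```
// Hypothetical service definition in the Tars interface definition
// language; the framework's tooling generates matching client and
// server stubs for each supported language.
module DemoApp
{
    interface Greeter
    {
        // Request/response call: returns a status code and fills in
        // the output parameter with the greeting text.
        int hello(string name, out string greeting);
    };
};
```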

The advantages of multi-language support, agile development, high availability, and efficient operation make TARS an enterprise-grade product that works out of the box. TARS has been used and refined within Tencent for the past ten years and powers hundreds of core businesses, including the QQ and WeChat social networks, financial services, edge computing, automotive, video, online games, maps, the application market, and security. Its microservices now run at a scale of more than one million nodes, putting the industry-standard DevOps philosophy and Tencent's mass service approach into practice.

Why Should Projects Choose The TARS Foundation?

Joining the TARS Foundation will provide member organizations and projects with the following benefits:

Community Engagement
  • The TARS Foundation will host a constellation of open source projects. Members of the TARS Foundation will leverage many programs to engage with project ecosystems and share their ideas and use cases.
Thought Leadership
  • Members of the TARS Foundation will be able to network and help shape the evolving microservices ecosystem.
Marketing Amplification and Brand Awareness
  • Members can broaden their project’s reach and awareness in the community with TARS Foundation marketing programs.

As the TARS Foundation has been created to develop and foster the open microservices ecosystem, it will establish different functional mailing lists to support its user communities.

The TARS Foundation will also establish a series of mechanisms for the incubation and development of new projects. After a project has agreed to join the Foundation, the appropriate incubation and maturation route will be tailored according to the project circumstances.

After meeting all incubation requirements, the TARS Foundation will announce the project’s graduation. In addition to providing a technical oversight committee and a user community, the governing board will look after these projects by reviewing each project’s unique situation, providing strategic decisions, and assisting with their overall development.

Partner Commitments to the TARS Foundation

The TARS Foundation aims to empower any industry vertical to realize their ideas with their implementation of microservices. To date, TARS has worked with many industries, including fintech, e-sports, edge computing, online video, e-commerce, and education, among others.

As a result of over a decade of industry leadership in developing open microservices projects, many companies from different industries, such as Arm, Tencent, AfterShip, Ampere, API7, Kong, and Zenlayer, have committed to and have joined The TARS Foundation as members and partners.

Tencent

TARS has been developed, hardened, and enhanced within Tencent for more than ten years. It is widely used in hundreds of Tencent's core businesses, including the QQ and WeChat social networks, video, e-sports, maps, the application market, and security. Its microservices run at a scale of more than one million nodes, exemplifying the industry-standard DevOps philosophy and Tencent's mass service approach.

Arm

Arm is the world’s leading semiconductor intellectual property (IP) provider. Arm has been working with Tencent over the last year to undertake a complete port of TARS microservices to the Arm architecture. That porting effort is now complete and is available through the Akraino Blueprint ecosystem. The first two Arm deployments within Tencent are AR/VR and autonomous vehicle use cases for internal Tencent use.

AfterShip

AfterShip was established in 2012 with its headquarters located in Hong Kong. The company provides SaaS solutions to over 10,000 eCommerce businesses worldwide, including shipment tracking, returns management, sales, and marketing, and is a market leader in shipment tracking solutions.

“Our company has been adopting microservices for years, and we believe the TARS Foundation will help us excel in using microservices in the future.”

Ampere

Ampere focuses on cloud-native hardware. As such, it needs to ensure that any software used on that hardware runs exceedingly well to meet its customers' expectations.

“Microservices have become very popular for several years, so we think cooperation with the TARS Foundation and focusing on microservices will allow us to achieve our vision.”

API7

API7 is an open source software startup delivering a cloud-native microservices API gateway that aims to provide a high-performance, secure, scalable open source platform for all APIs and microservices. Compared with traditional API gateways, it offers dynamic routing and plug-in hot loading, which makes it especially suitable for API management in a microservices-based system.

Kong

Kong is the world’s most popular open source microservice API gateway. Kong is used to secure, manage, and orchestrate microservice APIs.

“We look forward to collaborating with the TARS Foundation members to drive microservices adoption and innovation across businesses of all industries.”

Zenlayer

Zenlayer is an edge cloud services provider that enables businesses to improve digital user experiences quickly and globally, particularly in emerging markets.

“Integration of microservices with edge computing is now widespread. We look forward to doing more research in that area together with the TARS Foundation.”

Conclusion

The TARS Foundation can help make the microservices ecosystem more effective, building a more aligned community of contributors and supporters. As more technology-first companies deploy microservices in production, we expect the trend to extend to traditional industries that are transforming. We hope that more people and companies will participate in the TARS Foundation and welcome everyone to contribute to a better and more open microservice ecosystem.

“The TARS Foundation will accelerate innovation for the microservices ecosystem through an open governance model that allows for rapid and high-quality contributions and collaboration. The Linux Foundation is very happy to support this work and enable its growth.” — Jim Zemlin, Executive Director of the Linux Foundation

The Linux Foundation’s new Kubernetes training course is now available for developers and system administrators who want to learn container orchestration using this popular open source tool.

Kubernetes is quickly becoming the de facto standard to operate containerized applications at scale in the data center. As its popularity surges, so does demand for IT practitioners skilled in Kubernetes.

“Kubernetes is rapidly maturing in development tests and trials and within production settings, where its use has nearly tripled in the last eight months,” said Dan Kohn, executive director, the Cloud Native Computing Foundation.

Kubernetes Fundamentals (LFS258) is a self-paced, online course that teaches students how to use Kubernetes to manage their application infrastructure. Topics covered include:

  • Kubernetes architecture

  • Deployment

  • How to access the cluster

  • Tips and tricks

  • ConfigMaps, and more.

Because the course was developed by The Linux Foundation and the Cloud Native Computing Foundation, home of the Kubernetes open source project, developers and admins will learn the technology straight from the source.

Students will learn the fundamentals needed to understand Kubernetes and get up to speed quickly to start building distributed applications that are scalable, fault-tolerant, and simple to manage.

The course distills key principles, such as pods, deployments, ReplicaSets, and services, and gives students enough information to start using Kubernetes on their own. It is also designed to work with a wide range of Linux distributions, so students will be able to apply the concepts learned regardless of their distribution of choice.
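As a minimal sketch of how those principles fit together (the names and image below are illustrative, not taken from the course), a Deployment manages a ReplicaSet that keeps a fixed number of pods running, and a Service gives them a stable endpoint:

```yaml
# Hypothetical manifest: a Deployment that keeps three replica pods of
# a web server running, exposed inside the cluster through a Service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                 # the ReplicaSet created by this Deployment
  selector:                   # maintains three identical pods
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web                  # routes traffic to the pods above
  ports:
  - port: 80
```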

LFS258 will also help prepare those planning to take the Kubernetes certification exam, which will launch later this year. Course updates specifically designed to assist with exam preparation are planned ahead of that launch.

The course, which has been available for pre-registration since November, is available to begin immediately. The $199 course fee provides unlimited access to all course content and labs for one year. Sign up now!

In previous articles, we’ve discussed four notable trends in cloud computing and how the rise of microservices and the public cloud has led to a whole new class of open source cloud computing projects. These projects leverage the elasticity of the public cloud and enable applications designed and built to run on it.

Early on in cloud computing, there was a migration of existing applications to Amazon Web Services, Google, and Microsoft’s Azure. Virtually any app that ran on hardware in private data centers could be virtualized and deployed to the cloud. Now with a mature cloud market, more applications are being written and deployed directly to the cloud and are often referred to as being cloud native.

Here we’ll explore three emerging cloud technologies and mention a few key projects in each area. For a more in-depth explanation and to see a full list of all the projects across six broad categories, download our free 2016 Guide to the Open Cloud report.  

Cloud Native Applications

While there is no textbook definition, cloud native in its simplest sense describes applications designed to run on modern distributed systems capable of scaling to tens of thousands of nodes. The old mantra, “No one ever got fired for buying IBM (or Microsoft),” has given way to a new slogan: “No one is going to get fired for moving to the cloud.” Rather than looking for hard and fast qualifiers for cloud native, we need to look at the design patterns being applied to this evolving breed of applications.

In the pre-cloud days, we saw virtualization take hold: entire operating systems became portable inside virtual machines, so a machine could move from server to server based on its compatibility with hypervisors like VMware, KVM, or Xen Project. In recent years, the level of abstraction has moved up to the application layer, where container-based applications run in portable units that can be moved from server to server regardless of hypervisor, thanks to container technologies like Docker and the CoreOS-sponsored rkt (pronounced “rocket”).

Containers

A more recent addition in the cloud era is the rise of the container, most notably Docker and rkt. These application hosts are an evolution of previous innovations including Linux control groups (cgroups) and LXC, and an even further abstraction to make applications more portable. This allows them to be moved from development environments to production without the need for reconfiguration.
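A minimal sketch of that portability, assuming a hypothetical Python web application: the image built from a Dockerfile like the one below runs identically on a developer laptop, a CI runner, or a production host.

```dockerfile
# Illustrative Dockerfile for a hypothetical Python web app (app.py and
# requirements.txt are assumed). The resulting image bundles the app and
# its dependencies, so it can move from development to production
# without reconfiguration.
FROM python:3.12-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and declare how the container runs.
COPY . .
EXPOSE 8000
CMD ["python", "app.py"]
```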

Applications are now deployed, either from registries or through continuous deployment systems, to containers that are provisioned and configured using tools like Ansible, Puppet, or Chef.

Finally, to scale out these applications, schedulers such as Kubernetes, Docker Swarm, Mesos, and Diego coordinate containers across machines and nodes.

Unikernels

Another emerging technology that bears some similarity to containers is the unikernel. A unikernel is a pared-down operating system combined with a single application into a unikernel application, typically run within a virtual machine. Unikernels are sometimes called library operating systems, because they include libraries that enable applications to use hardware and network protocols, combined with a set of policies for access control and isolation at the network layer. Research systems such as Exokernel and Nemesis date back to the 1990s; today, popular unikernels include MirageOS and OSv. Because unikernel applications can be deployed independently across diverse environments, they can create highly specialized, isolated services and are increasingly used to build applications in a microservices architecture.

In the series that follows, we’ll dive into each category of open source cloud technology and list the most useful, influential, and promising open source projects with which IT managers and practitioners can build, manage, and monitor their current and future mission-critical cloud resources.  

Learn more about trends in open source cloud computing and see the full list of the top open source cloud computing projects. Download The Linux Foundation’s Guide to the Open Cloud report today!

Read the other articles in this series:

4 Notable Trends in Open Source Cloud Computing

Trends in the Open Source Cloud: A Shift to Microservices and the Public Cloud

Why the Open Source Cloud Is Important

 

Cloud computing is the cornerstone of the digital economy. Companies across industries now use the cloud — private, public or somewhere in between — to deliver their products and services.

A recent survey of industry analysis and research that we conducted for our 2016 Guide to the Open Cloud report produced overwhelming evidence of this.

Forty-one percent of all enterprise workloads are currently running in some type of public or private cloud, according to 451 Research. That number is expected to rise to 60 percent by mid-2018. And Rightscale reports that some 95 percent of companies are at least experimenting in the cloud. Enterprises are continuing to shift workloads to the cloud as their expertise and experience with the technology increases.

As we mentioned last week, companies in diverse industries — from banking and finance to automotive and healthcare — are facing the reality that they’re now in the technology business. In this new reality, cloud strategies can make or break an organization’s market success. And successful cloud strategies are built on Linux and open source software.

But what does that cloud strategy look like today and what will it look like in the future?

Short Term: Hybrid Cloud Architectures

While deployment and management remain a challenge, microservices architecture is now becoming mainstream. In a recent Nginx survey of 1,800 IT professionals, 44 percent said they’re using microservices in development or in production. Adoption was highest among small and medium-sized businesses. Not coincidentally, the use of public cloud is also predominant among SMBs, which are more nimble and faster to respond to market changes than large enterprises with legacy applications and significant on-premise infrastructure investments.   

Many reports tout hybrid cloud as a fast-growing segment of the cloud. Demand is growing at a compound rate of 27 percent, “far outstripping growth of the overall IT market,” according to researcher MarketsandMarkets. And IDC predicts that more than 80 percent of enterprise IT organizations will commit to hybrid cloud architectures by 2017.

However, hybrid cloud growth is happening predominantly among large enterprises with legacy applications and the budget and staffing to build private clouds. They turn to cloud for storage and scale-out capabilities, but keep most critical workloads on premise.  

In the mid-market, hybrid cloud adoption stands at less than 10 percent, according to 451 Research. Hybrid cloud is, then, a good transition point for legacy workloads and experimenting with cloud implementation. But it suffers from several challenges with more advanced cloud implementations, including management complexity and cost.

“Most organizations are already using a combination of cloud services from different cloud providers. While public cloud usage will continue to increase, the use of private cloud and hosted private cloud services is also expected to increase at least through 2017. The increased use of multiple public cloud providers, plus growth in various types of private cloud services, will create a multi-cloud environment in most enterprises and a need to coordinate cloud usage using hybrid scenarios.

“Although hybrid cloud scenarios will dominate, there are many challenges that inhibit working hybrid cloud implementations. Organizations that are not planning to use hybrid cloud indicated a number of concerns, including: integration challenges, application incompatibilities, a lack of management tools, a lack of common APIs and a lack of vendor support,” according to Gartner’s 2016 Public Cloud Services worldwide forecast.

Long Term: Microservices on the Public Cloud

Over the long term, workloads are shifting away from hybrid cloud to a public cloud market dominated by providers like AWS, Azure, and Google Compute. “The share of enterprise workloads moved to the public cloud is expected to triple over the next five years,” from 16 percent to 41.3 percent of workloads running in the public cloud, according to a recent JP Morgan survey of enterprise CIOs. Among this group, 13 percent said they view AWS as “intrinsic to future growth.”

By the end of 2016 the public cloud services market will reach $208.6 billion in revenue, growing by 17.2 percent from $178 billion in 2015, according to Gartner. Cloud application services (software-as-a-service, or SaaS) is one of the largest segments and is expected to grow by 21.7 percent in 2016 to reach $38.9 billion, while Infrastructure-as-a-Service (IaaS) is projected to see the most growth at 42.8 percent in 2016.

The public cloud itself is largely built on open source software. Offerings including Amazon EC2, Google Compute Engine and OpenStack are all built on open source technologies. They provide APIs that are well documented. They also provide a framework that is consistent enough to allow users to duplicate their infrastructure from one cloud to another without a significant amount of customization.

This allows for application portability, or the ability to move from one system to another without significant effort. The less complex the application the more likely that it can remain portable across cloud providers. And so the development practice that seems to be most suited for this is to abstract things into their simplest parts — a microservices architecture.

A whole new class of open source cloud computing projects has now begun to leverage the elasticity of the public cloud and enable applications designed and built to run on it. Organizations should become familiar with these open source projects, with which IT managers and practitioners can build, manage, and monitor their current and future mission-critical cloud resources.

Learn more about trends in open source cloud computing and see a list of the top open source cloud computing projects. Download The Linux Foundation’s Guide to the Open Cloud report today!

Read the other articles in the series:

4 Notable Trends in Open Source Cloud Computing

3 Emerging Cloud Technologies You Should Know

Why the Open Source Cloud Is Important

 

Some of the most successful public companies today are built around cloud-native applications — a fashionable term that simply means they’re designed to run in the cloud. Netflix, Facebook, LinkedIn, Twitter, and Amazon have all leveraged open source components within a distributed, microservices-based architecture to quickly deliver new products and services that are cost-effective and responsive to market demands and changes.

By breaking applications up into microservices, or distinct, single-purpose services that are loosely coupled with dependencies and explicitly described through service endpoints, they have significantly increased the overall agility and maintainability of applications and used that to gain competitive advantage.
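As a minimal sketch of such a single-purpose service (the endpoint and payload here are hypothetical, and only the Python standard library is used rather than any particular company's stack):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class PricingHandler(BaseHTTPRequestHandler):
    """A single-purpose 'pricing' microservice: its only coupling to
    other services is the explicitly described endpoint below."""

    def do_GET(self):
        if self.path == "/price/sku-123":
            # The service endpoint is the whole public contract.
            body = json.dumps({"sku": "sku-123", "price_cents": 1999}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        pass  # silence default per-request logging for the example

def make_server(port=0):
    """Bind the service on localhost; port 0 picks any free port."""
    return HTTPServer(("127.0.0.1", port), PricingHandler)
```

Because callers depend only on the endpoint's contract, not on the service's internals, each such service can be developed, deployed, and scaled independently, which is the loose coupling described above.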

The rest of the market has scrambled to replicate this architecture and approach, cobbling together solutions from custom scripts and open source software, often using open source versions of the web giants' own infrastructure (e.g., Google's Borg, which became Kubernetes; Mesos, which was popularized at Twitter; VMware's Cloud Foundry).

This experimentation has set off a chain of innovation with four notable trends, still playing out today:

1. Increasing consumption of public cloud resources

2. Adoption of container technologies like Docker and others (fifty-three percent of organizations are either investigating or using containers in development or in production, according to a recent Cloud Foundry report)

3. The rise of DevOps as the most effective method for application delivery in the cloud

4. An explosion in available open source tooling as user companies like Walmart and Capital One release their management software under open source licenses.

From banking and finance to automotive and healthcare, companies are facing the reality that they’re now in the technology business. In this new reality, cloud strategies can make or break an organization’s market success. And successful cloud strategies are built on Linux and open source software.

As cloud adoption grows, open source technologies will continue to be the source of innovation and the foundation for new markets and ecosystems. For each of the trends, above, there are open source projects actively involved in creating the future of IT infrastructure on which companies will deliver their products and services, in the coming year and beyond.

Organizations that wish to succeed should become familiar with these projects, the categories of technology in which they are influential, and the ways in which they can help companies remain competitive in this age of digital transformation.

In our next installment in this cloud series, we’ll discuss the trend toward microservices architectures and public cloud usage.

Learn more about trends in open source cloud computing and see a list of the top open source cloud computing projects. Download The Linux Foundation’s Guide to the Open Cloud report today!

Read the other articles in this series:

Trends in the Open Source Cloud: A Shift to Microservices and the Public Cloud

3 Emerging Cloud Technologies You Should Know

Why the Open Source Cloud Is Important

The first cloud native-focused event hosted by The Cloud Native Computing Foundation will gather leading technologists from open source cloud native communities in Toronto on Aug. 25, 2016, to further the education and advancement of cloud native computing.

Co-located with LinuxCon and ContainerCon North America, CloudNativeDay will feature talks from IBM, 451 Research, CoreOS, Red Hat, Cisco and more. For a sneak peek at the event’s speakers and their presentations, read on.

For Linux.com readers only; get 20% off your CloudNativeDay tickets with code CND16LNXCM. Register now.

Scaling Containers from Sandbox to Production

There is an industry IT renaissance occurring as we speak around cloud, data, and mobile technology, and it is driven by open source code, community, and culture.

IBM’s VP Cloud Architecture & Technology, Dr. Angel Diaz, opens up CloudNativeDay with a keynote on “Scaling Containers from Sandbox to Production,” where he will discuss how the digital disruption in today’s market is largely driven by containers and other open technologies. With a container-centric approach, developers are able to quickly stand up containers, iterate, and change their architectures. Dr. Diaz will provide insight on how enterprises are able to transform the way they grow, maintain, and rapidly expand container and microservice-based applications across multiple clouds. Dr. Diaz will also discuss the role of CNCF in creating a new set of common container management technologies informed by technical merit and end user value.

Real-World Examples of Containers and Microservices Architectures

Two of the fastest-growing trends in technology, containers and microservices, are enabling DevOps. With rapid growth comes rapid confusion: Who is using the technology? How did they build their architectures? What is the ROI of the technology?

Having real-world examples of how leading-edge companies are building container and microservices architectures will help answer these burning questions. Donnie Berkholz, Research Director of 451 Research's Development, DevOps & IT Ops channel, will provide these examples in his talk, “Cloud Native in the Enterprise: Real-World Data on Container and Microservice Adoption.”

Berkholz’s current research is steeped in the latest innovative technologies employed for software development and software life cycle management to drive business growth. His research will shape this session exploring the state of cloud-native prerequisites in the enterprise, the container ecosystem including current adoption, and data on companies moving to cloud-native platforms.

When Security and Cloud Native Collide

In one world, the cloud native approach is redefining how applications are architected, throwing many traditional assumptions out of the window. In the other world, traditional security teams ensure projects in the enterprise meet a rigid set of security rules in order to proceed. What happens when these two worlds collide?

Apprenda Senior Director Joseph Jacks, Box Site Reliability Engineer Michael Ansel, and Tigera Founder and CEO Christopher Liljenstolpe join forces to discuss “Whither Security in a Cloud-Native World?”

This panel will dive into how applications will be secured, who will define security policies, and how these policies will be enforced across hybrid environments: private and public clouds, as well as traditional bare metal/VM and cloud-native, containerized workloads.

Peek Inside The Cloud Foundry Service Broker API

Services are integral to the success of a platform. For Cloud Foundry, the ability to connect to and manage services is a crucial piece of its platform.

Abby Kearns, VP of industry strategy for Cloud Foundry Foundation, will discuss why they created a cross-foundation working group with The Cloud Native Computing Foundation to determine how the Cloud Foundry Service Broker API can be opened up and leveraged as an industry-standard specification for connecting services to platforms.

In her presentation, “How Cloud Foundry Foundation & Cloud Native Computing Foundation Are Collaborating to Make the Cloud Foundry Service Broker API the Industry Standard,” Kearns will share the latest progress on a proof of concept that allows services to write against a single API, and be accessible to a variety of platforms.

Innovative Open Source Strategies Key to Cloud Native in the Enterprise

As IT spending on cloud services reaches $114 billion this year and grows to $216 billion in the year 2020 (according to a report released by Gartner), cloud-native apps are becoming commonplace across enterprises of all sizes.

Enterprises are investing in people and process to enable cloud native technologies. Adoption of collaborative and innovative open source technologies has become a key factor in their success, according to Chris Wright, Vice President and Chief Technologist of Red Hat.

Wright’s closing keynote at CloudNativeDay, “Bringing Cloud Native Innovations into the Enterprise,” will discuss the open source strategies and organizations driving this success. After more than a decade serving as a Linux kernel developer working on security and virtualization, Wright understands the importance of ensuring industry collaboration on common code bases, standardized APIs, and interoperability across multiple open hybrid clouds.

 

Read more on CloudNativeDay. Save 20% when using code CND16LNXCM and register now.

 
 

When people talk about cloud native applications, you almost inevitably hear a reference to a success story using Apache Mesos as an application delivery framework at tremendous scale. With adoption at Twitter, Uber, Netflix, and other companies looking for scale and flexibility, Mesos provides a way to abstract resources (CPU, memory, storage, etc.) so that distributed applications can run in fault-tolerant and elastic environments. The Mesos kernel provides access to these abstractions via APIs and scheduling capabilities, much as the Linux kernel does, but geared toward consumption at the application layer rather than the systems layer.

Benjamin Hindman (@benh), the co-creator of Apache Mesos, developed the open source powerhouse as a Ph.D. student at UC Berkeley before bringing it to Twitter. The software now runs on tens of thousands of machines powering Twitter’s data centers and is often credited with killing the fail whale and providing the scale Twitter needed to serve its growing base of more than 300 million users. It is also driving a huge groundswell of companies developing cloud native applications.

Ben, now founder of Mesosphere, will give the welcome address at MesosCon North America, the Apache Mesos conference taking place in Denver on June 1-2. The event gathers a veritable who’s who from across the industry of those using Mesos as a framework to develop cloud native applications.

MesosCon is a great place to learn how to design application clusters running on Apache Mesos from engineers who have done it, like Craig Neth (@cneth), distinguished member of the technical staff at Verizon, who will walk attendees through how his team got a 600-node Mesos cluster powered up and running tasks in 14 days.

Your Uber has arrived, thanks to Open Source Software

Traditionally, machines at Uber were statically partitioned across its different services. To increase machine utilization, Uber has recently started transitioning most of its services, including its storage services, to run on top of Apache Mesos.

At MesosCon, Uber engineers will describe the initial experience building and operating a framework for running Cassandra on top of Mesos across multiple data centers at Uber. This framework automates several Cassandra operations such as node repairs, the addition of new nodes, and backup/restore. It improves efficiency by co-locating CPU-intensive services as well as multiple Cassandra nodes on the same Mesos agent. And it handles failure and restart of Mesos agents by using persistent volumes and dynamic reservations.
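The persistent-volume point deserves a quick illustration: ordinary sandbox state is lost when an agent restarts, while a persistent volume is reattached to the replacement task, which is what makes a stateful service like Cassandra viable on Mesos. The sketch below is a conceptual toy, not the Mesos dynamic-reservation API.

```python
# Toy illustration of ephemeral sandbox state vs. a persistent volume
# surviving an agent restart (conceptual only, not the Mesos API).
class Agent:
    def __init__(self):
        self.sandbox = {}            # ephemeral: wiped on restart
        self.persistent_volume = {}  # reserved storage: survives restart

    def restart(self):
        self.sandbox.clear()         # only the ephemeral state is lost

agent = Agent()
agent.sandbox["scratch"] = "temp files"
agent.persistent_volume["cassandra"] = "sstables"
agent.restart()
print(agent.sandbox)            # {}
print(agent.persistent_volume)  # {'cassandra': 'sstables'}
```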

Running Cassandra on Apache Mesos Across Multiple Datacenters at Uber at MesosCon

Microservices, Allowing us to binge watch House of Cards on Netflix

Netflix customers worldwide streamed more than forty-two billion hours of content last year. Service-style applications, batch jobs, and stream processing alike, spanning a variety of use cases across Netflix, rely on executing container-based applications in multi-tenant clusters powered by Apache Mesos and Fenzo, a Java scheduler library for Apache Mesos frameworks. These applications consume microservices, which allows Netflix to build composable applications at massive scale.

Based on the experiences from Netflix projects Mantis and Titus, Netflix Software Engineer Sharma Podila (@podila) will share his experiences running Docker and Cgroups based containers in a cloud native environment.

Lessons from Netflix Mesos Clusters at MesosCon.

How Microservices are being Implemented at Adobe

Dragos Dascalita Haut is a solutions architect on Adobe’s API platform, adobe.io, building a high-scale distributed API gateway running in the cloud. He realized that as the number of microservices increases, the communication between them becomes more complicated. This brings new questions to light:

How do microservices authenticate?
How do we monitor who’s using the APIs they expose?
How do we protect them from attacks?
How do we set throttling and rate limiting rules across a cluster of microservices?
How do we control which services allow public access and which ones we want to keep private?
How about Mesos APIs and frameworks, can they benefit from these features as well?

The answer to these questions was to use an API management layer on Mesos to expose microservices in a secure, managed, and highly available way.
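One of the concerns in the list above, throttling and rate limiting, is commonly handled with a token bucket. The sketch below is a minimal single-node version; a real gateway enforcing limits across a cluster of microservices would keep the bucket state in a shared store rather than in process memory. All names here are illustrative.

```python
# Minimal token-bucket rate limiter (single-node sketch; a clustered gateway
# would back this with shared state, e.g. a distributed cache).
import time

class TokenBucket:
    """Allows `rate` requests per second with bursts up to `capacity`."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens for the time elapsed, capped at the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)     # 5 req/s, bursts up to 10
allowed = [bucket.allow() for _ in range(12)]  # 12 back-to-back requests
print(allowed.count(True))                     # the burst of 10 passes; the rest are throttled
```

Per-client limits follow the same pattern with one bucket per API key.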

Let Dragos teach you to Be a Microservices Hero at MesosCon.

MesosCon in the Mile High City June 1-2

If you are interested in hearing how Apache Mesos is being developed and deployed by the world’s most interesting and progressive companies, the place to be is MesosCon, June 1-2 in Denver. The conference will feature two days of sessions covering the Apache Mesos core, the ecosystem developed around the project, and related technologies. The program will include workshops for getting started with Apache Mesos, keynote speakers from industry leaders, and sessions led by adopters and contributors.