Google Cloud Anthos — Multi-Cloud Orchestration: Part 1
This is Part 1 of the Google Cloud Anthos series, which is divided into three parts:
- Google Cloud Anthos and an Intro to Kubernetes (this article!)
- Creating and Managing Hybrid Clusters using Kubernetes Engine
- Configuring Anthos for Multi-Cluster Operation
We will be creating a hybrid environment for Kubernetes using Anthos (a modern application management platform that provides a consistent development and operations experience for cloud and on-premises environments).
In this blog, we will learn about:
1) What is Anthos?
2) The need for Anthos
3) Anthos real-world use cases
4) Introduction to Kubernetes
5) Concepts of Kubernetes
Anthos is a managed application platform that extends Google Cloud services and engineering practices to our environments, so we can develop apps faster and establish operational consistency across them.
Customers want to develop and deploy their applications anywhere: on-premises, in a public cloud, or across multiple public clouds, seamlessly and securely. Integrating with Anthos and Google Cloud makes this possible.
Anthos is built on the firm foundation of Google Kubernetes Engine (GKE), the managed containers-as-a-service offering on Google Cloud Platform, but other vital technologies augment the power of Kubernetes.
Need for Anthos:
Although the previously launched Google Kubernetes Engine (GKE) and GKE On-Prem enabled hybrid Kubernetes installations, customers continued to seek a platform that made it simple to span multiple, competing cloud providers.
By providing one platform for managing all Kubernetes workloads, Google Cloud Anthos lets users apply their skills to building their own technology instead of depending on engineers to master a multitude of proprietary cloud technologies.
Anthos also ensures operational consistency across hybrid and public clouds, with the ability to apply standard settings across infrastructures and custom security rules tied to specific workloads and namespaces, independent of where those workloads are executing.
Anthos real-world use cases:
HSBC, one of the largest banking and financial services organizations in the world, used an Anthos-managed hybrid-cloud environment to reduce the complexity and cost of big data analytics. This approach to managing hybrid environments provided an innovative, differentiated solution that could be deployed quickly for its customers.
DenizBank uses Anthos to scale its private cloud, freeing its IT development team to focus more on building products and less on scaling. With Anthos, they can deliver products faster and on demand, and customers are happy with the stability and performance of their applications.
UPC Polska (Liberty Global Europe’s telecommunications operation in Poland) stood up its new customer service application on Google Cloud Anthos with the assistance of Accenture to maximize the power of hybrid computing.
Introduction to Kubernetes:
Kubernetes is an open-source framework for managing containerized workloads and services that supports both declarative configuration and automation. It has a huge, fast-expanding ecosystem, and services, support, and tools for Kubernetes are widely available.
Concepts of Kubernetes:
A Pod is a group of one or more application containers (such as Docker containers) and includes shared storage (volumes), an IP address, and information about how to run them.
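As a minimal sketch, a Pod with a single container can be declared in a YAML manifest like the one below (the Pod name, labels, and image tag are illustrative assumptions, not from this article):

```yaml
# pod.yaml — a minimal Pod with one application container (names are illustrative)
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod        # hypothetical Pod name
  labels:
    app: web
spec:
  containers:
    - name: nginx        # one application container inside the Pod
      image: nginx:1.25  # container image carrying the app and its dependencies
      ports:
        - containerPort: 80
```

Applying this manifest with `kubectl apply -f pod.yaml` asks Kubernetes to schedule the Pod onto one of the cluster's Nodes.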
A Node is a worker machine in Kubernetes and may be a virtual machine (VM) or a physical machine, depending on the cluster. Multiple Pods can run on a single Node.
A cluster is a group of two or more computers, or nodes, that run in parallel to achieve a common goal. This allows workloads consisting of a large number of individual, parallelizable tasks to be distributed among the nodes in the cluster, so those tasks can leverage the combined memory and processing power of every computer to increase overall performance.
A container is a standard unit of software that packages up code and all of its dependencies, so the application runs quickly and reliably from one computing environment to another.
A workload is an application running on Kubernetes. Whether it is a single component or several that work together, on Kubernetes you run your workload inside a set of Pods.
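To make this concrete, workloads are usually described with a built-in resource such as a Deployment, which manages a replicated set of Pods for you. A minimal sketch follows (the workload name, labels, and image are illustrative assumptions):

```yaml
# deployment.yaml — a Deployment workload running three replica Pods (illustrative names)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment   # hypothetical workload name
spec:
  replicas: 3            # Kubernetes keeps three Pods of this template running
  selector:
    matchLabels:
      app: web
  template:              # Pod template stamped out for each replica
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80
```

The Deployment controller continuously reconciles the cluster toward this declared state, replacing any replica Pods that fail or are evicted from their Node.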
To create and manage hybrid clusters using Kubernetes Engine, continue to Part 2: Creating Clusters using Kubernetes Engine.
Authors: Santhosh Mathavan, Karthik Prabhu — Cloud CoE