Sunday, 28 February 2021

Six Strong Reasons to Train for CKA Certification with the World's Leading No. 2 AWS EKS Marketplace Provider — Yobitel (Official KCSP)

Yobitel Training for the Certified Kubernetes Administrator (CKA)

✓ World's leading No. 2 EKS Marketplace & Cloud-Native Application Provider, with AWS as Technology Partner

✓ Access 50+ enterprise-grade cloud-native stacks for free in AWS Marketplace
✓ Prepare with pre-vetted Helm charts and Custom Resource Definition (CRD) container images
✓ As a KCSP, we provide 100% free workshops and labs — 2,500 trained and certified
✓ 20 hours of exclusive training with real-time projects
✓ Access to CNCF online communities and 24/7 online exam preparation support

Please register online and start your training.

Tuesday, 30 July 2019

Hosting Containerized Cloud-Native Stacks (Kubernetes) on GCP Marketplace

Google Cloud Marketplace is the fastest way for customers to get started on Google Cloud Platform (GCP). GCP Marketplace offers ready-to-go development stacks, solutions, and services to accelerate development, so users can spend less time installing and more time developing their applications.

Why Google Cloud Marketplace?
  • Deploy in a few clicks.
  • Production-ready solutions. 
  • Augment platform capabilities. 
  • Streamlined billing.
  • Built using Deployment Manager. 
  • Security update notifications.
Steps to place an application on Google Cloud Marketplace:

The technical integration of the overall procedure is shown in the diagram below.
The seven main steps to follow when placing an application on the marketplace are described below.

Step 1: Fetch the Helm chart

Download the application's Helm chart from its repository, or write the required Helm chart yourself. A Helm chart mainly contains values.yaml, Chart.yaml, and the templates files.
A sample tree structure of a Helm chart is sketched below. Click here to get to know more about Helm charts.
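For illustration, a minimal chart layout for the mongodb example used in the later steps might look like:

mongodb/
├── Chart.yaml          # chart metadata: name, version, description
├── values.yaml         # default configuration values
└── templates/          # Kubernetes manifest templates
    ├── application.yaml
    ├── deployment.yaml
    └── service.yaml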

Step 2: Create the initial schema

Write the schema.yaml file for the application, because the schema defines the general structure to which the deployment parameters must adhere. The general structure of a schema.yaml file is sketched below.
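This is a minimal sketch, assuming the Marketplace deployer schema conventions of the time; the property names shown are illustrative:

application_api_version: v1beta1
properties:
  name:
    type: string
    x-google-marketplace:
      type: NAME        # the release name chosen by the user
  namespace:
    type: string
    x-google-marketplace:
      type: NAMESPACE   # the namespace the app is deployed into
required:
  - name
  - namespace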
Step 3: Add an application descriptor

In the templates folder of the Helm chart, add a file named application.yaml with contents like those shown below. The file mainly describes the application and is used in the UI.
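A minimal sketch of such a descriptor, assuming the standard Application resource (app.k8s.io/v1beta1) with Helm templating for the release name; the descriptor values are illustrative:

apiVersion: app.k8s.io/v1beta1
kind: Application
metadata:
  name: "{{ .Release.Name }}"
  namespace: "{{ .Release.Namespace }}"
spec:
  descriptor:
    type: MongoDB
    version: "4.0"
    description: Example description displayed in the Marketplace UI.
  selector:
    matchLabels:
      app.kubernetes.io/name: "{{ .Release.Name }}"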
Step 4: Write the Dockerfile

Container images are typically built using a Dockerfile and the docker build command-line tool. It is better to publish the Dockerfile and container build instructions in the application's public repository. This enables customers to modify or rebuild the images, which is sometimes necessary to certify images for enterprise production environments. If the application image depends on a base image such as Debian, or a language-runtime image such as Python or OpenJDK, you must use one of GCP Marketplace's certified container images.
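As an illustration, a minimal Dockerfile for such an image might look like the following; the base image path and the package installation are assumptions, not a definitive recipe:

# Start from a certified base image (illustrative path)
FROM marketplace.gcr.io/google/debian9
# Install MongoDB from the distribution packages (illustrative)
RUN apt-get update && apt-get install -y mongodb && rm -rf /var/lib/apt/lists/*
# MongoDB's default port
EXPOSE 27017
CMD ["mongod", "--bind_ip_all"]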

Build the image from the Dockerfile using the following command:

sudo docker build -t mongodb .

Step 5: Push the Docker image to the registry

Your Container Registry repository must have the following structure:
  • The application's main image must be at the root of the repository, in the format gcr.io/boxbe00034/mongodb.
  • A deployer folder contains the deployer image used to install the application.
  • If your application uses additional container images, each additional image must be in its own folder. For example, if your application requires an Ubuntu 16.04 image, add the image to gcr.io/boxbe00034/mongodb/ubuntu16_04.
  • All images in your application must be tagged with the application's minor version, following semantic versioning.
Push the built application image to the remote GCR (Google Container Registry) so that the application running in the cluster can access it. Tag the image with the registry path, then push it:

docker tag mongodb gcr.io/boxbe00034/mongodb:4.0
docker push gcr.io/boxbe00034/mongodb:4.0

Once you push the application to the registry, the images appear in Container Registry with the structure described above.
Step 6: Submitting the application
  • Enable Partner Portal
  • Add your solution to Partner Portal:
In Partner Portal, you must add marketing information about your solution and set up the product versions that you want to list.
  • Submit the solution for review
Step 7: Getting the GCP Marketplace security check

Once you submit the solution, the GCP Marketplace platform automatically runs testing and review operations. Automated testing includes security scanning, functional testing, and integration testing. The review also covers the application's end-to-end customer experience, including marketing materials, installation flow, security, and the user guide.

Summary

Before an application can be published in the marketplace, it must go through several important steps, including building the image, testing, deployment, and review by the marketplace team. This document describes the most important steps to follow while onboarding an application to the GCP Marketplace. The same procedure can also be followed for publishing applications in other cloud and container marketplaces such as AWS Marketplace, Azure Marketplace, IBM Cloud Marketplace, and Oracle Cloud Marketplace, with minimal changes to the platform, programming, and operational dependencies required by each marketplace's terms and conditions.

For custom cloud-native stack development and integration, for hosting private or public marketplace repositories for existing applications and newly built cloud-native stacks, or for more details about cloud-native microservices and cloud-native stack transformation, please refer to Yobitel Communications: www.yobitel.com

Monday, 15 July 2019

Orchestrating Zero-Downtime Blue-Green Deployments for Production Workloads in Kubernetes

This article is about one of the most talked-about techniques in the cloud-native world: blue-green deployment.

Blue-green deployment is a technique that reduces downtime and risk by running two similar production environments, called Blue and Green. Only one of the two environments is active at a time, serving all production traffic; for example, if Green is active, then Blue is idle.

When you update the application or release a new version of the software, deployment and the final stage of testing take place in the idle environment. Once you have deployed and fully tested the software there, you switch the router so that all incoming requests go to the idle environment instead of the active one. The idle environment becomes active, and the previously active environment becomes idle.

This technique can eliminate downtime due to application deployment, and it also reduces risk: if something unexpected happens with the new version in the active environment, you can immediately roll back to the last version by switching back to the now-idle environment. The actual procedure of blue-green deployment is schematically explained in the figure below.


Blue-green deployment lets you achieve the following goals by eliminating risks commonly seen in deployments.

Lower possibility of errors:

When doing online bank transactions, we are sometimes surprised by an error message saying “bank server is under maintenance” or similar. With blue-green deployment, you never have to show such a maintenance screen: upgrades are fast, and the new version of the software can be deployed with a single click. This provides near-zero downtime and easy rollback capabilities.

Testing the production environment:

With blue-green deployment, it is easy to keep the pre-production environment as close to the production environment as possible, which is not only important but essential. The team can test the application while it is disconnected from live traffic, and can even run load tests if desired.

Helps to maintain consistent traffic:

In e-commerce, banking, or any major enterprise where traffic is high, maintaining consistent traffic is essential. Blue-green deployment ensures that your traffic never stops: customers can place their orders without disruption, and employees overseas can continue their work without interruption. The fundamental idea behind blue-green deployment is to shift traffic between two identical environments running different versions of the application.

Easy recovery:

There will be times when bugs are introduced into a release. Rather than spending heavily on recovering the broken version in place, blue-green deployment lets the older, more stable version of the application come back online at a moment's notice, avoiding the pain of rolling back a deployment.

Example of Blue-Green deployment:

In this example, the Nginx image is updated from version 1.10 to 1.11. Let the Nginx image with version 1.10 be in the blue environment, which is active. The service.yaml file of the application is shown below:
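For illustration, a minimal service.yaml for the blue environment might look like the following; names and labels are illustrative:

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  selector:
    app: nginx
    version: "1.10"   # routes all traffic to the blue pods
  ports:
    - port: 80
      targetPort: 80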

The Service's pods running the Nginx 1.10 image are active and serving traffic.


The green environment is idle while the Nginx image there is upgraded to version 1.11. Once the upgrade is complete and verified, make the green environment active and the blue environment idle by updating the service.yaml file with the new version. The Service's pods running version 1.11 are then active and serving traffic.
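Assuming the blue and green Deployments label their pods with a version as in the sketch above, one common way to make the switch is to patch the Service selector:

$ kubectl patch service nginx -p '{"spec":{"selector":{"version":"1.11"}}}'

If anything goes wrong with the new version, patching the selector back to "1.10" rolls all traffic back to the blue environment immediately.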


Summary:

Deployments are one of the most important parts of the software development life cycle, so all the activities involved should be thoroughly researched and tested to ensure they are a good fit for your system architecture and business. Blue-green deployments need two identical sets of hardware, and that hardware carries added cost and overhead without actually adding capacity or improving utilization. Blue-green deployments are usually used for consumer-facing applications and applications with critical uptime requirements. The new code is delivered to the inactive environment, where it is completely tested before traffic is switched over.

For more details about cloud-native microservices & cloud-native stack transformation, please refer to Yobitel Communications: www.yobitel.com/cloud-native-services.

Sunday, 30 June 2019

5 Easy Steps To Create a Kubernetes Cluster Using Kubeadm


    If you want to create a cluster manually (i.e., without GKE) using kubeadm, this document may help you do it. Before moving on to the procedure, it is good to have some knowledge of kubeadm.

    Prerequisites:

    OS: Ubuntu version 16.04 or higher
    Minimum resources required:
    • Master node: 2 CPUs, 8 GB memory
    • Each worker node: 1 CPU, 4 GB memory

    Let us look into the steps for cluster creation using kubeadm

    Step 1: Docker Installation

    Create the instances in GCE (Google Compute Engine); you need at least one master node and two or more worker nodes. Follow these steps on all the nodes.

    Installing Docker is the first step in cluster creation, so run the commands below on every node you created. You can also refer to the official Docker documentation for installation details.

    $ sudo apt-get update -y
    $ sudo apt-get install -qy docker.io

    Check whether Docker installed properly using the docker --version command.

    Step 2: Install Kubeadm
    • Get the Kubernetes repository key
    $ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
    • Add the Kubernetes repository to the APT sources
    $ sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
    • Install kubeadm
    $ sudo apt update
    $ sudo apt install -y kubeadm

    Check the kubeadm version to make sure it installed properly.

    $ kubeadm version 

    Step 3: Create the cluster

    It is possible to configure kubeadm init with a configuration file instead of command-line flags, and some more advanced features are only available as configuration-file options. The file is passed with the --config option.

    On master node:

    Initialize the kubeadm

    $ sudo kubeadm init [option]

    Options:
    •     --pod-network-cidr=10.244.0.0/16
    •     --config=/root/kubeadm-config.yaml
    •     --cert-dir
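
    For example, a minimal initialization that sets the Flannel pod network CIDR used in the troubleshooting step below:

    $ sudo kubeadm init --pod-network-cidr=10.244.0.0/16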

    Copy the kubectl config file into place and set its ownership:

    $ mkdir -p $HOME/.kube
    $ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    $ sudo chown $(id -u):$(id -g) $HOME/.kube/config

    Step 4: Join worker node

    To join the worker nodes to the master node, run the command below separately on each worker node as the root user:

    $ kubeadm join 10.156.0.18:6443 --token ow15j0.kz12nltctqeowkiy \
    >     --discovery-token-ca-cert-hash sha256:a7024eacb754a01721f28cedc52e92427a83225db0f800d1bfb9117f2832602c
    To check whether all the nodes have joined the cluster and are in the Ready state, run the following command on the master node.

    $ kubectl get nodes

    Step 5: Troubleshooting
    • If you initialized kubeadm with a pod network CIDR and the nodes or DNS pods remain in a NotReady/Pending state, run the following command on the master node to install the Flannel network plugin:
    $ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    • If you run kubeadm init with a config file, a sample template of the file is shown below.

    Kubeadm-config.yaml sample template
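
    A minimal sketch of such a configuration file, assuming the v1beta1 kubeadm API of that period and the Flannel pod subnet (the Kubernetes version shown is illustrative):

    apiVersion: kubeadm.k8s.io/v1beta1
    kind: ClusterConfiguration
    kubernetesVersion: v1.15.0
    networking:
      podSubnet: 10.244.0.0/16

    It is then passed to the init command:

    $ sudo kubeadm init --config=/root/kubeadm-config.yaml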



     You can access the yaml file here.

    Tear down:

    To tear down the cluster, run these commands on the master node for each node you want to remove:

    $ kubectl drain <node name> --delete-local-data --force --ignore-daemonsets

    $ kubectl delete node <node name>

    Then, on the node being removed, reset all kubeadm installed state:

    $ sudo kubeadm reset




Monday, 10 June 2019

Time To Travel with Hyper-Converged Cloud-Native Containerized Application Services


The fundamentals of cloud-native have been described as container packaging, dynamic application development, and a microservices-oriented serverless architecture. These technologies are used to develop applications built as services, packaged in containers, deployed as microservices, and managed on elastic infrastructure through agile DevOps continuous-delivery workflows. The main motive is to improve speed, scalability, and, finally, margin. Truly cloud-based organizations have started to differentiate themselves as being ‘cloud-native.’ The main tools and services included in cloud-native services are:

  • Infrastructure Services
  • Automation/Orchestration
  • Containerization
  • Microservices Architecture
  • Serverless
  • Containerized Application Service

  • Currently, enterprise applications are built using modern cloud technologies and are hosted and managed in the cloud end-to-end: writing code, testing, deploying, and operating the applications all happen in the cloud. Even with all these advantages, there are also pain points:
    • We talk about adopting digital transformation via the cloud, containers, and even serverless mechanisms.
    • We are also promised a rich user experience with microservices, and a path from continuous delivery to continuous deployment.
    • But beyond understanding and adopting these, it is always painful for an enterprise customer to prepare and operate the advanced underlying infrastructure on which their applications reside and must be maintained.
    After years of evolution, every enterprise customer would love to adopt a click-and-go application and stop worrying about whether it runs in the cloud or in a container; whether to follow canary or blue-green deployments, greenfield or brownfield approaches; and about continuous-delivery application life-cycle management, version upgrades, application packaging, deployment practices and standards, governance policies, maintenance overhead, automation requirements, and mounting bills for unidentified infrastructure and application charges. There are many reasons to migrate towards cloud-native services; a few are listed here. Check out our other post regarding cloud-native services here.

    1. Reduced Cost through Containerization against Cloud Platforms:
    Containers make it easy to manage and secure applications independently of the infrastructure that supports them. The industry is now consolidating around Kubernetes for the management of these containers at scale. As an open source platform, Kubernetes enjoys industry-wide support and is the standard for managing resources in the cloud. Cloud-native applications fully benefit from containerization. Enhanced cloud-native capabilities such as serverless let you run dynamic workloads and pay for per-use compute time in millisecond increments. This is the ultimate flexibility in pricing enabled by cloud-native.

    2. Build More Reliable Systems:
    In traditional systems, downtime used to be accepted as normal, and achieving fault tolerance was hard and expensive. With modern cloud-native approaches like microservices architecture and Kubernetes in the cloud, you can more easily build applications that are fault tolerant, with resiliency and self-healing built in. Because of this design, even when failures happen you can easily isolate the impact of the incident so it doesn't take down the entire application. Instead of servers and monolithic applications, cloud-native microservices help you achieve higher uptime and thus further improve the user experience.

    3. Ease of Management:
    Cloud-native also has many options to make infrastructure management effortless. It began with PaaS platforms like Google App Engine about a decade ago and has expanded to include serverless platforms like Spotinst and AWS Lambda. Serverless computing platforms let you upload code in the form of functions and the platform runs those functions for you so you don’t have to worry about provisioning cloud instances, configuring networking, or allocating sufficient storage. Serverless takes care of it all.

    4. Achieve Application Resilience:
    Microservices overcome the disadvantages of monolithic applications. The main advantage of microservices is that even if a single service fails, its neighboring services can continue to function normally. This may affect the user experience to an extent, but it is better than rendering the entire application unusable. Even in the rare case of a failed host, you can replicate a backup instance in the cloud, which is much faster than procuring new hardware. Finally, cloud vendors provide multiple availability zones, which improve the resilience of every region you serve by isolating faults to particular zones. The cloud enables reliability in a way that is not possible with traditional on-premise hardware.

    5. Do Not Compromise on Monitoring and Security:
    As a system scales, it's easy to compromise on monitoring and security. Monitoring and security are fundamentally different for cloud-native applications. Rather than relying on a single monitoring tool, you will likely need to take a best-of-breed approach, combining vendor-provided and open source monitoring tools like Prometheus. Security in the cloud requires adequate encryption of data in transit and at rest, and the cloud vendors provide encryption services for this purpose. Additionally, open source tools like Calico enable networking and network policy in Kubernetes clusters across clouds. Though monitoring and security are more complex and challenging for cloud-native applications, when done right they provide a level of visibility and confidence that is unheard of with traditional monolithic applications running on-premise.
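
    As a brief illustration of network policy in a Kubernetes cluster (enforced by a plugin such as Calico), a minimal NetworkPolicy that restricts ingress to a group of pods to traffic from the same namespace might look like this; all names are illustrative:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-same-namespace
    spec:
      podSelector:
        matchLabels:
          app: backend     # the pods this policy protects
      ingress:
        - from:
            - podSelector: {}   # only pods in this namespace may connect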

    6. Containerized Application Services:
    Containerization helps development teams move fast, deploy software efficiently, and operate at an unprecedented scale. The main uses of containerized application services are listed below:
    • Containerized applications like Kubecharts and ChartMuseum provide user interfaces for deploying and managing applications in Kubernetes clusters.
    • ChartMuseum is an open source Helm chart repository server with support for cloud storage back ends, including Google Cloud Storage, Amazon S3, etc.
    • Harbor is a containerized application mainly used for version-upgrade management and for managing and serving container images in a secure environment.
    • Istio is used to provide security for pods and containers and for the communication between them, even at high scalability levels.
    • Individual application components can be stored in JFrog Artifactory so that they can later be assembled into a full product, allowing a build to be broken into smaller chunks, making more efficient use of resources, reducing build times, and improving tracking of binary debug databases.

    7. Enterprise Mesh for Cloud-native Stacks:
    The concept of the service mesh as a separate layer is tied to the rise of the cloud-native application. In the cloud-native model, a single application might consist of hundreds of services; each service might have thousands of instances, and each of those instances might be constantly changing as they are dynamically scheduled by an orchestrator like Kubernetes. Managing this communication is vital to ensuring end-to-end performance and reliability. Communication within a cluster is a solved problem, but communication across clusters requires more design and operational overhead. The communication between microservices in a cluster can be enhanced by a service mesh, and a service mesh like Istio (with its Envoy proxies) can make multi-cluster communication painless.

    8. Schedulers:
    The Kubernetes Scheduler is a core component of Kubernetes: after a user or a controller creates a Pod, the Kubernetes Scheduler, monitoring the object store for unassigned Pods, assigns the Pod to a Node; the Kubelet, monitoring the object store for assigned Pods, then executes the Pod. Examples of schedulers include the default Kubernetes scheduler and IBM Spectrum LSF (which is used in high-performance computing). When such schedulers are applied, file-system and file-share access, prioritization, job placement, rich policy control, job dependencies, and Singularity integration can be achieved.

    Summary:
    Cloud-Native is a powerful, promising technology. Enterprises are understandably eager to get there as fast as they can. But reaping the full benefit of the cloud means first taking care to build a solid foundation based on the principles of Cloud-Native architecture.
    The key points of being cloud-native are:
    • Cloud-native workloads are slowly gaining momentum. Today, 18% of organizations have more than half of workloads cloud-native. Large enterprises are waiting to adapt existing applications to cloud environments until the end of the useful life of existing data center equipment.
    • High Throughput computing.
    • Data analytics lets you view statistical information about the unstructured data in your cloud environment. With this information, you can quickly assess the current state of your data, take actionable steps to reclaim valuable storage space, and mitigate the risk of compliance-related issues.
    • A cloud-native application consists of discrete, reusable components known as microservices that are designed to integrate into any cloud environment.

    Solution:
    • Cloud-native SaaS multi-cloud containerized serverless application platform services and business intelligence mechanisms with our redefined cloud-native stacks: Kubernetes Charts and Yobibyte.
    • Users have the choice of deploying applications on our Yobibyte platform with the customized cloud-native application repository, Kubecharts (a Kubernetes application package medium), to deploy in minutes and achieve full-fledged digital transformation.
    • Our Kubecharts repository provides enterprise users with 1000+ free and licensed containerized serverless application packages that can be deployed in multi-cloud environments.
    • It avoids vendor lock-in and term lock-in, supports deployment at any time for your containerized applications, and provides a pay-as-you-go (PAYG) model.

    For more details about cloud-native services, visit our website: Yobitel Communications.