My Notes (Cheatsheet)

Cloud Native Landscape

  • Provisioning - layer with the tools needed to lay the infrastructure foundation

    • Automation: Cloud Custodian, KubeEdge, CDK for Kubernetes, etc.

    • Container Registry: Harbor, Dragonfly, etc.

    • Security & Compliance: Falco, Open Policy Agent (OPA), etc.

    • Key Management: SPIFFE, SPIRE, etc.

  • Runtime - layer where everything revolves around containers and what they need to run in a cloud native environment

    • Cloud Native Storage: Rook, CubeFS, Longhorn, etc.

    • Container Runtime: containerd, CRI-O, etc.

    • Cloud Native Network: Cilium, CNI, Antrea, etc.

  • Orchestration and Management - layer that contains the tools to orchestrate and manage containers and applications

    • Scheduling & Orchestration: KEDA, Kubernetes, Crossplane, etc.

    • Coordination & Service Discovery: CoreDNS, etcd, etc.

    • Remote Procedure Call: gRPC, etc.

    • Service Proxy: Envoy, Contour, etc.

    • API Gateway: Emissary-Ingress, etc.

    • Service Mesh: Istio, Linkerd, etc.

  • App Definition and Development - layer that is concerned with the tooling needed to enable applications to store and send data as well as with the ways we build and deploy applications.

    • Database: TiKV, Vitess, etc.

    • Streaming & Messaging: CloudEvents, NATS, etc.

    • Application Definition & Image Build: Helm, Artifact Hub, Backstage, etc.

    • Continuous Integration & Delivery: Argo, Flux, etc.

  • Observability and Analysis - includes tools that monitor applications and flag when something is wrong

    • Observability: Fluentd, Jaeger, Prometheus, OpenTelemetry, etc.

    • Chaos Engineering: Chaos Mesh, Litmus, etc.

  • Platform - layer that bundles multiple tools from across the other layers, configuring and fine-tuning them so they are ready to use

Kubernetes Solutions

  • K3s - A lightweight Kubernetes distribution designed for resource-constrained and edge environments. It ships as a single small binary with a much smaller footprint than vanilla Kubernetes, at the cost of dropping some legacy and optional components.

  • k3d - A lightweight wrapper around K3s that runs K3s clusters inside Docker containers, making it easy to spin up multi-node K3s clusters locally.

  • Minikube - A tool for running a local Kubernetes cluster inside a virtual machine (or a container, depending on the driver). Its straightforward setup makes it a good choice for beginners.

  • Kind (Kubernetes IN Docker) - Runs Kubernetes clusters using Docker containers as nodes. It excels at quickly spinning up multi-node clusters for testing and development (see the sample cluster config after this list).

  • MicroK8s - A single-command installation Kubernetes distro optimized for simplicity and low resource usage. It provides automatic high availability and updates, making it attractive for production use on edge devices.
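
Kind reads an optional YAML config passed via kind create cluster --config. A minimal sketch of a three-node local cluster (the cluster name is an arbitrary example):

```yaml
# kind-multinode.yaml - example config for a local multi-node cluster
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: dev-cluster          # arbitrary example name
nodes:
  - role: control-plane
  - role: worker
  - role: worker
```

Create the cluster with kind create cluster --config kind-multinode.yaml; kind boots one Docker container per node entry.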

Managed Kubernetes from Cloud Service Providers (CSPs)

  • Amazon Elastic Kubernetes Service (EKS)

  • Google Kubernetes Engine (GKE)

  • Azure Kubernetes Service (AKS)

  • Alibaba Container Service for Kubernetes (ACK)

  • Oracle Container Engine for Kubernetes (OKE)

  • DigitalOcean Kubernetes (DOKS)

  • IBM Cloud Kubernetes Service

Kubernetes Control Plane Components

  • kube-apiserver - exposes the Kubernetes API and acts as the front end for the Kubernetes control plane.

  • etcd - a consistent and highly-available key value store used as Kubernetes' backing store for all cluster data.

  • kube-scheduler - component that watches for newly created Pods with no assigned node, and selects a node for them to run on.

  • kube-controller-manager - component that runs controller processes (e.g. Node controller, Job controller, etc.)

  • cloud-controller-manager - component that embeds cloud-specific control logic. It only runs controllers that are specific to your cloud provider.

Kubernetes Node Components

  • kubelet - an agent that runs on each node in the cluster. It makes sure that containers are running in a Pod.

  • kube-proxy - is a network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept.

  • Container runtime - is responsible for managing the execution and lifecycle of containers within the Kubernetes environment.

Kubernetes Service

  • ClusterIP - the default; exposes the Service on an internal cluster IP, reachable only from within the cluster

  • NodePort - exposes the Service on a static port (30000-32767 by default) on every node's IP (a sample manifest follows this list)

  • LoadBalancer - provisions the cloud provider's load balancer to distribute external traffic across the Service's Pods

  • ExternalName - acts as a DNS alias; maps the Service to an external DNS name by returning a CNAME record

  • Headless - has no cluster IP (clusterIP: None); DNS returns the individual Pod IPs so clients can reach specific Pods directly, typically used for stateful workloads
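
As a reference, a minimal sketch of a NodePort Service and a headless variant (the names, labels, and port numbers are placeholder assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport          # placeholder name
spec:
  type: NodePort              # omit type (or use ClusterIP) for the default behavior
  selector:
    app: web                  # assumes Pods labeled app=web
  ports:
    - port: 80                # port exposed on the cluster IP
      targetPort: 8080        # port the Pods actually listen on
      nodePort: 30080         # static port opened on every node
---
apiVersion: v1
kind: Service
metadata:
  name: web-headless          # headless variant: no cluster IP is allocated
spec:
  clusterIP: None             # DNS returns the individual Pod IPs
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```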

Probes

  • Liveness Probe - continuously checks if a container is still functional and operational

  • Readiness Probe - determines if a container is ready to accept incoming traffic

  • Startup Probe - specifically designed for slow-starting containers; verifies that the container has successfully started before the liveness and readiness probes take over (see the sample Pod spec after this list)
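
A minimal sketch of all three probes on one container (the image, endpoint, and timings are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo            # placeholder name
spec:
  containers:
    - name: app
      image: nginx:1.27       # placeholder image
      ports:
        - containerPort: 80
      startupProbe:           # runs first; liveness/readiness are held back until it succeeds
        httpGet:
          path: /
          port: 80
        failureThreshold: 30  # tolerate up to 30 x 10s for a slow start
        periodSeconds: 10
      livenessProbe:          # the container is restarted if this keeps failing
        httpGet:
          path: /
          port: 80
        periodSeconds: 10
      readinessProbe:         # the Pod is removed from Service endpoints while this fails
        httpGet:
          path: /
          port: 80
        periodSeconds: 5
```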

Service Mesh

  • Istio - open-source service mesh with a wide range of features and vendor support. It's known for its powerful routing capabilities and extensive ecosystem of integrations.

  • Linkerd - open-source service mesh, known for its focus on simplicity and ease of use. It's a good choice for smaller deployments or those prioritizing a lightweight solution.

  • Kuma - open-source service mesh (a CNCF sandbox project originally created by Kong) with a focus on multi-cluster and hybrid cloud deployments. It's designed for flexibility and supports various underlying networking technologies.

4Cs of Cloud Native Security

  • Cloud, Cluster, Containers and Code

Authentication, Authorization and Accounting

  • Authentication - verify the identity of a user/entity

  • Authorization - grant access to specific resources or actions based on the user's identity or other attributes (in Kubernetes this is typically enforced with RBAC; see the sketch after this list)

  • Accounting - record and track user activity
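
In Kubernetes terms: authentication is handled by the API server (client certificates, tokens, OIDC), authorization is most commonly enforced with RBAC, and accounting maps to audit logging. A minimal RBAC sketch granting read-only access to Pods in one namespace (the names, namespace, and user are placeholder assumptions):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader            # placeholder name
  namespace: dev              # placeholder namespace
rules:
  - apiGroups: [""]           # "" is the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: dev
subjects:
  - kind: User
    name: jane                # placeholder user, as established during authentication
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```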

Open Standards

  • Open Container Initiative (OCI) - a lightweight, open governance structure for the purpose of creating open industry standards around container formats and runtimes

    • The OCI currently contains three specifications:

      • Runtime Specification (runtime-spec)

      • Image Specification (image-spec)

      • Distribution Specification (distribution-spec)

  • Container Network Interface (CNI) - a specification and libraries for writing plugins to configure network interfaces in Linux and Windows containers

  • Container Storage Interface (CSI) - a standard for exposing arbitrary block and file storage systems to containerized workloads; in Kubernetes, CSI drivers are consumed through StorageClasses and PersistentVolumeClaims (see the sketch after this list)

  • Container Runtime Interface (CRI) - a plugin interface which enables kubelet to use a wide variety of container runtimes.
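
As an illustration of CSI in practice, a StorageClass names a CSI driver in its provisioner field and a PersistentVolumeClaim requests storage from it; the driver name (the AWS EBS CSI driver) and the sizes below are illustrative assumptions:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd                      # placeholder name
provisioner: ebs.csi.aws.com          # CSI driver name; swap in your cluster's driver
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim                    # placeholder name
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: fast-ssd
  resources:
    requests:
      storage: 10Gi                   # illustrative size
```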

CNCF Governance Structure

  • CNCF 3 Main Bodies

    • Governing Board (GB) - responsible for marketing, budget and other business oversight decisions for the CNCF

    • Technical Oversight Committee (TOC) - responsible for defining and maintaining the technical vision

    • End User Community (EUC) - responsible for providing feedback from companies and startups to help improve the overall experience for the cloud native ecosystem

  • Membership type:

    • Platinum

    • Gold

    • Silver

    • End User

    • Academic/Non-Profit

  • Technical Advisory Groups - TAGs (formerly SIGs)

    • Expert groups that focus on specific areas within the cloud-native landscape.

    • TAGs: app-delivery, network, observability, runtime, security and storage

  • End User Technology Radar

    • A technology radar is an opinionated guide to a set of emerging technologies. The CNCF End User Technology Radar is intended for a technical audience who want to understand what solutions end users use in cloud native, and which they recommend.

    • Three levels:

      • Adopt

      • Trial

      • Assess

    • Check all radars here: https://radar.cncf.io/overview

Deployment Strategies

  • Rolling Update - default; gradually replaces old Pods with new ones, pacing the rollout with maxSurge and maxUnavailable (see the sample Deployment after this list)

  • Recreate - terminates all existing pods running the old version of your application and then creates new pods with the updated image

  • Canary - involves a staged rollout to a limited set of users (the canary group)

  • Blue/Green - involves maintaining two identical environments, Blue (the current live version) and Green (the new version); traffic is switched to Green once it is verified, which minimizes downtime in production and allows quick rollback by switching traffic back to Blue if issues arise.
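
A minimal sketch of a Deployment using the default RollingUpdate strategy (the name, image, replica count, and surge/unavailable settings are illustrative assumptions; switching type to Recreate gives the Recreate strategy instead):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                     # placeholder name
spec:
  replicas: 4
  strategy:
    type: RollingUpdate         # default strategy
    rollingUpdate:
      maxSurge: 1               # at most 1 extra Pod above the desired count during the update
      maxUnavailable: 1         # at most 1 Pod below the desired count during the update
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.27     # placeholder image; changing it triggers a rollout
```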
