Anyone looking for an orchestration platform for containerized applications has probably heard of Kubernetes, which has captivated software professionals with managed offerings such as AKS, EKS and GKE. There is a common misconception that Kubernetes is complex to set up and manage. If the setup is done using one of the managed cloud solutions, however, it can be low-hanging fruit: all the prominent cloud platforms provide Kubernetes offerings whose setup is easy to manage. For beginners, a self-hosted setup is not recommended, as it can turn out to be extremely complex.
Infrastructure as Code
Using Infrastructure as Code (IaC) to define the desired state of your environment is widely considered a best practice, with benefits such as stable environments and reduced risk when applying changes. Because the infrastructure is specified as code, changes can be tested in a non-production environment first. IaC discourages or prevents manual deployments, making your infrastructure deployments more consistent, reliable, and repeatable. Tools like Terraform and Pulumi can deploy Kubernetes on any cloud platform, and networking, load balancers, and DNS configuration for the cluster can be managed the same way.
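As an illustration, a minimal Terraform sketch for provisioning a managed AKS cluster might look like the following; the resource group, region, node size and all other values are placeholder assumptions, not a production configuration.

```hcl
# Minimal sketch: a managed AKS cluster defined as code with Terraform.
# All names and sizes below are illustrative assumptions.
resource "azurerm_kubernetes_cluster" "demo" {
  name                = "demo-aks"
  location            = "westeurope"
  resource_group_name = "demo-rg"
  dns_prefix          = "demo"

  default_node_pool {
    name       = "default"
    node_count = 2
    vm_size    = "Standard_D2_v2"
  }

  identity {
    type = "SystemAssigned"
  }
}
```

Because the cluster definition lives in version control, the same change can be applied to a non-production environment first and then promoted, rather than being clicked together by hand.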
Monitoring & Centralised logging
Kubernetes is a stable platform, but problems can still occur in a cluster, and developers will not know about them unless they look for them. Proper monitoring notifies you about issues such as certificate expiry or over-utilisation of nodes. The Kubernetes platform and its applications can be monitored easily using Prometheus and Grafana, which also makes it straightforward to set up alerting so that downtime and failures can be caught early. Logs can be collected using Fluentd or Filebeat and shipped to an Elasticsearch platform, gathering all error logs and log events in one centralized place.
Developers need not spend much time managing these tools, as they are set up centrally in the platform.
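As a sketch of the alerting side, a PrometheusRule (the custom resource used by the Prometheus Operator) could flag nodes running low on memory; the alert name, threshold and namespace are illustrative assumptions.

```yaml
# Illustrative PrometheusRule alerting on node memory pressure.
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: node-alerts
  namespace: monitoring
spec:
  groups:
    - name: node.rules
      rules:
        - alert: NodeMemoryHigh
          # Fire when less than 10% of node memory has been available for 10 minutes.
          expr: (1 - node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes) > 0.9
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: "Node {{ $labels.instance }} memory usage above 90% for 10 minutes"
```

Rules like this feed Alertmanager (and from there chat or paging tools), so the team hears about node pressure before it becomes downtime.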
Centralised Ingress Controller with SSL certificate management
Ingress is a simple configuration that describes how traffic should flow from outside Kubernetes to your application. This can be achieved by installing a central Ingress Controller (e.g., Nginx) in the cluster to manage all incoming traffic for every application. When the Ingress Controller is linked to a public cloud load balancer, all traffic is automatically load-balanced among nodes and sent to the right pods' IP addresses.
Centralising ingress also makes it the natural place to handle HTTPS and SSL. cert-manager is a centrally deployed Kubernetes application that manages HTTPS certificates. It can be configured to use Let's Encrypt, wildcard certificates, or even a private Certificate Authority for internally trusted company certificates. All incoming traffic is then automatically encrypted using the HTTPS certificates and forwarded to the correct Kubernetes pods.
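Putting the two together, an Ingress resource can request a certificate from cert-manager with a single annotation; the hostname, issuer name and backing service below are placeholder assumptions.

```yaml
# Illustrative Ingress served by the Nginx Ingress Controller, with a
# TLS certificate issued and renewed by cert-manager.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod  # assumed ClusterIssuer name
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - app.example.com
      secretName: my-app-tls   # cert-manager stores the certificate here
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
```

With this in place, application teams only declare a hostname; certificate issuance and renewal happen centrally.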
Role-Based Access Control (RBAC)
The Kubernetes administrator role should be handled carefully, and least privilege should be granted to everyone accessing Kubernetes. This can be done easily with Role-Based Access Control for the complete Kubernetes stack (Kubernetes API, deployment tools, dashboards, etc.). Authentication and authorization can be managed centrally by integrating Kubernetes with an IAM solution like Keycloak, Azure AD, or AWS Cognito via OAuth2/OIDC. RBAC defines access for users based on their role, and it can be applied to access groups as well.
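A minimal least-privilege sketch: a namespaced Role that only allows reading pods, bound to a group that an OIDC provider such as Keycloak might supply. The namespace and group names here are assumptions.

```yaml
# Read-only access to pods in one namespace, granted to an IAM group.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: team-a
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-pod-readers
  namespace: team-a
subjects:
  - kind: Group
    name: team-a-developers   # group claim mapped from the OIDC provider
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Binding roles to groups rather than individual users keeps access management in the IAM solution, where it belongs.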
GitOps deployment
Kubectl is an important command-line tool for Kubernetes, but running "kubectl apply" manually against production is not a sustainable practice. Instead, the desired state configuration should live in Git, together with a deployment platform that applies it. ArgoCD, Flux, and Jenkins are used as GitOps platforms for Kubernetes deployments. This works well for all types of Kubernetes deployments, since changes are rolled back automatically whenever the cluster drifts from the desired state. Environments, teams, projects, roles, policies, namespaces, clusters, app groups and applications can all be managed with a GitOps bootstrapping technique, and every change becomes traceable, automated and manageable.
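As a sketch of the GitOps approach, an Argo CD Application resource declares which Git repository and path hold the desired state; the repository URL, paths and namespaces below are placeholders.

```yaml
# Illustrative Argo CD Application: the cluster continuously reconciles
# itself against the manifests stored in Git.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/k8s-manifests.git
    targetRevision: main
    path: apps/my-app
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true      # remove resources deleted from Git
      selfHeal: true   # revert manual drift back to the Git state
```

With selfHeal enabled, even a well-intentioned manual "kubectl apply" is reverted, so Git remains the single source of truth.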
Secrets management
User credentials are stored as secrets, and Kubernetes secrets can be injected into containers as environment variables or file mappings. Applying RBAC to secrets in the production environment helps preserve their confidentiality. Secrets can be injected into Kubernetes through the CI/CD deployment pipeline. Deployment can also be done from a local development environment, but this invites configuration state drift, and because local deployments are not traceable, secrets cannot be managed reliably that way. A better approach is to sync secrets from a central cloud vault such as Azure Key Vault, HashiCorp Vault or AWS Secrets Manager using a central secrets operator like the External Secrets Operator. This way, Git stores only secret references that point to entries in the external vault, and RBAC governs how developers access them: they can reference secrets and use them in containers, but never directly read the secret values.
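A sketch of the vault-sync approach with the External Secrets Operator: an ExternalSecret references an entry in an external vault, and the operator materialises it as a regular Kubernetes Secret. The store name and vault key below are assumptions.

```yaml
# Illustrative ExternalSecret: only this reference lives in Git;
# the actual value stays in the external vault.
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: azure-key-vault   # a (Cluster)SecretStore configured by the platform team
    kind: ClusterSecretStore
  target:
    name: db-credentials    # Kubernetes Secret created and kept in sync by the operator
  data:
    - secretKey: password
      remoteRef:
        key: prod-db-password   # assumed entry name in the vault
```

Developers can mount the resulting `db-credentials` Secret into their containers as usual, while read access to the vault itself stays locked down.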
With a good Infrastructure as Code (IaC) solution, proper monitoring, RBAC, and a sound deployment process in place, Kubernetes is a go-to solution for any type of orchestration. It is worth building your Kubernetes cluster on standardized open-source tools so that time and work can be saved.