Kubernetes Security Best Practices at Every Layer

Written by Jimmy Mesta | May 17, 2023 11:21:15 PM

Introduction

Kubernetes security is critical, as containers continue to transform software development. As the new foundation for CI/CD, containers give you a fast, flexible way to deploy apps, APIs, and microservices with the scalability and performance digital success depends on. But containers and container orchestration tools such as Kubernetes are also popular targets for hackers — and if they’re not protected effectively, they can put your whole environment at risk. In this article, we’ll talk about security best practices for every layer of the container stack.


Container security considerations

It’s important to understand the security implications of containers. As an application-layer construct relying on a shared kernel, a container can boot up much faster than a full VM. At the same time, containers can be configured much more flexibly than a VM, and can do everything from mounting volumes and directories to disabling security features. In a “container breakout” scenario, where the container’s isolation mechanisms have been bypassed and additional privileges have been obtained on the host, an attacker can end up running as root on the host, and then you’re in real trouble.

Here are a few things you can do to keep the bad guys out of your containers.


The Kernel - Layer 0

Kubernetes is an open source platform built to automate the deployment, scaling, and orchestration of containers, and configuring it properly can help you strengthen security. At the kernel level, you can:

  • Review allowed system calls and remove any that are unnecessary or unwanted
  • Verify that your kernel versions are patched and free of known vulnerabilities
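
For the first item above, one concrete control is a seccomp profile applied through the pod spec. The sketch below is illustrative (the pod name and image are placeholders); it opts the workload into the container runtime’s default profile, which already filters out a large set of rarely used system calls:

```yaml
# Minimal sketch: apply the runtime's default seccomp profile to a pod.
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app                        # illustrative name
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault                  # runtime's default syscall filter
  containers:
    - name: app
      image: registry.example.com/app:1.0   # placeholder image
      securityContext:
        allowPrivilegeEscalation: false
```

Workloads with unusual syscall requirements can instead reference a custom profile by setting the type to Localhost and pointing localhostProfile at a profile file on the node.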


Container Security - Layer 1


At Rest

Container security at rest focuses on the image you’ll use to build your running container. First, reduce the container’s attack surface by removing unnecessary components, packages, and network utilities — the more stripped-down, the better. Consider using distroless images containing only your application and its runtime dependencies.

Next, make sure to pull your images only from known-good sources, and scan them for vulnerabilities and misconfigurations. Check their integrity throughout your CI/CD pipeline and build process, and verify and approve them before running to make sure hackers haven’t installed any backdoors.

Runtime

Once your image is built and running, the focus shifts to what happens at runtime. Ephemeral containers let you debug running containers interactively, including distroless or other lightweight images that don’t ship their own debugging utilities. Watch for anomalies and suspicious system-level events that might be indicators of compromise, such as an unexpected child process being spawned, a shell running inside a container, or a sensitive file being read unexpectedly. The Cloud Native Computing Foundation’s Falco project, an open source runtime security tool, and the many Falco rules files the community has created are hugely useful for this.
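
To give a sense of what runtime detection looks like in practice, here is a simplified, Falco-style rule that flags an interactive shell spawned inside a container. Falco ships a similar rule ("Terminal shell in container") out of the box; the macros and fields below follow its default rules, but treat this as a sketch rather than a drop-in replacement:

```yaml
# Simplified Falco-style rule (illustrative; Falco's bundled rule is more complete).
- rule: Shell Spawned in Container
  desc: Detect a shell process started inside a running container
  condition: >
    spawned_process and container and proc.name in (bash, sh, zsh)
  output: >
    Shell spawned in container
    (user=%user.name container=%container.name command=%proc.cmdline)
  priority: WARNING
  tags: [container, shell]
```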

Securing the Workload (Pod) - Layer 2

A pod, the unit of deployment inside Kubernetes, is a collection of containers that can share common security definitions and security-sensitive configurations. The Pod Security Standards define three policy levels of increasing restriction: privileged, baseline, and restricted. These policies cover the following Kubernetes controls and more:

  • Host Namespaces
  • Privileged Containers
  • HostPath volumes
  • Seccomp
  • Volume Types
  • Running as Non-root

To strengthen baseline defenses at the pod level, you can use the Pod Security admission controller with the most restrictive policy your workloads can tolerate. For more flexibility and granular control over pod security, consider a policy engine such as Open Policy Agent (OPA) via the OPA Gatekeeper project. Either way, you will want to evaluate Kubernetes admission controllers carefully.
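
As a minimal sketch of the built-in option, Pod Security admission is driven by namespace labels; the namespace name below is a placeholder:

```yaml
# Enforce the "restricted" Pod Security Standard on one namespace.
# Pods that violate the policy are rejected; warn/audit modes can be layered on.
apiVersion: v1
kind: Namespace
metadata:
  name: payments                                    # illustrative namespace
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
    pod-security.kubernetes.io/warn: restricted
```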


Kubernetes Networking - Layer 3


Compromised Workloads

By default, all pods can talk to all other pods in a cluster without restriction, which makes things very interesting from an attacker’s perspective. If a workload is compromised, the attacker will likely probe the network to see what else they can access. The Kubernetes API is also reachable from inside the pod, offering another rich target. And if you see traffic from a container in the cluster reaching out to an external IP address it has never contacted before, that’s not a good sign.

Strict network controls are a critical part of container hardening: pod to pod, cluster to cluster, outside-in, and inside-out. Use built-in Network Policies to isolate workload communication and build granular rulesets. Consider implementing a service mesh to control traffic between workloads as well as ingress and egress, for example by restricting which namespaces can talk to each other.
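
A common starting point is a default-deny policy per namespace, after which you explicitly allow only the flows each workload needs. A minimal sketch (the namespace name is a placeholder, and enforcement depends on your CNI plugin supporting NetworkPolicy):

```yaml
# Default-deny ingress and egress for every pod in a namespace;
# additional NetworkPolicies then re-open only the traffic you need.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: payments        # illustrative namespace
spec:
  podSelector: {}            # selects all pods in the namespace
  policyTypes:
    - Ingress
    - Egress
```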


Application Layer (L7) Attacks – Server-Side Request Forgery (SSRF)

SSRF consistently makes the news in relation to Kubernetes, and no wonder. In cloud native environments where APIs talk to other APIs, SSRF can be especially hard to stop; customer-supplied webhooks are particularly notorious. Once a target has been found, SSRF can be used to escalate privileges, scan the local Kubernetes network and components, hit the cloud metadata endpoint, and dump the Kubernetes metrics endpoint to learn valuable information about the environment, potentially making a complete takeover possible.
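
Network policy can also blunt one of the most common SSRF payloads: requests to the cloud metadata service. The sketch below allows general egress but carves out the metadata address; the namespace name is a placeholder, and handling of ipBlock `except` varies by CNI plugin, so verify enforcement in your environment:

```yaml
# Egress sketch: allow outbound traffic except the cloud metadata
# service at 169.254.169.254, a frequent SSRF target.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: block-cloud-metadata
  namespace: payments          # illustrative namespace
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 169.254.169.254/32
```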


Application Layer (L7) Attacks – Remote Code Execution (RCE)

RCE is also extremely dangerous in cloud native environments, making it possible to run system-level commands inside a container to grab files, access the Kubernetes API, run image manipulation tools, and compromise the entire machine.
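
If an RCE lands anyway, a locked-down pod spec limits what the injected commands can reach. The following is a minimal sketch (names and image are placeholders) that drops Linux capabilities, blocks privilege escalation, and keeps the service account token out of the container filesystem:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-api                             # illustrative name
spec:
  automountServiceAccountToken: false       # keep the Kubernetes API token out of reach
  containers:
    - name: api
      image: registry.example.com/api:1.0   # placeholder image
      securityContext:
        runAsNonRoot: true
        readOnlyRootFilesystem: true
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]
```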


Application Layer (L7) Defenses

The first rule of protection is to adhere to secure coding and architecture practices — that can mitigate the majority of your risk. Beyond that, you can layer on network defenses along both axes: north-south, to monitor and block malicious external traffic to your applications and APIs; and east-west, to monitor traffic from container to container, cluster to cluster, and cloud to cloud to make sure you’re not being victimized by a compromised pod.


Node Security - Layer 4

Node-level security isn’t quite as exciting as networking, but it’s just as important. To prevent container breakout on a VM or other node, limit external administrative access to nodes as well as the control plane, and watch out for open ports and services. Keep your base operating systems minimal, and harden them using CIS benchmarks. Finally, make sure to scan and patch your nodes just like any other VM.


Kubernetes Cluster Components - Layer 5

There are all kinds of things going on in a Kubernetes cluster, and there’s no all-in-one tool or strategy to secure it. At a high level, you should focus on:

  • API Server – check your mechanisms for access control and authentication, and perform additional security checks of your dynamic webhooks, Pod Security admission configuration, and public network access to the Kubernetes API
  • Access control – use role-based access control (RBAC) to enforce the principle of least privilege for your API server and Kubernetes secrets (see the sketch after this list)
  • Service account tokens – to prevent unauthorized access, limit the permissions granted to service accounts as well as access to any secrets where service account tokens are stored
  • Audit logging – make sure this is enabled
  • Third-party components – be careful about what you’re bringing into your cluster so you know what’s running there and why (for example with a KBOM)
  • Kubernetes versions – Kubernetes can have vulnerabilities just like any other system, and has to be updated and patched promptly
  • Kubelet misconfiguration – the kubelet, the node agent that runs pods and talks to the container runtime, can be abused and attacked in an attempt to elevate privileges
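
For the access control and service account items above, a least-privilege setup usually starts with narrowly scoped Roles bound to dedicated service accounts. A minimal sketch (the namespace, role, and account names are placeholders):

```yaml
# Least-privilege RBAC sketch: a Role that can only read pods in one
# namespace, bound to a dedicated ServiceAccount.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: payments
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: payments
subjects:
  - kind: ServiceAccount
    name: app-sa
    namespace: payments
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```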


Kubernetes Security Standards: CIS, DISA and NIST


To ensure the highest level of security for Kubernetes, it is essential to get acquainted with container security standards. Both the Center for Internet Security (CIS) and the Defense Information Systems Agency (DISA) have published comprehensive container security benchmarks and recommended practices. These documents provide detailed guidance on configuring your Kubernetes environment and protecting it against potential threats. They also act as industry-wide security standards trusted and accepted by many organizations.

The National Institute of Standards and Technology (NIST) has released Special Publication (SP) 800-190, the Application Container Security Guide, to help organizations secure their containerized applications and related infrastructure components.

This NIST container security guide provides a thorough breakdown of how to securely and efficiently deploy and manage containers in an enterprise environment, along with tactics to ensure the integrity of your software supply chain.

The NIST Secure Software Development Framework (SSDF) was released to give organizations a structured way of creating secure software systems. It complements the NIST 800-190 container security guidance, offering direction on how to securely build, deploy, and operate containerized applications.

In response to the recent executive order on supply chain security, NIST released its NIST 800-161 standard. This framework was created with vigilance in mind, describing a secure software supply chain management system that enables organizations to guarantee the integrity of all software components. SP 800-161 provides specific guidance for safely developing and deploying containers, including detailed recommendations on leveraging efficient and secure DevOps processes. KSOC has released the first Kubernetes Bill of Materials (KBOM) standard to help teams incorporate Kubernetes into their efforts around software supply chain security. To contribute, visit our Github repo.


Security for Kubernetes Managed Services 


Amazon Elastic Kubernetes Service (EKS)

When using EKS, it’s vital to follow AWS EKS architecture and security best practices. This includes Identity and Access Management (IAM) policies, pod and runtime security policies that define the security configurations for your applications, network policies, infrastructure security, and data encryption.

Regarding IAM policies, you should tightly control who has access to your Kubernetes clusters. This can be done by creating roles and users with limited access and tagging all the resources within your clusters. In addition, you should create network policies that control ingress and egress traffic for your clusters, allowing you to restrict access to specific ports, protocols, and IP addresses.
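
On EKS, the traditional bridge between IAM and Kubernetes RBAC is the aws-auth ConfigMap (newer clusters can use EKS access entries instead). The sketch below maps a hypothetical IAM role to a restricted Kubernetes group, which you would then bind to a limited ClusterRole; the account ID, role ARN, and group name are placeholders:

```yaml
# Sketch of the aws-auth ConfigMap mapping an IAM role to a Kubernetes group.
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/eks-read-only   # placeholder ARN
      username: eks-read-only
      groups:
        - view-only        # bind this group to a restricted (Cluster)Role via RBAC
```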

For pod and runtime security, implement policies that dictate the security settings for your applications. This involves restricting user access and ensuring that all services run in containers with appropriate security contexts. Infrastructure security is also important when using EKS.

To secure your resources, you should consider using AWS security groups and network ACLs to limit access to specific ports. Additionally, enabling logging and audit trails will aid you in detecting any suspicious activity.

EKS multi-tenancy best practices are often considered when deploying multiple applications onto the same cluster. This typically involves setting resource limits for each application and ensuring network security between tenants.

Finally, data encryption and secrets management should be implemented for your Kubernetes clusters. This includes using encryption at rest and in transit and configuring Kubernetes secrets to securely store sensitive information. Following these AWS Kubernetes security best practices will make your container environment more secure and protected from potential threats.


Azure Kubernetes Security Best Practices


Azure is another powerful cloud platform for running and deploying Kubernetes clusters. Microsoft has published an Azure AKS security baseline, a set of secure configuration settings for AKS clusters. The baseline includes recommendations on using role-based access control (RBAC) to restrict access to Kubernetes resources, as well as suggestions on using Kubernetes security features such as Pod Security Policies and Network Policies.

To begin, secure your AKS deployment by setting up role-based access control (RBAC) to restrict access to Kubernetes resources. This is simply a matter of creating users and roles with limited access and using tags to identify resources.

The next step is to use Kubernetes security features such as Pod Security admission (the successor to Pod Security Policies) and Network Policies to restrict access to specific pods and networks. This allows you to control which services can communicate with each other and to enforce baseline security settings for each application.

You should also use Azure Kubernetes Service features such as Transport Layer Security (TLS) to ensure that traffic between services is encrypted and secure. Finally, enable logging and audit trails to monitor activity on your cluster so you can detect suspicious behavior and act accordingly.

Lastly, it is important to understand the difference between cloud IAM and Kubernetes RBAC when using a managed Kubernetes platform.


Kubernetes Versions and Kubernetes CVEs


The most current version of Kubernetes provides the latest critical security fixes, for example against vulnerabilities in the Kubernetes CSI driver, so it is important to stay up to date with the latest version.


Conclusion

Whether you are just getting started with Kubernetes in your organization or just getting started with Kubernetes security, it is helpful to look at the full picture before diving in. Kubernetes security can seem daunting, but by working through best practices for each layer of your stack, you can bring your containers to the same high level of protection as the rest of your environment — so you can enjoy the benefits of fast, agile development without putting your environment or your business at risk. Reach out to us for a demo to see how you can reduce the noise of Kubernetes security findings by 98% with Automated Risk Triage and threat vectors.