Why Focusing on Container Runtimes Is the Most Critical Piece of Security for EKS Workloads

Pierluigi Paganini March 19, 2021

Amazon Elastic Kubernetes Service (EKS) is a managed platform that gives customers the ability to run Kubernetes applications in the AWS cloud or on premises.

Organizations are increasingly turning to Kubernetes to manage their containers. In the 2020 Cloud Native Survey, 91% of respondents told the Cloud Native Computing Foundation (CNCF) that they were using Kubernetes—an increase from 78% in 2019 and 58% in 2018. More than four-fifths (83%) of that year’s survey participants said that they were running Kubernetes in their production environment.

These findings reflect the fact that organizations are turning to Kubernetes in order to minimize application downtime. According to its documentation, Kubernetes comes with load-balancing features that help to distribute high network traffic and keep deployments stable. It also enables admins to describe the desired state of their containers and have Kubernetes continuously drive the actual state toward that specification. If any container fails a user-defined health check along the way, Kubernetes can use its self-healing properties to kill that container and replace it with a new one.
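As a rough illustration of that self-healing behavior, the sketch below defines a liveness probe on a single container (the pod name, image and /healthz path are hypothetical placeholders); if the probe fails three times in a row, Kubernetes kills the container and starts a replacement:

apiVersion: v1
kind: Pod
metadata:
  name: web-app                 # hypothetical pod name
spec:
  containers:
  - name: web
    image: nginx:1.21           # any image that serves an HTTP health endpoint
    livenessProbe:              # user-defined health check
      httpGet:
        path: /healthz          # assumed health endpoint; adjust for your application
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
      failureThreshold: 3       # after three consecutive failures, the container is restarted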

Amazon EKS and the Need for Security

Some organizations are setting up their own environments to take advantage of Kubernetes’ benefits, while others are turning to vendor-managed platforms. One of the most popular of those options is Amazon Elastic Kubernetes Service (EKS), a platform which gives customers the ability to run Kubernetes apps in the AWS cloud or on premises. Amazon EKS comes with many benefits, including the ability to automatically detect and replace unhealthy control plane nodes and to scale control plane resources efficiently. It also applies the newest security patches to a cluster’s control plane, giving customers a more secure Kubernetes environment.

With that last point in mind, perhaps the most important element of EKS security is the need to limit the permissions and capabilities of container runtimes. Containers are dynamic in nature; they’re constantly spinning up and winding down. This dynamism makes it difficult for admins to maintain visibility into their containers, notes Help Net Security, a fact which malicious actors commonly exploit to conduct scans, perform attacks and launch data exfiltration attempts. As such, admins need to vet all activity within the container runtime environment to ensure that their organizations aren’t under attack.

Best Practices for Container Runtime Security in EKS

Admins can follow some best practices to ensure container runtime security in EKS. Those recommendations include the following:

Be Strategic with Namespaces

Admins need to be careful with their namespaces, which help them divide cluster resources between multiple users. Specifically, they should use namespaces liberally and in a way that supports their applications. This latter point involves privilege segregation, a practice which ensures that workloads managed by different teams each get their own namespace.
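A minimal sketch of that kind of segregation, using the hypothetical team names “payments” and “analytics”, might look like this:

# One namespace per team keeps each team's workloads, quotas and RBAC rules separate.
apiVersion: v1
kind: Namespace
metadata:
  name: payments
  labels:
    team: payments
---
apiVersion: v1
kind: Namespace
metadata:
  name: analytics
  labels:
    team: analytics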

Implement Role-Based Access Control

As noted elsewhere in Kubernetes’ documentation, Role-Based Access Control (RBAC) is a means by which admins can regulate access to computer or network resources based on individual users’ roles. The RBAC API does this by declaring four kinds of objects:

  • Role: This API object is a set of permissions that’s given within a specific namespace.
  • ClusterRole: Like a Role, a ClusterRole contains rules that represent a set of permissions. But this API object is non-namespaced, which enables admins to define permissions across all namespaces and on cluster-scoped resources.
  • RoleBinding: This API object takes the permissions defined in a Role and grants them to a user or a group of users within a namespace.
  • ClusterRoleBinding: Using the permissions contained within a ClusterRole, a ClusterRoleBinding assigns those rights across all namespaces in the cluster.

To secure the container runtime environment, admins should consider following the principle of least privilege when working with these four API objects. In particular, they might consider limiting their use of ClusterRoles and ClusterRoleBindings, as these assignments could enable an attacker to move to other cluster resources if they compromise a single user account.
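As a sketch of what least privilege can look like in practice (the “payments” namespace and the user “jane” are hypothetical; on EKS, users are typically mapped to IAM identities through the aws-auth ConfigMap), the manifests below define a namespaced Role that can only read pods and bind it to a single user:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: payments             # permissions apply only inside this namespace
  name: pod-reader
rules:
- apiGroups: [""]                 # "" is the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"] # read-only access, nothing more
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: payments
subjects:
- kind: User
  name: jane                      # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role                      # binds the namespaced Role above, not a ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

Because the binding references a Role rather than a ClusterRole, a compromise of this account stays contained within the payments namespace instead of spreading across the cluster.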

Use Network Policies for Cluster Traffic Control

Pods are non-isolated by default, as noted on Kubernetes’ website. These groups of containers accept traffic from any source. Knowing that, a malicious actor could compromise a single pod and leverage that event to move laterally to other pods and cluster resources.

Admins can defend against this type of event by creating a Network Policy that selects their pods and rejects any connections not explicitly allowed by its rules. They can begin by creating a Network Policy with egress and ingress rules that support their organization’s security requirements and then select whichever pods they want to protect with those rules.
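One common pattern, sketched below with a hypothetical namespace and labels, is to apply a default-deny ingress policy to the whole namespace and then explicitly re-admit only the traffic the application actually needs:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: payments             # hypothetical namespace
spec:
  podSelector: {}                 # selects every pod in the namespace
  policyTypes:
  - Ingress                       # no ingress rules listed, so all inbound traffic is rejected
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: payments
spec:
  podSelector:
    matchLabels:
      app: api                    # the pods being protected
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend           # only pods carrying this label may connect
    ports:
    - protocol: TCP
      port: 8080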

Enforce Security Contexts Using OPA Gatekeeper

Kubernetes enables admins to define privilege and access control settings for a pod or container using what’s known as security contexts. They can then enforce those security contexts within their Kubernetes environment using Gatekeeper. Built on the Open Policy Agent (OPA) policy engine, this tool allows admins to do the same types of things that they’d want to do with the soon-to-be-deprecated Pod Security Policies (PSPs). However, Gatekeeper lets admins go a step further by creating custom policies that designate allowed container registries, impose pod resource limits and govern almost any other parameter that admins can think of.
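Gatekeeper policies themselves are written as ConstraintTemplates in OPA’s Rego language, which is beyond the scope of this overview; the pod manifest below (with hypothetical names and a hypothetical registry) simply shows the kind of security context and resource settings such policies typically enforce:

apiVersion: v1
kind: Pod
metadata:
  name: hardened-app              # hypothetical pod name
spec:
  securityContext:
    runAsNonRoot: true            # refuse to start containers running as root
    runAsUser: 1000
  containers:
  - name: app
    image: registry.example.com/team/app:1.0   # hypothetical image from an allowed registry
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]             # drop every Linux capability the app does not need
    resources:
      limits:                     # pod resource limits a Gatekeeper constraint might require
        cpu: "500m"
        memory: "256Mi"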

Protect IAM Credentials of the Nodes’ IAM Instance Role

Here’s StackRox with some guidance on how to implement this security measure:

The nodes are standard EC2 instances that will have an IAM role and a standard set of EKS permissions, in addition to any permissions you may have added. The workload pods should not be allowed to grab the IAM role’s credentials from the EC2 metadata endpoint. You have several options for protecting the endpoint that still enable automated access to AWS APIs for deployments that need it. If you don’t use kube2iam or kiam, which both work by intercepting calls to the metadata endpoint and issuing limited credentials back to the pod based on your configuration, install the Calico CNI so you can add a Network Policy to block access to the metadata IP, 169.254.169.254.
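One way to express that restriction as a standard Kubernetes Network Policy is sketched below (the “payments” namespace is hypothetical, and enforcing the ipBlock rule requires a capable CNI such as Calico); it allows egress to any destination except the EC2 instance metadata address:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: block-instance-metadata
  namespace: payments             # hypothetical; apply an equivalent policy in each namespace
spec:
  podSelector: {}                 # applies to every pod in the namespace
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 169.254.169.254/32      # the EC2 instance metadata service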

EKS Security on a Broader Scale

The guidance provided above can help admins ensure container runtime security in Amazon EKS. For more information about other aspects of Amazon EKS security, consult AWS’s Amazon EKS security documentation.

About the Author: David Bisson is an information security writer and security junkie. He’s a contributing editor to IBM’s Security Intelligence and Tripwire’s The State of Security Blog, and he’s a contributing writer for Bora. He also regularly produces written content for Zix and a number of other companies in the digital security space.

