AWS Elastic Kubernetes Service (EKS) is a managed Kubernetes service that makes it easy to run Kubernetes on AWS without needing to install and operate your own Kubernetes control plane. As Kubernetes continues to be the go-to for container orchestration, proficiency in EKS is becoming increasingly valuable. Below are 100 commonly asked interview questions about AWS EKS, along with detailed answers, to help you prepare comprehensively.
Basic Interview Questions
1. What is AWS EKS?
Ans: AWS EKS is a managed service that makes it easy to run Kubernetes on AWS without the need to manage the Kubernetes control plane.
"Managed service" means we don't need to manage or maintain the Kubernetes master nodes ourselves; AWS handles this part.
2. What are the primary benefits of using EKS?
Ans:
- Simplified Kubernetes management
- High availability
- Security
- Scalability
- Integration with other AWS services.
3. What is Kubernetes?
Ans: Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications.
4. How does EKS differ from ECS?
Ans: EKS is based on Kubernetes, offering more flexibility and features for container orchestration, while ECS is AWS's proprietary container orchestration service.
5. What are the components of a Kubernetes cluster?
Ans: A Kubernetes cluster consists of a master node (control plane) and worker nodes.
6. Why should I use Amazon EKS?
Ans: Amazon EKS provisions and scales the Kubernetes control plane, including the application programming interface (API) servers and backend persistence layer, across multiple AWS Availability Zones (AZs) for high availability and fault tolerance. Amazon EKS automatically detects and replaces unhealthy control plane nodes and patches the control plane.
You can run EKS using AWS Fargate, which provides serverless compute for containers. Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design.
Amazon EKS is integrated with many AWS services to provide scalability and security for your applications. These services include Elastic Load Balancing for load distribution, AWS Identity and Access Management (IAM) for authentication, Amazon Virtual Private Cloud (VPC) for isolation, and AWS CloudTrail for logging.
7. How does Amazon EKS work?
Ans: Amazon EKS works by provisioning (starting) and managing the Kubernetes control plane and worker nodes for you. At a high level, Kubernetes consists of two major components: a cluster of ‘worker nodes’ running your containers, and the control plane managing when and where containers are started on your cluster while monitoring their status.
Without Amazon EKS, you have to run both the Kubernetes control plane and the cluster of worker nodes yourself. With Amazon EKS, you provision your worker nodes using a single command in the EKS console, command-line interface (CLI), or API. AWS handles provisioning, scaling, and managing the Kubernetes control plane in a highly available and secure configuration. This removes a significant operational burden and allows you to focus on building applications instead of managing AWS infrastructure.
8. Which operating systems does Amazon EKS support?
Ans: Amazon EKS supports Kubernetes-compatible Linux x86, ARM, and Windows Server operating system distributions. Amazon EKS provides optimized AMIs for Amazon Linux 2 and Windows Server 2019. EKS-optimized AMIs for other Linux distributions, such as Ubuntu, are available from their respective vendors.
Intermediate Interview Questions
1. How do you create an EKS cluster?
Ans: You can create an EKS cluster using the
1. AWS Management Console
2. AWS CLI
3. AWS SDKs
2. What is eksctl?
Ans: eksctl is a command-line tool for creating and managing EKS clusters.
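For illustration, eksctl can also create clusters declaratively from a config file. Below is a minimal, hypothetical ClusterConfig sketch; the cluster name, region, instance type, and node-group sizes are placeholders, not values from this article:

```yaml
# Hypothetical eksctl config; name, region, and sizes are placeholders
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster
  region: us-east-1
managedNodeGroups:
  - name: ng-1
    instanceType: t3.medium
    desiredCapacity: 2
    minSize: 1
    maxSize: 4
```

You would save this as cluster.yaml and run `eksctl create cluster -f cluster.yaml`.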
3. How does EKS handle networking?
Ans: EKS integrates with Amazon VPC, using the Amazon VPC CNI plugin for Kubernetes to provide VPC-native networking.
4. What is an EKS node group?
Ans: A node group is a collection of EC2 instances that are part of your EKS cluster, which are used to run your containerized applications.
5. How do you scale an EKS cluster?
Ans: You can scale an EKS cluster in the following ways:
- Adding or removing nodes manually
- Automatically, using the Cluster Autoscaler
- Scaling pods with the Horizontal Pod Autoscaler
6. What is the difference between Amazon EKS and self-managed Kubernetes clusters?
Ans: Amazon EKS is a managed service, meaning AWS takes care of the control plane, including updates, patches, and high availability. In contrast, self-managed Kubernetes clusters require manual installation, management, and maintenance of the control plane.
7. How can you scale an application running on EKS?
Ans: You can scale an application on EKS using the Kubernetes Horizontal Pod Autoscaler (HPA) and the Cluster Autoscaler.
The HPA automatically adjusts the number of pods based on CPU utilization or custom metrics.
The Cluster Autoscaler scales the number of worker nodes based on the demand for resources.
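As a sketch, an HPA that scales a hypothetical Deployment named `web` on CPU utilization could look like this (the names, replica bounds, and 60% target are placeholders):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa              # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60   # add pods when average CPU exceeds 60%
```

Note that the HPA needs the Kubernetes Metrics Server installed in the cluster to read CPU utilization.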
8. What is a Kubernetes Operator in EKS?
Ans: A Kubernetes Operator is an extension to Kubernetes that allows you to define and manage complex, stateful applications. Operators help automate the life-cycle management of applications, including deployment, scaling, backup, and recovery. They can be used to manage databases, message queues, and other stateful workloads.
9. How does EKS handle container image management?
Ans: EKS integrates with Amazon Elastic Container Registry (ECR), which is a managed Docker container registry. ECR provides secure and scalable storage for container images. EKS clusters can pull container images from ECR when launching pods.
Advanced Interview Questions
1. What are the main security features of EKS?
Ans: Below are the main security features:
- IAM roles for service accounts
- VPC network policies and security groups
- Kubernetes RBAC
- Integration with AWS security services like AWS Shield and AWS WAF.
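To illustrate IAM roles for service accounts (IRSA): a Kubernetes ServiceAccount is annotated with an IAM role ARN, and pods that use the account receive temporary credentials for that role. The account name and role ARN below are placeholders, and the role is assumed to already exist with an OIDC trust policy for the cluster:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-sa              # hypothetical service account name
  namespace: default
  annotations:
    # hypothetical role ARN; the role must trust the cluster's OIDC provider
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/app-role
```

Pods that set `serviceAccountName: app-sa` can then call AWS APIs with that role's permissions, without node-level credentials.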
2. Explain the role of IAM in EKS.
Ans: IAM in EKS is used to control access to the EKS API, manage permissions for EKS cluster operations, and define roles for Kubernetes service accounts.
3. What is the AWS CNI plugin?
Ans: The Amazon VPC CNI plugin assigns each Kubernetes pod an IP address from your VPC, so pods have the same IP address inside the pod network as they do in the VPC.
4. How do you monitor an EKS cluster?
Ans: Use CloudWatch for logging and metrics, Prometheus for monitoring, and Grafana for visualization.
EKS integrates with Amazon CloudWatch, which provides monitoring and observability for EKS clusters. CloudWatch allows you to collect and analyze metrics, create alarms, and generate logs and insights for your EKS workloads.
5. What is the Cluster Autoscaler in EKS?
Ans: The Cluster Autoscaler automatically adjusts the size of the EKS node group based on the scheduling needs of the pods.
It is a component that adds or removes worker nodes based on resource demands: it adds nodes when pods cannot be scheduled due to insufficient resources, and removes nodes that are underutilized.
6. Explain how to configure and manage logging and monitoring for Amazon EKS clusters using Amazon CloudWatch and other AWS services.
Ans: Logging and monitoring are essential for ensuring the health and performance of your Amazon EKS clusters. You can use Amazon CloudWatch to collect logs from your pods and nodes, as well as metrics about their performance. Additionally, you can use other AWS services, such as Amazon CloudTrail and Amazon Kinesis Firehose, to further enhance your logging and monitoring capabilities.
7. How can you integrate AWS App Mesh with EKS?
Ans: AWS App Mesh can be integrated with EKS to provide service mesh capabilities for your applications. By deploying Envoy proxies as sidecar containers and configuring App Mesh resources such as virtual services and virtual nodes, you can gain features like traffic routing, observability, and security controls within your EKS cluster.
8. Do I need to install all the dependencies of Kubernetes on each node in order to run it on EKS?
Ans: No, you do not need to install all of the dependencies of Kubernetes on each node in order to run it on Amazon EKS. Amazon EKS takes care of the underlying infrastructure and provides a fully managed Kubernetes environment, so you do not have to worry about setting up and maintaining the Kubernetes control plane or worker nodes.
9. What are two ways that customers can run their applications on EKS?
Ans: Customers can run their applications in the following ways:
1. Create worker nodes (EC2 instances) and deploy your applications on them; the EKS control plane manages these applications.
2. Use AWS Fargate profiles (serverless, similar to the Fargate launch type in Amazon ECS) to run the containers; the EKS control plane manages these applications.
10. How do you configure AWS VPCs, security groups, subnets, and other network resources when setting up an EKS cluster?
Ans: You will need to configure your VPC in order to allow communication between your EKS cluster and your worker nodes. You will also need to create a security group for your EKS cluster that will allow traffic from your worker nodes. Finally, you will need to create subnets for your EKS cluster in order to allow communication between your EKS cluster and the internet.
11. What are the differences between Amazon ECS, Amazon Fargate, and Amazon EKS?
Ans:
Amazon ECS is a container orchestration service that helps you run and manage containerized applications on AWS.
Amazon Fargate is a serverless compute engine for containers that works with Amazon ECS.
Amazon EKS is a managed Kubernetes service that makes it easy for you to run Kubernetes on AWS.
12. Does Amazon EKS work with my existing Kubernetes applications and tools?
Ans: Amazon EKS runs the open-source Kubernetes software, so you can use all the existing plug-ins and tooling from the Kubernetes community. Applications running on Amazon EKS are fully compatible with applications running on any standard Kubernetes environment, whether running in on-premises data centers or public clouds. This means that you can easily migrate any standard Kubernetes application to Amazon EKS without any code modifications.
13. Does Amazon EKS work with AWS Fargate?
Ans: Yes, Amazon EKS works with AWS Fargate, which is a fully managed container runtime that allows you to run containerized applications without having to manage the underlying infrastructure. With AWS Fargate, you can use Amazon EKS to deploy and run your applications on a fully managed, serverless infrastructure that scales automatically to meet the needs of your workloads. This can be a convenient way to use Amazon EKS and take advantage of the scalability and reliability of the Kubernetes platform, while avoiding the need to manage the underlying infrastructure.
14. What are Amazon EKS add-ons?
Ans: EKS add-ons let you enable and manage Kubernetes operational software, which provides capabilities like observability, scaling, networking, and AWS cloud resource integrations for your EKS clusters. At launch, EKS add-ons support controlling the launch and version of the AWS VPC CNI plugin through the EKS API.
15. Why should I use Amazon EKS add-ons?
Ans: Amazon EKS add-ons provide one-click installation and management of Kubernetes operational software. Go from cluster creation to running applications in a single command, while easily keeping the operational software required for your cluster up to date. This ensures your Kubernetes clusters are secure and stable and reduces the amount of work needed to start and manage production-ready Kubernetes clusters on AWS.
16. Can I update my Kubernetes cluster to a new version?
Ans: Yes. Amazon EKS performs managed, in-place cluster upgrades for both Kubernetes and Amazon EKS platform versions. This simplifies cluster operations and lets you take advantage of the latest Kubernetes features, as well as the updates to Amazon EKS configuration and security patches.
There are two types of updates you can apply to your Amazon EKS cluster: Kubernetes version updates and Amazon EKS platform version updates. As new Kubernetes versions are released and validated for use with Amazon EKS, we will support three stable Kubernetes versions as part of the update process at any given time.
17. What is an EKS platform version?
Ans:
Amazon EKS platform versions represent the capabilities of the cluster control plane, such as which Kubernetes API server flags are enabled, as well as the current Kubernetes patch version. Each Kubernetes minor version has one or more associated Amazon EKS platform versions. The platform versions for different Kubernetes minor versions are independent.
When a new Kubernetes minor version is available in Amazon EKS (for example, 1.13), the initial Amazon EKS platform version for that Kubernetes minor version starts at eks.1. However, Amazon EKS releases new platform versions periodically to enable new Kubernetes control plane settings and to provide security fixes.
18. Why would I want manual control over Kubernetes version updates?
Ans: New versions of Kubernetes introduce significant change to the Kubernetes API, which can change application behavior. Manual control over Kubernetes cluster versioning lets you test applications against new versions of Kubernetes before upgrading production clusters. Amazon EKS offers the ability to choose when you introduce changes to your EKS cluster.
19. How do I update my worker nodes?
Ans:
AWS publishes EKS-optimized Amazon Machine Images (AMIs) that include the necessary worker node binaries (Docker and Kubelet). This AMI is updated regularly and includes the most up-to-date version of these components. You can update your EKS managed nodes to the latest versions of the EKS-optimized AMIs with a single command in the EKS console, API, or CLI.
If you are building your own custom AMIs to use for EKS worker nodes, AWS also publishes Packer scripts that document our build steps, allowing you to identify the binaries included in each version of the AMI.
20. How much does Amazon EKS cost?
Ans: You pay $0.10 per hour for each Amazon EKS cluster you create and for the AWS resources you create to run your Kubernetes worker nodes. You only pay for what you use, as you use it; there are no minimum fees and no upfront commitments.
Practical Scenario Questions
1. How do you deploy a Kubernetes application on EKS?
Ans: Write Kubernetes manifest files (YAML) for your application, and use kubectl apply to deploy them to your EKS cluster.
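As a minimal sketch, the manifest below defines a hypothetical two-replica Deployment; the application name and container image are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web            # hypothetical application name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
        - name: hello-web
          image: nginx:1.25  # placeholder image
          ports:
            - containerPort: 80
```

With kubectl configured for the cluster (e.g., via `aws eks update-kubeconfig`), deploy it with `kubectl apply -f deployment.yaml`.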
2. How do you upgrade an EKS cluster?
Ans: Upgrade the control plane using the AWS Management Console, AWS CLI, or eksctl, and then upgrade the node groups.
3. How do you set up a CI/CD pipeline for an EKS application?
Ans: Use CodePipeline, Jenkins, or GitLab CI/CD to automate the build, test, and deployment of your application to EKS.
4. What are the steps to enable logging in EKS?
Ans: Enable control plane logging through the AWS Management Console or AWS CLI, and configure CloudWatch or other logging tools for application logs.
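As a sketch, control plane logging can also be enabled declaratively in an eksctl ClusterConfig; the cluster name and region are placeholders, and the listed log types are delivered to CloudWatch Logs:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster         # placeholder cluster name
  region: us-east-1          # placeholder region
cloudWatch:
  clusterLogging:
    # enable all five control plane log types
    enableTypes: ["api", "audit", "authenticator", "controllerManager", "scheduler"]
```

The same result can be achieved imperatively with `aws eks update-cluster-config` and a `--logging` argument.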
5. How do you implement network policies in EKS?
Ans: Use Kubernetes Network Policies to define rules for how pods communicate with each other and other network endpoints.
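For example, the hypothetical policy below allows only pods labeled `app: frontend` to reach pods labeled `app: backend` on TCP 8080 (all names, labels, and the port are placeholders). Note that Network Policies only take effect with a CNI that enforces them, such as Calico or the VPC CNI's network policy support:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-only   # hypothetical policy name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend            # the policy applies to backend pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend   # only frontend pods may connect
      ports:
        - protocol: TCP
          port: 8080
```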
6. How to handle blue/green deployments in EKS?
Ans: Use Kubernetes Deployments and Services to switch traffic between different versions of the application, ensuring zero downtime.
7. Steps to migrate an on-premises Kubernetes cluster to EKS.
Ans: Export your Kubernetes configurations, set up an EKS cluster, apply configurations to the EKS cluster, and migrate data and workloads.
8. Implementing canary deployments in EKS.
Ans: Deploy a small percentage of traffic to a new version of the application, monitor its performance, and gradually increase the traffic.
9. How to use AWS Fargate with EKS?
Ans: Create an EKS cluster with Fargate profiles to run Kubernetes pods without managing EC2 instances.
The benefits of AWS Fargate are as follows:
AWS Fargate is a serverless compute engine for containers. When used with EKS, it allows you to run containers without managing the underlying infrastructure. Benefits of using Fargate with EKS include reduced operational overhead, better scalability, and optimized resource utilization.
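As a sketch, a Fargate profile can be declared in an eksctl ClusterConfig; the cluster name, region, profile name, and namespace selector below are placeholders:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster          # placeholder cluster name
  region: us-east-1           # placeholder region
fargateProfiles:
  - name: fp-serverless       # hypothetical profile name
    selectors:
      - namespace: serverless # pods in this namespace run on Fargate
```

Any pod scheduled into the selected namespace then runs on Fargate capacity instead of EC2 worker nodes.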
10. Setting up observability in EKS.
Ans: Use tools like Prometheus for monitoring, Grafana for visualization, and Jaeger for tracing to gain insights into your EKS cluster.
11. Setting up a monitoring stack in EKS.
Ans: Deploy Prometheus for metrics collection, Grafana for visualization, and configure Alertmanager for notifications.
12. How to handle stateful applications in EKS?
Ans: Use StatefulSets for deployment, ensure persistent storage with PVs and PVCs, and configure appropriate backup and restore strategies.
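A minimal StatefulSet sketch is shown below; all names and sizes are placeholders, and it assumes a headless Service named `db` exists and that an EBS-backed StorageClass (here called `gp3`) is available via the EBS CSI driver:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db                    # hypothetical name
spec:
  serviceName: db             # assumes a matching headless Service
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: postgres:16  # placeholder image
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:       # each replica gets its own PersistentVolumeClaim
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: gp3 # assumes an EBS CSI StorageClass
        resources:
          requests:
            storage: 10Gi
```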
13. Implementing a multi-tenant architecture in EKS.
Ans: Use namespaces for isolation, implement Network Policies, apply RBAC, and ensure proper resource quotas and limits.
14. Migrating a monolithic application to microservices on EKS.
Ans: Break down the monolith into smaller services, containerize them, deploy each service as a separate pod, and manage communication between services using Kubernetes services and Ingress.
15. Using Kubernetes Custom Resource Definitions (CRDs) in EKS.
Ans: Define custom resources to extend Kubernetes capabilities, deploy custom controllers to manage the lifecycle of these resources, and use them to automate complex application management tasks.
16. Implementing A/B testing in EKS.
Ans: Use Ingress or Service with weighted routing to direct a portion of traffic to different versions of the application for testing purposes.
17. How to manage Kubernetes configurations across environments?
Ans: Use tools like Helm or Kustomize to manage configurations, and employ CI/CD pipelines to promote changes across different environments.
18. Handling secrets and configuration updates in EKS.
Ans: Use Kubernetes ConfigMaps and Secrets for configuration management, and automate updates using rolling deployments.
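As a sketch, a ConfigMap and a Secret might look like this; the names, keys, and values are placeholders, and in production you would source secret values from a secret store rather than committing them to manifests:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config            # hypothetical name
data:
  LOG_LEVEL: "info"
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secret            # hypothetical name
type: Opaque
stringData:
  DB_PASSWORD: "change-me"    # placeholder; never commit real credentials
```

Pods consume these via `envFrom`, individual `env` references, or volume mounts; updating the objects and triggering a rolling deployment propagates the changes.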
19. How to deploy a highly available database on EKS.
Ans: Use StatefulSets with Persistent Volumes, configure multi-AZ deployments, implement backup and restore mechanisms, and use a Service to expose the database.
20. Setting up a multi-cloud Kubernetes architecture.
Ans: Use Kubernetes Federation or tools like Rancher to manage clusters across different cloud providers, ensuring consistent configuration and centralized management.
21. How to implement a zero-downtime deployment in EKS?
Ans: Use rolling updates or blue/green deployments, ensure readiness and liveness probes are correctly configured, and monitor the deployment process.
22. Migrating a legacy application to EKS.
Ans: Containerize the application, create Kubernetes manifests, deploy the application to EKS, and gradually shift traffic from the legacy environment.
23. Handling compliance and auditing in EKS.
Ans: Enable audit logs, use AWS CloudTrail for API activity, implement RBAC policies, and employ tools like OPA for policy enforcement.
24. Implementing a backup and restore strategy for EKS.
Ans: Use Velero for backing up and restoring cluster resources and persistent volumes, and regularly test the backup and restore processes.
25. How to set up a hybrid cloud environment with EKS and on-premises Kubernetes clusters.
Ans: Use Kubernetes Federation or tools like Anthos to manage clusters across different environments, ensuring consistent policies and workload distribution.
26. What are the different networking options available for an AWS EKS cluster?
Ans: Below are some options available for an AWS EKS cluster:
1. Amazon VPC:
This is the default networking option for the AWS EKS managed service. It allows you to isolate your cluster in a private VPC and control network access to your pods and services.
2. AWS PrivateLink:
Provides private connectivity between your EKS cluster and other AWS services, such as Amazon S3, DynamoDB, and Amazon RDS, without exposing your cluster's traffic to the public internet.
3. Calico:
An open-source networking solution that provides advanced networking features for EKS clusters, such as network policy enforcement and network policy automation.
27. Explain how to automate Amazon EKS deployments using AWS CodePipeline and other continuous integration and continuous delivery (CI/CD) tools?
Ans: CI/CD pipelines automate the process of building, testing, and deploying Amazon EKS applications. You can use AWS CodePipeline to create a CI/CD pipeline that integrates with your version control system, builds your application containers, and deploys them to your EKS cluster.
28. Discuss how to troubleshoot and resolve common issues that may arise with Amazon EKS clusters?
Ans: Troubleshooting EKS clusters involves identifying the root cause of the issue and taking appropriate corrective actions. Some common troubleshooting techniques include:
Checking logs and events: Reviewing logs and events from pods, nodes, and the control plane can provide valuable insights into the cause of the issue.
Using diagnostic tools: Utilizing tools like kubectl and Amazon CloudWatch to gather detailed information about the cluster’s state and resource utilization.
Consulting documentation and community resources: Referencing official documentation and community resources for troubleshooting guidance and potential solutions to known issues.
29. Explain how to manage autoscaling for Amazon EKS clusters using the Cluster Autoscaler and other autoscaling strategies?
Ans: Autoscaling ensures that your EKS cluster has the right number of worker nodes to meet the workload demands. You can use the Cluster Autoscaler to automatically adjust the number of worker nodes based on the resource demands of your pods. Additionally, you can implement custom autoscaling strategies using tools like the Kubernetes Horizontal Pod Autoscaler (HPA) or custom metrics-based autoscalers.
30. Describe how to handle upgrades and rollbacks for Amazon EKS clusters to minimize downtime and disruption?
Ans: Upgrading and rolling back EKS clusters requires careful planning and execution to prevent downtime and disruption. You can use tools such as eksctl, managed node group updates, and kubectl to manage rolling updates and ensure that the cluster is always in a healthy state during the upgrade process.
31. Discuss how to integrate Amazon EKS with other AWS services, such as Amazon Machine Learning, Amazon SageMaker, and Amazon Aurora, for building and deploying data-intensive applications?
Ans: Amazon EKS integrates seamlessly with other AWS services, enabling you to build and deploy data-intensive applications. You can use Amazon Machine Learning and Amazon SageMaker to build and train machine learning models, and then deploy them as containerized applications on your EKS cluster. Additionally, you can use Amazon Aurora as a highly scalable and reliable database for your data-driven applications.
32. Explain how to leverage Amazon EKS for serverless applications using AWS Fargate and other serverless architectures?
Ans: Amazon EKS supports serverless architectures, allowing you to run containerized applications without managing EC2 instances. You can use AWS Fargate to run your containers on a managed infrastructure, eliminating the need to provision and manage EC2 instances. This approach simplifies deployment and management, and enables you to scale your applications seamlessly.
33. Describe how to implement security best practices for multi-tenant EKS clusters, including workload isolation, network segmentation, and identity and access management (IAM).
Ans: Multi-tenant EKS clusters require additional security considerations to isolate workloads and protect against unauthorized access. You can use Kubernetes Network Policies to isolate traffic between pods in different namespaces or tenants. Additionally, you can implement IAM roles and policies to restrict access to AWS resources based on the tenant or workload.
34. You have an application deployed on an EKS cluster, and you need to scale it based on a custom metric. How would you achieve this?
Ans: To scale the application based on a custom metric, you can follow these steps:
1. Define a custom metric that reflects the workload or performance of your application. For example, it could be the number of requests per minute.
2. Implement custom metric collection and reporting using a monitoring tool like Prometheus or CloudWatch.
3. Create a Kubernetes Horizontal Pod Autoscaler (HPA) manifest or use the Kubernetes API to define an HPA object.
4. Set the HPA to scale based on the custom metric, specifying the desired minimum and maximum number of replicas for your application.
5. Deploy the updated HPA manifest to the cluster.
6. The HPA controller will periodically monitor the custom metric and adjust the number of replicas accordingly, ensuring the application scales up or down based on the defined metric.
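The steps above can be sketched as an autoscaling/v2 HPA using a Pods metric. This assumes a metrics adapter (such as prometheus-adapter) exposes the custom metric to the Kubernetes custom metrics API; the Deployment name, metric name, and target value are all hypothetical:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa                     # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                       # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Pods
      pods:
        metric:
          name: requests_per_minute # hypothetical metric exposed via an adapter
        target:
          type: AverageValue
          averageValue: "1000"      # scale out above ~1000 requests/min per pod
```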
35. You want to deploy an EKS cluster that spans multiple Availability Zones (AZs) to ensure high availability. How would you accomplish this?
Ans: To deploy an EKS cluster across multiple AZs for high availability, you can follow these steps:
1. Create an Amazon VPC (Virtual Private Cloud) that spans multiple AZs.
2. Set up subnets within each AZ, ensuring they are properly configured with appropriate route tables and network ACLs.
3. Launch an EKS cluster using the AWS Management Console, AWS CLI, or AWS SDKs, specifying the VPC and subnets created in the previous steps.
4. Configure the EKS cluster to distribute the control plane across multiple AZs, ensuring it has high availability.
5. Launch worker nodes in each AZ, using Auto Scaling Groups (ASGs) or a managed node group. Configure the ASGs to distribute worker nodes across multiple AZs.
6. Deploy your applications onto the EKS cluster, leveraging the multi-AZ setup to ensure that pods can be scheduled and run on worker nodes in any AZ.
7. Regularly monitor the health and performance of the EKS cluster and its resources, ensuring that proper scaling, load balancing, and redundancy measures are in place.
36. You need to implement secure access to your EKS cluster. How would you accomplish this?
Ans: To implement secure access to an EKS cluster, you can consider the following steps:
1. Utilize AWS Identity and Access Management (IAM) to control user access and permissions. Create IAM roles and policies that grant only the necessary privileges to users or groups.
2. Implement Kubernetes RBAC (Role-Based Access Control) to manage access to cluster resources. Define roles, role bindings, and service accounts to grant or restrict access to specific resources and actions within the cluster.
3. Enable AWS PrivateLink to access the EKS control plane securely over private IP addresses, avoiding exposure over the public internet.
4. Leverage AWS Secrets Manager or Kubernetes Secrets to securely store sensitive information such as API keys, passwords, or database credentials.
5. Implement network isolation using VPC security groups and network ACLs to control inbound and outbound traffic to the EKS cluster.
6. Enable encryption at rest and in transit to protect data stored within the cluster and data transmitted between components.
7. Regularly update and patch the EKS cluster to ensure that security vulnerabilities are addressed promptly.
8. Implement centralized logging and monitoring using services like CloudWatch and AWS CloudTrail to track and audit activities within the cluster.
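To illustrate step 2 (Kubernetes RBAC), the sketch below grants read-only access to pods in one namespace; the role, binding, and user names are hypothetical, and the user is assumed to be mapped from an IAM identity via EKS access entries or the aws-auth ConfigMap:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader              # hypothetical role name
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods               # hypothetical binding name
  namespace: default
subjects:
  - kind: User
    name: dev-user              # hypothetical; mapped from an IAM identity
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```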
37. You have an application deployed on an EKS cluster, and you want to enable automatic scaling of both pods and worker nodes based on CPU utilization. How would you accomplish this?
Ans: To enable automatic scaling of pods and worker nodes based on CPU utilization, you can follow these steps:
1. Create a Kubernetes Horizontal Pod Autoscaler (HPA) manifest or use the Kubernetes API to define an HPA object for your application.
2. Set the HPA to scale based on CPU utilization, specifying the desired minimum and maximum number of replicas for your application.
3. Deploy the updated HPA manifest to the cluster.
4. The HPA controller will periodically monitor the CPU utilization of the pods and adjust the number of replicas accordingly.
5. To enable automatic scaling of worker nodes, create an Amazon EC2 Auto Scaling Group (ASG) or a managed node group for the EKS cluster.
6. Configure the ASG or node group to scale based on CPU utilization, specifying the desired minimum and maximum number of worker nodes.
7. Associate the ASG or node group with the EKS cluster.
8. The ASG or node group will monitor the CPU utilization of the worker nodes and scale the cluster up or down accordingly.
38. You have a multi-tenant EKS cluster where multiple teams deploy their applications. You want to ensure resource isolation and prevent one team’s application from affecting the performance of another team’s application. How would you achieve this?
Ans: To ensure resource isolation and prevent interference between applications in a multi-tenant EKS cluster, you can employ the following approaches:
1. Utilize Kubernetes namespaces to logically separate applications and teams. Each team can have its own namespace, allowing them to manage and deploy their applications independently.
2. Implement Kubernetes Resource Quotas within each namespace to define limits on CPU, memory, and other resources that each team can utilize. This prevents one team from monopolizing cluster resources and impacting others.
3. Configure Kubernetes Network Policies to control network traffic between pods and namespaces. Network Policies can restrict or allow communication based on specific rules, ensuring that applications are isolated from each other.
4. Consider using Kubernetes Pod Security Policies (deprecated and removed in Kubernetes 1.25 in favor of Pod Security Admission) to enforce security and isolation measures. Such policies define a set of conditions that pods must adhere to, ensuring that each team’s applications meet the defined security standards.
5. Monitor and analyze resource usage within the cluster using tools like Prometheus and Grafana. This allows you to identify resource-intensive applications and take necessary actions to ensure fair resource distribution and prevent performance degradation for other applications.
6. Regularly communicate and collaborate with teams to understand their requirements and address any potential conflicts or issues related to resource utilization and performance.
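Step 2 (Resource Quotas) can be sketched as follows; the namespace name and all limits below are placeholders to be tuned per team:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota     # hypothetical quota name
  namespace: team-a      # hypothetical team namespace
spec:
  hard:
    requests.cpu: "10"   # total CPU requests the namespace may claim
    requests.memory: 20Gi
    limits.cpu: "20"
    limits.memory: 40Gi
    pods: "50"           # cap on the number of pods in the namespace
```

Applied with `kubectl apply`, the quota causes the API server to reject new pods that would push the namespace past these limits.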
39. You want to implement blue-green deployments for your EKS cluster using GitOps principles. How would you set up this deployment strategy?
Ans: To implement blue-green deployments for an EKS cluster using GitOps principles, you can follow these steps:
1. Set up a version-controlled repository (e.g., Git) to store your application manifests and configurations.
2. Define two sets of Kubernetes manifests or Helm charts: one for the blue environment and another for the green environment. These represent the desired state of the application in each environment.
3. Utilize a continuous integration and continuous deployment (CI/CD) tool such as Jenkins, GitLab CI/CD, or AWS CodePipeline to manage the deployment process.
4. Configure the CI/CD pipeline to monitor changes in the Git repository and trigger deployments based on updates to the manifests or charts.
5. Deploy the blue environment initially by applying the blue manifests or charts to the EKS cluster.
6. Implement a load balancer (e.g., AWS Application Load Balancer) to distribute traffic to the blue environment.
7. Test and validate the application in the blue environment to ensure it meets the desired requirements.
8. Once the blue environment is validated, update the Git repository with the green manifests or charts, reflecting the desired state of the application in the green environment.
9. Trigger the CI/CD pipeline to deploy the green environment by applying the green manifests or charts to the EKS cluster.
10. Implement the necessary routing or load balancer configuration to gradually shift traffic from the blue environment to the green environment.
11. Monitor the deployment and conduct thorough testing in the green environment.
12. If any issues arise, roll back the deployment by shifting traffic back to the blue environment.
13. Once the green environment is validated, update the Git repository again to reflect the desired state of the application in the blue environment.
14. Repeat the process of deploying and validating the blue environment, ensuring a smooth transition between blue and green environments for future deployments.
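The traffic-switch step above can be sketched with a single Service whose selector is flipped between the two environments. All names below (myapp, myapp-green, the image tag) are hypothetical, not from the original text:

```yaml
# One Service fronting two Deployments; traffic is switched by
# editing the "version" selector from blue to green.
apiVersion: v1
kind: Service
metadata:
  name: myapp                # assumed name
spec:
  selector:
    app: myapp
    version: blue            # change to "green" to cut traffic over
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-green
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      version: green
  template:
    metadata:
      labels:
        app: myapp
        version: green
    spec:
      containers:
      - name: myapp
        image: myrepo/myapp:2.0   # assumed image tag
        ports:
        - containerPort: 8080
```

Committing the selector change to Git and letting the CI/CD pipeline apply it keeps the cutover under GitOps control; rolling back is just reverting the commit.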
38. Which platform should I choose for my application if I’m looking for scalability: Amazon ECS or Amazon EKS?
Ans: Both Amazon ECS and Amazon EKS are fully managed container orchestration services that provide scalability and reliability for deploying and running containerized applications on AWS. In general, either platform can be suitable for your application, depending on your specific requirements and preferences.
If you are looking for a more straightforward, opinionated platform for running your applications, Amazon ECS may be a good choice. On the other hand, if you want more control and customization over your container orchestration environment, or if you are already familiar with Kubernetes, Amazon EKS may be a better fit. It is generally a good idea to evaluate both platforms and determine which one is the best fit for your needs.
39. Let's say a Kubernetes Job should finish in 40 seconds, but on rare occasions it takes 5 minutes. How can I make sure the application is stopped if it runs longer than 40 seconds?
Ans: Set activeDeadlineSeconds in the Job spec. This field caps the duration of the Job; once the Job exceeds the threshold, Kubernetes terminates it. In a CronJob, the field goes in the Job template's spec:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: mycronjob
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    metadata:
      name: google-check-job
    spec:
      activeDeadlineSeconds: 40
      template:
        metadata:
          name: mypod
        spec:
          restartPolicy: OnFailure
          containers:
          - name: mycontainer
            image: alpine
            command: ["/bin/sh"]
            args: ["-c", "ping -w 1 google.com"]
40. How do you test a manifest without actually executing it?
Ans: Use the --dry-run flag (on recent kubectl versions, --dry-run=client) to test the manifest. This is really useful not only to check that the YAML syntax is right for a particular Kubernetes object but also to ensure that the spec has the required key-value pairs.
kubectl create -f <test.yaml> --dry-run=client
Let us now look at an example Pod spec that will launch an nginx pod:
apiVersion: v1
kind: Pod
metadata:
  name: my-nginx
  namespace: mynamespace
spec:
  containers:
  - name: my-nginx
    image: nginx
kubectl create -f example_pod.yaml --dry-run=client
41. How do you initiate a rollback for an application?
Ans: Rollbacks and rolling updates are features of the Deployment object in Kubernetes. We roll back to an earlier Deployment revision if the current state of the Deployment is not stable due to the application code or the configuration. Each rollback bumps the Deployment's revision number.
Check the deployment:
kubectl get deploy
output:
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx 1 1 1 1 15h
Check the Rollout history:
kubectl rollout history deploy nginx
deployment.extensions/nginx
REVISION CHANGE-CAUSE
1 <none>
2 <none>
Rollback the deployment:
kubectl rollout undo deploy nginx
check the status of rollback:
kubectl rollout history deploy nginx
Output:
deployment.extensions/nginx
REVISION CHANGE-CAUSE
2 <none>
3 <none>
42. What are init containers?
Ans: In Kubernetes, a pod can have many containers. Init containers run to completion, in order, before any of the pod's regular containers start. (The old pod.beta.kubernetes.io/init-containers annotation shown in many older tutorials is long deprecated; init containers are now declared under spec.initContainers.)
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  initContainers:
  - name: init-myservice
    image: busybox
    command: ["sh", "-c", "until nslookup myservice; do echo waiting for myservice; sleep 2; done;"]
  containers:
  - name: myapp-container
    image: busybox
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
43. What is node affinity and pod affinity?
Ans:
Node affinity ensures that pods are scheduled onto particular nodes. For example, the pod below can only run on nodes whose kubernetes.io/e2e-az-name label has one of the listed values:
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/e2e-az-name
            operator: In
            values:
            - e2e-az1
            - e2e-az2
Pod affinity co-locates pods. The pod affinity rule below says that the pod can be scheduled to a node only if that node is in the same zone as at least one already-running pod that has a label with key "security" and value "S1" (the zone is identified by topologyKey):
apiVersion: v1
kind: Pod
metadata:
  name: with-pod-affinity
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: security
            operator: In
            values:
            - S1
        topologyKey: topology.kubernetes.io/zone
Troubleshooting Questions:
1. What to do if a pod is stuck in the pending state?
Ans: Check for resource limits, node availability, and any taints or affinity rules that might be preventing the pod from being scheduled.
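Concretely, these checks map to a few kubectl commands (pod, node, and namespace names are placeholders):

```shell
# Why is the pod unscheduled? Read the Events section at the bottom.
kubectl describe pod <pod-name> -n <namespace>

# Recent scheduler events for the whole namespace
kubectl get events -n <namespace> --sort-by=.metadata.creationTimestamp

# Node taints and allocatable resources
kubectl describe node <node-name>
kubectl top nodes   # requires metrics-server
```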
2. How to debug failing deployments in EKS?
Ans: Use kubectl describe and kubectl logs to get detailed information on the deployment and pod status.
3. How to handle high latency issues in EKS?
Ans: Analyze network policies, monitor pod performance, check for resource contention, and use tools like Prometheus and Grafana for detailed metrics.
4. What steps to take if nodes are not joining the EKS cluster?
Ans: Verify the node IAM role, check the kubelet logs, ensure correct VPC and subnet configurations, and inspect security group rules.
5. How to manage storage issues in EKS?
Ans: Use Persistent Volumes (PVs) and Persistent Volume Claims (PVCs), monitor storage usage, and consider using Amazon EFS or EBS for dynamic provisioning.
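As an illustrative sketch — assuming the Amazon EBS CSI driver is installed in the cluster — a gp3 StorageClass plus a PVC for dynamic provisioning might look like this (names are made up):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp3                 # assumed name
provisioner: ebs.csi.aws.com    # the EBS CSI driver's provisioner
parameters:
  type: gp3
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: ebs-gp3
  resources:
    requests:
      storage: 10Gi
```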
6. How can we use AWS IAM roles to manage EKS pods' and services' access to AWS resources?
Ans: First create an IAM role and policy scoped to the AWS service in question. Then map the role's ARN to an EKS service account (this mechanism is called IAM Roles for Service Accounts, IRSA). The pods or services that need the access are configured to use that service account.
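A minimal sketch of the service-account side of IRSA (the role ARN, account ID, and names are placeholders):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-sa
  namespace: default
  annotations:
    # Placeholder ARN: substitute the IAM role you created
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/my-app-role
```

Pods that set spec.serviceAccountName: app-sa then receive temporary credentials for the mapped IAM role via the injected web identity token, so no long-lived AWS keys are needed in the container.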
7. Discuss how to monitor and troubleshoot performance issues in Amazon EKS clusters, including identifying bottlenecks, optimizing resource utilization, and resolving performance degradation.
Ans: Monitoring and troubleshooting performance issues in EKS clusters involves gathering metrics, analyzing logs, and identifying the root cause of the performance degradation. You can use tools like Prometheus and Grafana to collect and visualize metrics, and Kubernetes troubleshooting tools like kubectl and kubeshark to investigate performance issues.
8. What happens when pods die unexpectedly? Does EKS automatically restart them?
Ans: Restart behavior is handled by Kubernetes itself, not by anything EKS-specific. The kubelet restarts failed containers according to the pod's restartPolicy (Always, OnFailure, or Never). If a pod is deleted or its node dies, a controller such as a Deployment, ReplicaSet, or StatefulSet recreates it to maintain the desired replica count; bare pods with no controller are not recreated.
Security Best Practices:
1. How to secure communication in an EKS cluster?
Ans: Use mutual TLS for pod communication, enable encryption for data at rest and in transit, and apply network policies.
Communication within an EKS cluster can be secured through various means:
Network isolation: EKS clusters run within a Virtual Private Cloud (VPC), allowing you to define security groups and network policies to control inbound and outbound traffic.
Transport Layer Security (TLS): You can configure TLS certificates to secure communication between pods and services.
Secrets management: Kubernetes Secrets can be used to store sensitive information securely, such as API keys or database credentials.
2. What is Kubernetes RBAC and how is it used in EKS?
Ans: RBAC (Role-Based Access Control) restricts user permissions within the Kubernetes cluster based on their roles and responsibilities.
3. How to implement secrets management in EKS?
Ans: Use Kubernetes Secrets, AWS Secrets Manager, or HashiCorp Vault to securely manage and access secrets.
4. Best practices for deploying applications in EKS.
Ans: Use version control for manifests, employ CI/CD pipelines, monitor resource usage, use namespaces for separation, and enforce security policies.
5. How to ensure high availability in EKS?
Ans: Deploy applications across multiple Availability Zones, use Cluster Autoscaler, configure appropriate health checks, and implement fault tolerance mechanisms.
6. How to implement security best practices for Amazon EKS clusters, including pod security policies, network policies, and image scanning?
Ans: Security is very important in Kubernetes. You can implement various security best practices. Some of them are as follows:
1. Pod security policies: Enforce resource limits and access control rules for pods to restrict their behavior and prevent unauthorized access.
2. Network policies: Define network policies to control traffic between pods and external resources, preventing unauthorized communication and protecting your cluster from network-based attacks.
3. Image scanning: Use Amazon Inspector or other container image scanning tools to identify vulnerabilities in container images before deployment, reducing the risk of security breaches.
7. Explain how to prepare for and respond to security incidents in Amazon EKS clusters, including incident response plans, security incident and event management (SIEM) tools, and post-incident analysis?
Ans: Security incident response is crucial for protecting EKS clusters from cyberattacks. You should have a well-defined incident response plan, implement SIEM tools to detect and respond to security incidents, and conduct thorough post-incident analysis to learn from the incident and improve your security posture.
8. How does EKS handle high availability?
Ans: Amazon EKS automatically distributes the Kubernetes control plane across multiple Availability Zones (AZs) to ensure high availability. If one AZ becomes unavailable, the control plane automatically fails over to another AZ. Additionally, EKS provides multi-AZ support for worker nodes to distribute them across multiple AZs for increased resilience.
9. How does EKS manage worker nodes?
Ans: EKS manages worker nodes through a Kubernetes feature called the Kubernetes Node Controller. EKS integrates with Amazon EC2 to provision and manage the worker nodes. You can define worker node configurations as Auto Scaling Groups, which allows for automatic scaling based on metrics like CPU utilization or application-specific metrics.
10. How does EKS handle security?
Ans: EKS provides security features such as IAM integration, encryption at rest and in transit, network isolation using Amazon VPC, and support for Amazon VPC security groups. You can also leverage Kubernetes RBAC (Role-Based Access Control) to manage access to the cluster resources.
11. What is the concept of EKS Managed Node Groups?
Ans: EKS Managed Node Groups are a feature of EKS that simplifies the management of worker nodes. With Managed Node Groups, you define the desired number of worker nodes, instance types, and other configurations, and EKS automatically creates and manages the underlying EC2 instances for you.
12. How can you implement fine-grained access control in EKS?
Ans: Fine-grained access control in EKS can be achieved using Kubernetes RBAC (Role-Based Access Control). RBAC allows you to define roles, role bindings, and service accounts to grant or restrict access to specific resources and actions within the cluster.
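A small illustrative example — a namespaced Role that can only read pods, bound to a user (names here are assumed; on EKS the user must also be mapped to the cluster via the aws-auth ConfigMap or access entries):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader            # assumed name
  namespace: dev
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: dev
subjects:
- kind: User
  name: jane                  # assumed user name
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```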
13. What is EKS Pod Identity Webhook and how does it enhance security?
Ans: EKS Pod Identity Webhook is an open-source project that enhances security by enabling workload pod identity integration with AWS Identity and Access Management (IAM) roles for service accounts. It allows you to securely associate IAM roles with Kubernetes service accounts, providing granular access control and reducing the need for long-lived AWS credentials within your applications.
14. How can you enable and configure multi-cluster networking in EKS?
Ans: Multi-cluster networking in EKS can be achieved using the Amazon VPC CNI (Container Network Interface) plugin. By enabling and configuring the VPC CNI plugin, you can create a shared VPC across multiple EKS clusters, allowing pods in different clusters to communicate with each other using their private IP addresses.
15. What is the EKS Pod Security Policy Admission Controller?
Ans: The Pod Security Policy admission controller is a Kubernetes admission controller that enforces security policies for pods running on EKS clusters. It lets you define and enforce policies related to container runtime security, host filesystem access, and other security-related aspects of pod execution. Note that PodSecurityPolicy was deprecated in Kubernetes 1.21 and removed in 1.25, in favor of the built-in Pod Security Standards admission.
Miscellaneous Questions
1. What is the difference between StatefulSet and Deployment in Kubernetes?
Ans: StatefulSet is used for stateful applications, providing unique network IDs and stable storage, while Deployment is used for stateless applications.
2. How does EKS handle persistent storage?
Ans: EKS integrates with AWS storage services like EBS, EFS, and S3 for persistent storage solutions.
3. What are DaemonSets in Kubernetes?
Ans: DaemonSets ensure that a copy of a pod runs on all (or some) nodes in the cluster, often used for logging or monitoring.
DaemonSets in EKS are Kubernetes objects that ensure that a specific pod runs on all or selected nodes in a cluster. They are useful for running system daemons or agents that need to be present on every node, such as log collectors, monitoring agents, or network proxies.
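As a hedged sketch, a DaemonSet for a node-level log collector might look like this (the fluent-bit image and all names are assumptions, not from the original text):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent             # hypothetical log collector
spec:
  selector:
    matchLabels:
      name: log-agent
  template:
    metadata:
      labels:
        name: log-agent
    spec:
      containers:
      - name: agent
        image: fluent/fluent-bit   # assumed image
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log        # read node logs from the host
```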
4. Explain the concept of namespaces in Kubernetes.
Ans: Namespaces provide a way to divide cluster resources between multiple users, allowing for better resource management and access control.
5. What are Helm charts?
Ans: Helm charts are packages of pre-configured Kubernetes resources that simplify the deployment and management of applications on Kubernetes.
6. What are ConfigMaps in Kubernetes?
Ans: ConfigMaps are used to store non-confidential data in key-value pairs that can be consumed by pods at runtime.
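For illustration, a ConfigMap consumed as environment variables (names and values are made up):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config            # assumed name
data:
  LOG_LEVEL: "info"
  DB_HOST: "db.example.internal"
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "env && sleep 3600"]
    envFrom:
    - configMapRef:
        name: app-config      # every key becomes an env var
```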
7. How does Kubernetes handle rolling updates?
Ans: Kubernetes uses rolling updates to incrementally update pods with new versions of an application, ensuring zero downtime.
8. What is a Kubernetes Operator?
Ans: An Operator is a method of packaging, deploying, and managing a Kubernetes application, leveraging custom resources to automate the management of complex applications.
9. Explain the concept of Service Mesh in Kubernetes.
Ans: A service mesh is a dedicated infrastructure layer that handles service-to-service communication, often using tools like Istio or Linkerd to provide observability, traffic management, and security.
10. How does Kubernetes handle load balancing?
Ans: Kubernetes uses Services to distribute traffic among pods, with support for different types of load balancing such as ClusterIP, NodePort, and LoadBalancer.
11. What is a Kubernetes Job?
Ans: A Job is a Kubernetes resource that creates one or more pods and ensures a specified number of them successfully terminate, often used for batch processing tasks.
12. Explain Pod Disruption Budgets (PDBs).
Ans: PDBs specify the minimum number of pods that must be available during voluntary disruptions, helping to maintain application availability.
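A minimal PDB sketch (the app label and threshold are assumptions):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: myapp-pdb
spec:
  minAvailable: 2             # keep at least 2 pods up during voluntary disruptions
  selector:
    matchLabels:
      app: myapp              # assumed label
```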
13. How does Kubernetes handle node failures?
Ans: Kubernetes detects node failures through health checks and evicts pods from failed nodes, rescheduling them on healthy nodes.
14. What is the Horizontal Pod Autoscaler (HPA)?
Ans: HPA automatically scales the number of pod replicas based on observed CPU/memory usage or custom metrics.
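A minimal HPA sketch targeting 70% average CPU (the Deployment name and thresholds are assumptions):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp               # assumed Deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```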
15. How to implement blue/green deployments in Kubernetes?
Ans: Deploy the new version of the application alongside the old one, gradually switch traffic to the new version using a Service or Ingress, and monitor for issues.
Blue-green deployments in EKS can be achieved using Kubernetes concepts such as Deployments and Services. You can create two sets of deployments and services—one representing the blue environment and the other representing the green environment—and use load balancers to switch traffic between the two environments.
16. Explain Kubernetes’ pod lifecycle.
Ans: A pod’s lifecycle includes phases like Pending, Running, Succeeded, Failed, and Unknown, representing its current state.
17. How to manage service discovery in Kubernetes?
Ans: Use Kubernetes Services (ClusterIP, NodePort, LoadBalancer) and DNS for internal service discovery, and Ingress for external access.
18. What are Kubernetes annotations?
Ans: Annotations are key-value pairs attached to objects, providing metadata that can be used by tools and libraries to augment the behavior of the Kubernetes API.
19. How to handle pod affinity and anti-affinity?
Ans: Use affinity and anti-affinity rules to specify pod placement preferences, ensuring certain pods are co-located or spread across different nodes.
20. What is the role of kubelet in Kubernetes?
Ans: Kubelet is an agent that runs on each node, responsible for ensuring containers are running in pods, communicating with the control plane.
21. How does Amazon EKS handle the Kubernetes control plane?
Ans: Amazon EKS fully manages the Kubernetes control plane, which includes the API server, scheduler, and other control plane components. AWS takes care of the updates, patches, and high availability of the control plane, allowing you to focus on deploying and managing your applications.
Expert Level Questions:
1. What is the EKS control plane and how is it managed?
Ans: The EKS control plane consists of the Kubernetes API server and other core components, managed by AWS for high availability and scalability.
2. Explain the use of taints and tolerations in Kubernetes.
Ans: Taints and tolerations allow you to control which pods can be scheduled on specific nodes, providing a mechanism to prevent certain pods from running on unsuitable nodes.
3. How does EKS handle IAM roles for service accounts?
Ans: EKS allows you to assign IAM roles to Kubernetes service accounts, enabling fine-grained permissions for AWS resources accessed by your applications.
4. What is kube-proxy and what role does it play in Kubernetes?
Ans: Kube-proxy is a network proxy that runs on each node, ensuring network rules are correctly implemented to enable communication within the cluster.
5. How to implement a multi-cluster EKS architecture?
Ans: Use tools like Kubernetes Federation or AWS App Mesh to manage and route traffic between multiple EKS clusters.
6. What is etcd and its role in Kubernetes?
Ans: etcd is a distributed key-value store used by Kubernetes to store all cluster data, ensuring consistency and high availability.
7. How to manage Kubernetes secrets securely?
Ans: Use Kubernetes Secrets, encrypt secrets at rest, restrict access using RBAC, and consider external secret management tools like AWS Secrets Manager or HashiCorp Vault.
8. What is a Kubernetes Ingress?
Ans: Ingress is an API object that manages external access to services in a Kubernetes cluster, typically providing load balancing, SSL termination, and name-based virtual hosting.
Kubernetes Ingress is an API object that manages external access to services within a cluster. In EKS, you can use the Kubernetes Ingress resource to configure and manage HTTP and HTTPS routing to services running in your EKS cluster.
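As a sketch — assuming the AWS Load Balancer Controller is installed, which provisions an ALB for Ingress resources on EKS — an Ingress might look like this (the hostname and service name are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
  - host: app.example.com     # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp       # assumed Service name
            port:
              number: 80
```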
9. How to implement disaster recovery for an EKS cluster?
Ans: Regularly back up etcd, use multi-region deployments, implement data replication, and create automated failover mechanisms.
10. How does Kubernetes handle service discovery?
Ans: Kubernetes uses DNS-based service discovery through CoreDNS, allowing pods to communicate with each other using service names.
11. What is the purpose of a Kubernetes scheduler?
Ans: The scheduler assigns pods to nodes based on resource requirements, node capacity, and other constraints and policies.
12. How does Kubernetes manage updates to persistent storage?
Ans: Kubernetes manages persistent storage updates through StatefulSets and Persistent Volume Claims, ensuring data consistency and availability.
13. What are Kubernetes taints and tolerations?
Ans: Taints allow nodes to repel certain pods, while tolerations enable pods to be scheduled on nodes with matching taints.
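For instance, after tainting a node with kubectl taint nodes node1 dedicated=gpu:NoSchedule (the node name and key here are made up), only pods carrying a matching toleration can be scheduled onto it:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  tolerations:
  - key: dedicated            # must match the taint's key/value/effect
    operator: Equal
    value: gpu
    effect: NoSchedule
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
```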
14. Explain the Kubernetes control plane components.
Ans: The control plane consists of etcd, kube-apiserver, kube-controller-manager, kube-scheduler, and cloud-controller-manager, which collectively manage the state and operations of the cluster.
15. How to implement network segmentation in Kubernetes?
Ans: Use Network Policies to define rules for pod communication, segmenting traffic based on namespaces, labels, and other criteria.
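An illustrative policy that only lets frontend pods reach backend pods on port 8080 (labels and namespace are assumptions; note that enforcing NetworkPolicy on EKS requires a policy engine such as Calico or the VPC CNI's network policy support):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-only
  namespace: prod             # assumed namespace
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes: ["Ingress"]
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
```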
16. Why can't we just use Fargate to run all pods, instead of worker nodes, on an AWS EKS cluster?
Ans: There are several reasons why a Fargate profile alone may not cover every application workload:
1. Fargate should not be used for stateful pods. If you need to write to a PVC and care about keeping that data around, you'll need a Node Group.
2. Fargate cannot run DaemonSets.
3. Fargate is not available everywhere EKS is. For instance, EKS is available in GovCloud East/West but Fargate is not.
4. Kubernetes upgrades (to the kubelet on the worker) cannot be done unless the pod is recreated.
17. How do I choose whether a pod goes to Fargate or a Node Group if I have both?
Ans: If an EKS cluster has both Fargate Profiles and Node Groups, the Fargate Profile is evaluated first. If the pod's namespace matches one listed in the Fargate Profile, it will wind up on a Fargate worker; otherwise it will be created on a Node Group worker.
18. What happens if I’m only using Fargate but deploy a pod to a namespace not in the Fargate Profile?
Ans: Your pod will be stuck in a pending state. Forever.
19. Can I make all pods always go to Fargate regardless of the namespace?
Ans: No. You have to define the list of namespaces in which you want to leverage Fargate in the Fargate Profile.
20. Can I use my own AMI to create the worker nodes in an AWS EKS cluster?
Ans:
For the Control Plane – No, this is controlled by AWS
For Fargate workers – No, this is controlled by AWS
For Node Group workers – Yes, but a complicated yes. When you create a Node Group via the AWS Console, the default behavior is not to use Launch Templates, so you only have the 3 AMI options (Linux x64, Linux x64+GPU, Linux ARM). If you opt to use Launch Templates, you can select from your own full list of AMIs. Launch Templates will also allow you to leverage spot instances, IAM instance profiles, tenancy, and a dozen other configurations. If you use Launch Templates, you'll need to maintain them going forward.
21. Can I deploy EKS to a Dedicated VPC?
Ans: No
22. Can I run my own CNI like Calico?
Ans: On Fargate, no.
On Node Groups, yes, but you'll need to create and manage your own AMIs. Using a CNI other than the Amazon VPC CNI plug-in may also mean you are responsible for debugging any pod networking issues that crop up.
23. How do I upgrade Kubernetes on EKS?
Ans: Upgrading Kubernetes is done in multiple steps. You start by upgrading the control plane first. In the AWS Console, this is done by clicking the big blue Update now button on the Clusters page.
One gotcha is that you cannot upgrade two or more minor versions at a time. If your control plane is at 1.17 and you have any Node Groups or Fargate workers running 1.16, you have to upgrade those to 1.17 before upgrading the control plane to 1.18.
24. How do I upgrade Kubernetes on Node Groups?
Ans: Once the control plane is upgraded to a newer version, the Node Groups can be updated.
If you aren't using your own Launch Templates, you'll be prompted in the AWS Console under EKS > Pick your cluster > Compute > Node Groups > AMI release version when an AMI with a newer kubelet version is available. Clicking Update now will result in:
1. New workers being created using the newer AMI.
2. Once all the new worker nodes are healthy, the older nodes being drained, with their status changing to Ready,SchedulingDisabled.
3. After the pods are moved off the old workers, the underlying EC2 instances being terminated, completing the upgrade.
If you are maintaining your own AMIs, you'll need to build AMIs with the newer kubelet version. It may be easiest to just create a new Node Group with the new AMI, then delete the old Node Group once the new nodes are healthy.
25. How do I upgrade Kubernetes on Fargate?
Ans: Once the control plane has been upgraded to the newer version, any newly created Fargate pods will run on instances whose AMI matches the control plane's version.
There is no automation for upgrading existing pods; they need to be destroyed and recreated. To do this without downtime for the app itself, leverage Kubernetes Deployments, which can perform rolling upgrades.
26. How are the kubelet certificates rotated?
Ans: This depends on which nodes we are looking at:
Control plane nodes – Certificate rotation for the master nodes is managed by AWS.
Fargate nodes – This is baked into the AMI; recreating the pod results in a new EC2 instance with a newer kubelet and certificate.
Node Group nodes – The kubelet certificate is configured to rotate in /etc/kubernetes/kubelet/kubelet-config.json, which is set up in the AMI.
27. Can Fargate nodes exist in a private subnet?
Ans: Yes, also Node Groups can be either public or private subnets. You may want to leverage more than 1 Node Group and add labels to keep some apps private versus public facing.
28. Do Node Group VMs cost more than EC2 VMs of the same instance type?
Ans: No; whatever the cost of the EC2 instance is, that is what you pay.
29. Is the version of Kubernetes automatically upgraded for me when new versions come out?
Ans: No, an admin needs to upgrade the control plane and then any workers. AWS will publish new AMIs, but the admin is required to consume these upstream changes.
30. Can I use something other than the AWS Console to spin up clusters?
Ans: Yes. There is an excellent CLI tool called eksctl which you can use to completely manage the bootstrap and lifecycle of the deployment.
31. What problems arise from using only the default Kubernetes namespace?
Ans: When everything runs in the default namespace, it becomes very difficult over time to manage all the applications, and in particular to keep development, testing, and production applications apart.
Namespaces can also be used for blue-green deployments, where each namespace holds a different version of the app.
32. What is the meaning of "pods are ephemeral"?
Ans: It means pods are not designed to run forever; when a pod is terminated, that exact pod cannot be brought back. A controller creates a replacement pod instead.
33. What happens when the Kubernetes master node fails?
Ans: Kubernetes is designed to be resilient. When the master fails, the worker nodes of the cluster keep operating, but no changes can be made — including pod creation or service membership changes — until the master is available again.
34. What happens when a worker node fails?
Ans: When a worker node fails, the master stops receiving status updates from it and marks the node NotReady. If a node stays NotReady for 5 minutes (the default eviction timeout), the master reschedules all the pods that were running on the dead node onto other available nodes.
35. When should you use stateful pods?
Ans: For example, a Redis pod that has access to a volume and must retain access to that same volume even if it is redeployed or restarted. In cases like that, stateful pods (StatefulSets) are very important.
Performance and Optimization Questions
1. Optimizing resource allocation in an EKS cluster.
Ans: Use ResourceQuotas and LimitRanges, monitor resource usage, and employ Horizontal Pod Autoscaler and Vertical Pod Autoscaler.
2. How to handle cluster upgrades with minimal downtime?
Ans: Perform rolling updates for the control plane and node groups, ensure readiness probes and liveness probes are configured, and use blue/green deployments for critical applications.
3. Best practices for managing large-scale EKS clusters.
Ans: Use multiple namespaces for resource isolation, implement proper monitoring and logging, automate with CI/CD pipelines, and regularly review and optimize resource allocations.
4. How to reduce costs while running EKS?
Ans: Use spot instances for non-critical workloads, right-size instances, implement auto-scaling, and regularly review and optimize resource usage.
5. Improving pod startup times in EKS.
Ans: Optimize container images, use node local DNS cache, ensure efficient networking setup, and minimize pod initialization steps.
6. How to optimize container image size?
Ans: Use minimal base images, multi-stage builds, remove unnecessary dependencies, and regularly clean up unused layers.
7. What are Kubernetes Admission Controllers?
Ans: Admission Controllers are plugins that govern and enforce policies on objects during their creation, modification, or deletion in the cluster.
8. How to handle logging in a Kubernetes cluster?
Ans: Use a centralized logging solution like Fluentd or Logstash to collect logs from all pods and send them to a storage backend like Elasticsearch.
9. What is Kubernetes Federation?
Ans: Kubernetes Federation allows you to manage multiple Kubernetes clusters as a single entity, enabling centralized control and configuration.
10. How to use service accounts in Kubernetes?
Ans: Service accounts provide an identity for processes running in a pod, enabling fine-grained access control to Kubernetes API resources.
11. What is the Vertical Pod Autoscaler (VPA)?
Ans: VPA automatically adjusts the CPU and memory requests and limits for pods based on their actual usage, optimizing resource allocation.
12. How to manage Kubernetes resources with quotas?
Ans: Use ResourceQuotas to set limits on the number of resources (e.g., CPU, memory, pods) that can be used within a namespace.
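A sketch of a per-namespace quota (the namespace and limits are made up):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: team-a           # assumed namespace
spec:
  hard:
    pods: "20"
    requests.cpu: "8"
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
```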
13. What is a Kubernetes StatefulSet?
Ans: StatefulSets manage the deployment and scaling of stateful applications, ensuring stable network identities and persistent storage.
14. How to ensure compliance in a Kubernetes environment?
Ans: Implement RBAC, Network Policies, encryption, auditing, and use compliance tools like Open Policy Agent (OPA) to enforce policies.
15. How to use Kubernetes ConfigMaps and Secrets?
Ans: ConfigMaps store non-confidential configuration data, while Secrets store sensitive information. Both can be mounted as volumes or exposed as environment variables in pods.
16. How can you scale an EKS cluster?
Ans: You can scale an EKS cluster by adjusting the number of worker nodes in the associated Auto Scaling Group. By modifying the desired capacity of the Auto Scaling Group, EKS automatically adjusts the number of worker nodes in the cluster.
17. How does EKS handle updates and patches?
Ans: EKS automatically applies platform updates and security patches to the managed control plane, rolling them out across multiple AZs to maintain availability. Minor Kubernetes version upgrades, however, are not automatic; an administrator must initiate them for the control plane and then the workers.
18. Can you integrate EKS with other AWS services?
Ans: Yes, EKS can be integrated with various AWS services. For example, you can use AWS Identity and Access Management (IAM) for access control, Amazon Elastic Container Registry (ECR) for container image storage, Amazon CloudWatch for monitoring, and AWS App Mesh for service mesh capabilities, among others.
19. How can you perform rolling updates and rollbacks in EKS?
Ans: Rolling updates and rollbacks in EKS can be achieved by updating the Deployment resource with a new container image or configuration. Kubernetes will automatically manage the process of rolling out the update to the pods in a controlled manner. If an issue occurs, you can roll back to a previous version using the deployment’s revision history.