Q1. What is Kubernetes?
Ans: It is an open-source platform designed to automate deploying, scaling, and managing containerized applications. Kubernetes automates the deployment, scaling, and management of containerized applications, allowing for efficient resource utilization and application scaling.
Q2. What is the difference between Docker Swarm and Kubernetes?
Ans:
Sr No | Parameter | Docker Swarm | Kubernetes |
---|---|---|---|
1 | Orchestration | Docker Swarm is Docker’s native orchestration tool. | Kubernetes is a comprehensive, open-source platform for container orchestration, initially developed by Google. |
2 | Architecture | Follows a simpler architecture with a single manager node and worker nodes. | Features a more complex architecture with multiple master and worker nodes, providing higher fault tolerance and scalability. |
3 | Scalability | Generally suitable for smaller-scale deployments and may face limitations with larger clusters. | Highly scalable and designed to handle massive clusters, making it suitable for enterprise-grade deployments. |
4 | Networking | Provides basic networking features such as overlay networks and service discovery. | Offers advanced networking capabilities including DNS-based service discovery, load balancing, and support for various network plugins. |
5 | Service Discovery | Offers basic service discovery using DNS names. | Provides advanced service discovery mechanisms using DNS, labels, selectors, and other metadata. |
6 | Load Balancing | Docker Swarm does automatic load balancing of traffic between containers in a cluster. | Offers sophisticated load balancing features, including built-in and external load balancers. |
7 | High Availability | Offers limited high availability features. | Built-in high availability and fault tolerance mechanisms ensure service reliability even in the event of node failures. |
8 | Self-healing | Offers basic self-healing capabilities such as restarting failed containers. | Provides robust self-healing capabilities, automatically restarting failed containers and rescheduling them to healthy nodes. |
9 | Rolling Updates | Supports rolling updates but does not support automatic rollbacks. | Kubernetes supports both rolling updates and automatic rollbacks. |
10 | Extensibility | Relatively limited extensibility. | Highly extensible with a rich ecosystem of plugins, extensions, and custom resources. |
11 | Community Support | Has an active community but may not be as extensive as Kubernetes. | Boasts a large and vibrant community with extensive documentation, support forums, and contributions from various organizations. |
12 | Autoscaling | Docker Swarm does not support autoscaling natively. | Kubernetes supports autoscaling, e.g., via the Horizontal Pod Autoscaler. |
13 | Logging and Monitoring | Requires third-party tools such as the ELK stack for logging and monitoring. | Kubernetes provides integrated building blocks for logging and monitoring. |
Q.3 What is kube-proxy in kubernetes?
Ans: Kube-proxy is a networking component in Kubernetes responsible for managing network communication between services within a Kubernetes cluster. It runs on each node in the cluster and maintains network rules to enable communication between different pods and services.
Q4. What is kubectl?
Ans: kubectl is a command-line interface (CLI) tool used to interact with Kubernetes clusters. It allows users to perform various operations on Kubernetes resources, such as pods, services, deployments, and more. kubectl is a powerful tool that enables developers, administrators, and operators to manage Kubernetes clusters from the command line.
Q5. What is kubelet?
Ans: The kubelet is the node agent responsible for managing individual nodes and ensuring that containers are running as expected.
It runs on each node in the cluster and communicates with the Kubernetes API server to receive instructions about which containers to run and manage.
Q6. What is a Headless service?
Ans: A headless service is a type of Kubernetes service that does not have a cluster IP assigned to it. Because there is no virtual IP, a headless service does not provide load balancing or service discovery through a single cluster address.
Instead, when you create a headless service, Kubernetes does not assign a cluster IP to it, and DNS resolution for the service returns multiple DNS A records, each corresponding to the IP address of an individual pod backing the service.
This means that each pod associated with the headless service has its own DNS record, allowing clients to directly connect to individual pods without going through a load balancer.
Headless services are useful in scenarios where you need direct access to individual pods, such as stateful applications where each pod represents a unique instance or when you require multicast or broadcast-like behavior for service discovery.
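As an illustrative sketch, a headless Service is an ordinary Service with clusterIP set to None. The name my-headless, the app: web label, and the port numbers below are assumptions, not from the source:

```yaml
# Hypothetical headless Service: DNS returns one A record per backing pod.
apiVersion: v1
kind: Service
metadata:
  name: my-headless        # illustrative name
spec:
  clusterIP: None          # this is what makes the service "headless"
  selector:
    app: web               # matches pods labeled app=web
  ports:
    - port: 80
      targetPort: 8080
```

A DNS lookup of my-headless then resolves directly to the pod IPs rather than to a single virtual IP.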
Q7. What do you understand by Cloud controller manager?
Ans: The Cloud Controller Manager (CCM) is a component of the Kubernetes control plane responsible for managing cloud-specific integrations and functionalities within a Kubernetes cluster.
It acts as an intermediary between the Kubernetes control plane and the underlying cloud provider’s APIs, abstracting away the complexities of interacting with cloud infrastructure.
The Cloud Controller Manager is responsible for persistent storage, network routing, abstracting the cloud-specific code from the core Kubernetes specific code, and managing the communication with the underlying cloud services. It might be split out into several different containers depending on which cloud platform you are running on and then it enables the cloud vendors and Kubernetes code to be developed without any inter-dependency. So, the cloud vendor develops their code and connects with the Kubernetes cloud-controller-manager while running the Kubernetes.
- Node Controller: Checks with the cloud provider to confirm that a node has actually been deleted after it stops responding.
- Route Controller: Manages the traffic routes in the underlying cloud infrastructure.
- Volume Controller: Manages storage volumes and interacts with the cloud provider to orchestrate volume creation, attachment, and mounting.
- Service Controller: Responsible for managing cloud provider load balancers.
Q8. What is the Ingress network, and how does it work?
Ans: Ingress is a resource that manages external access to services within a cluster. It provides HTTP and HTTPS routing capabilities to route incoming traffic to different services based on hostnames, paths, or other criteria. It acts as a layer 7 (application layer) load balancer for HTTP and HTTPS traffic.
Ingress network is a collection of rules that acts as an entry point to the Kubernetes cluster. This allows inbound connections, which can be configured to give services externally through reachable URLs, load balance traffic, or by offering name-based virtual hosting. So, Ingress is an API object that manages external access to the services in a cluster, usually by HTTP and is the most powerful way of exposing service.
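As a sketch of such a rule set, the Ingress below routes traffic for one hostname and path to a backing Service. The host app.example.com and the Service name api-service are illustrative assumptions:

```yaml
# Hypothetical Ingress routing by host and path.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: app.example.com      # illustrative hostname
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-service   # illustrative Service name
                port:
                  number: 80
```

An Ingress controller (e.g., NGINX Ingress Controller) must be installed in the cluster for these rules to take effect.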
Q8. What is the LoadBalancer in Kubernetes?
Ans: LoadBalancer is a type of service that provides external access to services running within a cluster by exposing them to the external network. It enables traffic from outside the Kubernetes cluster to reach the services running inside the cluster, typically by provisioning a load balancer from the underlying cloud provider.
Q9. What is NodePort Service in Kubernetes?
Ans: NodePort is a type of Kubernetes service that exposes a service on a static port on each node (worker node or servers) in the cluster. It allows external traffic to reach services running within the Kubernetes cluster by mapping a port on the node’s IP address to a port on the service.
It is commonly used when you need to expose a service externally but don’t want to use an external load balancer, or when you’re testing and debugging services in development environments.
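A minimal NodePort sketch, with illustrative names and ports (not from the source):

```yaml
# Hypothetical NodePort Service: reachable at <any-node-ip>:30080.
apiVersion: v1
kind: Service
metadata:
  name: my-nodeport
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - port: 80          # cluster-internal port
      targetPort: 8080  # container port
      nodePort: 30080   # static port on every node (default range 30000-32767)
```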
Q10. What is ClusterIP service in kubernetes?
Ans: ClusterIP is a type of Kubernetes service that exposes a service on an internal, cluster-local IP address. It allows communication between pods and services within the Kubernetes cluster without exposing the service to the external network. ClusterIP services are accessible only from within the cluster and are not reachable from outside the cluster.
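A minimal ClusterIP sketch; the service name and the app: backend label are illustrative assumptions:

```yaml
# Hypothetical ClusterIP Service, reachable only from inside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: my-clusterip
spec:
  type: ClusterIP      # default type; could be omitted
  selector:
    app: backend
  ports:
    - port: 5432
      targetPort: 5432
```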
Q11. What are the different services within Kubernetes?
Ans:
1. Cluster IP:
- Exposes the service on a cluster-internal IP address.
- Service is only reachable from within the cluster.
- Default service type if none is specified.
- Used for internal communication between pods and services within the cluster.
2. NodePort Service:
- Exposes the service on a static port on each node in the cluster.
- Makes the service accessible from outside the cluster by opening a specific port on each node.
- Routes incoming traffic on the node’s IP address and the static port to the service.
- Suitable for development and testing, but not recommended for production use due to security concerns.
3. Load Balancer Service:
- Provides a dedicated external load balancer for the service.
- Integrates with the cloud provider’s load balancing solution to distribute traffic across multiple pods in the service.
- Automatically assigns an external IP address to the service, allowing it to be accessed from outside the cluster.
- Suitable for production use when you need to expose services to external clients or users.
4. External Name Service:
- Maps the service to an external DNS name.
- Redirects requests for the service to the specified DNS name.
- Useful for integrating with external services or legacy systems that are not part of the Kubernetes cluster.
5. Headless Service:
- Configured with clusterIP: None.
- Disables cluster IP allocation for the service.
- Allows direct access to individual pods using DNS, without load balancing or service discovery.
- Suitable for stateful applications or services that require direct access to individual pods.
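For instance, an ExternalName Service (type 4 above) can be sketched as follows; the names external-db and db.example.com are illustrative assumptions:

```yaml
# Hypothetical ExternalName Service: cluster DNS returns a CNAME
# pointing at an external hostname instead of proxying traffic.
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: db.example.com   # external DNS target outside the cluster
```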
Q12. What is etcd?
Ans: Etcd is a distributed key-value store that is widely used in distributed systems and serves as a core component in Kubernetes for storing cluster data. It provides a reliable way to store configuration data, state information, and metadata in a distributed and highly available manner.
Q13. What is the name of the initial namespaces from which Kubernetes starts?
Ans: There are three namespaces created by default in a Kubernetes cluster (newer versions also create a fourth, kube-node-lease, used for node heartbeats):
- Default Namespace
- Kube-system Namespace
- Kube-public Namespace
1. default: This is the default namespace where Kubernetes resources are created if no namespace is specified. Most resources end up here unless a namespace is explicitly mentioned, so it serves as the fall-back.
2. kube-system: This namespace is reserved for Kubernetes system components and infrastructure. It contains essential system resources such as kube-dns, kube-proxy, kube-scheduler, kube-controller-manager, and other core components.
3. kube-public: This namespace is created automatically and is readable by all users (including those not authenticated). It is commonly used to store resources that are accessible to all users, such as cluster-wide information or resources that need to be publicly visible.
Q.14 What is a Namespace in Kubernetes?
Ans: Namespace is a virtual cluster environment that provides a way to divide and isolate resources within a Kubernetes cluster. It allows multiple users, teams, or projects to share the same physical cluster while maintaining logical separation and access control over their resources.
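A namespace is itself just a resource; a minimal sketch (the name team-a is an illustrative assumption):

```yaml
# Hypothetical Namespace for isolating one team's resources.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
```

Resources created for that team would then set metadata.namespace: team-a so they land inside the partition.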
Q.15 What is Heapster?
Ans: Heapster was a component in the Kubernetes ecosystem responsible for collecting, aggregating, and exporting cluster resource usage metrics such as CPU, memory, and network usage. It provided valuable insights into the resource utilization of containers and pods running within a Kubernetes cluster.
Q.16 What are Daemonsets?
Ans: DaemonSets are a type of Kubernetes controller that ensures that a specific pod runs on each node in the cluster. They are used to deploy system daemons or background services that need to be running on every node in the Kubernetes cluster.
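A DaemonSet can be sketched as below; the name node-log-agent and the image example/log-agent:1.0 are hypothetical placeholders, not real components:

```yaml
# Hypothetical DaemonSet: one agent pod per node in the cluster.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-log-agent
spec:
  selector:
    matchLabels:
      app: node-log-agent
  template:
    metadata:
      labels:
        app: node-log-agent      # must match the selector above
    spec:
      containers:
        - name: agent
          image: example/log-agent:1.0   # hypothetical image
```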
Q.17 What is the job of Kube-scheduler?
Ans: The kube-scheduler decides which nodes in the cluster should run newly created pods. Its primary job is to ensure efficient resource utilization, maintain high availability, and distribute workload across the cluster.
Q.17 What is a pod in Kubernetes?
Ans: A pod is the smallest and most basic unit of deployment. It represents a single instance of a running process in your cluster. A pod can contain one or more containers, which are tightly coupled and share the same network namespace, storage, and other resources.
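A minimal two-container pod sketch (pod name and sidecar command are illustrative; the two containers share one network namespace and can reach each other over localhost):

```yaml
# Hypothetical pod with a main container and a sidecar.
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
  labels:
    app: web
spec:
  containers:
    - name: web
      image: nginx:1.25          # main container
    - name: sidecar
      image: busybox:1.36        # second container sharing network and storage
      command: ["sh", "-c", "sleep 3600"]
```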
Q.18 What is a node in Kubernetes?
Ans: A node is a physical or virtual machine that serves as a worker machine in a cluster. Each node is managed by the control plane and has the necessary components for running pods, which are the basic building blocks of Kubernetes applications.
For example, if you use an AWS EKS cluster as a managed Kubernetes service, you can use EC2 servers as the worker nodes for the EKS cluster.
Q.19 What is Kube-api server in kubernetes?
Ans: The kube-apiserver is a core component of the Kubernetes control plane. It acts as the front-end for the control plane and provides the Kubernetes API, which is used by both the Kubernetes components and users to interact with the cluster.
Q.20 What are the main components of Kubernetes architecture?
Ans: Below are the main components of kubernetes cluster:
Master Node Components:
- kube-apiserver: This component exposes the Kubernetes API and serves as the front-end for the Kubernetes control plane.
- etcd: It is a distributed key-value store that stores the cluster’s configuration data, state, and metadata.
- kube-scheduler: This component is responsible for scheduling pods onto nodes in the cluster based on resource requirements and other constraints.
- kube-controller-manager: It runs various controllers that handle different aspects of the cluster’s operation, such as node management, replication control, and endpoint reconciliation.
- cloud-controller-manager (optional): This component interacts with the underlying cloud provider’s APIs to manage cloud-specific resources like load balancers, storage volumes, and virtual machines.
Worker Node Components:
- kubelet: This is an agent that runs on each node and is responsible for managing the containers and pods on the node, including starting, stopping, and monitoring them
- kube-proxy: It is a network proxy that runs on each node and maintains network rules to enable communication between pods and external network resources.
Networking Components:
- Container Network Interface (CNI): Kubernetes uses a pluggable networking model, and CNI plugins are responsible for setting up the network connectivity between pods across the cluster.
- Pod Network: This is the network overlay that connects pods across different nodes in the cluster, enabling them to communicate with each other.
Add-ons:
- DNS: Kubernetes typically includes a DNS add-on that provides DNS-based service discovery within the cluster.
- Dashboard: A web-based user interface for interacting with and managing the Kubernetes cluster.
- Ingress Controller: An optional component that manages incoming traffic to the cluster, typically providing HTTP and HTTPS routing capabilities.
Q21. How are Kubernetes and Docker linked together?
Ans: Docker builds the containers for our application, and a container orchestration tool like Kubernetes manages those containers, including how they are deployed, scaled, and how they communicate with each other.
Q.22 Can you explain what container orchestration is?
Ans: Container orchestration is the process of automating the deployment, management, scaling, and networking of containerized applications. It involves coordinating the deployment and operation of multiple containers across a distributed environment to ensure that applications run reliably and efficiently.
Container orchestration platforms provide tools and features to simplify the management of containerized applications and infrastructure.
It does the following things:
- Deployment
- Scaling
- Service Discovery
- Networking
- Health Monitoring
- Rolling Updates
- Resource Management
Q.23 What are the kubernetes features?
Ans: Kubernetes has a wide range of features that enable users to deploy, manage, and scale containerized applications efficiently.
- Container Orchestration: Kubernetes automates the deployment, scaling, and management of containerized applications, ensuring that they run reliably and efficiently across a distributed environment.
- Service Discovery and Load Balancing: Kubernetes provides built-in service discovery and load balancing mechanisms to route traffic to the appropriate pods within the cluster. It manages load balancing, DNS resolution, and service endpoints to ensure high availability and fault tolerance.
- Automatic Scaling: Kubernetes supports automatic scaling of applications based on resource utilization metrics such as CPU and memory usage. It can automatically adjust the number of pod replicas to handle changes in workload demand.
- Self-Healing: Kubernetes monitors the health and status of pods and nodes within the cluster and automatically restarts or reschedules pods that fail or become unhealthy. It ensures that applications remain available and responsive even in the event of failures.
- Rolling Updates and Rollbacks: Kubernetes supports rolling updates and rollbacks of application deployments, allowing users to update or revert changes to applications without causing downtime. It performs rolling updates in a controlled manner, gradually replacing old pods with new ones.
- Persistent Storage: Kubernetes provides support for persistent storage volumes, allowing applications to store and access data persistently. It supports various storage backends and volume types, including local storage, network-attached storage (NAS), and cloud storage providers.
- Secrets and ConfigMaps: Kubernetes allows users to manage sensitive information such as passwords, API keys, and configuration data securely using Secrets and ConfigMaps. It provides a centralized mechanism for storing and accessing configuration data, ensuring that it is encrypted and accessible only to authorized users.
- Resource Quotas and Limits: Kubernetes enables administrators to define resource quotas and limits for namespaces, pods, and containers, ensuring efficient resource utilization and preventing resource contention. It helps in optimizing resource allocation and maintaining performance and stability.
- Multi-Tenancy: Kubernetes supports multi-tenancy, allowing multiple users, teams, or projects to share the same cluster while maintaining logical separation and access control over their resources. It provides namespaces, RBAC, and network policies for isolating and securing workloads.
- Extensibility: Kubernetes is highly extensible and customizable, allowing users to integrate with third-party plugins, controllers, and extensions to extend its functionality. It provides APIs, custom resources, and extension points for integrating with external systems and services.
- Observability: Kubernetes provides built-in monitoring, logging, and tracing capabilities for observing and troubleshooting applications running in the cluster. It integrates with monitoring and logging solutions such as Prometheus, Grafana, Elasticsearch, and Fluentd to collect and analyze metrics, logs, and events.
- Networking: Kubernetes manages networking between pods and services within the cluster, providing virtual networks, overlay networks, and network policies to facilitate communication and isolation between workloads. It supports various network plugins and CNI (Container Networking Interface) implementations for integrating with different network environments.
- Security: Kubernetes provides a range of security features and best practices for securing containerized applications and infrastructure. It supports role-based access control (RBAC), pod security policies, network policies, and encryption to ensure that applications and data are protected against unauthorized access and attacks.
- High Availability: Kubernetes is designed for high availability, with built-in features such as pod replication, node redundancy, and distributed architecture. It ensures that applications remain available and responsive even in the event of node failures or network partitions.
Q.24 What is a Cluster in Kubernetes?
Ans: A cluster refers to a set of physical or virtual machines (nodes) that are grouped together to run containerized applications and services.
Q.25 List different types of controllers in kubernetes?
Ans: Below are the controllers used in k8s:
- Node Controller
- Replication controller
- Service Account and token controller
- Endpoint controller
- Namespace Controller
Q.26 What is init Containers in kubernetes?
Ans: An init container is a special type of container that runs before the main containers in a pod start. Init containers are primarily used to perform initialization tasks, setup operations, or preconditions before the application containers in the pod start running.
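An init-container sketch; the pod name, the db-service hostname being waited on, and the images used are illustrative assumptions:

```yaml
# Hypothetical pod whose init container blocks until a dependency resolves.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-init
spec:
  initContainers:
    - name: wait-for-db            # runs to completion before main containers start
      image: busybox:1.36
      command: ["sh", "-c", "until nslookup db-service; do sleep 2; done"]
  containers:
    - name: app
      image: nginx:1.25
```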
Q.27 What is difference between replicaset and replication controller?
Ans: Both provide the same core functionality and differ mainly in the selectors they use to identify pods.
The replication controller allows us to create multiple pods easily, and if a pod crashes it is replaced with a new pod. It can scale the number of pods and update or delete multiple pods with a single command.
The ReplicaSet is the same as the replication controller except that it offers more options for selectors: in addition to equality-based selectors, it supports set-based selectors to manage the pods.
Q.28 Which selector does replicaset use?
Ans: ReplicaSet uses a label selector to identify and manage pods, ensuring that the correct number of replica pods are running to meet the desired state defined by the ReplicaSet’s configuration. Labels play a crucial role in pod selection and management by ReplicaSet and other Kubernetes controllers.
For example, you might specify a label selector like app: frontend to select pods with the label app=frontend.
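A ReplicaSet using such a label selector can be sketched as follows (the name frontend-rs and the image are illustrative):

```yaml
# Hypothetical ReplicaSet keeping three app=frontend pods running.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend          # equality-based requirement
  template:
    metadata:
      labels:
        app: frontend        # must match the selector above
    spec:
      containers:
        - name: web
          image: nginx:1.25
```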
Q.29 Which selector does replication controller use?
Ans: ReplicationController uses a label selector to identify and manage pods, ensuring that the correct number of replica pods are running to meet the desired state defined by the ReplicationController’s configuration. Labels play a crucial role in pod selection and management by ReplicationController and other Kubernetes controllers.
Q.30 What do equality-based selectors do?
Ans: Equality-based selectors are used to select resources based on specific label key-value pairs.
In equality-based selectors, you specify label key-value pairs using equality operators. The supported operators are =, ==, and != (= and == are synonyms for exact match, and != means not equal). The most commonly used operator is = for an exact match.
Equality-based selectors match resources based on exact key-value pairs. For example, if a label selector specifies app=frontend, it will match resources with the label app=frontend and ignore resources with labels such as app=backend or environment=production.
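A sketch of an equality-based selector in a Service spec; the service name and labels are illustrative, and note that multiple pairs must all match:

```yaml
# Hypothetical Service whose selector requires app=frontend AND tier=web.
apiVersion: v1
kind: Service
metadata:
  name: frontend-svc
spec:
  selector:
    app: frontend      # equivalent to app=frontend
    tier: web          # equivalent to tier=web
  ports:
    - port: 80
```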
Q.31 What do set-based selectors do?
Ans: set-based selectors are used to select resources based on a set of label selectors and logical operators. Set-based selectors provide a more flexible and expressive way to define label selectors compared to equality-based selectors.
Here’s how set-based selectors work:
Match Expressions: Set-based selectors use match expressions to specify label selection criteria. Each match expression consists of a key, an operator, and a set of values. The supported operators include In, NotIn, Exists, and DoesNotExist.
matchLabels: You can also use the matchLabels field to specify label selectors as a map of key-value pairs. All entries in matchLabels (and any matchExpressions alongside them) are ANDed together, meaning a resource must satisfy every specified requirement.
Matching Criteria: Set-based selectors provide more flexibility in defining label selection criteria compared to equality-based selectors. You can specify complex criteria such as selecting resources with labels matching multiple key-value pairs, excluding resources with specific labels, or selecting resources based on the presence or absence of certain labels.
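A selector fragment combining matchLabels and matchExpressions might look like this (a spec fragment from a hypothetical Deployment or ReplicaSet, with illustrative keys and values):

```yaml
# Hypothetical selector fragment: all requirements below are ANDed.
selector:
  matchLabels:
    app: frontend
  matchExpressions:
    - { key: environment, operator: In, values: [dev, staging] }
    - { key: legacy, operator: DoesNotExist }
```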
Q.32 List some security measures in Kubernetes?
Ans:
- Training and education
- Limit external access
- secure network communication
- implement pod security context
- secure etcd
- secure API server
- Role-Based Access Control (RBAC)
- Network Policies
- Pod Security Policies (PSP)
- Container Runtime Security
- Secrets Management
- Image Security
- Audit Logging
- Runtime Monitoring and Alerting
- Regular Updates and Patching
- Security Audits and Assessments
Training and Education: Educate your team about Kubernetes security best practices and conduct security training regularly.
Limit External Access: Minimize external access to the Kubernetes API server and use a VPN or private network for secure access.
Secure Network Communication: Use TLS for secure communication between components in the cluster. Enable mutual TLS authentication for enhanced security.
Implement Pod Security Context: Set appropriate security context for Pods to control their privileges and capabilities. Avoid running containers with excessive permissions.
Secure etcd: Ensure that the etcd data store used by the Kubernetes control plane is secure. Configure TLS encryption for etcd communication and consider enabling role-based access control for etcd.
Secure API Server: Ensure that the Kubernetes API server is properly secured. Use TLS certificates for communication, disable insecure ports, and enable audit logging to monitor API server activity.
Role-Based Access Control (RBAC): RBAC allows you to define granular access policies and permissions for users, groups, and service accounts within the Kubernetes cluster. By implementing RBAC, you can enforce the principle of least privilege and restrict access to sensitive resources and operations.
Network Policies: Network policies allow you to define rules that control the traffic flow between pods and external endpoints within the Kubernetes cluster. By enforcing network policies, you can segment and isolate workloads, control ingress and egress traffic, and protect against unauthorized access and network-based attacks.
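A network-policy sketch; the policy name and the app=frontend / app=db labels and port are illustrative assumptions:

```yaml
# Hypothetical NetworkPolicy: only app=frontend pods may reach app=db pods on 5432.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-db
spec:
  podSelector:
    matchLabels:
      app: db              # policy applies to these pods
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 5432
```

Note that a CNI plugin with NetworkPolicy support (e.g., Calico or Cilium) is required for such policies to be enforced.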
Secrets Management: Kubernetes provides a built-in mechanism for managing sensitive information such as passwords, API keys, and TLS certificates called Secrets. It’s essential to use Secrets to store and access sensitive data securely, encrypt Secrets at rest, and restrict access to Secrets based on RBAC policies.
Image Security: Kubernetes relies on container images to deploy applications. It’s crucial to use trusted and verified container images from reputable sources, regularly scan container images for vulnerabilities and malware, and enforce image security policies using tools such as image signing, admission controllers, and vulnerability scanners.
Audit Logging: Kubernetes supports audit logging, which allows you to track and monitor API server requests, resource changes, and administrative actions within the cluster. Enabling audit logging helps in detecting and investigating security incidents, complying with regulatory requirements, and maintaining accountability.
Runtime Monitoring and Alerting: Implementing runtime monitoring and alerting solutions such as Prometheus, Grafana, and ELK stack helps in detecting and responding to security threats in real-time. Monitor cluster health, resource utilization, and security-related metrics, and configure alerts for suspicious activities and anomalies.
Regular Updates and Patching: Stay up-to-date with Kubernetes releases and security advisories, and apply patches and updates promptly to address known security vulnerabilities and issues. Regularly review and update configurations, dependencies, and third-party components to mitigate security risks.
Security Audits and Assessments: Conduct regular security audits, assessments, and penetration testing of the Kubernetes cluster and containerized applications to identify and remediate security vulnerabilities, misconfigurations, and compliance gaps.
Q.33 How do containers in a pod communicate with each other?
Ans: Containers within a pod in Kubernetes can communicate with each other using localhost. When multiple containers are co-located within the same pod, they share the same network namespace, allowing them to communicate with each other over the loopback interface (127.0.0.1).
Q.34 How does pod-to-pod communication work on the same node?
Ans: pod-to-pod communication within the same node typically works through the container network interface (CNI) plugin, which sets up networking rules and routes to enable communication between pods.
pod-to-pod communication within the same node in a Kubernetes cluster is facilitated by the container network interface, which sets up networking rules and routes to enable direct communication between pods over the local network interface. This allows pods to communicate seamlessly with each other, enabling the deployment of distributed applications and microservices within the cluster.
Q.35 How does pod-to-pod communication work between pods on different worker nodes?
Ans: pod-to-pod communication between pods on different worker nodes in a Kubernetes cluster involves routing traffic through the cluster’s network infrastructure using overlay networking, routing protocols, and service discovery mechanisms. This allows pods to communicate seamlessly with each other, enabling the deployment of distributed applications and microservices across the cluster.
Q.36 What is the difference between ConfigMaps and Secrets in Kubernetes?
Ans:
Sr No | Parameter | ConfigMaps | Secrets |
---|---|---|---|
1 | Purpose | Used to store non-sensitive configuration data such as environment variables, command-line arguments, configuration files, or any other configuration-related data that an application needs. | Used to store sensitive information such as passwords, API keys, tokens, certificates, or any other confidential data that should not be exposed in plain text. |
2 | Data Format | ConfigMaps store data as key-value pairs. They can contain text-based data, such as strings, numbers, or even entire configuration files. | Secrets also store data as key-value pairs, but the values are Base64-encoded, and encryption at rest can be enabled to protect sensitive information from unauthorized access. |
3 | Access control | ConfigMaps are typically accessible by all users and service accounts within the same namespace. They do not provide encryption or access control mechanisms by default. | Secrets provide additional security features such as optional encryption at rest and RBAC (Role-Based Access Control) to restrict access to sensitive data. You can define fine-grained access controls to limit who can access and modify secret data. |
4 | Base64 Encoding | ConfigMap data is stored in plain text format without any encoding or encryption. | By default, secret data is stored in Base64-encoded format to prevent accidental exposure of sensitive information. |
5 | Use cases | 1. Commonly used to store configuration data that needs to be shared among multiple pods or containers within the same namespace. 2. Suitable for storing environment variables, configuration files, or application settings. | 1. Used to store sensitive information that should be kept confidential, such as database passwords, API keys, or TLS certificates. 2. Ideal for storing credentials or other sensitive data required by applications. |
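The contrast can be sketched with a minimal pair of manifests (names, keys, and values are illustrative; the Secret value is the Base64 encoding of "password"):

```yaml
# Hypothetical ConfigMap: plain-text, non-sensitive settings.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: info
---
# Hypothetical Secret: values must be Base64-encoded.
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
data:
  DB_PASSWORD: cGFzc3dvcmQ=   # base64 of "password"
```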
Q.37 List some monitoring tools used in a Kubernetes cluster?
Ans:
- Grafana
- Kibana
- cAdvisor
- Prometheus
- SolarWinds
- Elasticsearch
- Sysdig
Q.38 Write some security measures that we can use in a k8s cluster?
Ans: Below are some security measures that we can use in the k8s cluster:
- Limit access to etcd
- Implement network segmentation
- Define resource quotas
- Provide limited access to Kubernetes nodes
- Use RBAC (Role-Based Access Control)
- Define network policies
- Enable Pod Security Admission (the successor to the deprecated Pod Security Policy, PSP)
- Use a secrets manager to store sensitive information
- Scan container images for vulnerabilities and malware
- Use container runtime security
- Use audit logging to track and monitor API server requests, resource changes, and administrative actions within the cluster
- Use Kubernetes namespaces to create logical partitions for different environments such as dev, stage, and prod
- Apply regular updates and patching
- Conduct regular security audits, assessments, and penetration testing of the Kubernetes cluster to identify and remediate security vulnerabilities, misconfigurations, and compliance gaps
- Implement backup and disaster recovery strategies to protect critical data and applications in case of security incidents, data loss, or service outages.
- Regularly back up cluster configurations, application data, and persistent volumes.
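To make the "define network policies" item above concrete, here is a common default-deny sketch (the policy name is illustrative): it blocks all ingress traffic to every pod in its namespace until more specific policies allow it. Note that enforcement requires a CNI plugin that supports NetworkPolicy (e.g., Calico or Cilium).

```yaml
# Illustrative default-deny ingress policy for one namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}        # empty selector: applies to every pod in the namespace
  policyTypes:
  - Ingress              # no ingress rules are listed, so all ingress is denied
```

Additional, narrower policies can then be layered on top to allow only the traffic each application actually needs.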
Q39. What is Persistent Volume Claim?
Ans: PVC stands for Persistent Volume Claim. It is a resource used by applications to request storage resources from the cluster.
PVCs provide a way for pods to consume durable storage volumes independent of the underlying storage infrastructure.
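A minimal PVC sketch (the name and size are illustrative): the claim requests 5Gi of storage that a single node can mount read-write, and the cluster either binds it to a matching PersistentVolume or provisions one dynamically through a StorageClass.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data-claim      # hypothetical name
spec:
  accessModes:
  - ReadWriteOnce           # mountable read-write by a single node at a time
  resources:
    requests:
      storage: 5Gi
```

A pod then consumes the claim by referencing it under spec.volumes with persistentVolumeClaim.claimName: app-data-claim.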
Q40. What happens when the master node or a worker node fails?
Ans: If the master (control plane) fails, the cluster keeps running in the sense that existing pods on worker nodes continue to serve traffic, but no new pods can be scheduled and no API-driven changes (scaling, updates, service membership changes) are possible until the control plane is restored. If a worker node fails, the control plane marks it NotReady and, after the eviction timeout, reschedules its pods onto healthy nodes (this applies to pods managed by controllers such as Deployments; standalone pods are lost).
Q.41 What will you do to upgrade kubernetes cluster?
Ans: To upgrade the cluster we will check the following points:
- Backup data
- Review release notes
- Upgrade control plane components
- Upgrade etcd (if managed separately)
- Upgrade worker nodes
- Verify cluster health
- Monitor for issues
- Update add-ons and plugins
- Prepare a rollback plan
Backup Data: Before starting the upgrade process, it’s essential to back up any critical data, including etcd data, cluster configurations, and application data. This ensures that you can restore the cluster to a previous state in case of any issues during the upgrade process.
Review Release Notes: Review the release notes for the target Kubernetes version to understand the changes, new features, and potential compatibility issues. Pay attention to any deprecated APIs or breaking changes that may affect your cluster configuration or applications.
Upgrade Control Plane Components:
a. Upgrade the control plane components (API server, scheduler, controller manager) one by one. This can usually be done by updating the package or container images associated with each component.
b. Follow the upgrade instructions provided by the Kubernetes distribution or cloud provider. This may involve running specific commands or scripts to upgrade the control plane components.
Upgrade etcd (if necessary): If the etcd data store is managed separately from the Kubernetes control plane, ensure that it is compatible with the new Kubernetes version and upgrade it as needed.
Upgrade Worker Nodes:
a. Drain each worker node to evict pods and ensure that no workloads are running on the node.
b. Upgrade the kubelet and kube-proxy components on each worker node to match the version of the control plane components.
c. Reboot the worker node if necessary to apply kernel updates or other system changes.
Verify Cluster Health:
a. After upgrading control plane components and worker nodes, verify the health and functionality of the cluster.
b. Test basic cluster operations, such as creating and deleting pods, deploying applications, and accessing services, to ensure that everything is working as expected.
Monitor for Issues:
a. Monitor cluster logs, metrics, and events for any issues or errors that may arise during or after the upgrade process.
b. Address any issues promptly to minimize downtime and ensure the stability of the cluster.
Update Add-ons and Plugins: If you’re using any third-party add-ons or plugins (e.g., monitoring, logging, networking), ensure that they are compatible with the new Kubernetes version and update them as needed.
Rollback Plan: Have a rollback plan in place in case the upgrade process encounters critical issues or unexpected failures. This may involve restoring from backups or reverting to the previous Kubernetes version.
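Assuming a kubeadm-managed cluster on a Debian/Ubuntu host (managed offerings such as EKS, GKE, or AKS have their own upgrade mechanisms), the steps above look roughly like the sketch below; <target-version> and <node-name> are placeholders:

```shell
# On the first control-plane node of a kubeadm-managed cluster
sudo apt-get update
sudo apt-get install -y kubeadm=<target-version>   # pin the target kubeadm version first
sudo kubeadm upgrade plan                          # pre-flight checks; lists upgradeable versions
sudo kubeadm upgrade apply v<target-version>       # upgrades API server, scheduler, controller manager

# Then, for each node: drain, upgrade kubelet/kubectl, restart kubelet, uncordon
kubectl drain <node-name> --ignore-daemonsets
sudo apt-get install -y kubelet=<target-version> kubectl=<target-version>
sudo systemctl daemon-reload && sudo systemctl restart kubelet
kubectl uncordon <node-name>
```

Only upgrade one minor version at a time, and upgrade the control plane before the worker nodes so kubelets are never newer than the API server.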
Q.42 How can you provide API security in Kubernetes?
Ans: Below are some techniques that we can use to secure the Kubernetes API:
1. Enable strong authentication mechanisms, such as client certificates, bearer tokens, or OIDC authentication, to verify the identity of users and service accounts accessing the API server.
2. Configure Role-Based Access Control (RBAC) to define fine-grained access policies and permissions for users, groups, and service accounts.
3. Enable TLS encryption for communication between clients and the API server to prevent eavesdropping, tampering, and man-in-the-middle attacks.
4. Regularly review and update API server configurations to address security vulnerabilities and mitigate risks.
5. Define network policies to control traffic flow to and from the API server and other Kubernetes components, restricting access based on IP addresses, ports, and protocols.
6. Enable audit logging to record API server requests, responses, and actions performed by users and service accounts.
7. Define Pod Security Policies to enforce security controls and restrictions on pods’ behavior, including security contexts, volume permissions, and privileged access.
8. Regularly apply security patches and updates to the Kubernetes components.
Q.43 How do you debug the pod that is not being scheduled?
Ans:
1. Check Pod Status: Look for any error messages or events associated with the pod that indicate why it is not being scheduled.
kubectl get pods
2. Check Node resources: Verify that there are sufficient resources (CPU, memory, etc.) available on the nodes in the cluster to accommodate the pod’s resource requests.
kubectl describe node <node-name>
3. Check Pod Resource requests: Ensure that the pod’s resource requests are within the limits of the available resources on the nodes.
kubectl describe pod <pod-name>
4. Check Node Selector and Affinity: If the pod specifies node selectors or affinity rules, ensure that they match the labels or conditions of the available nodes in the cluster.
kubectl describe node <node-name>
5. Check Taints and Tolerations: Verify that the node where the pod is supposed to be scheduled does not have any taints that the pod cannot tolerate. Check the pod’s tolerations and node taints:
kubectl describe node <node-name>
or
kubectl describe pod <pod-name>
6. Check Pod Security Policies (PSP): If Pod Security Policies (PSPs) are enforced in the cluster, ensure that the pod’s security settings comply with the policies.
kubectl describe pod <pod-name> and kubectl describe psp <psp-name>
7. Check Node Conditions: Verify that the nodes in the cluster are in a healthy state and ready to accept new pods
kubectl describe node <node-name>
Q44. What are the limitations of using default namespace?
Ans: Using only the default namespace makes it difficult to keep track of all the applications you manage in your cluster over time.
Custom namespaces help group applications logically, for example one namespace for monitoring tools and another for security applications.
Q.45 How can you safely drain the k8s node?
Ans:
Step 01: Identify the node that needs to be drained
kubectl get nodes
Step 02: Cordon the node
Before draining the node, mark it as unschedulable to prevent new pods from being scheduled onto the node. This ensures that no new workloads are added to the node during the draining process.
kubectl cordon <node-name>
Step 03: Drain the node
Execute the command below to initiate the draining process. Kubernetes will evict all pods running on the node and reschedule them onto other nodes in the cluster. Use the --ignore-daemonsets
flag to skip evicting pods managed by DaemonSets; DaemonSet pods are typically critical system components that should remain running on every node in the cluster.
kubectl drain <node-name> --ignore-daemonsets
Step 04: Monitor the draining process
Monitor the draining process to ensure that all pods are successfully evicted from the node and rescheduled onto other nodes in the cluster.
kubectl get pods --all-namespaces -o wide
Step 05: Verify Draining
Once the draining process is complete, verify that the node is empty of pods. The output should be empty, indicating that no pods are running on the drained node.
kubectl get pods --field-selector spec.nodeName=<node-name>
Step 06: Perform Maintenance
With the node drained and empty, you can safely perform maintenance tasks such as applying updates, rebooting the node, or making configuration changes.
Step 07: Uncordon the Node
After completing maintenance, mark the node as schedulable again. This allows new pods to be scheduled onto the node once it’s back online.
kubectl uncordon <node-name>
Step 08: Monitor Rescheduling
Monitor the cluster to ensure that pods evicted from the drained node are successfully rescheduled onto other nodes and that the cluster returns to its desired state.
Q46. How we can control the resource usage of POD?
Ans: We can set resource requests and limits in the pod template to control resource usage.
Request: The amount of a resource guaranteed to a container; the scheduler uses requests to decide which node can host the pod. A container may temporarily use more than its request if the node has spare capacity.
Limit: An upper cap on the resources a single container can use. A container that exceeds its CPU limit is throttled, and one that exceeds its memory limit is terminated (OOM-killed). If you are sensitive to pod restarts, it makes sense to keep the sum of all container resource limits equal to or less than the total resource capacity of your cluster.
apiVersion: v1
kind: Pod
metadata:
  name: node-app
spec:
  containers:
  - name: example1
    image: example/example1
    resources:
      requests:
        memory: "500Mi"
        cpu: "1m"
      limits:
        memory: "1000Mi"
        cpu: "5m"
Q.47 What is PDB (Pod Disruption Budget) ?
Ans: A Kubernetes administrator can create a PodDisruptionBudget (kind: PodDisruptionBudget) for high availability of an application: it makes sure that the minimum number of running pods specified by the minAvailable attribute in the spec file is respected. This is useful while performing a drain, where the drain will halt until the PDB can be respected, ensuring the high availability (HA) of the application. The following spec file shows minAvailable: 2, which means at least two pods must remain available (even during the disruption).
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: node-app-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: node-app
Q.48 How to run the pod on a particular node?
Ans: To run a particular pod on a particular node, we can use node selectors or node affinity to specify where the pod should be scheduled (and tolerations if the node is tainted).
1. Node Selector: We can use a node selector to specify that a pod should be scheduled on nodes with specific labels.
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: nginx
  nodeSelector:
    <label-key>: <label-value>
Replace <label-key> and <label-value> with the label key and value that match the desired node’s labels. For example:
nodeSelector:
  kubernetes.io/hostname: my-node
2. Node Affinity: Node affinity allows you to specify more complex rules for pod scheduling based on node labels. You can use node affinity to prefer or require certain nodes for pod placement.
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: <label-key>
            operator: In
            values:
            - <label-value>
Replace <label-key> and <label-value> with the label key and value that match the desired node’s labels. For example:
nodeAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
    nodeSelectorTerms:
    - matchExpressions:
      - key: kubernetes.io/hostname
        operator: In
        values:
        - my-node
3. Taints and Tolerations: If the node has taints applied, you may also need to add tolerations to the pod definition to allow the pod to be scheduled on the tainted node.
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: nginx
  tolerations:
  - key: <taint-key>
    operator: Equal
    value: <taint-value>
    effect: NoSchedule
Replace <taint-key> and <taint-value> with the key and value of the taint applied to the node.
Q.49 How to do maintenance activity on a k8s node?
Ans: Performing maintenance activities on a Kubernetes (K8s) node requires careful planning and execution to minimize disruption to running applications and ensure the overall health and stability of the cluster. Here are the general steps to perform maintenance on a Kubernetes node:
- Drain the Node
- Mark the Node as Unschedulable
- Perform Maintenance Tasks
- Verify Node Status
- Uncordon the Node
- Validate Pod Status
- Rollout Updates (if applicable)
- Monitor Cluster Health
1. Drain the Node: Before performing maintenance, you should drain the node to gracefully evict all the running pods from the node. The Kubernetes control plane will schedule the evicted pods to other healthy nodes in the cluster. Use the following command to drain the node:
kubectl drain <node_name> --ignore-daemonsets
2. Mark the Node as Unschedulable: Prevent new pods from being scheduled on the node during maintenance (note that kubectl drain already cordons the node, so this step is mainly a safeguard):
kubectl cordon <node_name>
3. Perform Maintenance Tasks: Perform any required maintenance tasks on the node, such as OS upgrades, kernel updates, hardware replacements, etc.
4. Verify Node Status: After the maintenance is completed, verify that the node is back online and functioning correctly.
5. Uncordon the Node: Allow the node to accept new pods again:
kubectl uncordon <node_name>
6. Validate Pod Status: Check the status of the pods that were running on the node before draining to ensure they have been successfully rescheduled to other nodes.
7. Rollout Updates (if applicable): If you have made any changes that require pod updates (e.g., container image updates), trigger a controlled rollout of the affected pods to the updated version.
8. Monitor Cluster Health: Keep an eye on the overall health of the cluster after maintenance. Monitor the logs and metrics to ensure that all components and nodes are functioning as expected.
50. What are the various K8s services running on nodes and describe the role of each service?
Ans: In a Kubernetes (K8s) cluster, several essential services run on nodes to ensure proper cluster management, networking, and communication between components. Here are some of the key services and their roles:
- kubelet
- kube-proxy
- container runtime
- kube-dns / CoreDNS
- kubelet-certificate-controller
- kubelet-eviction-manager
- kube-proxy (IPVS mode)
- metrics-server
- node-problem-detector
- kube-reserved and kube-system-reserved cgroups
1. kubelet: The kubelet is an agent that runs on each node and is responsible for managing the containers running on that node. It communicates with the Kubernetes control plane and ensures that the containers specified in Pod manifests are running and healthy.
2. kube-proxy: The kube-proxy is responsible for network proxying and load balancing for services running in the cluster. It enables communication between Pods and services and maintains network rules to forward traffic to the appropriate destinations.
3. container runtime: The container runtime is the software responsible for pulling container images and running containers on the node. Kubernetes supports various container runtimes, such as Docker, containerd, and others.
4. kube-dns/coredns: The kube-dns or CoreDNS service provides DNS resolution within the cluster. It allows Pods to discover and communicate with each other using DNS names instead of direct IP addresses.
5. kubelet-certificate-controller: This service ensures that each node has the necessary TLS certificates required for secure communication with the control plane.
6. kubelet-eviction-manager: The kubelet-eviction-manager monitors the resource usage of the node and triggers Pod eviction when there is a lack of resources, helping to maintain node stability and prevent node resource exhaustion.
7. kube-proxy (IPVS mode): In clusters running with IPVS (IP Virtual Server) mode, kube-proxy uses IPVS to handle the load balancing of services more efficiently.
8. metrics-server: The metrics-server collects resource usage metrics (CPU, memory, etc.) from nodes and Pods and provides them to Kubernetes Horizontal Pod Autoscaler (HPA) and other components for scaling decisions.
9. node-problem-detector: The node-problem-detector detects and reports node-level issues, such as kernel panics or unresponsive nodes, to the Kubernetes control plane for further actions.
10. kube-reserved and kube-system-reserved cgroups: These are control groups that reserve CPU and memory resources for the kubelet and critical system components to ensure their stability and proper functioning.
These services, running on every node, play a crucial role in maintaining the health, networking, and performance of the Kubernetes cluster. They ensure seamless communication, resource management, and container orchestration, providing the foundation for deploying and managing containerized applications effectively in the Kubernetes environment.
51. What is the role of Load Balance in Kubernetes?
Ans:
The role of Load Balancing in Kubernetes is to distribute incoming network traffic across multiple instances of a service or a set of Pods that are part of a Kubernetes Deployment or ReplicaSet. Load balancing ensures that each instance or Pod receives a fair share of requests, optimizing resource utilization and providing high availability for applications.
Here’s how load balancing works in Kubernetes:
1. Service: In Kubernetes, a Service is an abstraction that defines a logical set of Pods and a policy for accessing them. A Service acts as a stable endpoint for other applications to access the Pods running your application.
2. Load Balancer: When a Service is created, Kubernetes can automatically provision a load balancer (external or internal, depending on the cloud provider and configuration) to distribute incoming traffic across the Pods associated with the Service.
3. Traffic Distribution: The load balancer continuously monitors the health and availability of the Pods associated with the Service. It uses different algorithms, such as round-robin, least connections, or IP hash, to evenly distribute incoming requests to the available Pods. This ensures that each Pod gets its fair share of traffic, preventing any single Pod from being overwhelmed.
4. High Availability: Load balancing also provides high availability. If a Pod becomes unhealthy or unresponsive, the load balancer automatically routes traffic to the remaining healthy Pods, ensuring that the application remains accessible even if individual Pods fail.
5. Scaling and Rolling Updates: Load balancing plays a critical role in scaling and rolling updates. When new Pods are added due to scaling or updates, the load balancer automatically starts routing traffic to these new Pods, gradually replacing the older ones. This allows for seamless scaling and updates with minimal or no disruption to the application.
6. Service Discovery: Load balancing facilitates service discovery within the cluster. Clients do not need to know the exact locations or IP addresses of individual Pods; they can simply access the Service, and the load balancer routes their requests to the appropriate Pod.
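A minimal Service of type LoadBalancer illustrating the above (the name, labels, and ports are made up for the example); on a supported cloud provider this provisions an external load balancer that spreads traffic across all pods matching the selector:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-svc            # hypothetical name
spec:
  type: LoadBalancer       # the cloud provider provisions the external LB
  selector:
    app: web               # traffic is balanced across pods with this label
  ports:
  - port: 80               # port exposed by the load balancer
    targetPort: 8080       # port the container listens on
```

Clients only ever use the Service's stable address; as pods come and go during scaling or rolling updates, the endpoints behind it are updated automatically.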
52. How to monitor the Kubernetes cluster?
Ans:
Monitoring a Kubernetes cluster involves setting up various tools and practices to collect and analyze data on the cluster’s health, performance, and resource usage. Here’s a step-by-step guide to monitoring a Kubernetes cluster effectively:
- Choose a Monitoring Solution: Select a monitoring solution suitable for your needs. Popular choices include Prometheus, Grafana, Datadog, New Relic, and others. Prometheus and Grafana are widely used in Kubernetes environments due to their flexibility and strong community support.
- Deploy Monitoring Components: Set up the monitoring components within the Kubernetes cluster. For Prometheus and Grafana, you can use Helm charts or manifests to deploy them. Prometheus scrapes metrics from Kubernetes components and applications, while Grafana provides visualization and dashboard capabilities.
- Node-Level Metrics: Collect and monitor node-level metrics (CPU, memory, disk, network) using tools like Node Exporter or cAdvisor. These tools export metrics to Prometheus, which stores and manages the data.
- Application Metrics: Instrument your applications with client libraries like Prometheus client libraries or OpenTelemetry to expose custom metrics. These metrics can be scraped by Prometheus and visualized in Grafana.
- Kubernetes Metrics: Use kube-state-metrics to expose Kubernetes-specific metrics like the status of deployments, replicasets, pods, and services. These metrics provide insights into the state of Kubernetes resources.
- Monitor Cluster Components: Keep an eye on the health of Kubernetes components like API server, controller manager, etcd, and scheduler. Prometheus can scrape metrics from these components, and alerting rules can be configured to notify of any issues.
- Alerting: Configure alerting rules in Prometheus or through your monitoring solution to get notified of critical issues or abnormal behavior. Use Alertmanager to manage and route alerts to various channels like email, Slack, or other messaging platforms.
- Visualize Data: Create custom dashboards in Grafana to visualize the collected metrics. Display critical cluster metrics, application-specific metrics, and any other relevant data for easy monitoring.
- Long-Term Storage: Consider setting up long-term storage for historical metrics data. Tools like Thanos or VictoriaMetrics can help store and query historical data from Prometheus.
- Log Aggregation: Use a centralized logging solution (e.g., ELK Stack, Fluentd, Loki) to collect and analyze container logs for debugging and troubleshooting purposes.
- Security Monitoring: Implement security monitoring to detect potential security threats and unauthorized access attempts in your Kubernetes cluster.
- Regular Review and Maintenance: Regularly review the monitoring data, analyze trends, and fine-tune alerting thresholds. Keep monitoring components updated and ensure that they are functioning correctly.
53. How to get the central logs from POD?
Ans:
To collect central logs from Pods running in a Kubernetes cluster, you can use a centralized logging solution. One popular approach is to use the ELK Stack, which consists of three main components: Elasticsearch, Logstash (or Fluentd), and Kibana. Here’s how you can set up central logging using the ELK Stack:
- Install Elasticsearch: Deploy Elasticsearch as a central log storage and indexing solution. Elasticsearch will store and index the logs collected from various Pods.
- Install Logstash or Fluentd: Choose either Logstash or Fluentd as the log collector and forwarder. Both tools can collect logs from different sources, including application logs from Pods, and send them to Elasticsearch.
- If using Logstash: Install and configure Logstash on a separate node or container. Create Logstash pipelines to process and forward logs to Elasticsearch.
- If using Fluentd: Deploy Fluentd as a DaemonSet on each node in the Kubernetes cluster. Fluentd will collect logs from containers running on each node and send them to Elasticsearch.
- Configure Application Logs: Inside your Kubernetes Pods, ensure that your applications are configured to log to the standard output and standard error streams. Kubernetes will collect these logs by default.
- Install Kibana: Set up Kibana as a web-based user interface to visualize and query the logs stored in Elasticsearch. Kibana allows you to create custom dashboards and perform complex searches on your log data.
- Configure Log Forwarding: Configure Logstash or Fluentd to forward logs from the Kubernetes Pods to Elasticsearch. This may involve defining log collection rules, filters, and log parsing configurations.
- View Logs in Kibana: Access Kibana using its web interface and connect it to the Elasticsearch backend. Once connected, you can create visualizations, search logs, and analyze log data from your Kubernetes Pods.
Additionally, you can consider using other centralized logging solutions like Loki or Splunk for log aggregation and analysis. The process may vary slightly depending on the logging tool you choose, but the core concept remains the same: collect logs centrally from Kubernetes Pods and make them available for analysis and visualization in a user-friendly interface.
Keep in mind that setting up and maintaining a centralized logging solution requires careful planning and consideration of resource usage, especially if you have a large number of Pods generating a significant volume of logs.
54. What is the difference between a replica set and a replication controller?
Ans: Replica Set and Replication Controller do almost the same thing. Both ensure that a specified number of pod replicas are running at any given time. The difference comes with the usage of selectors to replicate pods. Replica Set uses Set-Based selectors while replication controllers use Equity-Based selectors.
- Equality-Based Selectors: This type of selector allows filtering by label key and value. In layman’s terms, an equality-based selector will only match pods whose label is exactly the given key-value pair.
Example: Suppose your label says app=nginx; with this selector, you can only look for pods whose app label equals nginx.
- Set-Based Selectors: This type of selector allows filtering keys according to a set of values. In other words, a set-based selector will match pods whose label value is mentioned in the set.
Example: Say your label says app in (nginx, NPS, Apache). With this selector, if app equals any of nginx, NPS, or Apache, the selector treats it as a match.
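The two selector styles can be sketched side by side (the labels are illustrative); a ReplicationController only accepts the equality form, while a ReplicaSet's matchExpressions supports set-based operators such as In:

```yaml
# Equality-based (ReplicationController style)
selector:
  app: nginx               # matches pods whose label is exactly app=nginx
---
# Set-based (ReplicaSet style)
selector:
  matchExpressions:
  - key: app
    operator: In           # other operators: NotIn, Exists, DoesNotExist
    values: [nginx, nps, apache]
```

On the command line the same two styles appear as kubectl get pods -l app=nginx versus kubectl get pods -l 'app in (nginx,nps,apache)'.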
55. List some container resource monitoring tools?
Ans: Here are some of the resource monitoring tools:
- Grafana
- Kibana
- cAdvisor
- Prometheus
- SolarWinds
- Elasticsearch
- Sysdig
56. Which selectors does the replica set use?
Ans: A replica set in Kubernetes uses label selectors to identify which pods it should manage. The selectors specify a set of key-value pairs that the replica set uses to match against the labels applied to the pods. Set-based selectors allow filtering keys according to a set of values. There are three kinds of operators: in, not in, and exists. The replica set will look for pods whose labels match the selectors.
57. Which selectors do replication controllers use?
Ans: Replication controllers use label selectors to identify the set of pods that they manage. Specifically, they use equality-based selectors, which allow filtering by label key and values. These selectors look for pods with labels that match a specific key-value pair. To use an equality-based selector, you can use the “-l” or “–selector” option.
58. What do equality-based selectors do?
Ans: They allow filtering by label keys and values. Thus they will only look for pods with the exact same phrase as the label. When a pod or other resource is created, it can be labeled with key-value pairs. Equality-based selectors allow you to select resources based on an exact match of those key-value pairs.
59. Define StatefulSets
Ans: StatefulSets are a type of workload API that manage stateful applications. They can also be used to manage the scaling and deployment of pod sets. StatefulSets are often used to manage the deployment and scaling of pods that require stable network identities and persistent storage, making them well-suited for stateful workloads.
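A minimal StatefulSet sketch (the names are illustrative): serviceName points at a headless Service that gives each replica a stable DNS identity (web-0, web-1, …), and volumeClaimTemplates gives each replica its own persistent volume:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web-headless   # hypothetical headless Service providing stable DNS names
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx
  volumeClaimTemplates:       # one PVC per replica, retained across rescheduling
  - metadata:
      name: data
    spec:
      accessModes: [ReadWriteOnce]
      resources:
        requests:
          storage: 1Gi
```

Unlike a Deployment, replicas are created and terminated in order, and each one keeps its name and storage even if it is rescheduled to another node.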
60. Explain the two types of Kubernetes pods.
Ans:
The two Kubernetes pods are single-container pods and multi-container pods. Here’s a brief explanation of each.
Single-container pods: These pods contain only one container and are the most common type of pod used in Kubernetes. They can be created using commands such as kubectl run or kubectl create.
Multi-container pods: These pods contain multiple containers that are tightly coupled and need to run together on the same host. Multi-container pods are created using the kubectl create command with a YAML file that defines the pod’s configuration.
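A sketch of a multi-container pod using the common sidecar pattern (the names are made up): both containers share the pod's network namespace and can share volumes, here an emptyDir holding log files:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  volumes:
  - name: shared-logs
    emptyDir: {}                    # shared scratch space for both containers
  containers:
  - name: app                       # main application container
    image: nginx
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/nginx
  - name: log-shipper               # sidecar periodically reads logs written by the app
    image: busybox
    command: ["sh", "-c", "while true; do cat /logs/access.log 2>/dev/null; sleep 30; done"]
    volumeMounts:
    - name: shared-logs
      mountPath: /logs
```

Both containers are scheduled together, start and stop together, and can reach each other over localhost.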
61. List some of the types of Kubernetes volumes.
Ans:
The different types of Kubernetes volumes are as follows:
EmptyDir: This volume is first created when a pod is assigned to a node. Initially, it is empty. A volume of type emptyDir is available for the lifetime of the pod.
Flocker: It is an open-source and clustered container data volume manager.
HostPath: This volume mounts a file or directory from the host node’s filesystem into the pod. It can provide access to host files or share files between containers on the same host.
NFS: Network File System (NFS) allows computers to either access or share files over the network. It is a dedicated file storage when multiple users must retrieve data for centralized disk capacity.
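Two of the volume types above can be sketched in a single pod manifest (the names are illustrative); the emptyDir lives exactly as long as the pod, while the hostPath exposes a directory from the node's own filesystem:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: volume-demo
spec:
  volumes:
  - name: scratch
    emptyDir: {}                  # empty at pod start, deleted with the pod
  - name: host-logs
    hostPath:
      path: /var/log              # directory taken from the node's filesystem
  containers:
  - name: worker
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: scratch
      mountPath: /scratch
    - name: host-logs
      mountPath: /host-logs
      readOnly: true              # read-only mount limits what the pod can change on the host
```

Note that hostPath ties the pod's data to one specific node and is a security risk if writable, so it is best reserved for node-level agents.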
63. How can you perform maintenance in a single pod?
Ans: Here are the steps for performing maintenance in a single pod:
- Get the name of the pod on which you want to perform maintenance using the kubectl get pods command
- Put the pod in maintenance mode by adding a label to it. You can use any label name, but here we’ll use “maintenance-mode.”
- Verify that the label has been applied to the pod
- Perform maintenance on the pod as needed
- Remove the maintenance label from the pod when you’re done
- Verify that the label has been removed
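The steps above can be sketched with kubectl commands (the pod name my-pod and the label value true are illustrative; the maintenance-mode label by itself has no effect unless something, such as a Service selector, reacts to it):

```shell
kubectl get pods                                  # step 1: find the pod name
kubectl label pod my-pod maintenance-mode=true    # step 2: add the label
kubectl get pod my-pod --show-labels              # step 3: verify the label is applied
# ... perform the maintenance work ...
kubectl label pod my-pod maintenance-mode-        # step 5: a trailing '-' removes the label
kubectl get pod my-pod --show-labels              # step 6: verify the label is gone
```

A typical use of such a label is to temporarily exclude the pod from a Service's selector so it stops receiving traffic during maintenance.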
64. Can you schedule the pods to the node if the node is tainted?
Ans: If a node is tainted, pods will not be scheduled on it by default, but you can use tolerations in the pod spec to allow specific pods to be scheduled on the tainted node.
Tolerations are used to specify that a pod can tolerate (or “ignore”) a certain taint, allowing it to be scheduled on a tainted node. This can be useful in scenarios where you want to reserve certain nodes for specific types of workloads or to mark nodes as unsuitable for certain workloads.
Apply taint to a node:
kubectl taint nodes node1 key=value:NoSchedule
Apply tolerations to the pod:
spec:
  tolerations:
  - key: "key"
    operator: "Equal"
    value: "value"
    effect: "NoSchedule"
65. How can you achieve zero downtime in Kubernetes?
Ans: We can achieve zero downtime in Kubernetes through the RollingUpdate strategy. It is a process that allows updating the Kubernetes system with little effect on performance and zero downtime. This strategy involves gradually replacing old instances of an application with new ones, ensuring that the application is always available to end users.
When you use RollingUpdate strategy, Kubernetes creates a new replica set with the updated version of your application and gradually replaces the old replica set with the new one. This ensures that the new version is rolled out to users gradually and any issues can be caught early.
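A hedged sketch of a Deployment using the RollingUpdate strategy (the name and image are illustrative): maxUnavailable: 0 with maxSurge: 1 means a new pod must become Ready before an old one is removed, which is what gives the zero-downtime behavior; the readinessProbe gates traffic until the new pod can actually serve:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0    # never drop below the desired replica count
      maxSurge: 1          # allow one extra pod during the rollout
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25   # changing this field triggers a rolling update
        readinessProbe:     # traffic reaches a new pod only once this passes
          httpGet:
            path: /
            port: 80
```

Changing the image (for example with kubectl set image deployment/web web=nginx:1.26) then rolls pods over one at a time without interrupting service.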
66. How can you run a pod on a specific node?
Ans:
We can run a pod on a specific node using node affinity (or a node selector). Here, the node is given an arbitrary label, and the pod is configured to be scheduled onto nodes that carry that label.
For example, this command creates an arbitrary label, nodelocation, and assigns the value Germany to the node named person-01:
kubectl label nodes person-01 nodelocation=Germany
67. A Pod running a critical service in your EKS cluster has suddenly failed. How would you troubleshoot and resolve this issue?
Ans: Start by checking the status and logs of the Pod using the kubectl describe pod and kubectl logs commands respectively. To store and monitor your EKS cluster logs, consider using the CloudWatch Logs feature of AWS.
68. You have a set of Pods running an application in EKS. How would you expose this application to outside traffic?
Ans: AWS Load Balancer Controller in EKS allows for exposure of services to outside traffic. By adding annotations to your service, you can create an AWS Application Load Balancer or a Network Load Balancer.
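As a sketch, a Service of type LoadBalancer annotated for the AWS Load Balancer Controller might look like the following (the annotation keys come from the controller's documentation; the service name, selector, and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service        # placeholder
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "external"
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
spec:
  type: LoadBalancer
  selector:
    app: my-app           # placeholder label
  ports:
  - port: 80
    targetPort: 8080
```

With these annotations the controller provisions a Network Load Balancer; an Application Load Balancer is instead created from an Ingress resource.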
69. Your EKS application is experiencing higher than expected traffic. How would you automatically scale the Pods?
Ans: Set up the Kubernetes Horizontal Pod Autoscaler (HPA) in EKS. For the metrics needed for autoscaling, EKS supports the Kubernetes Metrics Server. For node-level scaling, combine this with the Kubernetes Cluster Autoscaler (or Karpenter) on EKS so that new nodes are added when pods cannot be scheduled.
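A minimal HPA sketch targeting average CPU utilization (the Deployment name is a placeholder):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa        # placeholder
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app          # the Deployment to scale (placeholder)
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale out when average CPU exceeds 70%
```

Note that CPU utilization is computed against the containers' resource requests, so the target Deployment must declare them.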
70. Your EKS application needs to access sensitive information such as database passwords. How would you securely manage this information?
Ans: Kubernetes Secrets in EKS can be used for storing sensitive information. AWS Secrets Manager also provides a secure way to store secrets; the AWS Secrets and Configuration Provider (ASCP) for the Secrets Store CSI Driver lets Pods mount secrets directly from Secrets Manager, optionally syncing them into Kubernetes Secrets.
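As a sketch, a plain Kubernetes Secret and a container consuming it as an environment variable (names and the value are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials          # placeholder
type: Opaque
stringData:                     # stringData avoids manual base64 encoding
  DB_PASSWORD: "example-password"   # placeholder value
---
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: app
    image: my-app:latest        # placeholder image
    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: DB_PASSWORD
```

Remember that Secrets are only base64-encoded by default; enable envelope encryption (e.g., with AWS KMS) for encryption at rest in etcd.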
71. Your EKS application needs to write and read data from a persistent storage. How can you achieve this in EKS?
Ans: The Amazon EBS CSI driver in EKS can provision Amazon EBS volumes for Pods. Persistent Volumes and Persistent Volume Claims can be defined to bind Pods to EBS volumes.
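A sketch of a PersistentVolumeClaim that the EBS CSI driver can satisfy by dynamically provisioning a volume (the claim name is a placeholder; gp2/gp3 StorageClasses are commonly available on EKS but should be verified in your cluster):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim        # placeholder
spec:
  accessModes:
  - ReadWriteOnce         # EBS volumes attach to a single node
  storageClassName: gp2   # assumed EBS-backed StorageClass
  resources:
    requests:
      storage: 10Gi
```

A Pod then mounts it via a persistentVolumeClaim volume referencing data-claim.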
72. How can you ensure separation of resources in a multi-tenant EKS cluster?
Ans: Use Kubernetes Namespaces for virtual separation within the EKS cluster. AWS also provides the IAM roles for service accounts feature to assign IAM permissions to Pods.
73. You have deployed an application in EKS using a Deployment object. You now need to update the application with zero downtime. How would you achieve this?
Ans: Kubernetes Deployments in EKS support Rolling updates. This strategy allows for incremental updates of Pods instances with new ones, ensuring zero downtime.
74. You rolled out an update to your application in EKS, but there are unexpected errors. How would you rollback the update?
Ans: Use the kubectl rollout undo deployment/<deployment-name> command to roll back a Deployment in EKS. This reverts the Deployment to its previous state.
75. An EC2 instance serving as a Node in your EKS cluster has failed. How does EKS handle this situation?
Ans: The Kubernetes node controller in EKS marks the Node as NotReady and, after the eviction timeout, the Pods are rescheduled to other healthy Nodes in the cluster. If your nodes are part of an EC2 Auto Scaling group, a new instance will replace the failed one.
76. How does EKS distribute network traffic to Pods?
Ans: EKS uses Kubernetes Services to define a set of Pods and route network traffic to them. Additionally, the AWS Load Balancer Controller can create an AWS Application Load Balancer or a Network Load Balancer to distribute traffic among Pods.
77. You want to set up a GitOps workflow for your EKS applications with automatic syncing when changes are pushed to the application’s repository. How would you achieve this?
Ans: Install Argo CD in your EKS cluster and create an Argo CD application pointing to the desired Git repository. By setting the sync policy to ‘automatic’, Argo CD will automatically apply changes whenever the repository’s state changes.
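A sketch of such an Argo CD Application with automated sync (the repository URL, path, and namespaces are placeholders):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-app.git   # placeholder repo
    targetRevision: main
    path: manifests
  destination:
    server: https://kubernetes.default.svc   # the cluster Argo CD runs in
    namespace: my-app
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual drift back to the Git state
```

With prune and selfHeal enabled, Git becomes the single source of truth for the application's state.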
78. You are tasked with setting up a monitoring solution for your EKS applications. Which tools would you use and how would you set them up?
Ans: Use Prometheus for metrics collection and Grafana for metrics visualization. Both can be deployed in your EKS cluster. With the Prometheus Operator, setup in Kubernetes can be simplified.
79. Your team needs to be alerted when the CPU usage of any Pod in your EKS cluster exceeds 80% for more than 5 minutes. How would you set this up?
Ans: Use Prometheus to scrape CPU metrics from your EKS cluster and define an alerting rule that fires when the condition holds for more than 5 minutes. Alertmanager (or Grafana alerting) can then route the notifications to channels such as email, Slack, or PagerDuty.
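A sketch of such a rule as a Prometheus Operator PrometheusRule (this assumes cAdvisor and kube-state-metrics metrics are being scraped and that pods declare CPU limits; the rule and alert names are placeholders):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: pod-cpu-alerts      # placeholder
spec:
  groups:
  - name: cpu
    rules:
    - alert: PodHighCpu
      # Ratio of actual CPU usage to the CPU limit, per pod
      expr: |
        sum(rate(container_cpu_usage_seconds_total{container!=""}[5m])) by (namespace, pod)
          /
        sum(kube_pod_container_resource_limits{resource="cpu"}) by (namespace, pod)
          > 0.8
      for: 5m               # must hold for 5 minutes before firing
      labels:
        severity: warning
      annotations:
        summary: "Pod {{ $labels.pod }} CPU above 80% of its limit for 5 minutes"
```

The `for: 5m` clause is what prevents short CPU spikes from paging the team.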
80. Your Prometheus instance is unable to handle the load of your growing EKS cluster. How would you scale it?
Ans: Scale Prometheus in EKS by dividing the targets that Prometheus scrapes into different Prometheus servers, a process known as sharding. Alternatively, you can use Thanos or Cortex for horizontal scalability of Prometheus.
81. Your team wants a Grafana dashboard to visualize the HTTP request latency of your applications running in EKS. How would you achieve this?
Ans: After ensuring your application exposes these metrics and Prometheus scrapes them, set up a Grafana dashboard for visualization. Grafana can create plots using data queried from Prometheus.
82. You deployed an application update using Argo CD in your EKS cluster, but the new version is causing errors. How would you rollback to the previous version?
Ans: In the Argo CD dashboard, select the application, then click on the ‘App Details’ tab, and finally on the ‘History’ tab. Select the desired state and hit the ‘Sync’ button to rollback.
83. Your company uses multiple EKS clusters for different environments (development, staging, production). How would you streamline the deployment process across these clusters?
Ans: Register all your EKS clusters to your Argo CD instance. When defining the Argo CD application, specify the destination cluster where the application should be deployed.
84. You need to collect and analyze metrics of the EKS control plane itself. How can you achieve this?
Ans: Prometheus can collect metrics from the EKS control plane. Set up service monitors for the EKS API server and other components. Use Grafana to visualize these metrics.
85. Your team needs to access Grafana dashboards, but you want to ensure only authorized persons can view it. How would you secure Grafana?
Ans: Grafana supports multiple authentication methods, including OAuth, LDAP, and basic auth. Enable the appropriate authentication based on your organization’s requirements. Set up fine-grained access control in Grafana for detailed user permissions.
86. Your monitoring needs to be highly available, and you can’t afford to lose metrics data. How would you ensure the high availability of Prometheus?
Ans: Run two or more identically configured Prometheus instances in parallel for high availability. Both instances scrape the same targets, so they hold the same data. If one instance fails, the other still provides access to your metrics data.
87. What is difference between stateful sets and deployments?
Ans:
Sr No | Feature | StatefulSets | Deployments |
---|---|---|---|
1 | Purpose | Manage stateful applications | Manage stateless applications |
2 | Pod Identity | Each pod has a unique, stable network identity (e.g., myapp-0, myapp-1) | Pods are interchangeable and have no unique identity |
3 | Storage | Persistent storage is associated with each pod, ensuring data persistence across rescheduling | Storage is typically ephemeral, and pods do not retain data when rescheduled |
4 | Pod Creation Order | Pods are created and deleted in a specific, ordered sequence | Pods are created and deleted in any order, without a defined sequence |
5 | Scaling | Scaling up and down occurs one pod at a time, maintaining order | Scaling up and down can occur in parallel, without concern for order |
6 | Use Cases | Databases, key-value stores, distributed systems | Web servers, stateless APIs, batch jobs |
7 | Network Identity | Pods maintain a consistent network identity (DNS) | Pods can be accessed through a Service, but individual pods are not uniquely identifiable |
8 | Pod Updates | Updates are performed in a controlled manner, often with manual intervention to ensure order | Rolling updates are performed automatically and can be controlled with strategies like RollingUpdate or Recreate |
9 | Headless Services | Often used with Headless Services to directly expose each pod | Usually used with ClusterIP or other service types to load-balance across pods |
10 | Configuration Complexity | Higher complexity due to maintaining state and order | Generally simpler due to stateless nature and interchangeable pods |
88. how application traffic reach to application on aws eks cluster, explain with proper flow?
Ans:
1. User Request: A user opens a browser and navigates to https://myapp.example.com.
2. DNS Resolution: The domain myapp.example.com is resolved to the IP address of the AWS Load Balancer via Route 53 or another DNS service.
3. AWS Load Balancer: The user’s request hits the AWS Load Balancer (e.g., ALB). The ALB, configured by the Ingress Controller, forwards the request based on the Ingress rules.
4. Ingress Resource: The Ingress resource in the EKS cluster defines the routing rules. For example, it routes traffic from /api to the api-service and /web to the web-service.
5. Service Definition: The Ingress Controller routes the traffic to the appropriate Kubernetes Service (e.g., api-service). The Service is of type LoadBalancer, NodePort, or ClusterIP and exposes the necessary pods.
6. Pod Selection and Endpoints: The Service identifies the pods using label selectors (e.g., app: myapp-api). It maintains an up-to-date list of endpoints (pod IPs) that match the selector.
7. Routing to Pods: The Service routes the request to one of the healthy pods listed in its endpoints. kube-proxy on the node handles the actual routing based on the Service’s rules.
8. Application Response: The pod processes the request and sends the response back to the user through the same path: Pod → Service → AWS Load Balancer → User’s Browser.
User -> DNS (Route 53) -> AWS Load Balancer (ALB/NLB) -> Ingress Controller -> Kubernetes Service -> Pod
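A sketch of the Ingress resource behind this flow, routing /api and /web to their services (the host, service names, and ports are illustrative; the alb annotations are from the AWS Load Balancer Controller and should be checked against your controller version):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
  annotations:
    kubernetes.io/ingress.class: alb                  # handled by the AWS Load Balancer Controller
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80
      - path: /web
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80
```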
89. Can we use many claims out of a persistent volume? Explain in detail?
Ans: No. The binding between a PersistentVolume and a PersistentVolumeClaim is always one-to-one: a PV can be bound to only one claim at a time. Even when you delete the claim, the PersistentVolume remains (because persistentVolumeReclaimPolicy is set to Retain), but it will not be reused by any other claim until it is manually reclaimed. Below is the spec to create the PersistentVolume.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
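A single claim like the following can then bind to that volume, and no second claim can bind to it while the first exists (the claim name is a placeholder):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc          # placeholder
spec:
  accessModes:
  - ReadWriteOnce       # must be compatible with the PV's access modes
  resources:
    requests:
      storage: 10Gi     # must fit within the PV's capacity
```

If you need many pods sharing the same storage, that is handled by the ReadWriteMany access mode (multiple pods, still one claim), not by multiple claims on one PV.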
90. What kind of object do you create when your dashboard-like application queries the Kubernetes API to get some data?
Ans: You should create a service account. A service account creates a token, and tokens are stored inside a Secret object. By default, Kubernetes automatically mounts the default service account into every pod. However, we can disable this behaviour by setting automountServiceAccountToken: false in the spec. Also note that each namespace has its own default service account.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-sa
automountServiceAccountToken: false
91. What is difference between pod and job?
Ans:
A Pod always ensures that its containers are running; on its own it represents a long-running workload.
A Job ensures that its pods run to completion. A Job is meant for a finite task.
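A minimal Job sketch for a finite task (the name and command are placeholders):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: one-off-task        # placeholder
spec:
  completions: 1            # the Job is done after one successful run
  backoffLimit: 3           # retry a failed pod up to 3 times
  template:
    spec:
      restartPolicy: Never  # Jobs require Never or OnFailure
      containers:
      - name: task
        image: busybox
        command: ["sh", "-c", "echo processing && sleep 5"]
```

Once the pod exits successfully, the Job is marked Complete and no replacement pod is created; a bare Pod or Deployment would instead try to keep the container running.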
92. How to monitor that pod is always running?
Ans: A liveness probe periodically checks whether the application in a pod is still running; if this check fails, the container gets restarted. This is ideal for scenarios where the container is running but the application inside it has crashed or hung.
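A sketch of an HTTP liveness probe (the image and the /healthz endpoint are hypothetical; the application is assumed to serve a health check on port 8080):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app              # placeholder
spec:
  containers:
  - name: app
    image: my-app:latest    # placeholder image
    livenessProbe:
      httpGet:
        path: /healthz      # hypothetical health endpoint
        port: 8080
      initialDelaySeconds: 10  # give the app time to start before probing
      periodSeconds: 5         # probe every 5 seconds
      failureThreshold: 3      # restart after 3 consecutive failures
```

Kubernetes also supports tcpSocket and exec probes for applications that do not expose an HTTP endpoint.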
93. What are the types of multi-container pod patterns?
Ans:
Sidecar: A pod spec which runs the main container and a helper container that does some utility work, but that is not necessarily needed for the main container to work.
Adapter: The adapter container inspects the main app’s output (for example, a log file), restructures and reformats it, and writes the correctly formatted output to a location where it can be consumed.
Ambassador: It connects containers with the outside world. It is a proxy that allows other containers to connect to a port on localhost.
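A sketch of the sidecar pattern: the main container writes logs to a shared emptyDir volume and a helper container reads them from there (the image names and log path are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  volumes:
  - name: logs
    emptyDir: {}                  # shared scratch space for both containers
  containers:
  - name: main-app
    image: my-app:latest          # placeholder; assumed to log to /var/log/app
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  - name: log-shipper             # the sidecar
    image: busybox
    command: ["sh", "-c", "tail -F /var/log/app/app.log"]
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
```

The adapter and ambassador patterns use the same multi-container structure; what differs is the helper’s role (reformatting output vs. proxying connections).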