AWS Services Based Interview Questions and Answers
1. What is Cloud Computing?
Ans: Cloud computing provides on-demand access to programs, applications, storage, networks, and servers over the internet, through a browser or a client-side application on your PC, laptop, or mobile device, without the end user having to install, update, or maintain them.
2. Why do we need to go with Cloud Computing?
Ans: Below are some advantages of cloud computing:
- Lower Computing Cost
- Improved Performance
- No IT maintenance
- Business Connectivity
- Easily Upgraded
- Device independent
3. What are deployment models in the cloud?
Ans: Below are the deployment models:
- Private cloud
- Public cloud
- Hybrid Cloud
4. What is AWS?
Ans: AWS stands for Amazon Web Services. AWS is a platform that provides on-demand resources for hosting web services, storage, networking, databases and other resources over the internet with a pay-as-you-go pricing.
5. What are the Components of AWS?
Ans: There are multiple AWS components which we can use to create and host the application. Below are some of them.
- EC2 (Elastic Compute Cloud)
- S3 (Simple Storage Service)
- Route 53
- EBS (Elastic Block Store)
- AWS CloudWatch
6. What is meant by Region, Availability Zone and Edge Location?
Ans:
Region: An independent collection of AWS resources in a defined geography; a collection of data centers (Availability Zones). All Availability Zones in a region are connected by high-bandwidth links.
Availability Zones: An Availability Zone is simply a data center, designed as an independent failure zone, with high-speed connectivity and low latency.
Edge Locations: Edge locations are an important part of the AWS infrastructure. They are CDN endpoints for CloudFront to deliver content to end users with low latency.
7. How do you access the AWS platform?
Ans:
- Using AWS Console
- AWS CLI (Command Line Interface)
- AWS SDK (Software development kit)
8. Explain the service models of cloud computing?
Ans:
- SAAS (Software as a Service): A software distribution model in which applications are hosted by a vendor over the internet, freeing the end user from complex software and hardware management. (Ex: Google Drive, Dropbox)
- PAAS (Platform as a Service): It provides a platform and environment that allow developers to build applications, freeing them from the complexity of building and maintaining the infrastructure. (Ex: AWS Elastic Beanstalk, Windows Azure)
- IAAS (Infrastructure as a Service): It provides virtualized computing resources over the internet, like CPU, memory, switches, routers, firewalls, DNS, and load balancers. (Ex: Azure, AWS)
9. What is Elastic Beanstalk?
Ans: AWS Elastic Beanstalk is the fastest and simplest way to get an application up and running on AWS. Developers can simply upload their code, and the service automatically handles all the details such as resource provisioning, load balancing, auto scaling, and monitoring.
10. What is Amazon Lightsail?
Ans: Lightsail is designed to be the easiest way to launch and manage a virtual private server with AWS. Lightsail plans include everything you need to jump-start your project: a virtual machine, SSD-based storage, data transfer, DNS management, and a static IP.
11. What is a snowball?
Ans: Snowball is a data transport solution that uses secure appliances to transfer large amounts of data into and out of AWS. Using Snowball, you can move huge amounts of data from one place to another, which reduces your network costs and long transfer times while also providing better security.
12. What is MFA in AWS?
Ans: Multi-Factor Authentication (MFA) adds an extra layer of security to your infrastructure by requiring a second method of authentication beyond just a password or access key.
13. What are the authentication methods in AWS?
Ans:
- Username and Password
- Access key
- Access Key / Session Token
14. What is SNS?
Ans: SNS stands for Simple Notification Service. SNS is a web service that makes it easy to send notifications from the cloud. You can set up SNS to deliver email or SMS message notifications.
15. What is Amazon ElastiCache?
Ans: Amazon ElastiCache is a web service that simplifies the setup and management of a distributed in-memory caching environment.
- Cost Effective
- High Performance
- Scalable Caching Environment
- Using Memcached or Redis Cache Engine
16. What is AWS Certificate Manager ?
Ans: AWS Certificate Manager is a service that lets you easily provision, manage, and deploy public and private Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificates for use with AWS services and your internal connected resources. SSL/TLS certificates are used to secure network communications and to establish the identity of websites over the internet as well as resources on private networks. AWS Certificate Manager removes the time-consuming manual process of purchasing, uploading, and renewing SSL/TLS certificates.
17. What is the AWS Key Management Service?
Ans: AWS Key Management Service (AWS KMS) is a managed service that makes it easy for you to create and control the encryption keys used to encrypt your data. AWS KMS is also integrated with AWS CloudTrail to provide encryption key usage logs, helping you meet your auditing, regulatory, and compliance needs.
18. What is Amazon EMR ?
Ans: Amazon Elastic MapReduce (EMR) is a service that provides a fully managed, hosted Hadoop framework on top of Amazon Elastic Compute Cloud (EC2).
19. What is Amazon Kinesis Firehose ?
Ans: Amazon Kinesis Data Firehose is the easiest way to reliably load streaming data into data stores and analytics tools. It is a fully managed service that automatically scales to match the throughput of your data and requires no ongoing administration.
20. There are four main AWS services related to CI/CD: CodeCommit, CodePipeline, CodeBuild, and CodeDeploy. Describe each of them?
Ans:
1. AWS CodeCommit: It is essentially a managed service for Git-based source control; like S3, Amazon manages and scales it behind the scenes for you.
2. AWS CodeBuild: It is used to build, test, and generate artifacts—files that are generated from successful build steps—for deployment. This, too, is a managed service, doing provisioning and scaling automatically.
3. AWS CodeDeploy: It automates application deployments to several types of compute resources such as EC2 instances or ECS clusters.
4. AWS CodePipeline: It is a continuous delivery service that allows automating and integrating build, test, and deploy processes.
21. How do you ensure security in AWS environments?
Ans: Security in AWS environments can be ensured through various measures which are as follows:
- Using IAM users to control access
- Implementing network security with VPCs and security groups
- Encrypting data at rest and in transit using KMS Keys
- Regularly updating and patching instances
- Enabling services like AWS CloudTrail and AWS Config for monitoring and auditing.
22. Explain the difference between EC2 and Lambda.
Ans:
EC2: It is a service that provides virtual servers in the cloud, allowing users to run applications on them.
Lambda: It is a serverless computing service where you can run code without provisioning or managing servers. Lambda automatically scales based on the incoming requests and charges only for the compute time consumed.
23. How do you automate deployments in AWS?
Ans: Deployments in AWS can be automated using services like AWS CodeDeploy, AWS CloudFormation, or through scripting with tools like AWS CLI (Command Line Interface) or SDKs (Software Development Kits) in languages like Python or Node.js. Continuous Integration/Continuous Deployment (CI/CD) pipelines can also be set up using services like AWS CodePipeline.
24. Describe your experience with AWS networking?
Ans: In my experience, I have set up and configured Virtual Private Clouds (VPCs) with subnets, route tables, and internet gateways. I have implemented security measures using network ACLs and security groups. Additionally, I have configured VPN connections and Direct Connect to extend on-premises networks into AWS.
25. What is AWS and its key services?
Ans: AWS (Amazon Web Services) is a cloud computing platform offered by Amazon. Its key services include compute services like EC2, storage services like S3, database services like RDS, networking services like VPC, and many others for analytics, machine learning, security, and more.
26. What is VPC, and how does it work?
Ans: VPC (Virtual Private Cloud) is a virtual network in AWS that allows users to provision a logically isolated section of the AWS cloud. It enables users to define their own IP address range, create subnets, configure route tables, and gateways. VPC provides a secure and scalable environment for deploying AWS resources.
27. Explain the concept of Auto Scaling in AWS?
Ans: Auto Scaling in AWS automatically adjusts the number of EC2 instances or other resources in response to changes in demand or conditions defined by the user. It helps maintain application availability and performance while minimizing costs by scaling resources up during peak periods and down during low usage.
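The proportional idea behind target-tracking scaling can be sketched as follows. This is a simplified illustration of the math, not the actual AWS algorithm, and all numbers are made up:

```python
import math

def desired_capacity(current, metric, target, minimum, maximum):
    # Scale capacity in proportion to how far the observed metric
    # (e.g. average CPU) is from the target, then clamp to the
    # group's configured min/max size.
    proposed = math.ceil(current * metric / target)
    return max(minimum, min(maximum, proposed))

# 4 instances at 80% average CPU against a 50% target -> scale out to 7.
scale_out = desired_capacity(current=4, metric=80, target=50, minimum=2, maximum=10)
# 4 instances at 20% average CPU -> scale in, but never below the minimum of 2.
scale_in = desired_capacity(current=4, metric=20, target=50, minimum=2, maximum=10)
```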
28. How do you monitor AWS resources and applications?
Ans:
AWS CloudWatch: Monitoring in AWS can be done using services like CloudWatch, which collects and tracks metrics, sets alarms, and triggers actions based on predefined thresholds.
AWS CloudTrail: It provides visibility into user activity and API usage. Additionally, third-party monitoring tools can be integrated for more comprehensive monitoring.
29. What is AWS CloudFormation, and how do you use it?
Ans: AWS CloudFormation is a service that allows you to define and provision AWS infrastructure as code using templates. These templates can be written in JSON or YAML format and describe the resources and their configurations. CloudFormation automates the provisioning and management of resources, making it easier to replicate and manage infrastructure.
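As a rough illustration, a minimal template might provision a single S3 bucket. It is built here as a Python dict in the JSON template form; the resource and parameter names are hypothetical:

```python
import json

# Hypothetical minimal CloudFormation template: one parameterized S3 bucket.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Minimal example: one S3 bucket",
    "Parameters": {
        "BucketName": {"Type": "String", "Description": "Name for the bucket"}
    },
    "Resources": {
        "AppBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": {"Ref": "BucketName"}},
        }
    },
    "Outputs": {
        "BucketArn": {"Value": {"Fn::GetAtt": ["AppBucket", "Arn"]}},
    },
}

# This JSON string is what you would pass as the template body.
template_body = json.dumps(template, indent=2)
```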
30. Describe your experience with AWS IAM (Identity and Access Management).
Ans: In my role, I’ve used AWS IAM to manage access to AWS services and resources. This includes creating and managing IAM users, groups, and roles, assigning permissions using policies, enabling multi-factor authentication (MFA), and integrating IAM with other AWS services for fine-grained access control.
31. Explain the difference between RDS and DynamoDB?
Ans: RDS (Relational Database Service) is a managed database service that supports relational databases like MySQL, PostgreSQL, and SQL Server. RDS offers features like automatic backups, scaling, and patching for relational databases.
DynamoDB, on the other hand, is a fully managed NoSQL database service provided by AWS. DynamoDB provides fast and scalable performance for key-value and document data.
32. How do you ensure high availability in AWS environments?
Ans: High availability in AWS environments can be ensured by following ways:
- deploying resources across multiple Availability Zones (AZs) within a region
- using services like Elastic Load Balancing (ELB) to distribute traffic
- implementing Auto Scaling to dynamically adjust capacity
- leveraging managed services with built-in redundancy and failover capabilities.
33. What is AWS Lambda, and how do you use it?
Ans: AWS Lambda is a serverless compute service that allows you to run code in response to events without provisioning or managing servers.
You can upload your code in languages like Node.js, Python, or Java, and Lambda automatically scales and executes it in response to events triggered by other AWS services or custom sources.
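A minimal handler sketch, invoked directly here the way Lambda would invoke it for an event; the event shape (a "name" field) is an illustrative assumption, not a standard AWS event:

```python
import json

def handler(event, context):
    # Lambda entry point: receives the triggering event as a dict and
    # returns a JSON-serializable response.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello {name}"}),
    }

# In AWS, the Lambda runtime calls handler(); locally we can call it ourselves.
result = handler({"name": "aws"}, None)
```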
34. Describe your experience with AWS ECS (Elastic Container Service)?
Ans: AWS ECS is a fully managed container orchestration service that allows you to run, stop, and manage Docker containers on a cluster of EC2 instances or AWS Fargate.
In my experience, I’ve used ECS to deploy and scale containerized applications, manage task definitions, and integrate with other AWS services for logging, monitoring, and networking.
35. Explain the use cases for AWS S3 (Simple Storage Service)?
Ans: AWS S3 is a highly scalable object storage service that can be used for a variety of use cases such as:
- storing and serving static website content
- hosting backups and archives
- storing data for analytics and big data processing
- data lake for storing structured and unstructured data.
36. How do you handle disaster recovery in AWS?
Ans: Disaster recovery in AWS involves implementing strategies such as
- Data replication across multiple regions using services like S3 Cross-Region Replication
- RDS Multi-AZ deployments
- Creating automated backups with services like AWS Backup and snapshots
- Testing recovery procedures regularly to ensure rapid recovery in the event of a disaster.
37. What is AWS CloudWatch, and how do you use it for monitoring?
Ans: AWS CloudWatch is a monitoring and observability service that provides metrics, logs, and events for AWS resources and applications.
You can use CloudWatch to:
- Collect and track metrics
- Set alarms based on metric thresholds
- Store and analyze log data
- Gain insights into resource utilization
- Performance
- Operational health
38. Describe your experience with AWS CloudFront?
Ans: AWS CloudFront is a content delivery network (CDN) service that accelerates the delivery of content to users by caching data at edge locations around the world.
In my experience, I’ve used CloudFront to distribute static and dynamic content, improve website performance, and reduce latency by caching content closer to end-users.
39. What is AWS Elastic Beanstalk, and how does it work?
Ans: AWS Elastic Beanstalk is a platform as a service (PaaS) offering that makes it easy to deploy and manage applications in the AWS cloud. You can simply upload your application code, and Elastic Beanstalk automatically handles the deployment, scaling, and monitoring of the underlying infrastructure required to run the application.
40. Explain the difference between AWS S3 and EBS (Elastic Block Store)?
Ans:
AWS S3 (Simple Storage Service): It is an object storage service designed for storing and retrieving large amounts of data.
EBS (Elastic Block Store): It is a block storage service designed for use with EC2 instances, providing persistent block-level storage volumes that can be attached to instances.
41. How do you optimize costs in AWS environments?
Ans: Cost optimization in AWS involves various strategies such as
- Rightsizing EC2 instances based on workload requirements
- Leveraging Reserved Instances and Savings Plans for predictable workloads
- Using Spot Instances for flexible workloads
- Implementing auto-scaling to match resource usage with demand
- Regularly reviewing and optimizing resource usage.
42. Describe your experience with AWS CloudTrail?
Ans: AWS CloudTrail is a service that provides visibility into user activity and API usage in AWS environments by recording API calls and logging related events.
We can use CloudTrail to track changes to resources, troubleshoot operational issues, and comply with auditing and compliance requirements.
43. What is AWS Lambda@Edge, and when would you use it?
Ans: AWS Lambda@Edge is an extension of AWS Lambda that allows you to run Lambda functions at AWS edge locations in response to CloudFront events.
You would use Lambda@Edge to customize content delivery, manipulate HTTP headers, implement security measures, and perform other edge computing tasks to enhance the performance and security of your applications.
44. How do you manage permissions and access control in AWS?
Ans: Permissions and access control in AWS are managed using AWS Identity and Access Management (IAM), which allows you to create and manage users, groups, and roles, and assign permissions using policies. IAM provides fine-grained access control to AWS resources and services based on the principle of least privilege.
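For example, a least-privilege policy granting read-only access to a single S3 bucket might look like the following sketch; the bucket name is hypothetical:

```python
import json

# Hypothetical least-privilege IAM policy: read-only access to one bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadOnlyExampleBucket",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",     # the bucket itself (ListBucket)
                "arn:aws:s3:::example-bucket/*",   # objects in it (GetObject)
            ],
        }
    ],
}

# This JSON string is what you would attach as the policy document.
policy_document = json.dumps(policy)
```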
45. Explain the difference between AWS Redshift and Aurora.
Ans: AWS Redshift is a fully managed data warehouse service designed for running analytics queries on large datasets using SQL, while Aurora is a MySQL and PostgreSQL-compatible relational database engine designed for high performance, scalability, and availability.
46. Describe your experience with AWS ECS Fargate?
Ans: AWS ECS (Elastic Container Service) Fargate is a serverless compute engine for containers that allows you to run containers without managing the underlying infrastructure. In my experience, I’ve used ECS Fargate to deploy and scale containerized applications without provisioning or managing EC2 instances.
47. What is AWS Direct Connect, and when would you use it?
Ans: AWS Direct Connect is a dedicated network connection service that allows you to establish a private connection between your on-premises data center and AWS. You would use Direct Connect for high bandwidth, low latency, and consistent network performance requirements, such as data migration, hybrid cloud deployments, or large-scale data transfer.
48. How do you ensure data encryption in transit and at rest in AWS?
Ans: Data encryption in transit can be ensured by using SSL/TLS encryption for communication between clients and AWS services, while data encryption at rest can be achieved by using services like AWS Key Management Service (KMS) to manage encryption keys and encrypting data using server-side encryption (SSE) with S3, EBS, RDS, and other AWS services.
49. Describe your experience with AWS RDS Multi-AZ deployments?
Ans: AWS RDS (Relational Database Service) Multi-AZ deployments provide high availability and fault tolerance for database instances by automatically replicating data synchronously to standby instances in different Availability Zones.
In my experience, I’ve used Multi-AZ deployments to enhance the reliability and durability of critical database workloads.
50. What is AWS API Gateway, and how does it work?
Ans: AWS API Gateway is a fully managed service that allows you to create, publish, maintain, monitor, and secure APIs at any scale. It acts as a front door for applications to access data, business logic, or functionality hosted on AWS or on-premises servers, and provides features like API throttling, authorization, and monitoring.
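A sketch of a Lambda handler sitting behind API Gateway's proxy integration: the event carries the HTTP method and path, and the handler returns the proxy response shape. The /health route is a hypothetical example:

```python
import json

def proxy_handler(event, context):
    # API Gateway (proxy integration) passes the request as a dict with
    # keys like "httpMethod" and "path", and expects a response dict
    # with statusCode, headers, and a string body.
    if event.get("httpMethod") == "GET" and event.get("path") == "/health":
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"status": "ok"}),
        }
    return {"statusCode": 404, "body": json.dumps({"error": "not found"})}

# Simulate the event API Gateway would deliver for GET /health.
response = proxy_handler({"httpMethod": "GET", "path": "/health"}, None)
```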
51. Explain the concept of AWS Organizations?
Ans: AWS Organizations is a service that allows you to centrally manage and govern multiple AWS accounts within your organization. It provides features for creating and organizing accounts into hierarchical groupings, applying policies for security, compliance, and cost management, and automating account management tasks.
52. How do you handle logging in AWS environments?
Ans: Logging in AWS environments can be handled using services like CloudWatch Logs, which collects and monitors log data from AWS resources and applications, and provides features for searching, filtering, and analyzing log data. Additionally, third-party logging solutions can be integrated for centralized logging and analysis.
53. Describe your experience with AWS Lambda functions?
Ans: I have used AWS Lambda functions to implement serverless compute solutions for various use cases such as data processing, event-driven automation, real-time processing, and backend services. I’ve written Lambda functions in languages like Python, Node.js, and Java, and integrated them with other AWS services and external systems.
54. What is AWS EFS (Elastic File System), and when would you use it?
Ans: AWS EFS is a scalable and fully managed file storage service designed for use with AWS cloud services and on-premises resources. You would use EFS when you need a shared file system that can be accessed concurrently by multiple EC2 instances or containers in a scalable and elastic manner.
55. Explain the concept of AWS CloudFormation StackSets?
Ans: AWS CloudFormation StackSets is a feature that allows you to provision and manage AWS resources across multiple accounts and regions from a single template. It enables centralized management and automation of infrastructure deployments at scale, making it easier to ensure consistency and compliance across environments.
56. Describe your experience with AWS Step Functions?
Ans: AWS Step Functions is a serverless orchestration service that allows you to coordinate and manage workflows involving multiple AWS services and tasks.
In my experience, I’ve used Step Functions to create state machines that define the flow of tasks, handle errors and retries, and provide visibility and monitoring into workflow execution.
57. What is AWS Elastic Load Balancing, and how does it work?
Ans: AWS Elastic Load Balancing is a service that automatically distributes incoming application traffic across multiple targets, such as EC2 instances, containers, and IP addresses, to ensure high availability and fault tolerance. It works by monitoring the health of targets and routing traffic to healthy targets based on configured load balancing algorithms.
58. How do you ensure compliance with AWS best practices?
Ans: Compliance with AWS best practices can be ensured by:
- following the AWS Well-Architected Framework principles
- regularly reviewing and implementing AWS security best practices
- conducting periodic security assessments and audits
- leveraging AWS Trusted Advisor and other tools for automated compliance checks and recommendations.
59. Explain the use cases for AWS DynamoDB Streams?
Ans: AWS DynamoDB Streams is a feature that captures changes to DynamoDB tables in real-time and streams them to other AWS services or custom applications. You would use DynamoDB Streams for use cases such as data replication, change data capture, event-driven processing, and building real-time analytics applications.
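A consuming Lambda receives batches of stream records. A minimal sketch of filtering for INSERT events follows; the table attribute names ("pk") are illustrative:

```python
def process_stream_records(event):
    # Walk the batch of DynamoDB Streams records and collect the
    # partition keys of newly inserted items, ignoring MODIFY/REMOVE.
    inserts = []
    for record in event.get("Records", []):
        if record.get("eventName") == "INSERT":
            new_image = record["dynamodb"]["NewImage"]
            inserts.append(new_image["pk"]["S"])  # "S" = string attribute
    return inserts

# A hand-built sample batch in the Streams event format.
sample_event = {
    "Records": [
        {"eventName": "INSERT", "dynamodb": {"NewImage": {"pk": {"S": "order-1"}}}},
        {"eventName": "MODIFY", "dynamodb": {"NewImage": {"pk": {"S": "order-2"}}}},
    ]
}
processed = process_stream_records(sample_event)
```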
60. Describe your experience with AWS Kinesis?
Ans: AWS Kinesis is a platform for streaming data on AWS, consisting of three services: Kinesis Data Streams, Kinesis Data Firehose, and Kinesis Data Analytics. We have used Kinesis to ingest, process, and analyze real-time streaming data from various sources such as IoT devices, logs, and clickstreams.
61. What is AWS OpsWorks, and how do you use it?
Ans: AWS OpsWorks is a configuration management service that automates the deployment and management of applications and resources on AWS using Chef or Puppet.
You can use OpsWorks to define the configuration of your applications, deploy code to EC2 instances or ECS clusters, and manage the lifecycle of your infrastructure.
62. How do you manage secrets and sensitive information in AWS?
Ans: Secrets and sensitive information in AWS can be managed using AWS Secrets Manager or AWS Systems Manager Parameter Store.
These services provide secure storage and management of sensitive data such as passwords, API keys, and encryption keys, and allow you to securely retrieve and rotate secrets as needed.
63. Describe your experience with AWS ElastiCache?
Ans: AWS ElastiCache is a fully managed in-memory caching service that supports popular caching engines like Redis and Memcached.
In my experience, I’ve used ElastiCache to improve application performance by caching frequently accessed data, reducing database load, and lowering latency for read-heavy workloads.
64. What is AWS CodeDeploy, and how does it work?
Ans: AWS CodeDeploy is a deployment automation service that allows you to deploy applications to EC2 instances, Lambda functions, or on-premises servers with automated rollbacks and health monitoring.
It works by deploying application revisions to target instances and verifying deployment success using configurable deployment strategies.
65. Explain the concept of AWS Snowball?
Ans: AWS Snowball is a service that allows you to transfer large amounts of data into and out of AWS using secure, ruggedized devices. You would use Snowball for offline data transfer when network bandwidth is limited or when transferring large datasets to or from AWS that would be impractical over the internet.
66. How do you troubleshoot performance issues in AWS environments?
Ans: Troubleshooting performance issues in AWS environments involves analyzing metrics and logs using services like CloudWatch and CloudTrail, identifying bottlenecks and resource constraints, optimizing resource utilization and configuration, and leveraging AWS Support and third-party monitoring tools for deeper analysis and resolution.
67. Describe your experience with AWS Glue?
Ans: AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy to prepare and transform data for analytics. In my experience, I’ve used Glue to discover, catalog, and clean data from various sources, perform schema transformation and mapping, and create ETL jobs for data processing and analysis.
68. What are the best practices for securing AWS IAM roles and policies?
Ans: Best practices for securing AWS IAM roles and policies include the following:
- Implementing the principle of least privilege
- Regularly reviewing and rotating access keys and credentials
- Enabling MFA for privileged accounts
- Using IAM policies with conditions for granular access control
- Regularly monitoring and auditing IAM activity for security compliance.
69. What are the differences between Amazon RDS Multi-AZ and Read Replicas, and when would you use each?
Ans:
Multi-AZ: Amazon RDS Multi-AZ provides synchronous replication of the primary database to a standby instance in a different Availability Zone for high availability and failover protection. Multi-AZ is used for disaster recovery and failover.
Read Replicas: Read Replicas are asynchronous replicas of the primary database used for read scaling. They are used for read-heavy workloads and scaling read operations.
70. Explain the differences between Amazon EC2 instances and AWS Lambda, and when would you choose one over the other?
Ans:
Amazon EC2 instances are virtual servers that you manage and can run any type of workload.
AWS Lambda is a serverless compute service where you upload your code and AWS manages the infrastructure.
Use EC2 for long-running or stateful workloads requiring custom configurations, and Lambda for event-driven, short-lived, and stateless functions.
71. How does Amazon CloudFront handle dynamic content caching, and what are the considerations for caching dynamic content?
Ans: Amazon CloudFront caches dynamic content at the edge using cache behaviors and cache control headers. Considerations for caching dynamic content include setting appropriate cache-control headers, handling cache invalidation with versioning or cache tags, and using Lambda@Edge for dynamic content customization.
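For example, an origin (or a Lambda@Edge function) might choose Cache-Control headers by content type, as in the sketch below; the paths and max-age values are illustrative policy choices, not recommendations:

```python
def cache_headers(path):
    # Pick a Cache-Control policy based on the requested path.
    if path.endswith((".css", ".js", ".png")):
        # Versioned static assets: safe to cache long at the edge and in browsers.
        return {"Cache-Control": "public, max-age=31536000, immutable"}
    if path.startswith("/api/"):
        # Dynamic responses: force CloudFront to revalidate on every request.
        return {"Cache-Control": "no-cache"}
    # Everything else: a short shared cache lifetime.
    return {"Cache-Control": "public, max-age=300"}
```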
72. Describe the differences between Amazon S3 and Amazon EBS, and when would you use each for storage?
Ans:
Amazon S3 is an object storage service for storing and retrieving unstructured data over the internet.
Amazon EBS provides block storage volumes for EC2 instances.
Use S3 for scalable and durable storage of objects, and EBS for block-level storage attached to EC2 instances for databases and file systems.
73. What are the advantages and disadvantages of using Amazon DynamoDB compared to traditional SQL databases like Amazon RDS?
Ans: Amazon DynamoDB is a NoSQL database with advantages such as scalability, low latency, and automatic scaling, but it lacks support for complex queries and transactions compared to traditional SQL databases like Amazon RDS, which offer flexibility and familiarity with SQL queries but may require more management overhead.
74. How does AWS Lambda handle concurrency and scaling, and what are the best practices for optimizing Lambda functions for high throughput?
Ans: AWS Lambda automatically scales to handle incoming requests concurrently by creating instances of the function in response to demand. Best practices for optimizing Lambda functions include managing function duration, setting appropriate memory and timeout configurations, optimizing code for performance, and using asynchronous invocations for long-running tasks.
75. Explain how you would design a highly available and fault-tolerant architecture using AWS services like Amazon Route 53, ELB, and Auto Scaling?
Ans: Designing a highly available and fault-tolerant architecture involves using services like Amazon Route 53 for DNS routing, Elastic Load Balancing (ELB) for distributing incoming traffic across multiple EC2 instances in different Availability Zones, and Auto Scaling for automatically adjusting capacity based on demand to ensure resilience and high availability.
76. What are the best practices for securing data in transit and at rest in AWS environments, and how would you implement them?
Ans: Best practices for securing data in transit include using HTTPS/TLS for encrypted communication, implementing AWS Certificate Manager for managing SSL/TLS certificates, and using AWS Key Management Service (KMS) for encrypting data at rest with server-side encryption.
77. Describe the differences between AWS Direct Connect and AWS VPN, and when would you use each for connecting to AWS resources?
Ans:
AWS Direct Connect: It provides dedicated private network connections between on-premises data centers and AWS, offering consistent network performance and security.
AWS VPN: It provides encrypted connections over the internet.
Use Direct Connect for high-throughput, low-latency, and private connectivity, and VPN for secure remote access and low-volume traffic.
78. How does Amazon Redshift handle data distribution and sorting, and what are the implications for query performance?
Ans: Amazon Redshift distributes data across multiple nodes using a distribution key and sorts data within each node using sort keys. Properly choosing distribution and sort keys is crucial for query performance, as it affects data distribution and retrieval efficiency during query execution.
79. What are the considerations for optimizing costs in AWS environments, and what strategies would you implement to minimize costs?
Ans: Considerations for cost optimization in AWS include the following:
- Rightsizing resources
- Leveraging Reserved Instances and Savings Plans
- Using spot instances for non-critical workloads
- Implementing lifecycle policies for storage
- Monitoring and optimizing usage with AWS Cost Explorer and AWS Budgets.
80. Explain how you would design a serverless architecture using AWS Lambda, API Gateway, and other AWS services for a scalable and cost-effective solution?
Ans: Designing a serverless architecture involves using AWS Lambda for executing business logic, Amazon API Gateway for exposing Lambda functions as HTTP endpoints, Amazon S3 for storing static assets, DynamoDB for database storage, and CloudWatch for monitoring and logging. This architecture scales automatically with usage and minimizes infrastructure management overhead.
81. How does AWS CloudFormation differ from AWS Elastic Beanstalk, and when would you use each for deploying infrastructure and applications?
Ans:
AWS CloudFormation is an infrastructure as code service for provisioning and managing AWS resources using templates.
AWS Elastic Beanstalk is a platform as a service for deploying and managing applications without worrying about underlying infrastructure.
Use CloudFormation for infrastructure automation and repeatable deployments, and Elastic Beanstalk for quick and easy application deployment and scaling.
82. Describe the differences between Amazon ECS and Amazon EKS, and when would you choose one over the other for container orchestration?
Ans:
Amazon ECS (Elastic Container Service) is a fully managed container orchestration service for running Docker containers on EC2 instances.
Amazon EKS (Elastic Kubernetes Service) is a fully managed Kubernetes service.
Choose ECS for simplicity and tight integration with AWS services, and EKS for compatibility with Kubernetes ecosystem and multi-cloud deployments.
83. What are the considerations for designing a data lake architecture using AWS services like Amazon S3, AWS Glue, and Amazon Athena?
Ans: Considerations for designing a data lake architecture include defining data ingestion and storage patterns, implementing data cataloging and metadata management with AWS Glue, optimizing data partitioning and compression in S3, and using Amazon Athena for ad-hoc querying and analysis of data stored in S3.
84. How does Amazon VPC Peering differ from AWS Transit Gateway, and what are the use cases for each?
Ans:
Amazon VPC Peering allows direct connectivity between VPCs within the same or different AWS accounts.
Use Case: VPC Peering is suitable for connecting a few VPCs with low latency requirements.
AWS Transit Gateway is a hub-and-spoke model for connecting multiple VPCs and on-premises networks.
Use Case: Transit Gateway is used for scaling connectivity across many VPCs and simplifying network management.
85. Explain how you would design a disaster recovery solution using AWS services like Amazon S3, AWS Glacier, and AWS Backup for data backup and recovery?
Ans: Designing a disaster recovery solution involves replicating data to Amazon S3 for primary storage, using AWS Backup to automate backups and manage retention policies, and leveraging Amazon Glacier for long-term archival and cost-effective storage. Implement cross-region replication and backup policies to ensure data durability and availability in the event of a disaster.
86. What are the differences between AWS IAM policies and AWS resource policies, and how do you manage access control effectively using both?
Ans: AWS IAM policies are attached to IAM users, groups, or roles and define permissions for accessing AWS resources, while AWS resource policies are attached to individual resources and control access to the resource itself. Manage access control effectively by using IAM policies for user and application access management and resource policies for controlling access to specific AWS resources like S3 buckets or KMS keys.
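A minimal sketch of the structural difference (the account ID, user, and bucket names are hypothetical): an identity-based IAM policy has no Principal element because the attached identity is implied, while a resource-based bucket policy must name the principal it grants access to.

```python
import json

# Hypothetical account ID, user, and bucket names -- illustration only.
ACCOUNT_ID = "123456789012"
BUCKET = "example-reports-bucket"

# IAM (identity-based) policy: attached to a user/group/role,
# so it has NO "Principal" element.
iam_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
    }],
}

# Resource-based policy (an S3 bucket policy): attached to the bucket,
# so it MUST name the "Principal" it grants access to.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{ACCOUNT_ID}:user/report-reader"},
        "Action": ["s3:GetObject"],
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
    }],
}

print(json.dumps(bucket_policy, indent=2))
```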
87. Describe the differences between Amazon Aurora and traditional relational databases like MySQL or PostgreSQL, and when would you choose Aurora over traditional databases?
Ans: Amazon Aurora is a MySQL and PostgreSQL-compatible relational database service with advantages such as high performance, scalability, and fault tolerance compared to traditional databases. Aurora uses a distributed architecture and storage engine optimizations to achieve better performance and availability, making it suitable for high-throughput and mission-critical applications.
88. How does Amazon Kinesis handle streaming data ingestion and processing, and what are the considerations for designing scalable and real-time analytics solutions?
Ans: Amazon Kinesis is a platform for ingesting, processing, and analyzing streaming data at scale. Kinesis Data Streams allows real-time data ingestion, Kinesis Data Firehose automates data delivery to destinations like S3 or Redshift, and Kinesis Data Analytics enables real-time analytics on streaming data. Considerations for designing scalable solutions include data partitioning, processing parallelism, and fault tolerance to handle varying data volumes and velocity.
89. What are Recovery Time Objective and Recovery Point Objective in AWS?
Ans:
Recovery Time Objective: It is the maximum acceptable delay between the interruption of service and restoration of service. This translates to an acceptable time window when the service can be unavailable.
Recovery Point Objective: It is the maximum acceptable amount of time since the last data recovery point. It translates to the acceptable amount of data loss between the last recovery point and the interruption of service.
AWS Route 53 Related interview Questions and Answers
1. What is a Hosted Zone in Route 53?
Ans: A hosted zone is a container for DNS records for a specific domain. It is similar to a traditional DNS zone file.
2. Route 53 can be used to route users to infrastructure outside of AWS. True/False?
Ans: True. Route 53 can route traffic to any endpoint with a resolvable domain name or IP address, including infrastructure outside of AWS.
3. What are the Difference Between Route53 and ELB?
Ans:
Route 53: Amazon Route 53 is a managed DNS service. It gives you a web interface through which DNS records can be managed, and it can direct and fail over traffic using DNS routing policies. With the Failover routing policy, you set up a health check to monitor your application endpoints; if one endpoint becomes unavailable, Route 53 automatically routes traffic to the other endpoint.
Elastic Load Balancing: ELB distributes incoming traffic across multiple targets and automatically scales with demand, so you do not have to size load balancers for peak traffic yourself.
4. What is the relationship between Route53 and Cloud front?
Ans: Amazon CloudFront is a Content Delivery Network that serves content from edge locations, and Route 53 complements it by resolving your domain name. If you are using Amazon CloudFront, you can configure Route 53 to route internet traffic for your domain to the CloudFront distribution, typically with an alias record.
5. What are the routing policies available in Amazon Route53?
Ans: Below are the routing policies used in AWS route 53:
- Simple
- Weighted
- Latency Based
- Failover
- Geolocation
- Geoproximity
- Multivalue Answer
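As an illustration of the Weighted policy, the following builds the kind of change batch you would send to Route 53's ChangeResourceRecordSets API (the domain, IPs, and set identifiers are hypothetical; no API call is made here).

```python
import json

def weighted_records(name, targets):
    """Build weighted A records: targets is a list of (set_id, ip, weight)."""
    return {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": name,
                "Type": "A",
                "SetIdentifier": set_id,  # distinguishes the record variants
                "Weight": weight,         # share of traffic ~ weight / total
                "TTL": 60,
                "ResourceRecords": [{"Value": ip}],
            },
        } for set_id, ip, weight in targets],
    }

# 90/10 split, e.g. for a gradual blue/green traffic shift.
batch = weighted_records("www.example.com.", [
    ("blue", "192.0.2.10", 90),
    ("green", "192.0.2.20", 10),
])
print(json.dumps(batch, indent=2))
```

Adjusting the weights (50/50, then 0/100) gradually moves traffic from one record set to the other.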
6. What is AWS Route 53?
Ans: AWS Route 53 is a scalable and highly available Domain Name System (DNS) and domain name registration service. It connects user requests to infrastructure running in AWS, such as EC2 instances, load balancers, or S3 buckets, and can also be used to route users to infrastructure outside of AWS.
7. What are the main functions of Route 53?
Ans: The main functions include domain registration, DNS routing, health checking, and traffic management.
8. How does DNS work in Route 53?
Ans: DNS translates human-readable domain names (e.g., www.example.com) into IP addresses (e.g., 192.0.2.1) that computers use to connect to each other.
9. What is the minimum and maximum size of individual objects that you can store in S3?
Ans: The minimum size of an individual object that you can store in S3 is 0 bytes, and the maximum is 5 TB.
10. What are the different storage classes in S3?
Ans: Following are the types of storage classes in S3,
- S3 Standard (frequently accessed)
- S3 Standard-IA (infrequently accessed)
- S3 One Zone-IA
- S3 Glacier
- Reduced Redundancy Storage (legacy)
11. What is the default storage class in S3?
Ans: The default storage class in S3 is S3 Standard (frequently accessed).
12. What is glacier?
Ans: Glacier is a low-cost storage class and service used to archive and back up data from S3 for long-term retention.
13. How can you secure the access to your S3 bucket?
Ans: There are two ways that you can control the access to your S3 buckets,
- Access Control List (ACL)
- Bucket Policies
14. How can you encrypt data in S3?
Ans: You can encrypt the data by using the below methods,
- Server-Side Encryption with S3-managed keys (SSE-S3, AES-256)
- Server-Side Encryption with KMS-managed keys (SSE-KMS)
- Server-Side Encryption with Customer-provided keys (SSE-C)
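The three server-side modes differ only in the extra parameters attached to an S3 PutObject request. A sketch of those parameter sets as plain dicts (the KMS key alias is hypothetical; no AWS call is made):

```python
import base64
import hashlib
import os

sse_s3 = {"ServerSideEncryption": "AES256"}        # SSE-S3: S3-managed keys

sse_kms = {                                        # SSE-KMS: KMS-managed key
    "ServerSideEncryption": "aws:kms",
    "SSEKMSKeyId": "alias/my-app-key",             # hypothetical key alias
}

key = os.urandom(32)                               # SSE-C: you supply the
sse_c = {                                          # 256-bit key yourself
    "SSECustomerAlgorithm": "AES256",
    "SSECustomerKey": base64.b64encode(key).decode(),
    "SSECustomerKeyMD5": base64.b64encode(hashlib.md5(key).digest()).decode(),
}

for name, params in [("SSE-S3", sse_s3), ("SSE-KMS", sse_kms), ("SSE-C", sse_c)]:
    print(name, "->", sorted(params))
```

With SSE-C, S3 never stores your key; you must send it (and its MD5 checksum) with every request that reads or writes the object.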
15. What are the parameters for S3 pricing?
Ans: The pricing model for S3 is as below,
- Storage used
- Number of requests you make
- Storage management
- Data transfer
- Transfer acceleration
16. What is the pre-requisite to work with Cross region replication in S3?
Ans: Below are the pre-requisites to do cross region replication:
- Versioning must be enabled on both the source and destination buckets.
- The source and destination buckets must be in different AWS Regions.
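Once both prerequisites are met, replication is configured with a rule set like the following, shown here as a plain dict modeled on the shape of a PutBucketReplication payload (the role ARN and bucket names are hypothetical; no AWS call is made):

```python
import json

# Minimal S3 cross-region replication configuration (hypothetical names).
# Both buckets must already have versioning enabled and be in different Regions.
replication_config = {
    "Role": "arn:aws:iam::123456789012:role/s3-crr-role",  # lets S3 replicate
    "Rules": [{
        "ID": "replicate-everything",
        "Status": "Enabled",
        "Priority": 1,
        "Filter": {"Prefix": ""},                  # empty prefix = all objects
        "DeleteMarkerReplication": {"Status": "Disabled"},
        "Destination": {
            "Bucket": "arn:aws:s3:::backup-bucket-eu-west-1",
            "StorageClass": "STANDARD_IA",         # cheaper replica copy
        },
    }],
}
print(json.dumps(replication_config, indent=2))
```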
Intermediate Questions:
1. What is the difference between a Public and a Private Hosted Zone?
Ans:
Public hosted zones contain records that specify how to route traffic on the internet.
Private hosted zones contain records that specify how to route traffic within an Amazon VPC.
2. Explain how Route 53’s Health Checks work
Ans: Route 53 health checks monitor the health and performance of your applications, endpoints, and other AWS resources. Based on the health check status, Route 53 can route traffic away from unhealthy resources to healthy ones.
3. What are Alias Records in Route 53?
Ans: Alias records are Route 53-specific extensions to DNS functionality. They are used to map domain names to AWS resources like CloudFront distributions, Elastic Load Balancers, and S3 buckets.
4. How does Latency-Based Routing work in Route 53?
Ans: Latency-Based Routing routes traffic to the AWS region that provides the lowest latency to the user, improving application performance.
5. Describe Weighted Routing in Route 53?
Ans: Weighted Routing lets you route traffic to multiple resources based on specified weights. This can be used for A/B testing or gradually shifting traffic from one resource to another.
Advanced Questions:
1. What is DNS Failover in Route 53?
Ans: DNS Failover configures Route 53 to route traffic to a secondary resource if the primary resource becomes unavailable, enhancing availability and reliability.
2. How can Route 53 be used for Disaster Recovery?
Ans: Route 53 can be configured for disaster recovery by setting up health checks and failover routing policies to ensure traffic is redirected to backup sites or resources if the primary site fails.
3. What is Geolocation Routing?
Ans: Geolocation Routing routes traffic based on the geographic location of the user. It allows for tailored responses based on where the request originates.
4. Explain Geoproximity Routing in Route 53?
Ans: Geoproximity Routing routes traffic based on the geographic location of your resources and, optionally, shifts traffic from resources in one location to another.
5. What are Multi-Value Answer Routing policies?
Ans: Multi-Value Answer Routing allows Route 53 to return multiple IP addresses for a single query, which helps in load balancing and improving fault tolerance.
Practical Scenario Questions:
1. How do you migrate a domain to Route 53?
Ans: To migrate a domain, follow these steps:
1. Create a hosted zone for the domain in Route 53
2. Import or manually create the DNS records
3. Update the domain registrar with the Route 53 name servers
4. Verify that DNS resolves correctly
2. How can you integrate Route 53 with a load balancer?
Ans: You can create an Alias record in Route 53 that points to your load balancer’s DNS name.
3. Describe the steps to set up a subdomain in Route 53.
Ans: Create a new hosted zone for the subdomain, add the necessary DNS records, and update the parent domain with the NS records of the subdomain hosted zone.
4. How would you configure Route 53 to handle blue/green deployments?
Ans: Use Weighted Routing to distribute traffic between the blue and green environments, adjusting the weights as needed to shift traffic gradually.
5. Explain the process of setting up a failover configuration in Route 53?
Ans: Create a primary and secondary resource, set up health checks, and use Failover Routing policy to route traffic to the secondary resource if the primary becomes unhealthy.
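A sketch of the resulting record pair, modeled on the shape of Route 53's ChangeResourceRecordSets payload (the names, IPs, and health check ID are hypothetical; no API call is made). Route 53 serves the PRIMARY record while its health check passes, otherwise the SECONDARY.

```python
import json

def failover_record(role, ip, health_check_id=None):
    """Build one record of a failover pair; role is 'PRIMARY' or 'SECONDARY'."""
    record = {
        "Name": "app.example.com.",
        "Type": "A",
        "SetIdentifier": role.lower(),
        "Failover": role,
        "TTL": 60,
        "ResourceRecords": [{"Value": ip}],
    }
    if health_check_id:                 # a health check guards the primary
        record["HealthCheckId"] = health_check_id
    return record

batch = {"Changes": [
    {"Action": "UPSERT",
     "ResourceRecordSet": failover_record("PRIMARY", "192.0.2.10", "hc-1234")},
    {"Action": "UPSERT",
     "ResourceRecordSet": failover_record("SECONDARY", "198.51.100.10")},
]}
print(json.dumps(batch, indent=2))
```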
Troubleshooting Questions:
1. What steps would you take if DNS resolution fails?
Ans: Check the DNS records in the hosted zone, ensure the domain registrar’s name servers match Route 53’s name servers, verify health check status, and review DNS propagation status
2. How do you debug issues with Route 53 health checks?
Ans: Verify endpoint accessibility, check health check configuration, review health check logs, and ensure network configurations are correct.
3. What tools can you use to monitor and log DNS queries in Route 53?
Ans: Use Amazon CloudWatch for monitoring and Route 53 query logging (delivered to CloudWatch Logs) to log DNS queries; AWS CloudTrail records Route 53 API calls.
4. How would you address slow DNS resolution times?
Ans: Ensure latency-based routing is configured, optimize TTL values, review and optimize health checks, and verify the performance of the endpoint.
5. What would you do if users report intermittent connectivity issues?
Ans: Check health check configurations and status, review DNS records for accuracy, monitor endpoint performance, and ensure there are no network connectivity issues.
Security and Best Practices
1. How can you secure your DNS records in Route 53?
Ans: Use IAM policies to restrict access, enable DNSSEC for domain security, regularly audit DNS records, and implement least privilege access.
2. What is DNSSEC and how does it work in Route 53?
Ans: DNSSEC (Domain Name System Security Extensions) adds a layer of security to DNS lookup and exchange processes. It ensures the responses to DNS queries are authentic and have not been tampered with.
3. How do you implement DNSSEC in Route 53?
Ans: Enable DNSSEC signing for your hosted zone, create a key-signing key, and update the domain’s DS record at the domain registrar.
4. What are best practices for managing Route 53 hosted zones?
Ans: Use descriptive names for records, regularly audit DNS configurations, implement health checks, use Route 53 Resolver for hybrid environments, and enable logging for DNS queries.
5. How can you optimize cost with Route 53?
Ans: Monitor usage, delete unused hosted zones and records, choose appropriate routing policies, and optimize health check frequency.
6. What is Geo-Targeting in CloudFront?
Ans: Geo-Targeting enables the creation of customized content based on the geographic location of the user, allowing you to serve content that is more relevant to that user.
For example, using Geo-Targeting you can show news about local elections to a user in India that you may not want to show to a user in the US. Similarly, news about a baseball tournament may be more relevant to a user in the US than to a user in India.
Miscellaneous Questions
1. What is an SOA record in Route 53?
Ans: The SOA (Start of Authority) record provides important information about the domain and the zone, such as the primary name server, email of the domain administrator, and domain serial number.
2. How do you use Traffic Flow in Route 53?
Ans: Traffic Flow allows you to manage traffic globally through a visual interface by creating and managing traffic policies and policy records.
3. Can you explain what a Resolver is in Route 53?
Ans: A Resolver in Route 53 is a DNS server that receives DNS queries from client devices and resolves them to the correct IP address.
4. What are Traffic Policies in Route 53?
Ans: Traffic Policies are reusable configurations for routing traffic based on specific criteria like latency, geolocation, and health checks.
5. How does Route 53 integrate with other AWS services?
Ans: Route 53 integrates with services like CloudFront, Elastic Load Balancing, S3, and Lambda to provide seamless DNS resolution and traffic management.
6. What is a PTR record and how is it used in Route 53?
Ans: A PTR (Pointer) record is used for reverse DNS lookups, mapping an IP address to a domain name. It is commonly used for mail servers to prevent spam.
7. Describe the process of setting up Geo DNS in Route 53?
Ans: Configure Geolocation Routing policies by specifying regions and the respective resources to which traffic from those regions should be directed.
8. What are CNAME records and how are they used?
Ans: CNAME (Canonical Name) records map an alias name to a true or canonical domain name, used for pointing multiple services to the same IP address without configuring A records for each one.
9. How does Route 53 handle TTL (Time to Live) values?
Ans: TTL values determine how long DNS resolvers cache the DNS records before querying Route 53 again. Lower TTLs allow for quicker propagation of changes.
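A toy model of this caching behavior (not an AWS API, just the logic): the resolver serves an answer from its cache until the TTL expires, after which it queries Route 53 again and picks up any change.

```python
# Toy model of resolver caching: a record is served from cache until its
# TTL expires, after which the resolver queries the authoritative DNS again.
class Resolver:
    def __init__(self, ttl):
        self.ttl = ttl
        self.cache = {}            # name -> (answer, expiry_time)
        self.upstream_queries = 0

    def lookup(self, name, now, authoritative_answer):
        entry = self.cache.get(name)
        if entry and now < entry[1]:      # cache hit: TTL not yet expired
            return entry[0]
        self.upstream_queries += 1        # cache miss: query upstream
        self.cache[name] = (authoritative_answer, now + self.ttl)
        return authoritative_answer

r = Resolver(ttl=300)
r.lookup("www.example.com", now=0, authoritative_answer="192.0.2.10")
r.lookup("www.example.com", now=200, authoritative_answer="192.0.2.99")  # still cached
answer = r.lookup("www.example.com", now=400, authoritative_answer="192.0.2.99")
print(r.upstream_queries, answer)  # 2 192.0.2.99
```

With a 300-second TTL, the change made at t=200 only becomes visible after the cached entry expires; a lower TTL would shrink that window at the cost of more upstream queries.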
10. Can you automate Route 53 configurations?
Ans: Yes, you can use AWS CloudFormation, AWS SDKs, and AWS CLI to automate Route 53 configurations and deployments.
Scenario Based Questions
1. How would you manage multi-region traffic with Route 53?
Ans: Use Latency-Based Routing or Geolocation Routing to direct traffic to the closest or most appropriate region based on user location.
2. What steps would you take to implement a hybrid DNS architecture?
Ans: Use Route 53 Resolver endpoints to handle DNS queries from on-premises networks and integrate them with your VPCs.
3. How do you configure Route 53 for dynamic DNS updates?
Ans: Use Route 53 API or SDKs to programmatically update DNS records in response to changes in your infrastructure.
4. Explain the process of creating a custom domain name for a CloudFront distribution using Route 53?
Ans: Create a CNAME record or an Alias record in Route 53 that points to the CloudFront distribution’s domain name.
5. How would you set up a split-view DNS with Route 53?
Ans: Use public and private hosted zones for the same domain to serve different DNS records based on whether the query originates from inside the VPC or from the internet.
Expert Level Questions:
1. What is an NS record and why is it important?
Ans: NS (Name Server) records specify the authoritative DNS servers for a domain, which are essential for DNS resolution.
2. How do you handle domain name migration from another DNS provider to Route 53 without downtime?
Ans: Reduce the TTL values at the old provider, create the same records in Route 53, update the domain registrar with Route 53 name servers, and monitor DNS propagation.
3. Can Route 53 be used for non-web applications?
Ans: Yes, Route 53 can route traffic for various types of applications, including email and database services, by using the appropriate DNS records.
4. How do you monitor Route 53 health checks in real-time?
Ans: Use CloudWatch metrics to monitor health check statuses and set up alarms for any failures.
5. What are the key differences between Alias and CNAME records in Route 53?
Ans:
Alias records are specific to Route 53 and can point to AWS resources, with no cost for queries.
CNAME records can point to any DNS record except the root domain and incur standard DNS query charges.
AWS SNS and SES Related interview questions
1. What is SNS?
Ans: SNS stands for Simple Notification Service. SNS is a web service that makes it easy to send notifications from the cloud. You can set up SNS to deliver notifications by email, SMS, mobile push, or to endpoints such as SQS queues and Lambda functions.
2. Is Simple Workflow Service one of the valid Simple Notification Service subscribers?
Ans: No
3. What is SES, SQS and SNS?
Ans:
SES (Simple Email Service): SES is a cloud-based email sending service from Amazon, designed to send bulk mail to customers quickly and cost-effectively, without you having to set up and maintain your own mail server.
SQS (Simple Queue Service): SQS is a fast, reliable, scalable, fully managed message queuing service, simple and cost-effective to use. It acts as a temporary repository for messages awaiting processing and serves as a buffer between the producing and consuming components.
SNS (Simple Notification Service): SNS is a web service that coordinates and manages the delivery or sending of messages to subscribing recipients.
4. What is the maximum size of messages in SQS?
Ans: The maximum message size in SQS is 256 KB.
5. What are the types of queues in SQS?
Ans: There are 2 types of queues in SQS:
- Standard queue
- FIFO (First In First Out)
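A sketch of the CreateQueue parameters for each queue type, plus a check against the 256 KB message limit (queue names are hypothetical; these are plain dicts, and no AWS call is made):

```python
# FIFO queue names must end in ".fifo"; standard queues need no special attributes.
standard_queue = {
    "QueueName": "orders-events",          # at-least-once, best-effort ordering
    "Attributes": {},
}

fifo_queue = {
    "QueueName": "orders-events.fifo",     # exactly-once, strict ordering
    "Attributes": {
        "FifoQueue": "true",
        "ContentBasedDeduplication": "true",
    },
}

MAX_MESSAGE_BYTES = 256 * 1024             # SQS message size limit (256 KB)

def fits_in_sqs(body: str) -> bool:
    """True if the UTF-8 encoded body fits within the SQS size limit."""
    return len(body.encode("utf-8")) <= MAX_MESSAGE_BYTES

print(fits_in_sqs("x" * 1000), fits_in_sqs("x" * 300_000))  # True False
```

Payloads larger than the limit are typically stored in S3, with only a pointer sent through the queue.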
AWS Scenario Based Interview Questions and Answers
1. A company is deploying a new two-tier web application in AWS. The company has limited staff, requires high availability, and the application requires complex queries and table joins. Which configuration meets the company's requirements?
Ans: Amazon RDS for MySQL with a Multi-AZ deployment. (DynamoDB is not suitable here because it does not support complex queries and table joins.)
2. Which use cases are suitable for Amazon DynamoDB?
Ans: Storing metadata for Amazon S3 objects and managing web session data. (Running relational joins and complex updates is not a suitable DynamoDB use case.)
3. Your application retrieves data from your users' mobile devices every 5 minutes and stores it in DynamoDB. Every day at a particular time the data is extracted into S3 on a per-user basis, and your application later uses it to visualize the data for the user. You are asked to optimize the architecture of the backend system to lower cost. What would you recommend?
Ans: Introduce Amazon ElastiCache to cache reads from the DynamoDB table and reduce the provisioned read throughput.
4. You are running a website on EC2 instances deployed across multiple Availability Zones with a Multi-AZ RDS MySQL Extra Large DB instance. The site performs a high number of small reads and writes per second and relies on the eventual consistency model. After comprehensive tests you discover that there is read contention on RDS MySQL. Which approaches best meet these requirements?
Ans: Deploy an ElastiCache in-memory cache running in each Availability Zone, increase the RDS MySQL instance size, and implement provisioned IOPS.
5. Suppose you have an application where you have to render images and also do some general computing. Which service will best fit your need?
Ans: Use an Application Load Balancer (it can route different request paths to different target groups).
6. How will you change the instance type for instances running in your application tier that are using Auto Scaling? Where will you change it?
Ans: Change it in the Auto Scaling launch configuration.
7. You have a content management system running on an Amazon EC2 instance that is approaching 100% CPU utilization. Which option will reduce load on the Amazon EC2 instance?
Ans: Create a load balancer and register the Amazon EC2 instance with it.
8. What does connection draining do?
Ans: It re-routes traffic away from instances that are about to be updated or have failed a health check, letting in-flight requests complete first.
9. A user has set up an Auto Scaling group. Due to some issue the group has failed to launch a single instance for more than 24 hours. What will happen to Auto Scaling in this condition?
Ans: Auto Scaling will suspend the scaling process.
10. You have an EC2 security group with several running EC2 instances. You changed the security group rules to allow inbound traffic on a new port and protocol, and then launched several new instances in the same security group. When do the new rules apply?
Ans: Immediately, to all instances in the security group.
11. To create a mirror image of your environment in another region for disaster recovery, which of the following AWS resources do not need to be recreated in the second region?
Ans: Route 53 record sets (Route 53 is a global service).
12. A customer wants to capture all client connection information from his load balancers at an interval of 5 minutes. Which option should he choose for his application?
Ans: Enable access logs on the load balancers (access logs can be published at 5-minute intervals; CloudTrail records API calls, not client connections).
13. Which of the following services would you not use to deploy an app?
Ans: Lambda is not used to deploy an app.
14. How does Elastic Beanstalk apply updates?
Ans: By preparing a duplicate environment with the updates and then swapping it with the original.
15. A company needs to monitor the read and write IOPS for its AWS MySQL RDS instances and send real-time alerts to the operations team. Which AWS service can accomplish this?
Ans: Amazon CloudWatch.
16. An organization that is currently using consolidated billing has recently acquired another company that already has a number of AWS accounts. How could an administrator ensure that all AWS accounts, from both the existing company and the acquired company, are billed to a single account?
Ans: Invite the acquired company's AWS accounts to join the existing company's organization using AWS Organizations.
17. A user has created an application that will be hosted on EC2. The application makes calls to DynamoDB to fetch certain data, using the DynamoDB SDK from the EC2 instance. Which is the best practice for security in this scenario?
Ans: Attach an IAM role with DynamoDB access to the EC2 instance.
18. My EC2 instance's IP address changes automatically when the instance is stopped and started. What is the reason for that, and what is the solution?
Ans: AWS assigns the public IP automatically, and it changes dynamically on stop/start. To keep a fixed address, assign an Elastic IP to the instance; once assigned, it does not change.
AWS Database Related Interview Questions and Answers
1. Which AWS service will you use to collect and process ecommerce data for near real time analysis?
Ans: Both DynamoDB and Amazon Redshift.
2. What are the DB engines which can be used in AWS RDS?
Ans:
1. MariaDB
2. MySQL
3. Microsoft SQL Server
4. PostgreSQL
5. Oracle
3. What are the database types in RDS?
Ans: Following are the types of databases in RDS:
- Aurora
- Oracle
- MySQL
- PostgreSQL
- MariaDB
- SQL server
4. What is multi-AZ RDS?
Ans: Multi-AZ (Availability Zone) RDS allows you to have a replica of your production database in another availability zone. Multi-AZ (Availability Zone) database is used for disaster recovery. You will have an exact copy of your database. So when your primary database goes down, your application will automatically failover to the standby database.
5. What is Amazon Dynamo DB?
Ans: Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB makes it simple and cost-effective to store and retrieve any amount of data.
6. What is cloud formation?
Ans: CloudFormation is a service which creates AWS infrastructure using code. It helps reduce the time needed to manage resources and lets you create resources quickly and repeatably.
7. What are the types of backups in RDS database?
Ans: There are 2 types of backups in RDS database:
- Automated backups
- Manual backups which are known as snapshots.
8. Is it possible to stop a RDS instance, how can I do that?
Ans: Yes, it is possible to stop an RDS instance, provided it is a non-production, non-Multi-AZ instance. Note that a stopped instance is automatically restarted after seven days.
9. When do I prefer to Provisioned IOPS over the Standard RDS storage?
Ans: When you run I/O-intensive, batch-oriented workloads that need fast, consistent I/O performance.
10. If I am running my DB instance as a Multi-AZ deployment, can I use the standby DB instance for read or write operations along with the primary DB instance?
Ans: No. The standby instance cannot serve read or write traffic; it exists only for failover when the primary becomes unavailable.
11. Which AWS service would you use to collect and process e-commerce data for near real-time analysis?
Ans: Amazon DynamoDB.
12. What is meant by parameter groups in rds. And what is the use of it?
Ans: Since RDS is a managed service, AWS exposes the database engine's settings through a parameter group, which can be modified as per requirement.
13. What is the use of tags and how they are useful?
Ans: Tags are key-value pairs used for identifying, grouping, and organizing AWS resources, for example for cost allocation and automation.
14. What is a redshift?
Ans: Amazon Redshift is a data warehouse product. It is a fast, powerful, fully managed, petabyte-scale data warehouse service in the cloud.
15. What is Data warehouse in AWS?
Ans: A data warehouse is a central repository for data that can come from one or more sources. Organizations typically use data warehouses to compile reports and search the database using highly complex queries. A data warehouse is also typically updated on a batch schedule multiple times per day or per hour, compared to an OLTP (Online Transaction Processing) relational database, which can be updated thousands of times per second.
AWS VPC Related Interview Questions and Answers
1. What is VPC?
Ans: VPC stands for Virtual Private Cloud. VPC allows you to easily customize your networking configuration. A VPC is a network that is logically isolated from other networks in the cloud. It allows you to have your own IP address range, subnets, internet gateways, NAT gateways and security groups.
2. We have a custom VPC Configured and MYSQL Database server which is in Private Subnet and we need to update the MYSQL Database Server, What are the Option to do so ?
Ans: Use a NAT Gateway (or launch a NAT instance on EC2) in a public subnet (one whose route table has a route to the IGW), and add a route to it in the route table associated with the private subnet.
3. What are the Defaults services we get when we create custom AWS VPC?
Ans:
- Route Table
- Network Access Control Lists (NACL)
- Security Group
4. Which service is used to distribute content to end user service using global network of edge location?
Ans: Amazon CloudFront.
5. What is VPC Peering connection?
Ans: An Amazon VPC peering connection is a networking connection between two Amazon VPCs that enables instances in either VPC to communicate with each other as if they were within the same network. You can create a VPC peering connection between your own VPCs or with a VPC in another AWS account, within a Region or across Regions.
6. To establish a peering connections between two VPC’s What condition must be met?
Ans: The CIDR blocks of the two VPCs must not overlap. Peering connections are allowed within a Region, across Regions, and across different AWS accounts.
7. What is meant by subnet?
Ans: A subnet is a range of IP addresses in your VPC: the VPC's CIDR block divided into smaller chunks, each tied to a single Availability Zone.
8. What is the Difference Between Public Subnet and Private Subnet ?
Ans:
A public subnet has an Internet Gateway route in its associated route table, so it has internet access.
A private subnet does not have an Internet Gateway route in its associated route table, so it has no direct internet access.
9. How can you convert a public subnet to private subnet?
Ans: Associate the subnet with a route table that has no route to the Internet Gateway (the private route table), and add a NAT Gateway route if outbound internet access is still required.
10. What is VPC peering connection?
Ans: A VPC peering connection allows you to connect one VPC with another VPC. Instances in these VPCs behave as if they are in the same network.
11. Explain stateful and Stateless firewall?
Ans:
Stateful Firewall: A stateful firewall maintains the state of connections. You only need to define inbound rules; based on those rules, return (outbound) traffic is allowed automatically. A security group is a virtual stateful firewall that controls inbound and outbound network traffic to AWS resources and Amazon EC2 instances. It operates at the instance level and supports allow rules only; return traffic is automatically allowed, regardless of any rules.
Stateless Firewall: A stateless firewall requires you to explicitly define rules for inbound as well as outbound traffic. A network access control list (NACL) is a virtual stateless firewall at the subnet level. It supports both allow and deny rules, and return traffic must be explicitly allowed by rules.
For example, if you allow inbound traffic on port 80, a stateful firewall will allow the outbound return traffic automatically, but a stateless firewall will not.
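The port-80 example can be modeled with two toy classes (purely illustrative, not AWS APIs): the stateful firewall remembers the connection and allows the reply automatically, while the stateless one insists on an explicit outbound rule.

```python
class StatefulFirewall:
    """Like a security group: tracks connections, allows return traffic."""
    def __init__(self, inbound_ports):
        self.inbound_ports = set(inbound_ports)
        self.connections = set()

    def allow_inbound(self, port):
        ok = port in self.inbound_ports
        if ok:
            self.connections.add(port)   # remember the connection
        return ok

    def allow_outbound_reply(self, port):
        return port in self.connections  # return traffic automatically allowed

class StatelessFirewall:
    """Like a NACL: every packet checked against explicit rules, both ways."""
    def __init__(self, inbound_ports, outbound_ports):
        self.inbound_ports = set(inbound_ports)
        self.outbound_ports = set(outbound_ports)

    def allow_inbound(self, port):
        return port in self.inbound_ports

    def allow_outbound_reply(self, port):
        return port in self.outbound_ports  # must be explicitly opened

sg = StatefulFirewall(inbound_ports=[80])
nacl = StatelessFirewall(inbound_ports=[80], outbound_ports=[])  # no outbound rule

sg.allow_inbound(80)
print(sg.allow_outbound_reply(80))    # True: reply allowed automatically
print(nacl.allow_outbound_reply(80))  # False: needs an explicit outbound rule
```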
12. What are NAT gateways?
Ans: NAT stands for Network Address Translation. A NAT gateway enables instances in a private subnet to connect to the internet while preventing the internet from initiating a connection with those instances.
13. What is NAT Instance and NAT Gateway?
Ans:
NAT instance: A network address translation (NAT) instance is an EC2 instance, launched from a special Amazon Machine Image (AMI), that accepts traffic from instances within a private subnet, translates the source IP address to the public IP address of the NAT instance, and forwards the traffic to the IGW.
NAT Gateway: A NAT gateway is an Amazon-managed resource that operates just like a NAT instance but is simpler to manage and highly available within an Availability Zone. It allows instances within a private subnet to reach internet resources through the IGW.
14. When attached to an Amazon VPC which two components provide connectivity with external networks?
Ans:
- Internet Gateway (IGW)
- Virtual Private Gateway (VGW)
15. Which of the following are characteristics of Amazon VPC subnets?
Ans:
- Each subnet maps to a single Availability Zone.
- By default, all subnets can route between each other, whether they are private or public.
16. How can you control the security to your VPC?
Ans: You can use security groups and NACL (Network Access Control List) to control the
security to your VPC.
17. What are the different types of storage gateway?
Ans: Following are the types of storage gateway:
- File gateway
- Volume gateway
- Tape gateway
18. What is the difference between security groups and network access control list?
Ans:
Sr No | Security Groups | Network access control list |
1 | Can control the access at the instance level | Can control access at the subnet level |
2 | Can add rules for “allow” only | Can add rules for both “allow” and “deny” |
3 | Evaluates all rules before allowing the traffic | Rules are processed in order number when allowing traffic |
4 | Up to five security groups can be associated with a network interface (default limit) | A subnet can be associated with only one network ACL |
5 | Stateful filtering | Stateless filtering |
6 | Attached to an EC2 instance (its network interface) | Attached to a subnet. |
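The stateful/stateless difference in the table shows up directly in the AWS CLI: a security group needs only the inbound rule, while a NACL needs explicit entries in both directions. The IDs below are placeholders and the calls need valid AWS credentials:

```shell
# Security group: one inbound rule is enough; return traffic is allowed automatically.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0abc1234567890 --protocol tcp --port 80 --cidr 0.0.0.0/0

# Network ACL: stateless, so inbound AND outbound entries are both required.
aws ec2 create-network-acl-entry --network-acl-id acl-0abc1234567890 \
    --rule-number 100 --protocol tcp --port-range From=80,To=80 \
    --cidr-block 0.0.0.0/0 --rule-action allow --ingress
aws ec2 create-network-acl-entry --network-acl-id acl-0abc1234567890 \
    --rule-number 100 --protocol tcp --port-range From=1024,To=65535 \
    --cidr-block 0.0.0.0/0 --rule-action allow --egress   # return traffic (ephemeral ports)
```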
20. How do you access the EC2 which has private IP which is in private Subnet?
Ans: We can access it over a VPN, if a VPN is configured into the VPC that contains the subnet where the EC2 instance lives. Alternatively, we can access it through another EC2 instance (a bastion host) that has public access.
21. We have a custom VPC Configured and MYSQL Database server which is in Private Subnet and we need to update the MYSQL Database Server, What are the Option to do so?
Ans: Use a NAT gateway in the VPC, or launch a NAT instance (EC2). Create the NAT gateway in a public subnet (one whose route table points to the IGW), then add a route to it in the route table that is attached to the private subnet.
22. How do you monitor Amazon VPC?
Ans: You can monitor Amazon VPC using:
- Cloudwatch
- VPC Flow Log
23. VPC is not resolving the server through DNS. What might be the issue and how can you fix it?
Ans: Check the VPC's DNS attributes: both DNS resolution (enableDnsSupport) and DNS hostnames (enableDnsHostnames) must be enabled. Once both are turned on, instances in the VPC can resolve names through the Amazon-provided DNS server.
24. How do you connect multiple sites to a VPC?
Ans: If you have multiple VPN connections, you can provide secure communication
between sites using the AWS VPN CloudHub.
25. Name and explain some security products and features available in VPC?
Ans:
- Security groups – This acts as a firewall for the EC2 instances, controlling inbound and outbound traffic at the instance level
- Network access control lists – It acts as a firewall for the subnets, controlling inbound and outbound traffic at the subnet level.
- Flow logs – These capture the inbound and outbound traffic from the network interfaces in your VPC.
AWS CloudFront Related Interview Questions and Answers
1. What is cloudfront?
Ans: CloudFront is an AWS web service that provides businesses and application developers an easy and efficient way to distribute their content with low latency and high data transfer speeds. CloudFront is the content delivery network (CDN) of AWS.
2. Can CloudFront serve content from a non-AWS origin server?
Ans: Yes. CloudFront supports custom origins, so it can serve content from any publicly addressable HTTP server, whether it runs in AWS or in your own data center.
3. What Are The Main Features Of Amazon Cloud Front?
Ans: Amazon CloudFront is a web service that speeds up delivery of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations.
4. What are edge locations?
Ans: Edge location is the place where the contents will be cached. When a user tries to access some content, the content will be searched in the edge location. If it is not available then the content will be made available from the origin location and a copy will be stored in the edge location.
5. What Is Lambda edge In AWS?
Ans: Lambda@Edge lets you run Lambda functions to modify the content that CloudFront delivers, executing the functions in AWS locations closer to the viewer. The functions run in response to CloudFront events, without provisioning or managing servers.
6. what is the use of headers and what are the types of headers in the aws cloudfront?
Ans: In AWS CloudFront, headers are crucial components used to control various aspects of content delivery, caching behavior, security, and customization. Headers can provide additional metadata, authentication tokens, security directives, or customization parameters for HTTP requests and responses. They play a significant role in optimizing content delivery, enhancing security, and providing personalized experiences for users.
Types of Headers:
Request Headers: Headers included in HTTP requests sent by clients to CloudFront distributions. These headers can be forwarded to the origin server or processed at the edge.
Response Headers: Headers included in HTTP responses returned by the origin server or generated at the edge by CloudFront. These headers are forwarded to clients or modified before being sent to clients.
Uses of Headers in AWS CloudFront:
- Cache-Control headers
- Custom headers
- Forwarded headers
- Security headers
- Customization and personalization
- Content encoding
- Request routing
1.Cache-Control Headers: Used to control caching behavior at the edge. CloudFront obeys Cache-Control directives to determine how long to cache objects and how to respond to client requests.
2. Custom Headers: Can be added to HTTP requests or responses using Lambda@Edge functions. These headers provide additional metadata, authentication tokens, or customization parameters.
3. Forwarded Headers: CloudFront can forward specific headers from viewer requests to the origin server and from the origin server’s response back to the viewer. This enables passing of headers such as User-Agent, Authorization, or custom headers to the origin server.
4. Security Headers: Used to enhance security by adding security-related headers to HTTP responses. Examples include Strict-Transport-Security (HSTS), Content-Security-Policy (CSP), or X-Frame-Options.
5. Customization and Personalization: Headers can be used to customize or personalize content delivery based on specific criteria such as geographic location, device type, or user preferences.
6. Content Encoding: Used for content encoding to compress content (e.g., gzip compression) before serving it to clients, reducing bandwidth usage and improving performance.
7. Request Routing: Headers can be used to influence how requests are routed, for example directing requests to different origins or cache behaviors based on header values.
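For illustration, security headers like those above can be attached through a CloudFront response headers policy. Below is a minimal sketch of the config JSON (the field names follow the CreateResponseHeadersPolicy API shape as best understood; verify against the current docs). It is written to a temporary file and syntax-checked here, since actually creating the policy with `aws cloudfront create-response-headers-policy` needs AWS credentials:

```shell
# Sketch of a response headers policy config adding HSTS and X-Frame-Options.
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
{
  "Name": "example-security-headers",
  "SecurityHeadersConfig": {
    "StrictTransportSecurity": { "Override": true, "AccessControlMaxAgeSec": 31536000 },
    "FrameOptions": { "Override": true, "FrameOption": "DENY" }
  }
}
EOF
python3 -m json.tool "$CFG" > /dev/null && echo "config is valid JSON"
```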
AWS IAM Related Interview Questions and Answers
1. What are roles?
Ans: Roles are used to grant permissions to entities that you trust, such as AWS services, applications, or users in another AWS account. Roles are similar to users, but with roles you do not need to create a username and password to work with the resources; the trusted entity assumes the role and receives temporary credentials.
2. What are policies?
Ans: Policies are documents that define permissions and can be attached to the users that you create. These policies contain the access that you have granted to those users.
3. What are the types of policies?
Ans: There are 2 types of policies:
- Managed Policies
- Inline Policies
4. How many Policies can be attached to a role?
Ans: 10 managed policies by default; this is a soft limit that can be raised up to 20.
5. What are IAM Roles and Policies, What is the difference between IAM Roles and Policies?
Ans:
Roles are identities that can be assumed by trusted entities; they are commonly used to grant one AWS service access to another service.
Example – Attaching a role to an EC2 instance so it can access the contents of an S3 bucket.
Policies are permission documents; they are attached to users, groups, or roles to define what actions those identities may perform.
Example – Attaching a policy to a user that grants access to S3 buckets.
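The permission document itself looks the same whether it is attached to a user, a group, or a role. Below is a minimal sketch of an identity-based policy granting S3 read access (the bucket name `example-bucket` is hypothetical); it is written to a temp file and syntax-checked, since attaching it with `aws iam put-user-policy` or `put-role-policy` would need AWS credentials:

```shell
# Minimal S3 read-only policy document; bucket name is a placeholder.
POLICY=$(mktemp)
cat > "$POLICY" <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": ["arn:aws:s3:::example-bucket", "arn:aws:s3:::example-bucket/*"]
    }
  ]
}
EOF
python3 -m json.tool "$POLICY" > /dev/null && echo "policy is valid JSON"
```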
6. What are the two types of access that you can provide when you are creating users?
Ans: Following are the two types of access that you can create:
- Programmatic access
- Console access
7. I am viewing an AWS Console but unable to launch the instance, I receive an IAM Error how can I rectify it?
Ans: As an AWS user I don’t have access to launch instances; an administrator needs to attach a policy granting the required EC2 permissions before I can use it.
8. What are the different ways to access AWS?
Ans: There are three different ways
1. AWS Console
2. AWS CLI (Command Line Interface)
3. AWS SDK (Software Development Kit)
9. How is the Root AWS user different from an IAM user?
Ans: The root user has access to the entire AWS environment and has no policy attached to it, while an IAM user can only perform tasks on the basis of the policies attached to it.
10. What is the meaning of non-explicit deny for an IAM User?
Ans: When an IAM user is created without any policy attached, the user is implicitly (non-explicitly) denied: they will not be able to access any AWS service until a policy allowing access has been attached.
11. What is the precedence level between explicit allow and explicit deny?
Ans: Explicit deny always overrides explicit allow.
12. What is the benefit of creating a group in IAM?
Ans: Creating a group makes user management much simpler: users needing the same kind of permissions can be added to a group, and attaching a policy to the group is much simpler than attaching it to each user manually.
13. What is the difference between the Administrative Access and Power User Access in term of pre-build policy?
Ans: Administrative Access grants full access to AWS resources, while Power User Access grants admin access except for the user/group management permissions.
An Administrator User will be similar to the owner of the AWS Resources. He can create, delete, modify or view the resources and also grant permissions to other users for the AWS Resources.
A Power User Access provides Administrator Access without the capability to manage the users and permissions. In other words, a user with Power User Access can create, delete, modify or see the resources, but he cannot grant permissions to other users
14. I don’t want my AWS Account id to be exposed to users how can I avoid it?
Ans: In the IAM console there is an option to create an account alias for the sign-in URL, so users sign in with the alias instead of the AWS account ID.
15. What are the benefits of STS (Security Token Service)?
Ans: It helps in securing the AWS environment, as we need not embed or distribute long-term AWS security credentials in the application. Because the credentials issued by STS are temporary, we need not rotate or revoke them.
16. What is the benefit of creating the AWS Organization?
Ans: It helps in managing the IAM Policies, creating the AWS Accounts programmatically, helps
in managing the payment methods and consolidated billing.
17. What is the purpose of Identity Provider?
Ans: An Identity Provider helps in building trust between AWS and the corporate AD environment when we set up identity federation, so corporate users can access AWS without creating separate IAM users.
18. What are the policies that you can set for your user’s
passwords?
Ans:
- You can set a minimum length of the password.
- You can ask the users to add at least one number or special character to the password
- Assigning the requirements of particular character types, including uppercase letters, lowercase letters, numbers, and non-alphanumeric characters.
- You can enforce automatic password expiration, prevent the reuse of old passwords, and request for a password reset upon their next AWS sign-in.
- You can have the AWS users contact an account administrator when the user has allowed the password to expire.
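These password requirements map onto a single AWS CLI call. The flag values below are illustrative choices, not recommendations, and the call needs AWS credentials with `iam:UpdateAccountPasswordPolicy`:

```shell
# Illustrative account password policy; adjust the values to your own standards.
aws iam update-account-password-policy \
    --minimum-password-length 14 \
    --require-uppercase-characters \
    --require-lowercase-characters \
    --require-numbers \
    --require-symbols \
    --max-password-age 90 \
    --password-reuse-prevention 5 \
    --allow-users-to-change-password
```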
AWS CloudWatch related interview questions and answers
1. What is Cloudwatch?
Ans: Cloudwatch is a monitoring tool that you can use to monitor your various AWS resources.
Like health check, network, Application, etc.
2. What are the types in cloudwatch?
Ans: There are 2 types of monitoring in CloudWatch:
- Basic monitoring (free)
- Detailed monitoring (chargeable)
3. What are the cloudwatch metrics that are available for EC2 instances?
Ans: Below are the metrics that we can monitor using AWS Cloudwatch:
- DiskReadOps / DiskReadBytes
- DiskWriteOps / DiskWriteBytes
- CPUUtilization
- NetworkPacketsIn
- NetworkPacketsOut
- NetworkIn
- NetworkOut
- CPUCreditUsage
- CPUCreditBalance
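As a sketch, any of these per-instance metrics can be pulled with the AWS CLI; the instance ID and time range below are placeholders, and the call needs valid AWS credentials:

```shell
# Average CPU utilization in 5-minute buckets for one instance (placeholder ID).
aws cloudwatch get-metric-statistics \
    --namespace AWS/EC2 --metric-name CPUUtilization \
    --dimensions Name=InstanceId,Value=i-0abc1234567890 \
    --start-time 2024-01-01T00:00:00Z --end-time 2024-01-01T01:00:00Z \
    --period 300 --statistics Average
```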
4. Differentiate Basic and Detailed monitoring in cloud watch?
Ans:
Basic Monitoring: Basic monitoring sends data points to Amazon cloud watch every five minutes for
a limited number of preselected metrics at no charge.
Detailed Monitoring: Detailed monitoring sends data points to Amazon CloudWatch every minute and allows data aggregation, for an additional charge.
5. Which types of metrics available by default in the AWS Cloudwatch Service?
Ans: By default, AWS CloudWatch provides metrics like
- CPU utilization
- Disk I/O
- Network traffic for EC2 instances.
Memory usage and Disk space metrics are not included out of the box.
6. How to setup AWS Cloudwatch service for 100 servers ?
Ans: To manage memory metrics for 100 servers in the cloud using AWS CloudWatch, you can follow these steps to set up and deploy the CloudWatch Agent across all servers efficiently:
1. Install the CloudWatch Agent on Your Servers:
You need to install the CloudWatch Agent on all 100 servers. This can be done manually, but for a large number of servers, automation tools like AWS Systems Manager (SSM), AWS Elastic Beanstalk, or configuration management tools like Ansible, Chef, or Puppet are more effective.
Manual Installation for linux:
sudo yum install amazon-cloudwatch-agent -y
Manual Installation for windows:
- Download the cloudwatch agent from this Link
- Install the agent.
2. Create and Configure the CloudWatch Agent Configuration File:
Create a configuration file for the CloudWatch Agent that specifies which metrics to collect. You can do this manually or generate one using the wizard provided by AWS.
Using the Wizard:
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard
The wizard will guide you through creating a JSON configuration file. You can include settings for collecting memory metrics, disk space, CPU usage, etc.
Example configuration file:
{
  "metrics": {
    "metrics_collected": {
      "mem": {
        "measurement": ["mem_used_percent"],
        "metrics_collection_interval": 60
      }
    }
  }
}
3. Store the Configuration in SSM Parameter Store:
Once you have the configuration file, store it in the AWS Systems Manager (SSM) Parameter Store for easy access and management.
aws ssm put-parameter --name "CloudWatchAgentConfig" --type "String" --value file://config.json --overwrite
4. Deploy the CloudWatch Agent Using Systems Manager:
Use AWS Systems Manager to deploy the CloudWatch Agent across all 100 servers. You can do this by using the “Run Command” feature in SSM.
Steps:
- Go to the AWS Systems Manager Console.
- Navigate to “Run Command”.
- Choose “AWS-ConfigureAWSPackage” as the command document.
- In the command parameters, set action=Install, name=AmazonCloudWatchAgent, and specify the servers by their Instance IDs or tags.
- Use the SSM Parameter Store path for the configuration file.
5. Start the CloudWatch Agent on All Servers
Ensure the CloudWatch Agent is started on each server, either manually or using the Run Command:
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
-a start \
-m ec2 \
-c ssm:CloudWatchAgentConfig
6. Monitor Metrics in CloudWatch
Once the CloudWatch Agent is running on all servers, you can monitor memory usage and other custom metrics in the CloudWatch console. You can create dashboards, set alarms, and analyze the collected data.
For ongoing management, use Systems Manager for updates and redeployment. Automation scripts or Infrastructure as Code (IaC) tools like Terraform can also be used to maintain consistency across servers.
AWS EBS and EFS Related interview questions
1. How will you secure data at rest in EBS?
Ans: Enable EBS encryption: create the volume as an encrypted volume, with keys managed through AWS KMS. Data at rest on the volume, data moving between the instance and the volume, and snapshots created from the volume are then all encrypted.
2. A high demand of IOPS performance is expected, around 15000. Which EBS volume type would you recommend?
Ans: Provisioned IOPS
3. What are EBS Volumes?
Ans: An EBS volume is a persistent volume that you can attach to instances. By using EBS volumes, your data persists when you stop your instance. An Amazon EBS volume is automatically replicated within its Availability Zone to protect against component failure, offering high availability and durability. Amazon EBS volumes are available in a variety of types that differ in performance characteristics and price.
4. How EBS can be accessed?
Ans: EBS provides high-performance block-level storage which can be attached to a running EC2 instance. The storage can be formatted and mounted on the EC2 instance, and then it can be accessed.
5. What is the Process to mount EBS to EC2 instance
Ans:
fdisk -l              # list the attached disks and confirm the device name
mkfs.ext4 /dev/xvdf   # create a filesystem on the new volume
mkdir /my5gbdata      # create a mount point
mount /dev/xvdf /my5gbdata
df -k                 # verify the volume is mounted
6. How to add a volume permanently to an instance?
Ans: With each restart the volume will get unmounted from the instance; to keep it attached, add an entry to /etc/fstab:
/dev/xvdf /my5gbdata ext4 defaults 0 0 <edit the device and file system name accordingly>
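The fstab step can be sketched as below. The entry is written to a temporary file here purely for illustration; on a real server you would append the line to /etc/fstab itself. The `nofail` option is an extra, commonly used safeguard so boot does not hang if the volume is detached:

```shell
# Demo against a temp file; on a real server, append the line to /etc/fstab.
FSTAB=$(mktemp)
echo "/dev/xvdf /my5gbdata ext4 defaults,nofail 0 2" >> "$FSTAB"
cat "$FSTAB"
```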
7. What is EBS Snapshot?
Ans: It can back up the data on the EBS Volume. Snapshots are incremental backups. If this is your first snapshot it may take some time to create. Snapshots are point in time copies of volumes.
8. How to connect an EBS volume to multiple instances?
Ans: We cannot attach a standard EBS volume to multiple instances at the same time (EBS Multi-Attach on io1/io2 volumes is the exception), but we can attach multiple EBS volumes to a single instance.
9. What are the types of EBS Volumes?
Ans: Following are the types of EBS Volumes:
- General Purpose
- Provisioned IOPS
- Magnetic
- Cold HDD
- Throughput Optimized
10. What is cold HDD and Throughput-optimized HDD?
Ans:
Cold HDD: Cold HDD volumes are designed for less frequently accessed workloads. These volumes are significantly less expensive than throughput-optimized HDD volumes.
EBS volume size: 500 GB to 16 TB. Maximum IOPS: 200. Maximum throughput: 250 MB/s.
Throughput-Optimized HDD: Throughput-optimized HDD volumes are low cost HDD volumes designed for frequent access, throughput-intensive workloads such as big data, data warehouse.
EBS volume size: 500 GB to 16 TB. Maximum IOPS: 500. Maximum throughput: 500 MB/s.
11. Is it possible to reduce an EBS volume?
Ans: No, it’s not possible; we can increase the size of an EBS volume but not reduce it.
12. How to create Encrypted EBS volume?
Ans: You need to select the “Encrypt this volume” option on the volume creation page. During creation a new master key will be created, unless you select a master key that you created separately in the service. Amazon uses the AWS Key Management Service (KMS) to handle key management.
13. Is EFS a centralised storage service in AWS?
Ans: Yes
14. What are the advantage and disadvantage of EFS?
Ans:
Advantages:
- Fully managed service
- File system grows and shrinks automatically to petabytes
- Can support thousands of concurrent connections
- Multi-AZ replication
- Throughput scales automatically to ensure consistent low latency
Disadvantages:
- Not available in all regions
- Cross-region capability not available
- More complicated to provision compared to S3 and EBS
15. I need to modify the EBS volumes in Linux and Windows; is it possible?
Ans: Yes, it is possible. From the console, use the “Modify Volume” option and enter the size you need. Then, to use the extra space, on Windows go to Disk Management; on Linux grow the filesystem and remount it.
16. You want to create another Encrypted volume from this unencrypted volume. Which of the following steps can achieve this?
Ans: Below are the steps:
- Create a snapshot of the unencrypted volume (applying encryption parameters)
- copy the Snapshot
- create a new volume from the copied snapshot
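The three steps above can be sketched with the AWS CLI. All IDs, the region, and the Availability Zone are placeholders; each call needs valid AWS credentials, and in practice the snapshot IDs are taken from the output of the previous commands:

```shell
# 1. Snapshot the unencrypted volume (the snapshot itself is still unencrypted).
aws ec2 create-snapshot --volume-id vol-0unencrypted123

# 2. Copy the snapshot, applying encryption during the copy.
aws ec2 copy-snapshot --source-region us-east-1 \
    --source-snapshot-id snap-0abc1234567890 --encrypted

# 3. Create a new (now encrypted) volume from the encrypted copy.
aws ec2 create-volume --snapshot-id snap-0encryptedcopy \
    --availability-zone us-east-1a
```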
17. What is Amazon EBS-Optimized instances?
Ans: Amazon EBS-optimized instances ensure that the Amazon EC2 instance is prepared to take full advantage of the I/O of the Amazon EBS volume. An Amazon EBS-optimized instance uses an optimized configuration stack and provides additional dedicated capacity for Amazon EBS. When you select an EBS-optimized instance, you may pay an additional hourly charge for that instance.
AWS S3 Bucket Related Interview Questions and Answers
1. What is S3?
Ans: S3 stands for Simple Storage Service. It is a storage service that provides an interface that you can use to store any amount of data, at any time, from anywhere in the world. With S3 you pay only for what you use and the payment model is pay-as-you-go.
2. What are the things we need to remember while creating s3 bucket?
Ans:
- Amazon S3 bucket names must be unique across all of AWS
- Bucket names can contain up to 63 lowercase letters, numbers, and hyphens
- You can create and use multiple buckets
- You can have up to 100 buckets per account by default
3. What is the minimum and maximum size of individual objects that you can store in S3?
Ans: The minimum size is 0 Bytes and maximum size is 5TB.
4. Can objects in Amazon S3 be delivered through amazon cloud front?
Ans: Yes
5. Difference between EBS,EFS and S3
Ans:
We can access EBS only if it is mounted to an instance; at a time, an EBS volume can be mounted to only one instance.
EFS can be shared at a time with multiple instances.
S3 can be accessed without mounting to instances.
6. Differentiate Block storage and File storage?
Ans:
Block Storage: Block storage operates at a lower level, the raw storage device level, and manages data as a set of numbered, fixed-size blocks.
File Storage: File storage operates at a higher level, the operating system level, and manages data as a named hierarchy of files and folders.
7. What are the different storage classes in S3?
Ans: Following are the types of storage classes in S3:
- Amazon S3 Standard
- Amazon S3 Standard-Infrequent Access
- Amazon S3 Reduced Redundancy Storage
- Glacier
8. What is the default storage class in S3?
Ans: The default storage class in S3 is S3 Standard (frequently accessed).
9. How can you send request to Amazon S3?
Ans: Every request to Amazon S3 is either authenticated or anonymous. Authentication is the process of validating the identity of the requester trying to access an Amazon Web Services (AWS) product. Authenticated requests must include a signature value that authenticates the request sender. The signature value is, in part, created from the requester’s AWS access keys (access key ID and secret access key).
10. What is Glacier?
Ans: Glacier is the backup and archival storage service that you can use to archive data from S3 at low cost.
11. What is the maximum individual archive that you can store in glacier?
Ans: You can store a maximum individual archive of up to 40 TB.
12. How can you secure the access to your S3 bucket?
Ans: There are two ways that you can control the access to your S3 buckets:
- ACL – Access Control List
- Bucket policies
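A bucket policy is a resource-based document attached to the bucket itself, unlike an identity-based policy attached to a user. Below is a minimal sketch (the bucket name and account ID are hypothetical), written to a temp file and syntax-checked, since applying it with `aws s3api put-bucket-policy` would need AWS credentials:

```shell
# Sketch bucket policy letting one account (placeholder ID) read objects.
BPOLICY=$(mktemp)
cat > "$BPOLICY" <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
EOF
python3 -m json.tool "$BPOLICY" > /dev/null && echo "bucket policy is valid JSON"
```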
13. What is the maximum number of buckets which can be created in AWS?
Ans: 100 buckets can be created by default per AWS account. To get more buckets, you have to request a service limit increase from Amazon.
14. How can you encrypt data in S3?
Ans: You can encrypt the data by using the below methods:
- Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3, AES-256)
- Server-Side Encryption with AWS KMS keys (SSE-KMS)
- Server-Side Encryption with Customer-Provided Keys (SSE-C)
15. What are the parameters for S3 pricing?
Ans: The pricing model for S3 is as below:
- Storage used
- Number of requests you make
- Storage management
- Data transfer
- Transfer acceleration
16. Explain Amazon S3 lifecycle rules?
Ans: Using Amazon S3 lifecycle configuration rules, you can significantly reduce your storage costs by automatically transitioning data from one storage class to another, or even automatically deleting data after a period of time. For example:
- Store backup data initially in Amazon S3 Standard
- After 30 days, transition to Amazon S3 Standard-IA
- After 90 days, transition to Amazon Glacier
- After 3 years, delete
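The example schedule above can be sketched in the JSON form accepted by `aws s3api put-bucket-lifecycle-configuration`. The rule ID and the `backups/` prefix are illustrative choices; the file is syntax-checked here since applying it would need AWS credentials:

```shell
# Sketch lifecycle config: Standard -> Standard-IA (30d) -> Glacier (90d) -> delete (3y).
LIFECYCLE=$(mktemp)
cat > "$LIFECYCLE" <<'EOF'
{
  "Rules": [
    {
      "ID": "backup-tiering",
      "Status": "Enabled",
      "Filter": { "Prefix": "backups/" },
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" },
        { "Days": 90, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 1095 }
    }
  ]
}
EOF
python3 -m json.tool "$LIFECYCLE" > /dev/null && echo "lifecycle config is valid JSON"
```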
17. What is the relation between Amazon S3 and AWS KMS?
Ans: To encrypt Amazon S3 data at rest, you can use several variations of Server-Side Encryption (SSE). Amazon S3 encrypts your data at the object level as it writes it to disks in its data centers and decrypts it for you when you access it. SSE performed by Amazon S3 and AWS Key Management Service (AWS KMS) uses the 256-bit Advanced Encryption Standard (AES).
18. What is the function of cross region replication in Amazon S3?
Ans: Cross-region replication is a feature that allows you to asynchronously replicate all new objects in a source bucket in one AWS region to a target bucket in another region. To enable cross-region replication, versioning must be turned on for both source and destination buckets. Cross-region replication is commonly used to reduce the latency required to access objects in Amazon S3.
19. What is the pre-requisite to work with Cross region replication in S3?
Ans: Below are the pre-requisites:
- Versioning must be enabled on both the source bucket and the destination bucket.
- The source and destination buckets should be in different regions.
20. One of my S3 buckets is deleted but I need to restore it. Is there any possible way?
Ans: If versioning was enabled, deleted objects can be restored from their previous versions (by removing the delete markers). A bucket that has itself been deleted cannot be recovered; you can only recreate it and restore the data from a replica or backup.
22. What are the storage class available in Amazon s3?
Ans: Below are the storage classes available for AWS S3 Bucket:
- Amazon S3 Standard
- Amazon S3 Standard-Infrequent Access
- Amazon S3 Reduced Redundancy Storage
- Amazon Glacier
AWS EC2 Related Interview Questions and Answers
1. What is the relation between the Availability Zone and Region?
Ans: Availability Zones are within Regions. They are the building blocks of AWS’s infrastructure within a particular geographical area. A Region typically consists of multiple Availability Zones, usually three or more. The exact number can vary based on the region’s size and demand.
2. What is Region in AWS?
Ans: A region is a geographical area where a cloud provider (such as AWS) has multiple data centers or availability zones. These data centers are isolated from each other but are connected via low-latency, high-throughput networking. Regions are completely independent of each other in terms of power, cooling, and network connectivity.
AWS operates multiple regions around the world, each with its own set of availability zones. For example, AWS has regions in North America, Europe, Asia Pacific, and other locations.
3. What is Zone in AWS?
Ans: An availability zone is a distinct data center within a region. These data centers are isolated from each other to prevent failures in one availability zone from affecting others. They are physically separate from each other but are connected through high-speed, low-latency networks.
Each availability zone typically consists of one or more data centers, with redundant power, networking, and connectivity to ensure high availability and fault tolerance.
The design of availability zones aims to provide resilience against failures, allowing applications and services to remain operational even if one or more availability zones encounter issues.
4. What is Autoscaling?
Ans: Auto-scaling, also known as autoscaling, is a cloud computing feature that automatically adjusts the number of compute resources allocated to an application or service based on predefined criteria.
5. What is IaaS, PaaS, SaaS? Explain with examples?
Ans: The three basic types of cloud services are as follows:
- Infrastructure as a Service (IaaS)
- Platform as a Service (PaaS)
- Software as a Service (SaaS).
1. Infrastructure as a Service (IaaS): provides virtualized computing resources over the internet. Users can rent virtual machines, storage, and networking infrastructure rather than purchasing and managing physical hardware. It allows users to have full control over the operating systems, applications, and development frameworks they use, giving them more flexibility and scalability.
Examples:
- Amazon Elastic Compute Cloud (EC2): Offers resizable compute capacity in the cloud, allowing users to launch virtual servers called instances.
- Amazon Simple Storage Service (S3): Provides scalable object storage for data backup, archival, and analytics.
- Amazon Virtual Private Cloud (VPC): Enables users to provision a logically isolated section of the AWS Cloud where they can launch AWS resources in a virtual network.
2. Platform as a Service (PaaS): PaaS provides a platform allowing customers to develop, run, and manage applications without dealing with the underlying infrastructure complexities. It abstracts away infrastructure management tasks such as hardware provisioning, middleware setup, and software patching, allowing developers to focus solely on coding and deploying applications.
Examples:
- AWS Elastic Beanstalk: Automates the deployment, scaling, and management of applications in the AWS Cloud without needing to manage the infrastructure.
- AWS Lambda: Allows developers to run code without provisioning or managing servers. It automatically scales to handle the incoming requests.
- Amazon RDS (Relational Database Service): Provides managed database services for several database engines, handling tasks such as provisioning, patching, and backups.
3. Software as a Service (SaaS): SaaS delivers software applications over the internet on a subscription basis. Users can access these applications through a web browser without needing to install or maintain any software locally. It eliminates the need for users to manage underlying infrastructure, operating systems, or application updates.
Examples:
- Amazon WorkMail: A secure, managed business email and calendaring service in the cloud.
- Amazon WorkDocs: A secure enterprise storage and sharing service that allows users to collaborate on documents, spreadsheets, and presentations.
- Amazon Connect: A cloud-based contact center service that allows businesses to set up and manage a customer contact center without needing to maintain any infrastructure.
6. How do you upgrade or downgrade a system with near-zero downtime?
Ans: Below are the steps to do:
- Take the AMI of the current server
- Create an EC2 instance from the same AMI (taken in step 1)
- Install the updates
- Run the application
- Remove the Elastic IP from the old server and attach it to the new server (routing the application traffic to the new instance)
- Check the application by testing it
- If the application works fine, then delete the old instance
- Keep the AMI of the old instance for several days in case of any accidental issues in the application. If everything works fine for some days, you can also delete the AMI.
7. If billing usage is very high then how can you know when you are paying too much amount?
Ans: There are several tools and techniques available in AWS to identify if you are paying more than you should be and to correct it. Here are some key ones:
1. AWS Cost Explorer:
- AWS Cost Explorer provides comprehensive tools for analyzing your AWS spending. It offers various views, including cost by service, cost by linked account, and cost by usage type.
- Use Cost Explorer to identify spending trends, anomalies, and areas of high cost. You can set custom filters and groupings to drill down into specific cost categories.
- Analyze cost trends over time to detect any unexpected increases in spending. You can compare current spending to historical data to identify deviations and investigate the underlying causes.
2. AWS Budgets:
- AWS Budgets allows you to set custom budgets and alerts to track your AWS spending. You can define budgets based on cost, usage, or specific AWS services.
- Set up budget notifications to receive alerts via email or Amazon SNS when your spending exceeds predefined thresholds.
- Use AWS Budgets to proactively monitor and manage your AWS spending, ensuring that you stay within budgetary constraints.
3. AWS Trusted Advisor:
- AWS Trusted Advisor provides recommendations for optimizing your AWS infrastructure in various areas, including cost optimization, performance, security, and fault tolerance.
- Use the Cost Optimization section of Trusted Advisor to identify opportunities for reducing costs, such as unused or underutilized resources, idle instances, or unoptimized storage.
- Follow Trusted Advisor’s recommendations to implement cost-saving measures and improve the efficiency of your AWS environment.
4. Resource Tagging:
- Implement resource tagging to categorize and track your AWS resources based on different criteria such as environment, application, department, or project.
- Use tags to allocate costs accurately and identify spending patterns for specific groups of resources. This allows you to attribute costs more effectively and make informed decisions.
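The cost-allocation idea above can be sketched as a simple grouping of per-resource costs by a tag key. The resource records, tag key, and costs below are hypothetical examples, not real billing data.

```python
# Illustrative sketch: allocating monthly costs by a cost allocation tag.
# The resource records, "project" tag key, and costs are hypothetical.

from collections import defaultdict

resources = [
    {"id": "i-0a1", "tags": {"project": "web"},  "cost": 120.0},
    {"id": "i-0b2", "tags": {"project": "web"},  "cost": 80.0},
    {"id": "i-0c3", "tags": {"project": "data"}, "cost": 300.0},
    {"id": "i-0d4", "tags": {},                  "cost": 50.0},  # untagged
]

def costs_by_tag(resources, key):
    """Sum resource costs grouped by the value of one tag key."""
    totals = defaultdict(float)
    for r in resources:
        totals[r["tags"].get(key, "(untagged)")] += r["cost"]
    return dict(totals)

print(costs_by_tag(resources, "project"))
# {'web': 200.0, 'data': 300.0, '(untagged)': 50.0}
```

The "(untagged)" bucket is the one to watch: costs that cannot be attributed to any team or project are the usual place where overspending hides.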
8. What are Status Checks in AWS EC2?
Ans:
System Status checks: These monitor problems with the systems underlying an instance that require AWS involvement to resolve. When you see a system status check failure, you can wait for AWS to resolve the issue, or resolve it yourself (for example, by stopping and starting the instance so it moves to healthy hardware). Typical causes include:
- Network connectivity
- System power
- Software issues at Data Centre
- Hardware issues
Instance Status checks: These monitor issues that need our involvement to fix. If an instance status check fails, we can reboot that particular instance. Typical causes include:
- Failed system status checks
- Memory Full
- Corrupted file system
- Kernel issues
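The triage rule described above can be sketched as a small function: system check failures need AWS (or a stop/start), while instance check failures can usually be handled with a reboot. The status dict shape below loosely mirrors (and greatly simplifies) the output of `aws ec2 describe-instance-status`; treat the field names as illustrative assumptions.

```python
# Hedged sketch of EC2 status-check triage. The dict keys are a simplified
# stand-in for the describe-instance-status response, for illustration only.

def triage(status):
    """Suggest an action based on which status check is impaired."""
    if status.get("SystemStatus") == "impaired":
        return "wait for AWS or stop/start the instance"
    if status.get("InstanceStatus") == "impaired":
        return "reboot the instance"
    return "ok"

print(triage({"SystemStatus": "ok", "InstanceStatus": "impaired"}))
# reboot the instance
```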
9. What do the status check results mean and what are their solutions?
Ans:
- If the status shows 0/2 checks passed, there might be a hardware issue.
- If the status shows 1/2 checks passed, there might be an issue with the OS.
10. What is the maximum number of EC2 instances that can be created in a VPC?
Ans: By default, a maximum of 20 instances can be created in a VPC (this is a soft limit that can be raised by request). We can create 20 Reserved Instances and request Spot Instances as per demand.
11. What are the pricing models for EC2 instances?
Ans: The different pricing models for EC2 instances are as below:
- On-Demand
- Reserved
- Spot Instances
- Scheduled
- Dedicated
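A back-of-the-envelope comparison shows when a Reserved Instance beats On-Demand for an always-on workload. The hourly rates and upfront fee below are hypothetical round numbers, not real AWS prices.

```python
# Rough On-Demand vs. Reserved cost comparison for a 24x7 instance.
# The $0.10/hr, $0.05/hr, and $200 upfront figures are hypothetical.

HOURS_PER_YEAR = 8760

def yearly_cost(hourly_rate, hours=HOURS_PER_YEAR):
    return hourly_rate * hours

on_demand = yearly_cost(0.10)        # pay-as-you-go, no commitment
reserved = 200 + yearly_cost(0.05)   # upfront fee + discounted hourly rate

print(f"on-demand: ${on_demand:.0f}, reserved: ${reserved:.0f}")
# For an instance that runs all year, the Reserved option wins here.
assert reserved < on_demand
```

The trade-off is the 1- or 3-year commitment: for a workload that runs only a few hours a day, On-Demand (or Spot) can come out cheaper.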
12. What are the types of volumes for EC2 instances?
Ans: There are 2 types of volumes used in AWS EC2 Instances:
- Instance Store Volume
- Elastic Block Store (EBS) Volume
An Instance Store Volume is temporary storage that is used to store the temporary data required by an instance to function. The data is available as long as the instance is running. As soon as the instance is turned off, the Instance Store Volume gets removed and the data gets deleted.
EBS Volume represents a persistent storage disk. The data stored in an EBS Volume will be available even after the instance is turned off.
13. What are the different types of instances?
Ans:
- General Purpose
- Compute Optimized
- Storage Optimized
- Memory Optimized
- Accelerated Computing
14. What are Reserved Instances?
Ans: Reserved Instances let you reserve a fixed capacity of EC2 instances at a discounted rate. To use Reserved Instances, you have to commit to a contract term of 1 year or 3 years.
15. How to plan autoscaling?
Ans: The following approaches can be used for autoscaling:
- Manual Autoscaling
- Scheduled Autoscaling
- Dynamic Autoscaling
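Dynamic autoscaling is often implemented as target tracking: the desired capacity scales in proportion to the observed metric versus its target. The sketch below shows the arithmetic only; the metric values and the 50% CPU target are hypothetical examples.

```python
# Sketch of the arithmetic behind dynamic (target-tracking) autoscaling.
# The 50% CPU target and the observed values are hypothetical examples.

import math

def desired_capacity(current, metric_value, target, min_size=1, max_size=10):
    """Scale capacity proportionally to metric/target, clamped to group limits."""
    desired = math.ceil(current * metric_value / target)
    return max(min_size, min(max_size, desired))

# 4 instances averaging 75% CPU with a 50% target -> scale out to 6.
print(desired_capacity(4, 75, 50))  # 6
# 4 instances averaging 25% CPU with a 50% target -> scale in to 2.
print(desired_capacity(4, 25, 50))  # 2
```

Scheduled autoscaling, by contrast, simply sets the desired capacity at predefined times, and manual autoscaling sets it by hand.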
16. What does an AMI include?
Ans: An AMI (Amazon Machine Image) includes following things:
- A template for the root volume for the instance.
- Launch permissions that decide which AWS accounts can use the AMI to launch instances.
- A block device mapping that determines the volumes to attach to the instance when it is launched.
17. What is the relation between the Availability Zone and Regions?
Ans: An AWS Availability Zone is a physical location where an Amazon data center is located. On the other hand, an AWS Region is a collection or group of Availability Zones or Data Centers.
18. What do you understand by stopping and terminating an EC2 Instance?
Ans:
Stopping an EC2 instance: It means to shut it down as you would normally do on your Personal Computer. This will not delete any volumes attached to the instance and the instance can be started again when needed.
Terminating an instance: It is equivalent to deleting an instance. All the volumes attached to the instance get deleted and it is not possible to restart the instance if needed at a later point in time.
19. What are Spot Instances and On-Demand Instances?
Ans: When AWS creates EC2 instances, there are some blocks of computing capacity and processing power left unused. AWS releases these blocks as Spot Instances. Spot Instances run whenever capacity is available. These are a good option if you are flexible about when your applications can run and if your applications can be interrupted.
On the other hand, On-Demand Instances can be created as and when needed. The prices of such instances are static. Such instances will always be available unless you explicitly terminate them.
20. Explain Connection Draining.
Ans: Connection Draining is a feature provided by AWS which enables servers that are about to be updated or removed to finish serving their current requests.
If Connection Draining is enabled, the Load Balancer will allow an outgoing instance to complete the current requests for a specific period but will not send any new request to it. Without Connection Draining, an outgoing instance will immediately go off and the requests pending on that instance will error out.
21. Can you change the Private IP Address of an EC2 instance while it is running or in a stopped state?
Ans: No, a Private IP Address of an EC2 instance cannot be changed. When an EC2 instance is launched, a private IP Address is assigned to that instance at the boot time. This private IP Address is attached to the instance for its entire lifetime and can never be changed.
22. What is the use of lifecycle hooks in Autoscaling?
Ans: Lifecycle hooks are used in Auto Scaling to add a wait period before a scale-in or scale-out event completes, so that custom actions (such as installing software or draining logs) can run while the instance is paused.
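The hook flow can be sketched as a small state transition: a launching instance is held in a wait state until a custom action completes or a timeout expires. The `Pending:Wait`/`Pending:Proceed` state names and the 3600-second default heartbeat timeout mirror the Auto Scaling lifecycle; the rest is a simplified illustration.

```python
# Illustrative sketch of an Auto Scaling launch lifecycle hook: the instance
# stays in Pending:Wait until the custom action finishes or the heartbeat
# timeout (default 3600s) expires, then moves to Pending:Proceed.

def lifecycle_transition(state, action_done, elapsed, timeout=3600):
    """Return the next lifecycle state for a launching instance."""
    if state == "Pending:Wait":
        if action_done or elapsed >= timeout:
            return "Pending:Proceed"
        return "Pending:Wait"
    return state

print(lifecycle_transition("Pending:Wait", action_done=True, elapsed=120))
# Pending:Proceed
```

In the real service, the custom action signals completion by calling complete-lifecycle-action; a symmetric pair of states (`Terminating:Wait`/`Terminating:Proceed`) exists for scale-in.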
23. What is the difference between stopping and terminating an EC2 instance?
Ans:
Stopping EC2 Instance: When you stop an EC2 instance, it performs a normal shutdown on the instance and moves to a stopped state.
Terminating EC2 Instance: When you terminate the instance, it moves to a terminated state, and the EBS volumes that are set to delete on termination are deleted and can never be recovered.
24. How can you recover/login to an EC2 instance for which you have lost the key?
Ans: Follow the steps below to recover the lost key:
- Verify that the EC2Config service is running.
- Detach the root volume for the instance.
- Attach the volume to a temporary instance
- Modify the configuration file
- Detach the volume from the temporary instance and reattach it to the original instance as the root volume
- Restart the original instance
25. How do you configure CloudWatch to recover an EC2 instance?
Ans: Following are the steps to recover an EC2 instance:
- Create an Alarm using Amazon Cloudwatch
- In the Alarm, go to Define Alarm -> Action Tab
- Choose Recover this instance option.
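The same alarm can be created programmatically; the sketch below only builds the parameter dict such a recover alarm would carry (no AWS call is made), assuming the boto3-style `put_metric_alarm` parameters. The instance ID and the region in the ARN are placeholders.

```python
# Hedged sketch: parameters for a CloudWatch alarm that recovers an EC2
# instance when the system status check fails. Only the dict is built here;
# no AWS call is made. Instance ID and region are placeholder values.

alarm_params = {
    "AlarmName": "recover-ec2-instance",
    "Namespace": "AWS/EC2",
    "MetricName": "StatusCheckFailed_System",
    "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    "Statistic": "Maximum",
    "Period": 60,
    "EvaluationPeriods": 2,
    "Threshold": 1,
    "ComparisonOperator": "GreaterThanOrEqualToThreshold",
    # The recover action is expressed as an EC2 "automate" action ARN:
    "AlarmActions": ["arn:aws:automate:us-east-1:ec2:recover"],
}
print(alarm_params["AlarmActions"][0])
```

Note that recovery only makes sense for system status check failures (underlying hardware problems); instance status check failures are typically handled with a reboot action instead.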
26. What are the common types of AMI designs?
Ans: There are many types of AMIs, but some of the common ones are:
- Fully Baked AMI
- Just Enough Baked AMI (JeOS AMI)
- Hybrid AMI
27. How do you upgrade or downgrade a system with near zero downtime?
Ans: You can upgrade or downgrade a system with near zero downtime using the
following steps of migration:
- Open EC2 console
- Choose Operating System AMI
- Launch an instance with the new instance type
- Install all the updates
- Install applications
- Test the instance to see if it’s working
- If working, deploy the new instance and replace the older instance
- Once the new instance is serving traffic, the system has been upgraded or downgraded with near zero downtime
28. What are the tools and techniques that you can use in AWS to identify if you are paying more than you should be, and how to correct it?
Ans: You can know that you are paying the correct amount for the resources that you are using by employing the following resources:
- Check the top services table – It is a dashboard in the cost management console that shows you the top five most used services. This will let you know how much money you are spending on the resources in question.
- Cost explorer – There are cost explorer services available which will help you to view and analyze your usage costs for the last 13 months. You can also get a cost forecast for the upcoming three months.
- AWS Budgets – This allows you to plan a budget for the services. Also, it allows you to check if the current plan meets your budget and the details of how you use the services.
- Cost allocation tags – This helps in identifying the resource that has cost more in a particular month. It lets you organize your resources and cost allocation tags to keep track of your AWS costs.
29. What are the native AWS Security logging capabilities?
Ans: Most of the AWS services have their own logging options. Also, some of them have account-level logging, like AWS CloudTrail, AWS Config, and others. Let’s take a look at two services in particular:
- AWS CloudTrail – This is a service that provides a history of the AWS API calls for every account. It lets you perform security analysis, resource change tracking, and compliance auditing of your AWS environment as well. The best part about this service is that it lets you configure it to send notifications via AWS SNS when new logs are delivered.
- AWS Config – This helps you understand the configuration changes that happen in your environment. This service provides an AWS inventory that includes configuration history, configuration change notification, and relationships between AWS resources. It can also be configured to send notification via AWS SNS when new logs are delivered.
30. What is a DDoS attack and what services can minimize its impact?
Ans: DDoS (Distributed Denial of Service) is a cyber-attack in which the perpetrator floods a website or service with requests from many sources so that legitimate users cannot access it. The native tools that can help you mitigate DDoS attacks on your AWS services are:
- AWS Shield
- AWS WAF
- Amazon Route53
- Amazon CloudFront
- ELB
- VPC
31. Name some of the AWS services that are not region-specific.
Ans:
- IAM
- Route 53
- Web Application Firewall
- CloudFront