AWS Interview Question Part 3

What are the different types of EC2 instances?

Following are the different types of instances:

  • General Purpose Instance type
    General purpose instances are the most widely used instance family. There are two types of General Purpose instances: fixed performance (e.g., M3 and M4) and burstable performance (e.g., T2). Typical workloads include development environments, build servers, code repositories, low-traffic websites and web applications, microservices, etc.

    Following are the General Purpose Instances:
    • T2 instances: T2 instances receive CPU credits while they sit idle and spend those credits when they are active. They do not use the CPU consistently, but they can burst to a higher performance level when the workload requires it.
    • M4 instances: M4 instances are the latest generation of General Purpose instances. They offer a good balance of compute, memory, and network resources and are a solid choice for many general workloads.
    • M3 instances: M3 is the prior generation of M4. M3 instances are mainly used for data processing tasks that require additional memory, caching fleets, and running backend servers for SAP and other enterprise applications.
  • Compute Optimized Instance type
    Compute Optimized Instance type consists of two instance types: C4 and C3.
    • C3 instances: C3 instances are mainly used for applications that require very high CPU usage. They are recommended for workloads that need high computing power, as they offer high-performing processors.
    • C4 instances: C4 is the next generation of C3 and is likewise aimed at applications that require high computing power. It uses the Intel Xeon E5-2666 v3 processor and hardware virtualization. According to the AWS specifications, C4 instances run at a base clock speed of 2.9 GHz and can reach a clock speed of 3.5 GHz.
  • GPU Instances
    GPU instances consist of G2 instances which are mainly used for gaming applications that require heavy graphics and 3D application data streaming. It consists of a high-performance NVIDIA GPU which is suitable for audio, video, 3D imaging, and graphics streaming kinds of applications. To run the GPU instances, NVIDIA drivers must be installed.
  • Memory Optimized Instances
    Memory Optimized Instances consist of R3 instances, which are designed for memory-intensive applications. R3 instances use the Intel Xeon Ivy Bridge processor and can sustain a memory bandwidth of 63,000 MB/sec. They are well suited for high-performance databases, in-memory analytics, and distributed memory caches.
  • Storage Optimized Instances
    Storage Optimized Instances consist of two types of instances: I2 and D2 instances.
    • I2 instances: I2 instances provide fast SSD storage for sequential read and write access to large data sets, and also deliver high random I/O performance to your applications. They are best suited for applications such as high-frequency online transaction processing systems, relational databases, NoSQL databases, caches for in-memory databases, data warehousing, and low-latency ad-tech serving.
    • D2 instances: D2 is a dense-storage instance type that combines high-frequency Intel Xeon E5-2676 v3 processors, HDD storage, and high disk throughput.
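The CPU-credit behavior of burstable (T2) instances described above can be sketched with a toy simulation. The accrual and spend rates below are illustrative placeholders, not actual T2 figures:

```python
def simulate_credits(usage, accrual_rate=6, spend_per_busy=60, start=0, cap=144):
    """Toy model of T2-style CPU credits.

    usage: list of per-hour flags (False = idle, True = bursting).
    Credits accrue every hour up to a cap; bursting spends them.
    All rates here are illustrative, not real T2 numbers.
    """
    credits = start
    history = []
    for busy in usage:
        credits = min(credits + accrual_rate, cap)      # earn credits each hour
        if busy:
            credits = max(credits - spend_per_busy, 0)  # spend while bursting
        history.append(credits)
    return history

# Ten idle hours bank credits; the eleventh, busy hour spends most of them.
print(simulate_credits([False] * 10 + [True]))
```

The idea to observe is the shape, not the numbers: idle hours accumulate a balance that a later burst draws down.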

What is the default storage class in S3?

The default storage class in S3 is S3 Standard, which is designed for frequently accessed data.

What is a snowball?

Snowball is a petabyte-scale data transport solution that uses secure appliances to transfer large amounts of data into and out of the AWS cloud.


Difference between Stopping and Terminating the instances?

Stopping: Stopping an EC2 instance shuts it down. Its EBS volume remains attached, so you can start the instance again later.

Terminating: Terminating an EC2 instance removes it from your AWS account. When you terminate an instance, its EBS root volume is (by default) also deleted, which is why a terminated instance cannot be restarted.

How many Elastic IPs can you create?

By default, you can create 5 Elastic IP addresses per AWS account per Region; this limit can be raised by requesting a quota increase.

What is a Load Balancer?

A load balancer distributes your incoming web application traffic, whether HTTP or HTTPS, across multiple servers so that no single web server gets overwhelmed.
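The balancing idea can be sketched with a minimal round-robin distributor. This is a simplification: real load balancers also perform health checks and connection-aware routing, and the server names below are placeholders:

```python
import itertools

def round_robin(servers):
    """Return a function that hands out backend servers in rotation,
    so no single server receives all of the traffic."""
    cycle = itertools.cycle(servers)
    return lambda: next(cycle)

pick = round_robin(["web-1", "web-2", "web-3"])
print([pick() for _ in range(6)])  # each server gets an equal share
```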


What is VPC?

VPC stands for Virtual Private Cloud. It is a logically isolated area of the AWS cloud where you can launch AWS resources in a virtual network that you define. It gives you complete control over your virtual networking environment, such as selecting your own IP address range, creating subnets, and configuring route tables and network gateways.

What is VPC peering connection?

  • A VPC peering connection is a networking connection that allows you to connect one VPC with another VPC through a direct network route using private IP addresses.
  • By using a VPC peering connection, instances in different VPCs can communicate with each other as if they were in the same network.
  • You can peer VPCs in the same AWS account as well as VPCs in different AWS accounts.

What are NAT Gateways?

NAT stands for Network Address Translation. A NAT gateway is an AWS service that enables EC2 instances in a private subnet to connect to the internet or other AWS services, while preventing the internet from initiating connections to those instances.


How can you control the security to your VPC?

You can control the security to your VPC in two ways:

  • Security Groups
    It acts as a virtual firewall for associated EC2 instances that control both inbound and outbound traffic at the instance level.
  • Network access control lists (NACL)
    It acts as a firewall for associated subnets that control both inbound and outbound traffic at the subnet level.

What are the different database types in RDS?

Following are the different database types in RDS:

  • Amazon Aurora
    It is a database engine developed for RDS. Aurora can run only on AWS infrastructure, unlike MySQL, which can be installed on any local device. It is a MySQL-compatible relational database engine that combines the speed and availability of traditional enterprise databases with the simplicity and cost-effectiveness of open-source databases.
  • PostgreSQL
    • PostgreSQL is an open-source relational database popular with many developers and startups.
    • It is easy to set up, operate, and can also scale PostgreSQL deployments in the cloud.
    • You can scale PostgreSQL deployments in minutes in a cost-efficient manner.
    • Amazon RDS manages time-consuming administrative tasks such as PostgreSQL software installation, storage management, and backups for disaster recovery.
  • MySQL
    • It is an open source relational database.
    • It is easy to set up, operate, and can also scale MySQL deployments in the cloud.
    • By using Amazon RDS, you can deploy scalable MySQL servers in minutes in a cost-efficient manner.
  • MariaDB
    • It is an open source relational database created by the developers of MySQL.
    • It is easy to set up, operate, and can also scale MariaDB server deployments in the cloud.
    • By using Amazon RDS, you can deploy scalable MariaDB servers in minutes in a cost-efficient manner.
    • It frees you from managing administrative tasks such as backups, software patching, monitoring, scaling and replication.
  • Oracle
    • It is a relational database developed by Oracle.
    • It is easy to set up, operate, and can also scale Oracle database deployments in the cloud.
    • You can deploy multiple editions of Oracle in minutes in a cost-efficient manner.
    • It frees you from managing administrative tasks such as backups, software patching, monitoring, scaling and replication.
    • You can run Oracle under two different licensing models: “License Included” and “Bring Your Own License (BYOL)”. In the License Included model, you do not have to purchase an Oracle license separately, as it is already licensed by AWS; pricing starts at $0.04 per hour. If you have already purchased an Oracle license, you can use the BYOL model to run Oracle databases in Amazon RDS, with pricing starting at $0.025 per hour.
  • SQL Server
    • SQL Server is a relational database developed by Microsoft.
    • It is easy to set up, operate, and can also scale SQL Server deployments in the cloud.
    • You can deploy multiple editions of SQL Server in minutes in a cost-efficient manner.
    • It frees you from managing administrative tasks such as backups, software patching, monitoring, scaling and replication.

What is Redshift?

  • Redshift is a fast, powerful, scalable and fully managed data warehouse service in the cloud.
  • It provides up to ten times faster performance than other data warehouses by using machine learning, massively parallel query execution, and columnar storage on high-performance disks.
  • You can query petabytes of data in a Redshift data warehouse and exabytes of data in your data lake built on Amazon S3.


What is SNS?

SNS stands for Simple Notification Service. It is a web service that provides a highly scalable, cost-effective, and flexible capability to publish messages from an application and send them to other applications. In short, it is a way of sending messages.

What are the different types of routing policies in route53?

Following are the different types of routing policies in route53:

  • Simple Routing Policy
    • Simple Routing Policy is a basic round-robin policy applied when a single resource performs the function for the domain, for example, one web server serving the content for a website.
    • It responds to DNS queries based on the values present in the resource record set.
  • Weighted Routing Policy
    • Weighted Routing Policy allows you to route the traffic to different resources in specified proportions. For example, 75% in one server, and 25% in another server.
    • Weights can be assigned in the range from 0 to 255.
    • The Weighted Routing policy is applied when multiple resources perform the same function, for example, web servers serving the same website. Each web server is given its own weight value.
    • Weighted Routing Policy associates the multiple resources to a single DNS name.
  • Latency-based Routing Policy
    • Latency-based Routing Policy allows Route53 to respond to a DNS query from the data center that provides the lowest latency.
    • It is used when multiple resources serve the same domain; Route53 identifies the resource that provides the fastest response with the lowest latency.
  • Failover Routing Policy
  • Geolocation Routing Policy
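The weighted policy above can be sketched as a deterministic weight-proportional picker. The record names and the 192/64 weights are illustrative values within Route53's 0–255 weight range (a 75%/25% split); in practice the selector value is drawn at random per DNS query:

```python
def weighted_pick(records, r):
    """Pick a record the way a weighted routing policy splits traffic.

    records: list of (name, weight) pairs with weights in 0-255.
    r: a number in [0, 1), normally drawn at random per query.
    Each record is chosen with probability weight / total_weight.
    """
    total = sum(weight for _, weight in records)
    threshold = r * total
    cumulative = 0
    for name, weight in records:
        cumulative += weight
        if threshold < cumulative:
            return name
    return records[-1][0]  # guard against floating-point edge cases

records = [("server-a", 192), ("server-b", 64)]  # 75% / 25% split
print(weighted_pick(records, 0.5))   # falls in server-a's share
print(weighted_pick(records, 0.9))   # falls in server-b's share
```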

What is the maximum size of messages in SQS?

The maximum size of a message in SQS is 256 KB.
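A quick client-side sketch of that limit (SQS counts the encoded size of the message body plus attributes; this check covers the body only):

```python
SQS_MAX_BYTES = 256 * 1024  # 256 KB limit per SQS message

def fits_in_sqs(message: str) -> bool:
    """Check whether a message body fits within the SQS size limit,
    measured on the UTF-8 encoded bytes (attributes not counted here)."""
    return len(message.encode("utf-8")) <= SQS_MAX_BYTES

print(fits_in_sqs("hello"))        # small message fits
print(fits_in_sqs("x" * 300_000))  # 300,000 bytes exceeds 256 KB
```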


Differences between Security group and Network access control list?

  • Rules: A security group supports only allow rules; anything not explicitly allowed is denied, and you cannot create an explicit deny rule. A NACL supports both allow and deny rules; you add each rule as either an allow or a deny.
  • Statefulness: A security group is stateful: return traffic for an allowed connection is automatically permitted, so if you allow inbound port 80 you do not need a matching outbound rule. A NACL is stateless: return traffic is not automatically permitted, so if you add an inbound rule for port 80 you must also add the outbound rule explicitly.
  • Scope: A security group is associated with an EC2 instance; a NACL is associated with a subnet.
  • Evaluation: For a security group, all rules are evaluated before deciding whether to allow the traffic. For a NACL, rules are evaluated in order, starting from the lowest rule number, and the first match wins.
  • Application: A security group is applied to an instance only when you specify it while launching the instance; a NACL applies automatically to all instances in the subnets it is associated with.
  • Defense layer: The security group is the first layer of defense; the NACL is the second layer of defense.
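The ordered, first-match evaluation of NACL rules can be sketched as follows; the rule numbers, ports, and actions are illustrative, and the final `DENY` models the implicit catch-all (`*`) rule:

```python
def nacl_decision(rules, port):
    """Evaluate NACL-style numbered rules in ascending order;
    the first rule matching the port wins. Traffic matched by
    no rule falls through to the implicit deny."""
    for number, rule_port, action in sorted(rules):
        if rule_port == port:
            return action
    return "DENY"  # implicit catch-all (*) deny

rules = [
    (100, 80, "ALLOW"),  # rule 100: allow HTTP
    (200, 80, "DENY"),   # rule 200: never reached for port 80
    (300, 22, "ALLOW"),  # rule 300: allow SSH
]
print(nacl_decision(rules, 80))   # rule 100 wins over rule 200
print(nacl_decision(rules, 443))  # no match -> implicit deny
```

A security group, by contrast, would simply check whether *any* allow rule matches, since it has no deny rules and no ordering.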

What are the two types of access that you can provide when you are creating users?

There are two types of access:

  • Console Access
    If the user wants Console Access, the user needs to create a password to log in to the AWS Management Console.
  • Programmatic access
    With Programmatic access, an IAM user makes API calls, for example by using the AWS CLI or SDKs. To use the AWS CLI, you need to create an access key ID and a secret access key.

What is subnet?

A subnet is created when a large block of IP addresses is divided into smaller units; each of these smaller units is a subnet.


A Virtual Private Cloud (VPC) is a virtual network dedicated to your AWS account. When you create a VPC, you specify its IPv4 address range in the form of a CIDR block. After creating the VPC, you create subnets in each Availability Zone; each subnet has a unique ID. Launching instances in separate Availability Zones protects your applications from the failure of a single location.
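The CIDR subdivision described above can be sketched with Python's standard `ipaddress` module; the 10.0.0.0/16 block is an illustrative VPC range:

```python
import ipaddress

# Carve a VPC's /16 CIDR block into /24 subnets, e.g. one per Availability Zone.
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=24))

print(len(subnets))              # 256 possible /24 subnets in a /16
print(str(subnets[0]))           # 10.0.0.0/24
print(subnets[0].num_addresses)  # 256 addresses per /24
```

(In a real VPC, AWS reserves the first four and the last address of every subnet, so the usable count per /24 is slightly lower.)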


Differences between Amazon S3 and EC2?

Amazon S3:

  • It is a storage service where you can store any amount of data.
  • It provides a REST interface and uses secure HMAC-SHA1 authentication keys.

Amazon EC2:

  • It is a web service used for hosting an application.
  • It is a virtual machine that can run either Linux or Windows and can also run applications such as PHP, Python, Apache, or databases.

Can you establish a peering connection to a VPC in a different region?

Yes. AWS supports inter-region VPC peering, so you can establish a peering connection between VPCs in different Regions as well as in the same Region; traffic between peered VPCs stays on the AWS network. (Originally, peering was limited to VPCs in the same Region.)

How many subnets can you have per VPC?

By default, you can have 200 subnets per VPC; this quota can be increased on request.


When was EC2 officially launched?

EC2 was officially launched in 2006.

What is Amazon ElastiCache?

Amazon ElastiCache is a web service that allows you to easily deploy, operate, and scale an in-memory cache in the cloud.

What are the types of AMI provided by AWS?

There are two types of AMI provided by AWS:

  • Instance store backed
    • An instance store-backed instance is an EC2 instance whose root device resides on disks physically attached to the host machine.
    • When you launch the instance, the AMI is copied to the instance store.
    • Because the root device lives on the host machine’s disks, you cannot stop such an instance; you can only terminate it, and once terminated the instance is deleted and cannot be recovered.
    • If the host machine’s disk fails, you lose your data.
    • You need to leave an instance store-backed instance in a running state until you are completely done with it.
    • You are charged from the moment your instance starts until it is terminated.
  • EBS backed
    • An “EBS backed” instance is an EC2 instance that uses an EBS volume as its root device.
    • EBS volumes are not tied to particular virtual hardware, but they are restricted to an Availability Zone: an EBS volume can be moved from one machine to another only within the same Availability Zone.
    • If the underlying host fails, the volume can be attached to another virtual machine.
    • The main advantage of “EBS backed” instances over instance store-backed instances is that they can be stopped. While an instance is stopped, the EBS volume is preserved for later use and the underlying hardware can serve other instances; in the stopped state you are charged only for the EBS storage, not for compute.


What is Amazon EMR?

Amazon EMR stands for Amazon Elastic MapReduce. It is a web service used to process large amounts of data in a cost-effective manner. The central component of Amazon EMR is the cluster. Each cluster is a collection of EC2 instances, and each instance in a cluster is called a node. Each node has a role within the cluster, known as the node type, and Amazon EMR installs different software components on each node type.

Following are the node types:

  • Master node
    A master node runs the software components to distribute the tasks among other nodes in a cluster. It tracks the status of all the tasks and monitors the health of a cluster.
  • Core node
    A core node runs the software components to process the tasks and stores the data in Hadoop Distributed File System (HDFS). Multi-node clusters will have at least one core node.
  • Task node
    A task node with software components processes the task but does not store the data in HDFS. Task nodes are optional.

How to connect EBS volume to multiple instances?

You cannot attach a standard EBS volume to multiple instances at the same time, but you can attach multiple EBS volumes to a single instance. (Provisioned IOPS io1/io2 volumes do support Multi-Attach within an Availability Zone.)

What is the use of lifecycle hooks in Autoscaling?

Lifecycle hooks let you perform custom actions by pausing instances as the Auto Scaling group launches or terminates them. When an instance is paused, it moves into a wait state; by default, it remains in the wait state for one hour. For example, when a new instance is launched, a lifecycle hook can pause it so you can install software on it or make sure it is completely ready before it starts receiving traffic.


What is Amazon Kinesis Firehose?

An Amazon Kinesis Firehose is a web service used to deliver real-time streaming data to destinations such as Amazon Simple Storage Service, Amazon Redshift, etc.

What is the use of Amazon Transfer Acceleration Service?

An Amazon Transfer Acceleration Service is a service that enables fast and secure transfer of data between your client and S3 bucket. 

How will you access the data on EBS in AWS?

EBS stands for Elastic Block Store. It is a virtual disk in the cloud: you create a storage volume and attach it to an EC2 instance. It can back databases as well as store files. The volume can be mounted as a file system, and the data on it accessed directly.


Differences between horizontal scaling and vertical scaling?

Vertical scaling means adding more compute power, such as CPU and RAM, to your existing machine, while horizontal scaling means adding more machines to your server pool or database. In other words, horizontal scaling increases the number of nodes and distributes the tasks among them.

Compare between AWS and OpenStack.

  • License: AWS is Amazon proprietary; OpenStack is open source.
  • Operating system: On AWS you run whatever AMIs AWS provides; on OpenStack you run whatever the cloud administrator provides.
  • Repeatable operations: AWS performs them through templates; OpenStack performs them through text files.

What is the importance of buffer in Amazon Web Services?

An Elastic Load Balancer ensures that incoming traffic is distributed optimally across various AWS instances. A buffer synchronizes different components and makes the system more resilient to bursts of load or traffic. Without it, components tend to receive and process requests at uneven, unstable rates; the buffer creates an equilibrium between them so they work at a similar rate and deliver faster service.


How are Spot Instance, On-demand Instance, and Reserved Instance different from one another?

Both Spot Instance and On-demand Instance are models for pricing.

Spot Instance:

  • Customers can purchase spare compute capacity with no upfront commitment at all.
  • Spot Instances are spare Amazon EC2 instances that you can bid for.
  • When your bid exceeds the current spot price, the instance is launched; the spot price fluctuates based on supply and demand for instances.
  • When the spot price rises above your bid, the instance is taken away by Amazon.
  • Spot Instances are charged on an hourly basis.

On-demand Instance:

  • Users can launch instances at any time based on demand.
  • On-demand Instances are suitable for the high-availability needs of applications.
  • They are launched by users with the pay-as-you-go model.
  • They remain persistent, with no automatic termination from Amazon.
  • On-demand Instances are charged on a per-second basis.
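The classic spot bidding behavior can be sketched as a tiny state function; the prices below are illustrative:

```python
def spot_state(bid, spot_price):
    """Classic spot bidding model: the instance runs while your bid
    meets or exceeds the fluctuating spot price, and is reclaimed by
    Amazon as soon as the spot price rises above your bid."""
    return "running" if bid >= spot_price else "terminated"

# Illustrative spot price history for one instance type.
price_history = [0.031, 0.045, 0.052, 0.038]
print([spot_state(0.040, p) for p in price_history])
```

(AWS has since simplified Spot so that explicit bidding is optional, but the reclaim-when-price-exceeds-your-maximum logic is the same.)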

Why do we make subnets?

Creating subnets means dividing a large network into smaller ones. Subnets can be created for several reasons; for example, they help reduce congestion by ensuring that traffic destined for a subnet stays within that subnet. This makes routing more efficient and reduces the overall load on the network.

Which one of the storage solutions offered by AWS would you use if you need extremely low pricing and data archiving?

Amazon Glacier. AWS Glacier is an extremely low-cost storage service offered by Amazon for data archiving and backup. It is designed for long-term retention, so the longer the storage period, the more cost-effective Glacier becomes compared with other storage classes.


Describe RTO & RPO from AWS perspective?

RTO (Recovery Time Objective) refers to the maximum waiting time for resumption of AWS services/operations during an outage/disaster. Due to unexpected failure, firms have to wait for the recovery process, and the maximum waiting time for an organization is defined as the RTO. When an organization starts using AWS, they have to set their RTO, which can also be called a metric. It defines the time firms can wait during disaster recovery of applications and business processes on AWS. Organizations calculate their RTO as part of their BIA (Business Impact Analysis).

Like RTO, RPO (Recovery Point Objective) is also a business metric calculated by a business as part of its BIA. RPO defines the amount of data a firm can afford to lose during an outage or disaster. It is measured in a particular time frame within the recovery period. RPO also defines the frequency of data backup in a firm/organization. For example, if a firm uses AWS services and its RPO is 3 hours, then it implies that all its data/disk volumes will be backed up every three hours.
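The 3-hour RPO example above can be sketched as simple backup-schedule arithmetic:

```python
def backup_times(rpo_hours, day_start=0, day_end=24):
    """With an RPO of N hours, backups must run at least every N hours,
    so at most N hours of data can be lost in a disaster."""
    return list(range(day_start, day_end, rpo_hours))

# RPO of 3 hours -> a backup every 3 hours, i.e. 8 backups per day.
schedule = backup_times(3)
print(schedule)       # hours of the day at which backups run
print(len(schedule))  # backups per day
```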

Explain the auto-scaling feature of EC2 along with its benefits.

The auto-scaling feature in AWS EC2 automatically scales up the computing capacity according to the need. It helps in maintaining a steady performance of business processes. Auto Scaling can help to scale multiple resources in AWS within a few minutes. Besides EC2, one can also choose to automatically scale other AWS resources and tools as and when needed. The benefits of the EC2 auto-scaling feature are as follows:

  • The auto-scaling feature of AWS EC2 is easy to set up. The utilization levels of various resources can be found under the same interface. You do not have to move to different consoles to check the utilization level of multiple resources.
  • The auto-scaling feature is innovative and automates the scaling processes. It also monitors the response of various resources to changes and scales them automatically. Besides adding computing capacity, the auto-scaling feature also removes/lessens the computing capacity if needed.
  • Even if the workload is unpredictable, the auto-scaling feature optimizes the application performance. The optimum performance level of an application is maintained with the help of auto-scaling.
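The scale-out/scale-in decision can be sketched with a simplified target-tracking calculation. This is the proportional rule in its barest form; the real service adds cooldowns, warm-up periods, and min/max capacity limits:

```python
import math

def desired_capacity(current_instances, current_metric, target_metric):
    """Simplified target-tracking rule: resize the fleet so the
    per-instance metric (e.g. average CPU %) returns to the target."""
    desired = math.ceil(current_instances * current_metric / target_metric)
    return max(desired, 1)  # never scale in below one instance

# 4 instances at 90% average CPU with a 50% target -> scale out to 8.
print(desired_capacity(4, 90, 50))
# Load drops to 10% average CPU -> scale in to 1.
print(desired_capacity(4, 10, 50))
```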

What are S3 storage classes, and explain various types of S3 storage classes?

S3 storage classes are used to maintain the integrity and durability of objects, including sustaining concurrent data loss across facilities. Every object you store in S3 is associated with a storage class. Storage classes also participate in object lifecycle management, which automates migration between classes and thus saves cost. The four main types of S3 storage classes are as follows:

  • S3 Standard – Data is duplicated and stored across multiple devices in multiple facilities. The S3 Standard class can sustain the concurrent loss of data in up to two facilities. With its low latency and high throughput, it provides high durability and availability.
  • S3 Standard IA – “S3 Standard Infrequent Access” is used when data is not accessed regularly but must be retrieved quickly when needed. Like S3 Standard, it can sustain the concurrent loss of data in up to two facilities.
  • S3 One Zone Infrequent Access – Many of its features are similar to those of S3 Standard IA. The primary difference is that its availability is lower, i.e., 99.5%, whereas the availability of S3 Standard and Standard IA is 99.99%.
  • S3 Glacier – S3 Glacier is the cheapest storage class compared to the others. Data stored in S3 Glacier is intended for archival use only.


What is a policy in AWS? Explain various types of AWS policies in brief.

A policy is an object in AWS that is associated with a respective resource and defines whether the user request is to be granted or not. The six different types of policies in AWS are as follows:

  • Identity-based policies – These policies are attached to an identity: a user, a group of users, or a role. Identity-based policies store permissions in JSON format and are further divided into managed and inline policies.
  • Resource-based policies – Policies attached to AWS resources are called resource-based policies. An example of such a resource is an S3 bucket.
  • Permissions boundaries – A permissions boundary defines the maximum permissions that identity-based policies can grant to an entity.
  • SCP – Service Control Policies are also stored in JSON format and define the maximum permissions for the accounts in an organization.
  • ACL – An ACL (Access Control List) defines which principals in other AWS accounts can access a resource. It is the only AWS policy type that is not stored in JSON format.
  • Session policies – Session policies limit the permissions that a user’s identity-based policies grant for a session.
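For illustration, a minimal identity-based policy in the JSON format mentioned above might look like this; the bucket name is a placeholder:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    }
  ]
}
```

Attached to a user, group, or role, this statement allows read access to the named bucket and its objects; everything not explicitly allowed remains denied.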

Explain AWS VPC in detail.

Amazon VPC (Virtual Private Cloud) lets a user launch AWS resources into a virtual network defined by the user only. Since the user defines the virtual network, various aspects of the virtual network can be controlled by the user, like subnet creation, IP address, etc.

Firms can install a virtual network within their organization and use all the AWS benefits for that network. Users can also create a routing table for their virtual network using VPC. A routing table is a set of rules that defines the direction of the incoming traffic.

The communication between your virtual network and the internet can also be established using the internet gateway offered by AWS VPC. One can access the VPC offered by Amazon via various interfaces that are AWS management console, AWS CLI (Command Line Interface), AWS SDKs, and Query API. Users can pay for additional VPC components if required like NAT gateway, traffic mirroring, private link, etc.

Your firm wants to connect the data center of its organization to the Amazon cloud environment for faster accessibility and performance. What course of action will you suggest for the stated scenario?

The data center of my firm can be connected to the Amazon cloud environment with the help of a VPC (Virtual Private Cloud). I would suggest the firm establish a virtual private network (VPN) and then connect the VPC and the data center. The firm can then launch AWS resources in the virtual network using the VPC. The VPN connection establishes a secure link between the firm’s data center and the AWS global network. Adding cloud services to the organization will help us do more work in less time while slashing costs in the long run.

I would also suggest creating multiple backups of the company data before moving it successfully to the cloud. AWS offers affordable backup plans, and one can also automate backups after a fixed interval.


Explain various types of elastic load balancers in AWS.

Elastic load balancing in AWS supports three different types of load balancers. The load balancers are used to route the incoming traffic in AWS. The three types of load balancers in AWS are as follows:

  • Application load balancer – The application load balancer makes routing decisions at the application layer. It does path-based routing for HTTP/HTTPS (layer 7) and helps route requests to various container instances. You can route a request to more than one port on the same container instance using the application load balancer.
  • Network load balancer – The network load balancer makes routing decisions at the transport layer (TCP/SSL). It uses a flow-hash routing algorithm to select a target on the port from the target group; once the target is selected, a TCP connection is established with it based on the listener configuration.
  • Classic load balancer – A classic load balancer can decide on either the application layer or the transport layer. One can map a load balancer port to only one container instance (fixed mapping) via the classic load balancer.

What do you know about NAT gateways in AWS?

NAT (Network Address Translation) is an AWS service that helps in connecting an EC2 instance to the internet. The EC2 instance used via NAT should be in a private subnet. Not only the internet but NAT can also help in connecting an EC2 instance to other AWS services.

Since we are using the EC2 instance in a private subnet, connecting to the internet via any other means would make it public. NAT helps in retaining the private subnet while establishing a connection between the EC2 instance and the internet. Users can create NAT gateways or NAT instances for establishing a connection between EC2 instances and internet/AWS services.

NAT instances are single EC2 instances, while NAT gateways can be used across various availability zones. If you are creating a NAT instance, it will support a fixed amount of traffic decided by the instance’s size.

Explain various AWS RDS database types in brief.

Various types of AWS RDS database types are as follows:

  • Amazon Aurora – The Aurora database was developed specifically for AWS RDS, which means it cannot run on any local device; it runs only on AWS infrastructure. This relational database is preferred for its enhanced availability and speed.
  • PostgreSQL – PostgreSQL is a relational database popular with start-ups and developers. This easy-to-use, open-source database helps users scale deployments in the cloud environment. PostgreSQL deployments are not only fast but also cost-effective.
  • MySQL – It is also an open-source database used for its high scalability during deployments in the cloud.
  • MariaDB – MariaDB is an open-source database that is used for deploying scalable servers in the cloud environment. You can deploy MariaDB servers in the cloud environment within a few minutes. The scalable MariaDB server deployment is also cost-effective. MariaDB is also preferred for its management of administrative jobs like scaling, replication, software patching, etc.
  • Oracle – Oracle is a relational database in AWS RDS that can also scale the respective deployments in the cloud. Just like MariaDB, it also performs the management of various administrative tasks.
  • SQL server – It is another relational database that can also manage administrative tasks like scaling, backup, replication, etc. Users can deploy multiple versions of SQL servers in the cloud within minutes. The SQL server deployment is also cost-effective in AWS.
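The engines above map directly to the engine identifiers the RDS API accepts. The sketch below builds the kind of parameter set you would pass when creating an instance; the engine names match common RDS values, but the instance class and storage size are illustrative only, and no real API call is made.

```python
# Sketch: parameters for creating an RDS instance per engine.
# Engine identifiers follow common RDS naming; sizes are illustrative.

RDS_ENGINES = {
    "aurora-mysql": "Amazon Aurora (MySQL-compatible)",
    "aurora-postgresql": "Amazon Aurora (PostgreSQL-compatible)",
    "postgres": "PostgreSQL",
    "mysql": "MySQL",
    "mariadb": "MariaDB",
    "oracle-se2": "Oracle",
    "sqlserver-ex": "Microsoft SQL Server",
}

def create_db_params(engine, identifier, storage_gb=20):
    """Build the request parameters for a hypothetical create call."""
    if engine not in RDS_ENGINES:
        raise ValueError(f"unsupported engine: {engine}")
    return {
        "DBInstanceIdentifier": identifier,
        "Engine": engine,
        "DBInstanceClass": "db.t3.micro",   # placeholder size
        "AllocatedStorage": storage_gb,     # in GiB
    }

params = create_db_params("mariadb", "demo-db")
print(params["Engine"])  # mariadb
```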

Advanced AWS Interview Question

What do you know about Amazon Redshift?

Redshift is a data warehouse service offered by Amazon and deployed in the cloud. It is fast and highly scalable compared to other cloud data warehouses; Amazon claims up to ten times the performance of competing cloud data warehouses on average. It uses technologies like machine learning and columnar storage that account for its high stability and performance. You can scale from terabytes up to petabytes using AWS Redshift.

Redshift uses OLAP (online analytical processing) as its analytics model, and a cluster comprises two types of nodes: a leader node and compute nodes. With its advanced compression and parallel processing, it offers high speed for analytics operations in the cloud. One can easily add new nodes to the warehouse, and developers can answer queries faster and solve complex analytical problems using Redshift.
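The columnar-storage advantage mentioned above can be shown with a toy example. This is a conceptual sketch of row versus column layout in plain Python, not Redshift internals; the sample data is made up.

```python
# Sketch: why columnar storage speeds up analytics.
# Row storage keeps whole records together; columnar storage keeps each
# column's values together, so an aggregate only reads one column.

rows = [
    {"id": 1, "region": "us-east-1", "sales": 120},
    {"id": 2, "region": "eu-west-1", "sales": 340},
    {"id": 3, "region": "us-east-1", "sales": 90},
]

# The same data laid out column by column.
columns = {
    "id": [1, 2, 3],
    "region": ["us-east-1", "eu-west-1", "us-east-1"],
    "sales": [120, 340, 90],
}

# Row store: every full record is scanned even though only "sales" matters.
total_row_store = sum(r["sales"] for r in rows)
# Column store: only the "sales" column is read.
total_col_store = sum(columns["sales"])

print(total_row_store, total_col_store)  # 550 550
```

Because similar values sit together, a column also compresses far better than a mix of fields, which is the other half of Redshift's speed story.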

What do you know about AMI?

AMI (Amazon Machine Image) is used to create a virtual machine within the EC2 environment. The services that are delivered via EC2 are deployed with the help of an AMI. The main part of an AMI is its read-only filesystem image, which includes an operating system. An AMI also carries launch permissions that decide which AWS accounts are permitted to launch instances from it. The volumes attached to an instance during launch are determined by the block device mapping in the AMI. By launch permission, AMIs fall into three categories.

A public AMI is one that any user/account can launch, while users can also opt for a paid AMI, which requires a subscription to the owner's product. A shared AMI gives the owner more flexibility: only the accounts that the owner explicitly allows can launch instances from it.
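The three launch-permission categories can be modeled in a few lines. This is a simplified sketch of the permission logic described above, not the EC2 API; the field names and account IDs are made-up placeholders.

```python
# Sketch: how AMI launch permissions gate who may launch an instance.
# "visibility", "subscribers", and "launch_permissions" are hypothetical
# field names modeling the three AMI categories; account IDs are fake.

def can_launch(ami, account_id):
    if ami["visibility"] == "public":
        return True                                   # anyone may launch
    if ami["visibility"] == "paid":
        return account_id in ami["subscribers"]       # must have purchased
    # "shared": only accounts the owner explicitly allowed
    return account_id in ami["launch_permissions"]

shared_ami = {
    "visibility": "shared",
    "launch_permissions": {"111122223333"},
}

print(can_launch(shared_ami, "111122223333"))  # True
print(can_launch(shared_ami, "444455556666"))  # False
```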

Explain horizontal and vertical scaling in AWS?

When RDS/EC2 servers alter the instance size for scaling purposes, it is called vertical scaling. A larger instance size is picked for scaling up, while a smaller instance size is picked for scaling down. The size of the instance is altered on demand via vertical scaling in AWS.

Unlike vertical scaling, horizontal scaling does not change the size of an instance. Instead, the number of nodes/instances in the system is changed while their size stays the same. Horizontal auto scaling is typically driven by metrics such as the number of connections between the instances and the attached ELB (Elastic Load Balancer).
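The difference between the two strategies can be captured in a toy capacity model. The instance sizes and capacity units below are illustrative, not real EC2 figures.

```python
# Toy model of vertical vs. horizontal scaling.
# Capacity units per size are made up for illustration.

SIZE_CAPACITY = {"small": 1, "medium": 2, "large": 4}

def vertical_scale(fleet, new_size):
    """Keep the same number of instances, change their size."""
    return {"size": new_size, "count": fleet["count"]}

def horizontal_scale(fleet, new_count):
    """Keep the same instance size, change how many there are."""
    return {"size": fleet["size"], "count": new_count}

def capacity(fleet):
    return SIZE_CAPACITY[fleet["size"]] * fleet["count"]

fleet = {"size": "small", "count": 2}               # capacity 2
print(capacity(vertical_scale(fleet, "large")))     # 8: bigger instances
print(capacity(horizontal_scale(fleet, 8)))         # 8: more instances
```

Both paths reach the same capacity, but horizontal scaling adds redundancy (more instances to fail over between), which is why auto scaling groups scale horizontally.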

AWS Interview Question

What are the main differences between AWS and OpenStack?

Both AWS and OpenStack provide cloud computing services to their users. AWS is owned and distributed by Amazon, whereas OpenStack is an open-source cloud computing platform. AWS offers a wide range of cloud services spanning IaaS, PaaS, etc., whereas OpenStack is primarily an IaaS platform. The OpenStack software itself is free to use because it is open source, but you pay for AWS services as you use them.

Another significant difference between AWS and OpenStack lies in how repeatable operations are performed: AWS performs them via templates, while OpenStack does so via text files. OpenStack is good for understanding and learning cloud computing, but AWS is better equipped for businesses, and it also offers business development tools that OpenStack does not.

What do you know about AWS CloudTrail?

People using an AWS account can audit it with AWS CloudTrail, which also helps in ensuring compliance and governance of the account. As soon as an AWS account is created, CloudTrail starts working and records every AWS activity as an event. One can visit the CloudTrail console anytime to view recent events and actions. All actions taken by a user, by a role, or by an AWS service are recorded in CloudTrail.

With CloudTrail, you have enhanced visibility into your AWS account and the actions taken in it. In any organization's AWS infrastructure, you can quickly trace a particular activity and keep control over the account.
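An audit query over those events can be sketched as follows. The field names (`eventName`, `eventSource`, `userIdentity`) mirror keys found in CloudTrail event records, but the sample data is made up for illustration.

```python
# Sketch: CloudTrail records each API action as an event.
# The sample events below are fabricated for illustration.

events = [
    {"eventName": "RunInstances", "eventSource": "ec2.amazonaws.com",
     "userIdentity": {"userName": "alice"}},
    {"eventName": "CreateBucket", "eventSource": "s3.amazonaws.com",
     "userIdentity": {"userName": "bob"}},
    {"eventName": "TerminateInstances", "eventSource": "ec2.amazonaws.com",
     "userIdentity": {"userName": "alice"}},
]

def actions_by_user(events, user):
    """Audit question: which API actions did this user perform?"""
    return [e["eventName"] for e in events
            if e["userIdentity"]["userName"] == user]

print(actions_by_user(events, "alice"))
# ['RunInstances', 'TerminateInstances']
```

This is exactly the kind of question (who did what, and when) that CloudTrail's event history answers in the console or via its lookup API.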
