AWS Quick Recap

Amazon Web Services (AWS) currently leads the public cloud market, followed by Microsoft Azure and Google Cloud Platform. Per Gartner’s 2018 Magic Quadrant for Cloud IaaS, consolidation of the public cloud market is almost complete: only three hyper-scale cloud providers remain among the Leaders, with most of the others dropped from the list because they didn’t meet this year’s “inclusion criteria”. More than a decade after pioneering the cloud landscape, AWS has secured a solid place at the top.

Except for a few legacy or proprietary systems that need dedicated hardware to run, any application architecture that can be virtualized can be onboarded to the AWS cloud. The challenge lies in identifying the solution that is most cost-effective, manageable and efficient in the long run.

AWS offers a one-year free tier for you to test out their services. Sign up at: https://aws.amazon.com/free/

Once you log in, you will be taken to the Console homepage that lists all the services AWS has on offer. From here, you can select an available service and start building. There is extensive documentation for each service to help you set up your project.

AWS Console homepage listing the various services available. Source: Amazon Web Services

AWS provides a variety of IaaS and PaaS services, ranging from virtual machines to block and object storage, microservices support, databases, virtual private networks (VPCs), DNS services, big data analytics, serverless compute, content distribution (CDN), AI/ML, IoT, VR and more.

In this article, we will take a quick look at some of the core services available.

AWS Foundation Services

source: Amazon Web Services

AWS releases new features regularly. Always refer to the official documentation for updated information.

AWS Platform Services

source: Amazon Web Services

AWS Global Infrastructure

Amazon maintains multiple data centers around the world from which it provides cloud services. As a general rule, users should provision resources from the region closest to them for better performance and lower latency. Note that some services are only available in certain regions (AWS GovCloud, AWS Mobile Hub, AWS Organizations), while others are global (IAM, Route 53, STS, CloudFront etc.).

See this link for a detailed list of services by region: https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/

As of this writing, the AWS Cloud spans 55 Availability Zones within 18 geographic Regions and 1 Local Region around the world, with announced plans for 12 more Availability Zones and four more Regions in Bahrain, Hong Kong SAR, Sweden, and a second AWS GovCloud Region in the US.

source: Amazon Web Services

AWS Regions

An AWS Region is a geographical location with a collection of availability zones mapped to physical data centers in that region. Every Region is physically isolated from and independent of every other Region in terms of location, power, water supply, etc.

source: Amazon Web Services

AWS Availability Zones

  • An Availability Zone (AZ) is made of 1 or more data centers.

  • Each data center typically houses 50,000–80,000 servers.

  • Each zone in a Region has redundant and separate power, networking and connectivity to reduce the likelihood of two zones failing simultaneously.

  • A common misconception is that a single zone equals a single data center. In fact, each zone is backed by one or more physical data centers, with the largest backed by five.

  • AZs within a region are connected through low-latency links (<2 ms).

  • Synchronous replication is supported between Availability Zones.

High Availability and Resilience with AZs

  • When choosing where to deploy virtual machines, services and applications in AWS, you select the Availability Zone, not a specific data center within an AZ.

source: Amazon Web Services

  • To achieve high availability, you deploy virtual machines, services and applications across multiple Availability Zones, guaranteeing they run in data centers that are isolated from each other.

source: Amazon Web Services

EC2 — Elastic Compute Cloud

Elastic Compute Cloud (EC2) is the compute offering of AWS, providing re-sizable compute capacity in the cloud. Amazon EC2 reduces the time to obtain and boot new server instances to minutes, allowing you to quickly scale capacity up and down as computing requirements change.

You can provision a variety of virtual machines in a number of sizes depending on your requirements. EC2 offers both multi-tenant and single-tenant VMs, as well as bare-metal servers.

EC2 VM instances are built from an AMI (base image). You will be asked to select an AMI while creating a new EC2 instance (virtual machine).

Source: Amazon Web Services

  • Instances are based on an Amazon Machine Image, and several open-source and commercial OS types are available (Windows Server, RHEL, Ubuntu, SUSE Linux etc.).

AMI Lifecycle.

An AMI includes the following:

  • A template for the root volume for the instance (for example, an operating system, an application server, and applications).

  • Launch permissions that control which AWS accounts can use the AMI to launch instances.

  • A block device mapping that specifies the volumes to attach to the instance when it is launched (think /etc/fstab).
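A block device mapping can be pictured as a small data structure. The sketch below uses hypothetical device names and volume sizes (plain Python, not an AWS API call) to show what a two-volume mapping might contain:

```python
# Illustrative sketch of an AMI block device mapping (hypothetical values).
# Each entry maps a device name to the EBS settings applied at launch.
block_device_mappings = [
    {
        "DeviceName": "/dev/xvda",        # root volume (holds the OS template)
        "Ebs": {
            "VolumeSize": 8,              # size in GiB
            "VolumeType": "gp2",
            "DeleteOnTermination": True,  # root volume removed with the instance
        },
    },
    {
        "DeviceName": "/dev/xvdb",        # extra data volume
        "Ebs": {
            "VolumeSize": 100,
            "VolumeType": "gp2",
            "DeleteOnTermination": False, # data volume survives termination
        },
    },
]

# Like /etc/fstab, the mapping tells the instance which volumes attach where.
for m in block_device_mappings:
    print(m["DeviceName"], m["Ebs"]["VolumeSize"], "GiB")
```

The `DeleteOnTermination` flag is worth noting: root volumes are typically cleaned up with the instance, while data volumes can be kept and re-attached elsewhere.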

EC2 Instance types

Users can select from a growing number of EC2 classifications, each one customized at the hardware level for different use cases. While some applications may do well with general purpose instances, others may require optimized hardware for specialized workloads.

source: Amazon Web Services

Choosing the right instance type for your application can drastically improve performance and alleviate bottlenecks.

source: Amazon Web Services

More info: https://aws.amazon.com/ec2/instance-types/

Next, configure the various options for your new instance — network assignments, IAM roles, monitoring and so on. You can also request multiple instances at this time if required.

source: Amazon Web Services

Add storage. At this point you can add additional disk capacity, either in the form of EBS volumes (persistent storage) or instance stores (ephemeral storage that only persists for the life of your instance).

source: Amazon Web Services

Optionally, you can add tags to manage your instances. Tags enable you to categorize your EC2 instances, e.g. webservers / appservers. This can be especially useful if you have a large server estate with similar instances.

source: Amazon Web Services
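As a sketch of how tags help with categorization, the snippet below groups a few hypothetical instances by a made-up Role tag (plain Python, not an AWS API call):

```python
# Hypothetical instance inventory with tags (illustrative data only).
instances = [
    {"id": "i-0aaa", "tags": {"Role": "webserver", "Env": "prod"}},
    {"id": "i-0bbb", "tags": {"Role": "appserver", "Env": "prod"}},
    {"id": "i-0ccc", "tags": {"Role": "webserver", "Env": "dev"}},
]

# Group instance ids by the Role tag, as you might when operating a large estate.
by_role = {}
for inst in instances:
    by_role.setdefault(inst["tags"]["Role"], []).append(inst["id"])

print(by_role)  # {'webserver': ['i-0aaa', 'i-0ccc'], 'appserver': ['i-0bbb']}
```

The same idea underpins tag-based cost allocation and automation: any tooling that can filter on tags can operate on whole categories of instances at once.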

An important step is to configure the security groups for your instance. Security groups act as firewalls that operate at the EC2 instance level or the ENI (elastic network interface) level. You can allow or disallow specific types of traffic to your instance; a large number of protocols are supported (SSH, TCP, HTTP, HTTPS, RDP, LDAP, UDP, ICMP etc.).

Finally, review all the options you selected and submit. You will be presented with an option to create a key file. Save the key file in a secure location; if you lose your key, you won’t be able to access your instance.

In a few minutes, you should be able to see your EC2 instance up and running.

You can now connect to your instance easily — https://docs.aws.amazon.com/quickstarts/latest/vmlaunch/step-2-connect-to-instance.html

Be sure to terminate your instances once you are done with them, to avoid charges to your account.

Monitoring

  • Instances are launched into an existing VPC subnet.

  • CloudWatch monitoring is enabled by default for CPU Utilization & Network I/O.

  • Memory and disk metrics require an additional script that posts to a custom CloudWatch metric (chargeable).

  • You can also set up CloudWatch monitoring to automatically recover EC2 instances that become impaired due to underlying hardware failures that require AWS involvement to repair.

User Data

You can also bootstrap your EC2 instances at launch by passing user data to perform automated configuration tasks. By default, user data scripts run only once per instance, at first boot, though they can be configured to run on every restart.

User data can be:

  • Linux scripts – executed by cloud-init

  • Windows batch or PowerShell scripts – executed by the EC2Config service
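A minimal example of what a Linux user data script might look like, assuming an Amazon Linux 2-style instance (the package names and commands are illustrative):

```shell
#!/bin/bash
# Hypothetical first-boot script (executed once by cloud-init, as root).
# Installs a web server and publishes a placeholder page.
yum update -y
yum install -y httpd
systemctl enable --now httpd
echo "Hello from $(hostname -f)" > /var/www/html/index.html
```

You paste this into the "User data" field (or pass it via the CLI/API) when launching the instance; by the time the instance is reachable, the web server is already up.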

EC2 Purchase options

Billing for your EC2 VMs is metered by the second, and you can choose from a variety of purchase options based on your application usage and budget.

source: Amazon Web Services
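To see why the purchase option matters, here is a rough cost comparison using hypothetical prices (real AWS rates vary by instance type, region and term):

```python
# Hypothetical hourly rates for one instance type (NOT real AWS prices).
on_demand_hourly = 0.10           # pay only while running, no commitment
reserved_effective_hourly = 0.06  # 1-year commitment, billed for the full term

hours_per_month = 730

def monthly_cost(hourly_rate, utilization):
    """On-demand cost for one month at a given utilization (0.0-1.0)."""
    return hourly_rate * hours_per_month * utilization

# A server running 24/7 is cheaper reserved...
always_on_od = monthly_cost(on_demand_hourly, 1.0)          # ~73.00
always_on_ri = reserved_effective_hourly * hours_per_month  # ~43.80

# ...but a batch job running 20% of the time is cheaper on-demand,
# because a reservation bills whether or not the instance runs.
batch_od = monthly_cost(on_demand_hourly, 0.2)              # ~14.60
```

The break-even point depends on utilization: steady workloads favor reservations, spiky or short-lived ones favor on-demand (or spot, if interruption is tolerable).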

Amazon Simple Storage Service (S3)

S3 is an object storage service for the cloud with a simple web service interface to store and retrieve any amount of data from anywhere on the web.

S3 supports multiple file formats, and is a highly available and durable store for your photos, videos, documents, logs and more. S3 can also be used to store backup snapshots and as a storage vault for long-term archival.

S3 features

  • S3 is Object-based i.e. data is stored as objects within buckets

  • An object is composed of a file and optionally any metadata that describes that file

  • You can store an unlimited number of objects in a bucket, and have up to 100 buckets in each account

  • Objects can be up to 5 TB; no bucket size limit

  • Designed for 99.999999999% durability and 99.99% availability of objects over a given year

  • Can use HTTP/S endpoints to store and retrieve any amount of data, at any time, from anywhere on the web

  • Auditing is provided by access logs

  • You can control access to the bucket and its objects

  • Provides standards-based REST and SOAP interfaces

  • Supports versioning and Multi-factor authentication (MFA) to prevent accidental deletion of data

  • Supports static website hosting.

source: Amazon Web Services

  • S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket. Transfer Acceleration takes advantage of Amazon CloudFront’s globally distributed edge locations.

  • Cross-region replication can be enabled for asynchronous copying of objects across different AWS Regions.

  • Pay only for what you use, no minimum fee
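The eleven-nines durability figure above translates into very small expected losses; a quick back-of-the-envelope calculation:

```python
# S3 is designed for 99.999999999% (11 nines) durability per object per year.
durability = 0.99999999999
annual_loss_probability = 1 - durability  # ~1e-11

# Expected objects lost per year if you store 10 million objects:
objects = 10_000_000
expected_losses_per_year = objects * annual_loss_probability
# ~0.0001: on average, one object lost roughly every 10,000 years.
print(expected_losses_per_year)
```

Durability (will the data survive?) and availability (can I read it right now?) are different guarantees, which is why S3 quotes 99.999999999% for one and 99.99% for the other.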

S3 Storage Classes

You can choose from different storage classes with S3 based on your requirements:

source: Amazon Web Services

Amazon Elastic Block Store (EBS)

Amazon Elastic Block Store (EBS) provides persistent block storage (disk drives) for EC2 instances, offering low-latency performance. Data stored in EBS volumes is automatically replicated within its Availability Zone.

Use Cases

  • Databases: Scales with your performance needs

  • Business continuity: Minimize data loss and recovery time by regularly backing up using EBS Snapshots

  • Applications: Install and persist any application

source: Amazon Web Services

EBS volumes can be encrypted during launch. Once encrypted, data stored at rest on the volume, disk I/O, and snapshots created from the volume are all encrypted.

You can create snapshots of your volumes, which are backed up to S3 storage. Note that any snapshots taken of encrypted volumes are also encrypted.

Comparing S3 and EBS

source: Amazon Web Services

Amazon VPC

Amazon VPC lets you provision a logically isolated section of the AWS cloud where you can launch AWS resources in a virtual network that you define.

You can build a private, isolated virtual network on the cloud from the ground up with complete control over the networking environment. Applications / Servers can be placed into private and public subnets as required (e.g. web servers in the public subnets and DB servers in the private subnets).

There are various configurations available:

VPC with a Single Public Subnet; VPC with Public and Private Subnets (NAT); VPC with Public and Private Subnets and AWS Managed VPN Access; and VPC with a Private Subnet Only and AWS Managed VPN Access.
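Subnet planning is largely CIDR arithmetic. The sketch below carves a hypothetical 10.0.0.0/16 VPC range into /24 subnets using Python's ipaddress module (the ranges and their public/private roles are illustrative):

```python
import ipaddress

# A hypothetical VPC address range.
vpc = ipaddress.ip_network("10.0.0.0/16")

# Split the /16 into /24 subnets of ~256 addresses each
# (AWS reserves 5 addresses in every subnet).
subnets = list(vpc.subnets(new_prefix=24))

public_subnet = subnets[0]   # 10.0.0.0/24 -> e.g. web servers
private_subnet = subnets[1]  # 10.0.1.0/24 -> e.g. DB servers

print(public_subnet, private_subnet, len(subnets))
```

Note that a subnet is only "public" or "private" by virtue of its route table (whether it has a route to an internet gateway), not the address range itself.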

Learn how to set up a VPC — https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/getting-started-ipv4.html

source: Amazon Web Services

A VPC is made up of multiple configurable components:

Route tables: A route table contains rules for routing traffic within a subnet and from the subnet to the outside world. You can create multiple route tables and associate subnets with them as required. A subnet can be associated with only one route table at a time.

Internet gateways (IGW): An IGW allows you to make a subnet public by providing a route to the internet. All EC2 instances within the subnet can access the internet only through this gateway. Also, resources from the internet can access the instances in your subnet using this gateway.

NAT instances: Resources like EC2 instances that live inside a private subnet cannot have a public IP address attached to their ENIs, and therefore cannot communicate across the Internet directly via the IGW. A network address translation (NAT) instance is an EC2 instance that is used to allow resources in a private subnet to communicate with resources on the Internet.

Security groups: Security groups act as firewalls that operate at the EC2 instance level. Security groups are stateful, meaning that if you allow inbound traffic on port 80, the corresponding outbound response traffic is automatically allowed as well.

Network access control lists (NACLs): NACLs act as firewalls that allow or block traffic at the subnet level. NACLs are stateless, meaning you need to open both inbound and outbound ports explicitly.
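The stateful/stateless distinction can be illustrated with a toy model (the rules and evaluation logic here are deliberately simplified, and are not how AWS evaluates rules internally):

```python
# Toy model contrasting stateful (security group) and stateless (NACL) filtering.

def security_group_allows(inbound_rules, direction, port, is_reply):
    """Stateful: replies to allowed inbound connections pass automatically."""
    if direction == "in":
        return port in inbound_rules
    return is_reply  # outbound reply permitted via connection tracking

def nacl_allows(inbound_rules, outbound_rules, direction, port):
    """Stateless: each direction is evaluated independently, no tracking."""
    rules = inbound_rules if direction == "in" else outbound_rules
    return port in rules

sg_inbound = {80}
# A request on port 80 and its reply both pass the security group:
assert security_group_allows(sg_inbound, "in", 80, is_reply=False)
assert security_group_allows(sg_inbound, "out", 80, is_reply=True)

# The NACL blocks the reply unless outbound traffic is opened explicitly:
assert nacl_allows({80}, set(), "in", 80)
assert not nacl_allows({80}, set(), "out", 80)
```

In practice this is why NACLs usually need the ephemeral port range (e.g. 1024–65535) opened for return traffic, while security groups do not.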

Customer gateway (CGW) and virtual private gateway (VGW): These are used to create a VPN connection between the customer network and AWS. The customer gateway is the gateway or firewall in your corporate network. The virtual private gateway is the VPN concentrator that sits on the edge of your VPC.

VPC Peering

You can also connect VPCs together in the same AWS region. Once connected, instances in both VPCs can communicate with very low latency, as if they were within the same data center.

source: Amazon Web Services

Note that transitive peering is not supported, i.e. you cannot connect to VPC-C from VPC-B through VPC-A.
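The non-transitivity rule can be sketched as a simple reachability check over hypothetical peering connections:

```python
# Toy check showing that VPC peering is not transitive (hypothetical VPCs).
peerings = {("VPC-A", "VPC-B"), ("VPC-A", "VPC-C")}  # A<->B and A<->C are peered

def can_communicate(src, dst):
    # Only a direct peering connection carries traffic;
    # routes never hop through an intermediate VPC.
    return (src, dst) in peerings or (dst, src) in peerings

assert can_communicate("VPC-B", "VPC-A")      # direct peering works
assert not can_communicate("VPC-B", "VPC-C")  # no transit through VPC-A
```

If VPC-B and VPC-C need to talk, you create a third peering connection between them directly (or use a hub service designed for transitive routing).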

Wrap up

What we have seen here today is just the tip of the iceberg, but I believe it will be enough to get you interested in what AWS has to offer.

While getting started with AWS is easy, it takes a lot of expertise to efficiently leverage Amazon’s extensive service portfolio — finding the right fit and optimizing for performance and cost can be challenging.

Gaining a strong understanding of the building blocks, and knowing how to fit and re-fit those pieces, will be helpful in the long run as organizations move on to larger, more complex environments.