AWS Solution Architect Associate – Study Notes: Terminologies

Once you start your journey toward the AWS Solution Architect Associate exam, you are expected to become familiar with various key terminologies you will come across during your learning. I'll start with some basic terminology in this post and build on it by updating the post gradually until I appear for the certification.

Cloud Computing [AWS Definition]

Cloud computing is the on-demand delivery of compute power, database storage, applications, and other IT resources through a cloud services platform via the internet with pay-as-you-go pricing.

AWS Global Infrastructure: The AWS infrastructure is defined by global regions, Availability Zones and edge locations to help customers achieve lower latency and higher throughput, and to ensure that their data resides only in the region they specify.

  • Region: A region is a geographic area with two or more Availability Zones. When you provision new AWS services, you
    choose a geographic region where your data will be stored. Your choice of region may take into account:

    • Optimizing latency
    • Minimizing costs
    • Regulatory requirements

    Also note:

    • Any two regions are completely separate from each other.
    • Traffic between regions transfers over the Internet (encrypt your data).
  • Availability Zones
    • A collection of one or more data centers within a region.
    • Isolated from other Availability Zones.
    • Connected to each other by low-latency links (for HA).
    • Protected from failures in other AZs.
    • Can handle requests when another AZ fails.

      Tip: Provision resources across multiple AZs.

  • Edge Locations: AWS edge locations host a content delivery network (or CDN) called Amazon CloudFront. CloudFront can be used to deliver websites and content that is dynamic, static, or streaming. Requests for content are automatically routed to the nearest edge location, so content is delivered faster to customers.

Fault Tolerance: Fault tolerance enables a system to continue running despite a failure. This could be the failure of one or more components of the system, or of a third-party service it depends on. In AWS, this could mean operating your system across multiple Availability Zones.

High Availability (HA): High availability means having little or no downtime in your systems. The gold industry standard in high availability is "five nines," or 99.999% uptime, which equates to less than 5 1/2 minutes of downtime per year.
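To make the five-nines figure concrete, here is a quick back-of-the-envelope calculation in Python (the helper name is mine, not an AWS API):

```python
# Maximum downtime per year allowed by a given availability target.
# "Five nines" (99.999%) permits just over 5 minutes of downtime a year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(availability: float) -> float:
    """Return the maximum yearly downtime (in minutes) for an availability fraction."""
    return MINUTES_PER_YEAR * (1 - availability)

print(f"{downtime_minutes_per_year(0.99999):.2f}")  # five nines -> 5.26 minutes
print(f"{downtime_minutes_per_year(0.999):.2f}")    # three nines -> 525.60 minutes
```

Note how each extra "nine" cuts the allowed downtime by a factor of ten.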

Vertical Scaling/Scale Up: means increasing the capacity of a single resource, e.g. adding memory to a server so it can run more processes. In AWS, this could mean upgrading to a larger instance type.

Horizontal Scaling/Scale Out: involves adding more physical or virtual resources. In AWS, this is what a service like EC2 Auto Scaling does: it adds additional instances based on resource utilization.
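A toy sketch of the scale-out decision an autoscaler makes (the function and numbers are illustrative, not an AWS API): add instances when average utilization rises above a target, remove them when it falls below.

```python
import math

def desired_instance_count(current: int, avg_utilization: float,
                           target: float = 0.6) -> int:
    """Size the fleet so average utilization approaches the target (simplified
    target-tracking model; always keep at least one instance running)."""
    return max(1, math.ceil(current * avg_utilization / target))

print(desired_instance_count(4, 0.9))  # 4 * 0.9 / 0.6 = 6 -> scale out to 6
print(desired_instance_count(6, 0.3))  # 6 * 0.3 / 0.6 = 3 -> scale in to 3
```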

Stateless Systems: means a system that does not store any state. The output of the system depends solely on its inputs, e.g. the UDP protocol.
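The distinction is easy to see in code. A stateless handler returns the same output for the same input every time; a stateful one depends on what happened before (both examples are mine, for illustration):

```python
# Stateless: output depends only on the input, nothing is remembered.
def stateless_double(x: int) -> int:
    return x * 2

# Stateful counterpart for contrast: output depends on prior calls.
class StatefulCounter:
    def __init__(self):
        self.total = 0

    def add(self, x: int) -> int:
        self.total += x
        return self.total

print(stateless_double(3))  # always 6, for any caller, in any order
c = StatefulCounter()
print(c.add(3), c.add(3))   # 3 6 -- same input, different outputs
```

Stateless designs matter for scaling out: any server (or any AZ) can handle any request, because no request depends on locally stored state.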

Serverless Architecture: Serverless architectures refer to applications that significantly depend on third-party services (known as Backend as a Service/BaaS) or on custom code that's run in ephemeral containers (Function as a Service/FaaS), e.g. AWS Lambda.

Elastic IP Addresses: An Elastic IP address is a static IP address designed for dynamic cloud computing. An Elastic IP address is associated with your AWS account. With an Elastic IP address, you can mask the failure of an instance or software by rapidly remapping the address to another instance in your account.

An Elastic IP address is a public IP address, which is reachable from the Internet. If your instance does not have a public IP address, you can associate an Elastic IP address with your instance to enable communication with the Internet; for example, to connect to your instance from your local computer.
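The remapping idea can be modeled as a simple lookup: the Elastic IP is a stable key that can be repointed at a healthy instance. This is a toy Python model of the concept (plain dict, not the EC2 API; the addresses and instance names are made up):

```python
# Toy model of Elastic IP remapping: the static address stays the same,
# only the instance it points at changes.
eip_to_instance = {"203.0.113.10": "i-primary"}

def remap(eip: str, new_instance: str) -> None:
    """Repoint the Elastic IP at another instance, masking the failure."""
    eip_to_instance[eip] = new_instance

# The primary instance fails; clients keep using the same address.
remap("203.0.113.10", "i-standby")
print(eip_to_instance["203.0.113.10"])  # i-standby
```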

Network Address Translation/NAT: is a method for placing all systems on a network behind a single IP address. Each system on the network has its own private IP address; externally, traffic originating from any of those systems appears to come from the same public IP address. This is how a network assigned one IP address by an internet service provider can have multiple systems reaching internet resources without each needing its own public IP address. NAT is a common service and is available to VPCs in AWS.
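A minimal sketch of the translation table behind source NAT (illustrative only, not how AWS implements it): each private (IP, port) pair is mapped to a unique port on the one public address, so return traffic can be routed back to the right system.

```python
import itertools

PUBLIC_IP = "198.51.100.7"            # the single shared public address
_ports = itertools.count(30000)       # next free public-side port
nat_table = {}                        # (private_ip, private_port) -> public_port

def translate(private_ip: str, private_port: int) -> tuple:
    """Map a private source to the shared public IP, reusing existing mappings."""
    key = (private_ip, private_port)
    if key not in nat_table:
        nat_table[key] = next(_ports)
    return (PUBLIC_IP, nat_table[key])

print(translate("10.0.1.5", 4433))  # ('198.51.100.7', 30000)
print(translate("10.0.1.6", 4433))  # ('198.51.100.7', 30001)
print(translate("10.0.1.5", 4433))  # mapping reused: ('198.51.100.7', 30000)
```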

Routing Tables: are a collection of rules that specify how internet protocol traffic should be directed to reach an endpoint. A common route in a routing table directs all traffic headed outside your network through a router; this is how a system can reach websites. Another route might direct all traffic in a certain range to another network over a virtual private network connection. AWS lets you manage your own routing tables for your VPC.
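Route selection follows longest-prefix match: the most specific matching CIDR wins. This small Python sketch uses the standard `ipaddress` module; the route targets are illustrative names, not real AWS identifiers:

```python
import ipaddress

# Route table: destination CIDR -> next hop (targets are made-up labels).
routes = {
    "10.0.0.0/16": "local",         # stay inside the VPC
    "10.8.0.0/16": "vgw-vpn",       # send to a VPN gateway
    "0.0.0.0/0":   "igw-internet",  # default route: everything else
}

def next_hop(dest_ip: str) -> str:
    """Return the target of the most specific route covering dest_ip."""
    dest = ipaddress.ip_address(dest_ip)
    matches = [ipaddress.ip_network(cidr) for cidr in routes
               if dest in ipaddress.ip_network(cidr)]
    best = max(matches, key=lambda n: n.prefixlen)  # longest prefix wins
    return routes[str(best)]

print(next_hop("10.0.3.9"))       # local
print(next_hop("10.8.1.1"))       # vgw-vpn
print(next_hop("93.184.216.34"))  # igw-internet
```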

Access Control List/ACL: defines permissions that are attached to an object. In the world of AWS, you can attach network ACLs to subnets to allow or deny traffic, by protocol and port, to and from various endpoints.
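Network ACLs in AWS evaluate rules in order of rule number, and the first match decides; anything unmatched is implicitly denied. A sketch of that evaluation order (the rules themselves are made up):

```python
# Rules as (rule_number, protocol, port, action); lowest number is checked first.
rules = [
    (100, "tcp", 443, "allow"),  # allow HTTPS
    (200, "tcp", 22,  "deny"),   # deny SSH
]

def evaluate(protocol: str, port: int) -> str:
    """First matching rule (by ascending rule number) wins; default is deny."""
    for _, proto, p, action in sorted(rules):
        if proto == protocol and p == port:
            return action
    return "deny"  # implicit deny when nothing matches

print(evaluate("tcp", 443))  # allow
print(evaluate("tcp", 22))   # deny
print(evaluate("udp", 53))   # deny (no matching rule)
```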

Load Balancer: works to distribute traffic across a number of servers. It can be a physical or virtual resource. Traffic is directed to registered servers based on algorithms that typically seek an even load or a round-robin-style distribution. A client may be directed to a different server on each request.
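Round-robin distribution can be sketched in a few lines: each request simply goes to the next server in rotation (server names are placeholders):

```python
import itertools

# Registered backend servers, visited in a fixed rotation.
servers = ["server-a", "server-b", "server-c"]
rotation = itertools.cycle(servers)

def route_request() -> str:
    """Send each incoming request to the next server in the rotation."""
    return next(rotation)

print([route_request() for _ in range(5)])
# ['server-a', 'server-b', 'server-c', 'server-a', 'server-b']
```

Real load balancers add health checks on top of this, skipping servers that fail to respond, which is what makes them a building block for both HA and horizontal scaling.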