Introduction
The AWS Command Line Interface (CLI) is a powerful tool that allows you to manage and interact with AWS services directly from your terminal. This guide provides detailed instructions on configuring the AWS CLI, managing S3 buckets, handling IAM roles and policies, operating EC2 instances, and deploying Lambda functions. Each section includes command examples and explanations to help you use the AWS CLI effectively.
TL;DR
You can find a shorter cheat sheet version of this article here.
Configuration
AWS Command Line Interface (CLI) configuration allows users to set up their environment for interacting with AWS services from the command line. Configuration typically includes specifying access credentials, default region, output format, and other options.
Setting Up Your AWS CLI
To configure the default AWS CLI profile, run:
aws configure
This command will prompt you to enter your AWS Access Key ID, Secret Access Key, region, and output format. These credentials are saved in ~/.aws/credentials and ~/.aws/config.
To configure a specific profile, use:
aws configure --profile project1
This allows you to create and manage multiple profiles, useful for handling different projects or environments.
To execute CLI commands with a specific profile:
aws --profile PROFILE-NAME iam get-user
Replace PROFILE-NAME with the name of your configured profile. This comes in handy when you have different credentials or configurations for various AWS environments or projects.
AWS Credentials File Example
You can manually edit the ~/.aws/credentials file to include multiple profiles:
[default]
aws_access_key_id = YOUR_ACCESS_KEY
aws_secret_access_key = YOUR_SECRET_KEY
[project1]
aws_access_key_id = YOUR_ACCESS_KEY
aws_secret_access_key = YOUR_SECRET_KEY
This file stores your AWS credentials, allowing the CLI to authenticate your requests. Each profile section contains the AWS Access Key ID and Secret Access Key for a specific set of AWS credentials.
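Since the credentials file is plain INI format, its profiles can also be inspected programmatically. A minimal sketch using Python's standard configparser, with placeholder keys instead of real credentials:

```python
import configparser

# Example contents of ~/.aws/credentials (placeholder keys, not real credentials)
CREDENTIALS = """\
[default]
aws_access_key_id = AKIAEXAMPLE
aws_secret_access_key = SECRETEXAMPLE

[project1]
aws_access_key_id = AKIAEXAMPLE2
aws_secret_access_key = SECRETEXAMPLE2
"""

parser = configparser.ConfigParser()
parser.read_string(CREDENTIALS)

# Each INI section corresponds to one named profile
profiles = parser.sections()
print(profiles)  # ['default', 'project1']
```

In practice you would point `parser.read()` at the real file path instead of an inline string; the section names map directly to the `--profile` values the CLI accepts.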
Managing S3 Buckets
Amazon S3 buckets are containers used to store objects in Amazon Simple Storage Service (S3). Each bucket can hold an unlimited number of objects, such as files, images, videos, and data backups. Buckets provide a way to organize and manage data, allowing users to set permissions, configure access controls, and define storage policies. They are globally unique within S3 and can be configured for various functionalities, including versioning, lifecycle management, logging, and cross-region replication. S3 buckets are essential for scalable, secure, and efficient cloud storage.
Listing S3 Buckets and Objects
To list the contents of an S3 bucket, use:
aws s3 ls <bucket-name>
or with S3Uri:
aws s3 ls s3://<bucket-name>
This command will display all objects within the specified bucket.
You can list bucket contents without signing the request (useful for public buckets):
aws s3 ls --no-sign-request s3://<bucket-name>/
This is helpful when accessing public S3 buckets that do not require authentication.
Specify a region if necessary:
aws s3 ls s3://<bucket-name>/ --no-sign-request --region <region-name>
This command is useful when the bucket is located in a specific AWS region and you want to avoid the default region settings.
To find the region where a bucket is located, we can craft a short one-liner in Bash:
for r in $(cat aws-regions.txt); do echo -n "testing region: $r: "; aws s3 ls s3://<bucket-name>/ --region "$r"; done
This loop iterates through a list of regions and attempts to list the bucket’s contents, helping you identify the correct region.
Downloading from S3
To sync an S3 bucket to your local machine:
aws s3 sync s3://bucket-name/ . --no-sign-request --region us-west-2
This command will download all objects from the specified S3 bucket to your current directory, using the specified region and without signing the request.
Copying to and from S3
Copy a file from an S3 bucket:
aws s3 cp s3://bucket-name/test.jpg test.jpg
or specifying an alternate endpoint for the API requests:
aws --endpoint-url https://s3.amazonaws.com s3 cp s3://bucket-name/test.jpg test.jpg
These commands download the specified file from the S3 bucket to your local directory. The second form sets the endpoint URL explicitly, which is useful for S3-compatible services or nonstandard endpoints.
To copy a directory to an S3 bucket:
aws s3 cp mydir s3://mybucket/dir --recursive
This command uploads all files within the mydir directory to the specified S3 bucket, maintaining the directory structure.
To sync a local directory with an S3 bucket:
aws s3 sync mydir s3://mybucket/dir --exclude "*.tmp"
This command synchronizes the contents of the local directory with the S3 bucket, excluding files that match the specified pattern.
Creating and Removing S3 Buckets
Create a new S3 bucket:
aws s3 mb s3://test-bucket-name
This command creates a new S3 bucket with the specified name.
Alternatively, using s3api:
aws s3api create-bucket --bucket test-bucket-name
This command achieves the same result but uses the s3api command for more granular control.
Remove a file from a bucket:
aws --endpoint-url https://s3.amazonaws.com s3 rm s3://bucket-name/test.jpg
This command deletes the specified file from the S3 bucket.
Remove a directory from a bucket:
aws s3 rm s3://mybucket/dir --recursive
This command deletes all files within the specified directory in the S3 bucket.
Remove non-empty S3 buckets created on a specific date:
aws s3 ls | grep '2024-06' | cut -d" " -f3 | xargs -I{} aws s3 rb s3://{} --force
This command lists all S3 buckets, filters them by the creation date, and forcefully removes the matching buckets.
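The text-processing part of that pipeline can also be sketched in Python. The listing below is placeholder text in the shape that `aws s3 ls` produces (creation date, time, bucket name); the helper selects buckets created in a given month:

```python
# Placeholder output shaped like `aws s3 ls`: creation date, time, bucket name
listing = """\
2024-05-30 09:12:01 old-bucket
2024-06-03 14:22:45 temp-bucket-a
2024-06-17 08:05:10 temp-bucket-b
"""

def buckets_created_in(listing: str, month: str) -> list[str]:
    """Return bucket names whose creation date starts with `month` (YYYY-MM)."""
    names = []
    for line in listing.splitlines():
        date, _time, name = line.split(maxsplit=2)
        if date.startswith(month):
            names.append(name)
    return names

print(buckets_created_in(listing, "2024-06"))
# ['temp-bucket-a', 'temp-bucket-b']
```

Each returned name could then be fed to `aws s3 rb s3://NAME --force`, mirroring the xargs step in the shell pipeline above.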
Managing IAM Roles and Policies
AWS Identity and Access Management (IAM) roles and policies are essential components for managing permissions in AWS.
IAM Roles: These are identities you can create in your AWS account with specific permissions. Unlike IAM users, roles are not associated with a specific user but can be assumed by anyone or anything that needs them, such as an AWS service, an application, or a user in another AWS account. Roles help enhance security by granting temporary access to resources.
IAM Policies: These are documents that define permissions and can be attached to IAM users, groups, or roles. Policies specify allowed or denied actions and resources, enabling fine-grained control over AWS services. Policies are written in JSON format and can be managed and applied to ensure that only the necessary access is granted, following the principle of least privilege.
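As an illustration of the JSON policy format, here is a minimal read-only policy built as a Python dictionary. The bucket name example-bucket is hypothetical; note that object-level actions and bucket-level actions need different ARN forms:

```python
import json

# A minimal least-privilege policy: read-only access to one hypothetical bucket.
# s3:ListBucket applies to the bucket ARN; s3:GetObject applies to objects (/*).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
        }
    ],
}

document = json.dumps(policy, indent=2)
print(document)
```

A document like this could be saved to a file and attached with `aws iam create-policy --policy-name NAME --policy-document file://policy.json`.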
Retrieving IAM Information
To get the account name connected with the key:
aws sts get-caller-identity
This command returns details about the AWS account and user making the request, including the account ID, user ID, and ARN.
To get the username associated with your IAM credentials:
aws iam get-user
This command retrieves information about the specified IAM user, including the user’s ARN, creation date, and path.
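Both commands return JSON, so the output is easy to post-process in scripts. A sketch that pulls the account ID and user name out of a response shaped like `aws sts get-caller-identity` output (all values below are placeholders):

```python
import json

# Placeholder response shaped like `aws sts get-caller-identity` output
response = """{
    "UserId": "AIDAEXAMPLEID",
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:user/alice"
}"""

identity = json.loads(response)
# The ARN encodes the account ID (field 4) and the IAM user name (after the /)
account_id = identity["Arn"].split(":")[4]
user_name = identity["Arn"].split("/")[-1]
print(account_id, user_name)  # 123456789012 alice
```

For simple cases the CLI can do this filtering itself, e.g. `aws sts get-caller-identity --query Arn --output text`.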
Managing IAM Policies
List policies attached to a user:
aws iam list-attached-user-policies --user-name USER-NAME
This command lists all managed policies attached to the specified IAM user.
Get details of a specific policy:
aws iam get-policy --policy-arn POLICY-ARN
This command retrieves information about the specified managed policy, including its ARN, default version ID, and policy details.
Retrieve a specific version of a policy:
aws iam get-policy-version --policy-arn POLICY-ARN --version-id VERSION-ID
This command fetches the details of a specific version of the specified managed policy.
Operating EC2 Instances
Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers. With EC2, you can launch virtual servers, known as instances, to run applications and services. You have full control over the instances, including the choice of operating system, storage, networking, and instance type. EC2 offers flexible scaling, allowing you to increase or decrease capacity within minutes, and pay only for the compute time you use. It supports a wide range of workloads, from small-scale applications to enterprise-grade systems, providing robust performance, security, and reliability.
Managing EC2 Instances
Start an EC2 instance:
aws ec2 start-instances --instance-ids INSTANCE-ID
This command starts the specified EC2 instance, making it available for use.
Terminate an EC2 instance:
aws ec2 terminate-instances --instance-ids INSTANCE-ID
This command terminates the specified EC2 instance, permanently deleting it.
List all instances in a region:
aws ec2 --region us-east-1 describe-instances
This command lists all EC2 instances in the specified region, providing details such as instance IDs, states, and types.
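The describe-instances response nests instances inside reservations, which is easy to miss when scripting. A sketch that flattens a trimmed, placeholder response into (instance ID, state) pairs:

```python
import json

# Placeholder response shaped like `aws ec2 describe-instances` output (trimmed)
response = """{
  "Reservations": [
    {"Instances": [
      {"InstanceId": "i-0abc1234", "State": {"Name": "running"}},
      {"InstanceId": "i-0def5678", "State": {"Name": "stopped"}}
    ]}
  ]
}"""

data = json.loads(response)
# Flatten the two-level nesting: each reservation holds a list of instances
instances = [
    (inst["InstanceId"], inst["State"]["Name"])
    for res in data["Reservations"]
    for inst in res["Instances"]
]
print(instances)  # [('i-0abc1234', 'running'), ('i-0def5678', 'stopped')]
```

Roughly the same flattening can be done by the CLI with a JMESPath filter, e.g. `aws ec2 describe-instances --query 'Reservations[].Instances[].[InstanceId,State.Name]' --output text`.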
Check the status of instances:
aws ec2 --region us-east-1 describe-instance-status
This command retrieves the status of all EC2 instances in the specified region, including information about their reachability and system status.
List all available AWS regions:
aws ec2 --region us-east-1 describe-regions
This command lists all AWS regions, providing their region names and endpoint URLs.
Managing Snapshots
Snapshots in Amazon Elastic Compute Cloud (EC2) are backups of Amazon Elastic Block Store (EBS) volumes. They capture the state of the volume at a specific point in time, including all its data, configurations, and settings. Snapshots are incremental backups, meaning only the changed data since the last snapshot is stored, which reduces storage costs and backup time.
List your snapshots:
aws ec2 describe-snapshots --owner-ids self
This command lists all snapshots owned by your AWS account, including snapshot IDs, descriptions, and creation dates.
List attributes of a specific snapshot:
aws ec2 describe-snapshot-attribute --snapshot-id SNAPSHOT-ID --attribute createVolumePermission
This command retrieves the createVolumePermission attribute of the specified snapshot, which controls which accounts are allowed to create volumes from it.
Create an EC2 volume from a snapshot:
aws ec2 create-volume --availability-zone AZ --region REGION --snapshot-id SNAPSHOT-ID
This command creates a new EBS volume from the specified snapshot in the specified availability zone and region.
After creating the volume, attach it to your instance and then mount it inside the VM:
aws ec2 attach-volume --volume-id VOLUME-ID --instance-id INSTANCE-ID --device /dev/sdf
sudo mount /dev/xvdb1 /mnt
The first command attaches the volume to the instance; the second mounts it at /mnt. The device name visible inside the VM (here /dev/xvdb1) depends on the instance type and how the volume was attached.
Managing Lambda Functions
AWS Lambda is a serverless computing service provided by Amazon Web Services (AWS). It enables you to run code without provisioning or managing servers. With Lambda, you can upload your code, and AWS will automatically scale and manage the infrastructure needed to execute it. Lambda supports multiple programming languages, including Node.js, Python, Java, and more.
Lambda functions are triggered by events such as changes to data in Amazon S3 buckets, updates to DynamoDB tables, HTTP requests via API Gateway, or custom events from other AWS services. You pay only for the compute time consumed by your code, making Lambda cost-effective for various use cases, including real-time data processing, backend services, automation, and event-driven architectures.
Listing and Retrieving Lambda Functions
List all Lambda functions in a region:
aws --region us-west-2 lambda list-functions
This command lists all Lambda functions in the specified region, providing details such as function names, ARNs, and runtime environments.
Get the policy of a specific Lambda function:
aws --region us-west-2 lambda get-policy --function-name FUNCTION-NAME
This command retrieves the resource-based policy of the specified Lambda function, including its permissions and principal entities.
Deploying a Lambda Function
This example demonstrates how to deploy a Python Flask application as an AWS Lambda function using AWS Serverless Application Model (SAM).
Steps to deploy Lambda function:
- Create Your Flask Application: Write your Python Flask application code, defining routes and functionality. Ensure your application follows the Lambda function requirements, such as stateless, idempotent, and short-lived executions.
- Prepare SAM Template: Write a SAM template (template.yaml) defining your Lambda function, specifying the handler, runtime, memory size, and timeout. Include any necessary permissions and event sources.
- Package Your Application: Package your Flask application and dependencies, ensuring they are compatible with AWS Lambda’s execution environment. You can use a virtual environment or containerize your application for better isolation.
- Deploy with SAM CLI: Use the AWS SAM CLI to validate your SAM template and package your application. Then, deploy your application to AWS Lambda with the sam deploy command. Follow the guided prompts to configure deployment options, such as IAM roles and resource names.
- Test Your Lambda Function: After deployment, test your Lambda function by invoking it manually or triggering it with events from AWS services like API Gateway or S3. Ensure your Flask application behaves as expected in the Lambda environment.
Creating Lambda Function Files
Example Python Flask application (app.py):
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello, World!"

def handler(event, context):
    return {"statusCode": 200, "body": hello()}

if __name__ == "__main__":
    app.run()
This simple Flask application responds with “Hello, World!” when accessed.
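Because the Lambda entry point is just the handler function, it can be smoke-tested locally before deployment. A self-contained sketch of the same handler contract (Flask is not required for this check):

```python
# Local smoke test of the handler contract: Lambda calls handler(event, context)
# and expects a dict with statusCode and body.
def hello():
    return "Hello, World!"

def handler(event, context):
    return {"statusCode": 200, "body": hello()}

# Simulate an invocation with an empty event and no context object
result = handler({}, None)
print(result)  # {'statusCode': 200, 'body': 'Hello, World!'}
```

If the return shape is wrong here, API Gateway will reject the response after deployment, so this is a cheap check to run first.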
requirements.txt:
Flask
This file lists the dependencies for the Flask application.
CloudFormation template (template.yaml):
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  HelloWorld:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.9
      CodeUri: .
      MemorySize: 128
      Timeout: 3
      Events:
        HelloWorldApi:
          Type: Api
          Properties:
            Path: /
            Method: GET
This CloudFormation template defines the AWS Lambda function and its associated API Gateway endpoint.
Deploying with AWS SAM CLI
Validate the CloudFormation template:
sam validate --lint
This command checks the template for syntax errors and best practices.
Build the Lambda function code:
sam build
This command packages your application and its dependencies for deployment.
Deploy the SAM application:
sam deploy --guided
This command deploys your application to AWS, guiding you through the configuration steps.
Delete the SAM application and its artifacts:
sam delete
This command removes the deployed application and cleans up the associated resources.
Conclusion
The AWS CLI is a versatile tool that empowers you to manage your AWS resources efficiently from the command line. By mastering the commands and configurations outlined in this guide, you can streamline your workflows and automate complex tasks across various AWS services, including S3, IAM, EC2, and Lambda. Whether you are setting up your AWS environment, managing your storage, handling security and access controls, or deploying applications, the AWS CLI offers robust capabilities to enhance your productivity and operational efficiency.
For further details and advanced use cases, refer to the AWS CLI Command Reference and explore more about the AWS services and their integrations.