Configuration
Configure default profile
aws configure
Configure specified profile
aws configure --profile project1
To execute a CLI command with a given profile, use the --profile
argument, for example:
aws --profile PROFILE-NAME iam get-user
AWS credentials file example
Save the AWS access key ID and secret access key in ~/.aws/credentials, for example:
[PROFILE-NAME]
aws_access_key_id = ...
aws_secret_access_key = ...
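The credentials file uses INI format, so it can also be inspected programmatically. A minimal sketch using Python's standard configparser (the profile names and key values below are placeholders, not real credentials):

```python
import configparser

# Placeholder content in the shape of ~/.aws/credentials
sample = """
[default]
aws_access_key_id = AKIA...
aws_secret_access_key = ...

[project1]
aws_access_key_id = AKIA...
aws_secret_access_key = ...
"""

config = configparser.ConfigParser()
config.read_string(sample)

# Each INI section corresponds to a profile name usable with --profile
profiles = config.sections()
print(profiles)  # ['default', 'project1']
```

Reading the real file would just be `config.read(os.path.expanduser('~/.aws/credentials'))` instead of `read_string`.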
S3 buckets
List
aws s3 ls <bucket-name>
or
aws s3 ls s3://<bucket-name>
Do not sign the request (credentials are not loaded; avoids the "Unable to locate credentials" error when none are configured, useful for public buckets)
aws s3 ls --no-sign-request s3://<bucket-name>/
or for specific region
aws s3 ls s3://<bucket-name>/ --no-sign-request --region <region-name>
Find the region in which a given bucket is located
for r in `cat aws-regions.txt`; do echo -n "testing region: $r: "; aws s3 ls s3://<bucket-name>/ --region $r; done
Download
aws s3 sync s3://<bucket-name>/ . --no-sign-request --region us-west-2
Copy
Copy from S3 bucket
aws s3 cp s3://bucket-name/test.jpg test.jpg
or
aws --endpoint-url https://s3.amazonaws.com s3 cp s3://bucket-name/test.jpg test.jpg
Copy directory to S3 bucket
aws s3 cp mydir s3://mybucket/dir --recursive
Sync
aws s3 sync mydir s3://mybucket/dir --exclude "*.tmp"
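The --exclude/--include filters use UNIX-style glob patterns matched against keys (quote them so the shell does not expand them first). Roughly, a sketch using Python's fnmatch, which follows similar glob semantics; the AWS CLI's own filter rules differ in details, so this only illustrates the matching idea:

```python
from fnmatch import fnmatch

# Hypothetical keys under mydir/
keys = ['report.txt', 'cache/a.tmp', 'b.tmp']
pattern = '*.tmp'

# Keys NOT matching the exclude pattern would be synced
synced = [k for k in keys if not fnmatch(k, pattern)]
print(synced)  # ['report.txt']
```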
Create
aws s3 mb s3://test-bucket-name
or using s3api
aws s3api create-bucket --bucket test-bucket-name
Remove
Remove file from bucket
aws --endpoint-url https://s3.amazonaws.com s3 rm s3://bucket-name/test.jpg
Remove directory from the bucket
aws s3 rm s3://mybucket/dir --recursive
Removing S3 buckets
Remove non-empty S3 buckets created on a specific date
aws s3 ls | grep '2023-01' | cut -d" " -f3 | xargs -I{} aws s3 rb s3://{} --force
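The same selection can be done in Python. A sketch of the filtering logic applied to the structure returned by boto3's list_buckets (the bucket names and dates below are made up; against a real account you would take the 'Buckets' list from boto3.client('s3').list_buckets()):

```python
from datetime import datetime, timezone

# Made-up sample mimicking the 'Buckets' list from list_buckets()
buckets = [
    {'Name': 'logs-a', 'CreationDate': datetime(2023, 1, 5, tzinfo=timezone.utc)},
    {'Name': 'logs-b', 'CreationDate': datetime(2023, 1, 20, tzinfo=timezone.utc)},
    {'Name': 'data', 'CreationDate': datetime(2022, 6, 1, tzinfo=timezone.utc)},
]

# Select buckets created in January 2023
to_remove = [b['Name'] for b in buckets
             if b['CreationDate'].strftime('%Y-%m') == '2023-01']
print(to_remove)  # ['logs-a', 'logs-b']
```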
Python snippets
Create S3 bucket
import boto3

bucket_name = 'my-test-bucket'
region = 'us-east-2'

# Outside us-east-1 the region must be passed as a location constraint
s3 = boto3.client('s3', region_name=region)
location = {'LocationConstraint': region}
response = s3.create_bucket(Bucket=bucket_name, CreateBucketConfiguration=location)
print(f'Bucket {bucket_name} created successfully.')
Put object in the bucket
import boto3

bucket_name = 'my-test-bucket'
region = 'us-east-2'
object_key = 'file.txt'
file_path = 'file.txt'

s3 = boto3.client('s3', region_name=region)
with open(file_path, 'rb') as data:
    s3.put_object(Bucket=bucket_name, Key=object_key, Body=data)
print(f'Object {object_key} uploaded to bucket {bucket_name} successfully.')
STS
Get identity associated with the current credentials
aws sts get-caller-identity
Get account number
aws sts get-caller-identity --query Account --output text
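The CLI returns JSON, so the --query step can also be done client-side. A sketch on a sample payload (all values below are fake):

```python
import json

# Fake sample of `aws sts get-caller-identity` output
output = '''
{
    "UserId": "AIDAEXAMPLE",
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:user/example"
}
'''

identity = json.loads(output)
account = identity['Account']
print(account)  # 123456789012
```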
IAM
Get username
aws iam get-user
Policies
List user policies
aws iam list-attached-user-policies --user-name USER-NAME
List role attached policies
aws iam list-attached-role-policies --role-name ROLE-NAME
Get policy
aws iam get-policy --policy-arn POLICY-ARN
Get policy versions
aws iam list-policy-versions --policy-arn POLICY-ARN
Get specific version of the policy
aws iam get-policy-version --policy-arn POLICY-ARN --version-id VERSION-ID
Set default policy version
aws iam set-default-policy-version --policy-arn POLICY-ARN --version-id VERSION-ID
EC2
Start instance
aws ec2 start-instances --instance-ids INSTANCE-ID
Terminate instance
aws ec2 terminate-instances --instance-ids INSTANCE-ID
List instances
aws ec2 --region us-east-1 describe-instances
List instance status
aws ec2 --region us-east-1 describe-instance-status
List regions
aws ec2 --region us-east-1 describe-regions
Snapshots
List snapshots
aws ec2 describe-snapshots --owner-id OWNER-ID
List own snapshots
aws ec2 describe-snapshots --owner-ids self
List snapshot attribute
aws ec2 describe-snapshot-attribute --snapshot-id SNAPSHOT-ID --attribute createVolumePermission
Create EC2 volume from snapshot
aws ec2 create-volume --availability-zone AZ --region REGION --snapshot-id SNAPSHOT-ID
then attach the volume to the instance and mount it: sudo mount /dev/xvdb1 /mnt
Python snippets
Create EC2 micro instance
import boto3

ec2 = boto3.resource('ec2', region_name='us-east-2')
instance_type = 't2.micro'
image_id = 'ami-<image-id>'

instance = ec2.create_instances(
    ImageId=image_id,
    InstanceType=instance_type,
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[
        {
            'ResourceType': 'instance',
            'Tags': [
                {'Key': 'Name', 'Value': 'my-instance'},
            ]
        },
    ]
)
print(f'Instance {instance[0].id} created successfully.')
Create EC2 keypair
import boto3

ec2 = boto3.client('ec2', region_name='us-west-1')
key_name = 'my-key'

response = ec2.create_key_pair(KeyName=key_name, KeyType='ed25519')
with open(f'{key_name}.pem', 'w') as key_file:
    key_file.write(response['KeyMaterial'])
print(f'Key pair {key_name} created successfully.')
Instance Metadata
The metadata endpoint can be accessed from inside any EC2 machine; it is accessible at http://169.254.169.254
Generate token to access meta-data on EC2 instance
TOKEN=`curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600"`
Query metadata endpoint
curl -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/
Lambda
List lambda functions
aws --region us-west-2 lambda list-functions
Get lambda function policy
aws --region us-west-2 lambda get-policy --function-name FUNCTION-NAME
Example deployment of Lambda function
Create Lambda function files
Example Python Flask application app.py:
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello, World!"

def handler(event, context):
    return {"statusCode": 200, "body": hello()}

if __name__ == "__main__":
    app.run()
Python requirements.txt:
Flask
CloudFormation template.yaml:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  HelloWorld:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.9
      CodeUri: .
      MemorySize: 128
      Timeout: 3
      Events:
        HelloWorldApi:
          Type: Api
          Properties:
            Path: /
            Method: GET
Deploy with AWS SAM CLI
Validate template file
sam validate --lint
Build Function code
sam build
Deploy AWS SAM application
sam deploy --guided
Delete SAM application and its artifacts
sam delete
ECR
Describe images
aws ecr describe-images --repository-name REPO-NAME
Get image manifest for a given tag
aws ecr batch-get-image --repository-name REPO-NAME --registry-id REGISTRY-ID --image-ids imageTag=latest | jq '.images[].imageManifest | fromjson'
Get image details
aws ecr batch-get-image --repository-name REPO-NAME --image-ids imageDigest=IMAGE-DIGEST
Get image layer download URL
aws ecr get-download-url-for-layer --repository-name REPO-NAME --layer-digest LAYER-DIGEST
with registry ID
aws ecr get-download-url-for-layer --repository-name REPO-NAME --registry-id REGISTRY-ID --layer-digest LAYER-DIGEST
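The imageManifest returned by batch-get-image is a JSON document whose layer entries carry the digests needed by get-download-url-for-layer. A sketch extracting them from a made-up manifest (the digests below are fake):

```python
import json

# Made-up Docker v2-style manifest, shaped like the imageManifest string
manifest_json = '''
{
    "schemaVersion": 2,
    "config": {"digest": "sha256:aaa"},
    "layers": [
        {"digest": "sha256:bbb", "size": 123},
        {"digest": "sha256:ccc", "size": 456}
    ]
}
'''

manifest = json.loads(manifest_json)
layer_digests = [layer['digest'] for layer in manifest['layers']]
print(layer_digests)  # ['sha256:bbb', 'sha256:ccc']
```

Each digest can then be fed to get-download-url-for-layer as LAYER-DIGEST.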
Secrets manager
List secrets
aws secretsmanager list-secrets
Get secret value
aws secretsmanager get-secret-value --secret-id SECRET-ARN
DynamoDB
List available tables
aws dynamodb list-tables
Return items and attributes from the table
aws dynamodb scan --table-name TABLE-NAME
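scan returns items in DynamoDB's typed attribute format ({'S': ...}, {'N': ...}, ...). A minimal converter for the common types, as a sketch (boto3 ships a complete one in boto3.dynamodb.types.TypeDeserializer; the item below is hypothetical):

```python
def from_dynamodb(attr):
    """Convert a DynamoDB typed attribute value to a plain Python value.
    Handles only the common scalar, list, and map types."""
    (type_tag, value), = attr.items()
    if type_tag == 'S':
        return value
    if type_tag == 'N':
        # DynamoDB sends numbers as strings
        return float(value) if '.' in value else int(value)
    if type_tag == 'BOOL':
        return value
    if type_tag == 'L':
        return [from_dynamodb(v) for v in value]
    if type_tag == 'M':
        return {k: from_dynamodb(v) for k, v in value.items()}
    raise ValueError(f'unsupported type tag: {type_tag}')

# Hypothetical item as it would appear in `aws dynamodb scan` output
item = {'Id': {'N': '42'}, 'Name': {'S': 'alice'}, 'Active': {'BOOL': True}}
plain = {k: from_dynamodb(v) for k, v in item.items()}
print(plain)  # {'Id': 42, 'Name': 'alice', 'Active': True}
```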
SSM
Execute command on a given EC2 instance
aws ssm send-command --instance-ids 'INSTANCE-ID' --document-name "AWS-RunShellScript" --output text --parameters commands="COMMAND"