In this article, we'll explore how AWS CloudFormation simplifies setting up and managing cloud infrastructure. Instead of manually creating resources like servers or databases, you can write down your requirements in a file, and CloudFormation does the heavy lifting for you. This approach, known as Infrastructure as Code (IaC), saves time, reduces errors, and ensures everything is consistent.
We'll also look at how Docker and GitHub Actions fit into the process. Docker makes it easy to package and run your application, while GitHub Actions automates tasks like testing and deployment. Together with CloudFormation, these tools create a powerful workflow for building and deploying applications in the cloud.
Learning Objectives
- Learn how to simplify cloud infrastructure management with AWS CloudFormation using Infrastructure as Code (IaC).
- Understand how Docker and GitHub Actions integrate with AWS CloudFormation for streamlined application deployment.
- Explore a sample project that automates Python documentation generation using AI tools like LangChain and GPT-4.
- Learn how to containerize applications with Docker, automate deployment with GitHub Actions, and deploy via AWS CloudFormation.
- Understand how to set up and manage AWS resources like EC2, ECR, and security groups using CloudFormation templates.
This article was published as a part of the Data Science Blogathon.
What is AWS CloudFormation?
In the world of cloud computing, managing infrastructure efficiently is crucial. This is where AWS CloudFormation comes into the picture: it makes it easier to set up and manage your cloud resources, allowing you to define everything you need (servers, storage, and networking) in a single file.
AWS CloudFormation is a service that helps you define and manage your cloud resources using templates written in YAML or JSON. Think of it as creating a blueprint for your infrastructure. Once you hand over this blueprint, CloudFormation takes care of setting everything up, step by step, exactly as you described.
Infrastructure as Code (IaC) turns your cloud into something you can build, rebuild, and even improve with just a few lines of code. No more manual clicking around, no more guesswork: just consistent, reliable deployments that save you time and reduce errors.
Practical Implementation: A Hands-On Project Example
Streamlining Code Documentation with AI: The Doc Generation Project
To get started with CloudFormation, we need a sample project to deploy to AWS.
I already created a project using LangChain and OpenAI GPT-4. Let's discuss that project, and then look at how it is deployed to AWS using CloudFormation.
GitHub code link: https://github.com/Harshitha-GH/CloudFormation
In the world of software development, documentation plays a vital role in keeping codebases understandable and maintainable. However, writing detailed documentation is often a time-consuming and tedious task. But we are techies, and we want automation in everything. So, as a project to deploy to AWS using CloudFormation, I developed an automation tool using AI (LangChain and OpenAI GPT-4): the Doc Generation Project, a solution that uses AI to automate the documentation process for Python code.
Here's a breakdown of how we built this tool and the impact it aims to create. We follow a few steps to create this project.
Before starting a new project, we have to create a Python environment to install all the required packages. This helps us keep the necessary packages isolated.
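For example, a minimal setup might look like the commands below; the exact package list is an assumption based on the tools used in this project, and the repository's requirements.txt is the authoritative list:

python -m venv venv
source venv/bin/activate
pip install langchain openai flask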
I wrote a function to parse the input file, which typically takes a Python file as input and prints the names of all the functions it contains.
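Below is a minimal sketch of such a parser built on Python's standard ast module; the function and variable names here are illustrative, and the repository's actual implementation may differ:

import ast

def parse_functions(file_path):
    # Read the source file and build an abstract syntax tree
    with open(file_path, "r") as f:
        tree = ast.parse(f.read())
    functions = []
    # Walk the tree and collect details for every function definition
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            functions.append({
                "function_name": node.name,
                "arguments": [arg.arg for arg in node.args.args],
                "docstring": ast.get_docstring(node) or "",
            })
    return functions

# Print the names of all functions in a sample file
for fn in parse_functions("sample.py"):
    print(fn["function_name"])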
Generating Documentation from Code
Once the function details are extracted, the next step is to feed them into OpenAI's GPT-4 model to generate detailed documentation. Using LangChain, we construct a prompt that explains the task we want GPT-4 to perform.
from langchain.prompts import PromptTemplate

prompt_template = PromptTemplate(
    input_variables=["function_name", "arguments", "docstring"],
    template=(
        "Generate detailed documentation for the following Python function:\n\n"
        "Function Name: {function_name}\n"
        "Arguments: {arguments}\n"
        "Docstring: {docstring}\n\n"
        "Provide a clear description of what the function does, its parameters, and the return value."
    )
)
With the help of this prompt, the doc generator function takes the parsed details and generates a complete, human-readable explanation for each function.
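A sketch of how the prompt can be wired to GPT-4 is shown below; the model name, chain style, and helper name are assumptions, since LangChain's API has changed across versions and the repository may structure this differently:

from langchain.chat_models import ChatOpenAI
from langchain.chains import LLMChain

# Assumes an OpenAI API key is available in the environment
llm = ChatOpenAI(model="gpt-4", temperature=0)
doc_chain = LLMChain(llm=llm, prompt=prompt_template)

def generate_documentation(parsed_functions):
    # Run the prompt once per parsed function and collect the generated docs
    return {
        fn["function_name"]: doc_chain.run(
            function_name=fn["function_name"],
            arguments=fn["arguments"],
            docstring=fn["docstring"],
        )
        for fn in parsed_functions
    }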
Flask API Integration
To make the tool user-friendly, I built a Flask API where users can upload Python files. The API parses the file, generates the documentation using GPT-4, and returns it in JSON format.
We can test this Flask API using Postman to verify our output.
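A minimal sketch of such an endpoint, reusing the parse_functions and generate_documentation sketches from above, might look like this; the /generate-docs route name is an illustrative assumption, not necessarily what the repository uses:

import os
import tempfile
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/generate-docs", methods=["POST"])
def generate_docs():
    # Expect a Python file in the multipart form field "file"
    uploaded = request.files.get("file")
    if uploaded is None:
        return jsonify({"error": "no file uploaded"}), 400
    # Save the upload to a temporary file so the parser can read it
    with tempfile.NamedTemporaryFile(suffix=".py", delete=False) as tmp:
        uploaded.save(tmp.name)
        path = tmp.name
    try:
        docs = generate_documentation(parse_functions(path))
    finally:
        os.remove(path)
    return jsonify(docs)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)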
Dockerizing the Application
To deploy our application to AWS and use it there, we need to containerize it using Docker and then use GitHub Actions to automate the deployment process. We will be using AWS CloudFormation for the automation on the AWS side. Service-wise, we will be using Elastic Container Registry (ECR) to store our container images and EC2 to deploy our application. Let us see this step by step.
Creating the Dockerfile
We will create the Dockerfile first. The Dockerfile is responsible for spinning up our respective containers:
# Use the official Python 3.11-slim image as the base image
FROM python:3.11-slim

# Set environment variables to prevent Python from writing .pyc files and buffering output
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1

# Set the working directory inside the container
WORKDIR /app

# Install system dependencies required for Python packages and clean up the apt cache afterwards
RUN apt-get update && apt-get install -y --no-install-recommends \
    gcc \
    libffi-dev \
    libpq-dev \
    python3-dev \
    build-essential \
    && rm -rf /var/lib/apt/lists/*

# Copy the requirements file to the working directory
COPY requirements.txt /app/

# Upgrade pip and install Python dependencies without cache
RUN pip install --no-cache-dir --upgrade pip && \
    pip install --no-cache-dir -r requirements.txt

# Copy the entire application code to the working directory
COPY . /app/

# Expose port 5000 for the application
EXPOSE 5000

# Run the application using Python
CMD ["python", "app.py"]
Docker Compose
Once the Dockerfile is created, we will create a Docker Compose file that spins up the container.
version: '3.8'

services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "5000:5000"
    volumes:
      - .:/app
    environment:
      - PYTHONDONTWRITEBYTECODE=1
      - PYTHONUNBUFFERED=1
    command: ["python", "app.py"]
You can test this by running the command:
docker-compose up --build
After the command executes successfully, the application behaves exactly as it did before.
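For a quick smoke test from the command line instead of Postman, you can post a sample file to the running container; the /generate-docs route here is the same illustrative name assumed in the Flask sketch above:

curl -X POST -F "file=@sample.py" http://localhost:5000/generate-docs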
Creating AWS Services for the CloudFormation Stack

First, I create an ECR repository. Apart from that, we will use GitHub Actions later to create all the other required services.
The repository I have created has the namespace cloud_formation and the repo name demo. Then, I'll proceed with the CloudFormation template, a YAML file that spins up the required instance, pulls the images from ECR, and creates the other resources.
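If you prefer the AWS CLI over the console, a repository with that namespace and name can be created as follows; the region is an assumption, chosen to match the us-east-1 value used later in the template's User Data script:

aws ecr create-repository \
    --repository-name cloud_formation/demo \
    --region us-east-1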
Instead of manually setting up servers and connecting everything, AWS CloudFormation is used to set up and manage cloud resources (like servers or databases) automatically using a script. It's like handing over a blueprint to build and organize your cloud infrastructure without doing it manually!
Think of CloudFormation as writing a simple instruction manual for AWS to follow. This manual, called a 'template', tells AWS to:
- Start the servers required for the project.
- Pull the project's container images from the ECR storage repository.
- Set up all the other dependencies and configurations needed for the project to run.
With this automated setup, I don't have to repeat the same steps every time I deploy or update the project; it's all done automatically by AWS.
CloudFormation Template
AWS CloudFormation templates are declarative JSON or YAML scripts that describe the resources and configurations needed to set up your infrastructure in AWS. They allow you to automate and manage your infrastructure as code, ensuring consistency and repeatability across environments.
# CloudFormation Template
AWSTemplateFormatVersion: "2010-09-09"
Description: Deploy EC2 with Docker Compose pulling images from ECR

Resources:
  BackendECRRepository:
    Type: AWS::ECR::Repository
    Properties:
      RepositoryName: backend

  EC2InstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Roles:
        - !Ref EC2InstanceRole

  EC2InstanceRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: ec2.amazonaws.com
            Action: sts:AssumeRole
      Policies:
        - PolicyName: ECROpsPolicy
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action:
                  - ecr:GetAuthorizationToken
                  - ecr:BatchGetImage
                  - ecr:GetDownloadUrlForLayer
                Resource: "*"
        - PolicyName: SecretsManagerPolicy
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action:
                  - secretsmanager:GetSecretValue
                Resource: "*"

  EC2SecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Allow SSH, HTTP, HTTPS, and application-specific ports
      SecurityGroupIngress:
        # SSH access
        - IpProtocol: tcp
          FromPort: 22
          ToPort: 22
          CidrIp: 0.0.0.0/0
        # Ping (ICMP)
        - IpProtocol: icmp
          FromPort: -1
          ToPort: -1
          CidrIp: 0.0.0.0/0
        # HTTP
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          CidrIp: 0.0.0.0/0
        # HTTPS
        - IpProtocol: tcp
          FromPort: 443
          ToPort: 443
          CidrIp: 0.0.0.0/0
        # Backend port
        - IpProtocol: tcp
          FromPort: 5000
          ToPort: 5000
          CidrIp: 0.0.0.0/0

  EC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t2.micro
      KeyName: demo
      ImageId: ami-0c02fb55956c7d316
      IamInstanceProfile: !Ref EC2InstanceProfile
      SecurityGroupIds:
        - !Ref EC2SecurityGroup
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash
          set -e  # Exit script on error
          yum update -y
          yum install docker git python3 -y
          pip3 install boto3
          service docker start
          usermod -aG docker ec2-user

          # Install Docker Compose
          curl -L "https://github.com/docker/compose/releases/download/$(curl -s https://api.github.com/repos/docker/compose/releases/latest | grep tag_name | cut -d '"' -f 4)/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
          chmod +x /usr/local/bin/docker-compose

          # Retrieve secrets from AWS Secrets Manager
          SECRET_NAME="backend-config"
          REGION="us-east-1"
          SECRET_JSON=$(aws secretsmanager get-secret-value --secret-id $SECRET_NAME --region $REGION --query SecretString --output text)
          echo "$SECRET_JSON" > /tmp/secrets.json

          # Create config.py dynamically
          mkdir -p /backend
          cat <<EOL > /backend/config.py
          import json
          secrets = json.load(open('/tmp/secrets.json'))
          OPENAI_API_KEY = secrets["OPENAI_API_KEY"]
          EOL

          # Authenticate with ECR
          aws ecr get-login-password --region ${AWS::Region} | docker login --username AWS --password-stdin ${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com

          # Pull images from ECR
          docker pull ${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/personage/dodge-challenger:backend-latest

          # Create Docker Compose file
          cat <<EOL > docker-compose.yml
          version: "3.9"
          services:
            backend:
              image: ${AWS::AccountId}.dkr.ecr.${AWS::Region}.amazonaws.com/personage/dodge-challenger:backend-latest
              ports:
                - "5000:5000"
              volumes:
                - /backend/config.py:/app/config.py
                - /tmp/secrets.json:/tmp/secrets.json
              environment:
                - PYTHONUNBUFFERED=1
          EOL

          # Start Docker Compose
          docker-compose -p demo up -d

Outputs:
  EC2PublicIP:
    Description: Public IP of the EC2 instance
    Value: !GetAtt EC2Instance.PublicIp
Let's decode the template step by step:
We define a single ECR resource, which is the repository where our Docker image is stored.
Next, we create an EC2 instance. We attach essential policies to it, mainly for interacting with ECR and AWS Secrets Manager. Additionally, we attach a security group to control network access. For this setup, we open:
- Port 22 for SSH access.
- Port 80 for HTTP access.
- Port 5000 for backend application access.
A t2.micro instance is used, and inside the User Data section we define the instructions to configure the instance:
- Install necessary dependencies like Python, boto3, and Docker.
- Fetch secrets stored in AWS Secrets Manager and save them to a config.py file.
- Log in to ECR, pull the Docker image, and run it using Docker.
Since only one Docker container is used, this configuration simplifies the deployment process while ensuring the backend service is accessible and properly configured.
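If you want to try the template by hand before wiring up GitHub Actions, it can be deployed with the AWS CLI; the stack name, template file name, and capabilities below match the ones used later in the workflow:

aws cloudformation deploy \
    --template-file cloud-formation.yaml \
    --stack-name docker-ecr-ec2-stack \
    --capabilities CAPABILITY_NAMED_IAM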
Storing Secrets in AWS Secrets Manager
Until now, we have stored secrets like the OpenAI key in a config.py file. However, we cannot push this file to GitHub, because it contains secrets. So, we use AWS Secrets Manager to store our secrets and then retrieve them through our CloudFormation template.
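One way to create that secret is with the AWS CLI; the secret name backend-config and the OPENAI_API_KEY field match what the template's User Data script expects, while the region is an assumption and the key value is a placeholder:

aws secretsmanager create-secret \
    --name backend-config \
    --region us-east-1 \
    --secret-string '{"OPENAI_API_KEY": "sk-your-key-here"}'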


Creating GitHub Actions

GitHub Actions is used to automate tasks like testing code, building apps, or deploying projects whenever you make changes. It's like setting up a robot to handle repetitive work for you!
Our main aim here is that a push to a specific branch of GitHub should automatically trigger the deployment to AWS. For this, we will select the 'main' branch.
Storing the Secrets in GitHub
Sign in to your GitHub account and follow the path below:
repository > settings > Secrets and variables > Actions
Then you need to add the AWS secrets extracted from your AWS account, as in the image below.
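If you prefer the command line over the web UI, the GitHub CLI can set the same secrets (it prompts for each value); the secret names match the ones referenced in the workflow below:

gh secret set AWS_ACCESS_KEY_ID
gh secret set AWS_SECRET_ACCESS_KEY
gh secret set AWS_REGION
gh secret set AWS_ACCOUNT_ID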

Initiating the Workflow
After storing the secrets, we will create a .github folder and, inside it, a workflows folder. Inside the workflows folder, we will add a deploy.yaml file.
name: Deploy to AWS

on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      # Step 1: Check out the repository
      - name: Checkout code
        uses: actions/checkout@v3

      # Step 2: Configure AWS credentials
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ secrets.AWS_REGION }}

      # Step 3: Log in to Amazon ECR
      - name: Log in to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v2

      # Step 4: Build and push the backend image to ECR
      - name: Build and Push Backend Image
        run: |
          docker build -t backend .
          docker tag backend:latest ${{ secrets.AWS_ACCOUNT_ID }}.dkr.ecr.${{ secrets.AWS_REGION }}.amazonaws.com/personage/dodge-challenger:backend-latest
          docker push ${{ secrets.AWS_ACCOUNT_ID }}.dkr.ecr.${{ secrets.AWS_REGION }}.amazonaws.com/personage/dodge-challenger:backend-latest

      # Step 5: Delete the existing CloudFormation stack
      - name: Delete Existing CloudFormation Stack
        run: |
          aws cloudformation delete-stack --stack-name docker-ecr-ec2-stack
          echo "Waiting for stack deletion to complete..."
          aws cloudformation wait stack-delete-complete --stack-name docker-ecr-ec2-stack || echo "Stack does not exist or was already deleted."

      # Step 6: Deploy the CloudFormation stack
      - name: Deploy CloudFormation Stack
        uses: aws-actions/aws-cloudformation-github-deploy@v1
        with:
          name: docker-ecr-ec2-stack
          template: cloud-formation.yaml
          capabilities: CAPABILITY_NAMED_IAM
Here's a simplified explanation of the flow:
- We pull the code from the repository and set up AWS credentials using the secrets stored in GitHub.
- Then, we log in to ECR and build/push the Docker image of the application.
- We check whether a CloudFormation stack with the same name already exists. If yes, we delete it.
- Finally, we use the CloudFormation template to launch the resources and set everything up.
Testing
Once everything is deployed, note down the public IP address of the instance and then simply call it using Postman to check that everything works fine.
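The same check can be done with curl; replace <EC2_PUBLIC_IP> with the stack's EC2PublicIP output, and note that /generate-docs is the illustrative route name assumed earlier:

curl -X POST -F "file=@sample.py" http://<EC2_PUBLIC_IP>:5000/generate-docs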

Conclusion
In this article, we explored how to use AWS CloudFormation to simplify cloud infrastructure management. We learnt how to create an ECR repository, deploy a Dockerized application on an EC2 instance, and automate the entire process using GitHub Actions for CI/CD. This approach not only saves time but also ensures consistency and reliability in deployments.
Key Takeaways
- AWS CloudFormation simplifies cloud resource management with Infrastructure as Code.
- Docker containers streamline application deployment on AWS-managed infrastructure.
- GitHub Actions automates build and deployment pipelines for seamless integration.
- LangChain and GPT-4 enhance Python documentation automation in projects.
- Combining IaC, Docker, and CI/CD creates scalable, efficient, and modern workflows.
Frequently Asked Questions
Q1. What is AWS CloudFormation?
A. AWS CloudFormation is a service that allows you to model and provision AWS resources using Infrastructure as Code (IaC).
Q2. How does Docker fit into this workflow?
A. Docker packages applications into containers, which can be deployed on AWS resources managed by CloudFormation.
Q3. What role does GitHub Actions play?
A. GitHub Actions automates CI/CD pipelines, including building, testing, and deploying applications to AWS.
Q4. Can AI tools help automate documentation?
A. Yes, LangChain and GPT-4 can generate and update Python documentation as part of your workflow.
Q5. Why use Infrastructure as Code?
A. IaC ensures consistent, repeatable, and scalable resource management across your infrastructure.
The media proven on this article isn’t owned by Analytics Vidhya and is used on the Writer’s discretion.