# Add S3 Bucket to Existing Deployment
# Prerequisites
Before starting, gather the following information:
- AWS Account ID: Locate this in the deployment organization's settings page.
- Deployment ID: Found in the deployment's settings page.
Log in to the production SSO account, and assume the ProdAutoPilotSupportLevelTwo role in the correct AWS account.
# Locating The Correct Stack
- Open the CloudFormation service in the AWS console.
- Use the Deployment ID to filter and locate the main stack.
- To simplify the search, turn off the "View nested" toggle.
Once the main stack is identified, proceed to update it.
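If you prefer the CLI, the same stack can be located with a sketch like the following; DEPLOYMENT_ID is a placeholder for the value gathered in the prerequisites:

```shell
# List all stack names and filter by the deployment ID.
# DEPLOYMENT_ID is a placeholder; substitute the real value.
DEPLOYMENT_ID="example-deployment-id"
aws cloudformation describe-stacks \
  --query "Stacks[].StackName" --output text \
  | tr '\t' '\n' | grep "$DEPLOYMENT_ID"
```

This requires the same assumed role and credentials as the console session.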
# Editing The Template
The recommended way to edit the stack's template is through the "Infrastructure Composer", since it lets you edit the template directly in the browser. Once in the editor, follow these steps:
- Navigate to the Template tab within the editor.
- Ensure the template remains in JSON format.
danger
Do not convert the template to YAML. AutoPilot requires JSON format, and converting from YAML can introduce discrepancies that prevent the deployment page from loading correctly.
Proceed to update the template.
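Because AutoPilot requires strict JSON, it can help to sanity-check the edited template before saving it. A minimal sketch, assuming the template text has been copied into a local string or file:

```python
import json

def is_strict_json(text: str) -> bool:
    """Return True only if the text parses as strict JSON."""
    try:
        json.loads(text)
        return True
    except json.JSONDecodeError:
        return False

# A stray trailing comma, easy to introduce while merging snippets,
# is rejected by the strict parser:
print(is_strict_json('{"Resources": {}}'))   # a valid skeleton
print(is_strict_json('{"Resources": {},}'))  # trailing comma, invalid
```

The same check catches YAML-isms (unquoted keys, comments) that would break the deployment page.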
# Add S3 Bucket Resources
Locate the end of the Resources section in the template and merge the following JSON with the existing resources:
```json
{
  "MediaS3Bucket": {
    "DeletionPolicy": "Retain",
    "Type": "AWS::S3::Bucket",
    "Properties": {
      "AccessControl": "Private",
      "VersioningConfiguration": {
        "Status": "Enabled"
      },
      "PublicAccessBlockConfiguration": {
        "BlockPublicAcls": true,
        "BlockPublicPolicy": true,
        "IgnorePublicAcls": true,
        "RestrictPublicBuckets": true
      }
    }
  },
  "MediaS3BucketPolicy": {
    "Type": "AWS::S3::BucketPolicy",
    "Properties": {
      "Bucket": {
        "Ref": "MediaS3Bucket"
      },
      "PolicyDocument": {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Sid": "AllowReadFromWebNodesNginx",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
              "s3:GetObject"
            ],
            "Resource": [
              {
                "Fn::Sub": "arn:aws:s3:::${MediaS3Bucket}/*"
              }
            ],
            "Condition": {
              "IpAddress": {
                "aws:SourceIp": {
                  "Fn::GetAtt": [
                    "LayerJrcNetwork",
                    "Outputs.ElasticIpPrimary"
                  ]
                }
              }
            }
          }
        ]
      }
    }
  }
}
```
Note
The snippet above grants the web nodes read access to the bucket's objects by allowing requests that originate from the web nodes' IP address.
It references LayerJrcNetwork.Outputs.ElasticIpPrimary, which is the web nodes' egress IP address on cluster deployments, since the web nodes sit in a private subnet.
For AIO deployments, change the reference to point to LayerJrcJump.Outputs.PublicIp instead.
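For reference, on an AIO deployment the Condition block in the bucket policy would instead look like this:

```json
"Condition": {
  "IpAddress": {
    "aws:SourceIp": {
      "Fn::GetAtt": [
        "LayerJrcJump",
        "Outputs.PublicIp"
      ]
    }
  }
}
```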
# Modify Instance Profile Role
Navigate to the InstanceProfileRole resource in the template and append the following policy statement:
```json
{
  "PolicyName": "AllowS3AccessForMediaBucket",
  "PolicyDocument": {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": [
          "s3:GetObject",
          "s3:PutObject",
          "s3:DeleteObject",
          "s3:ListBucket"
        ],
        "Resource": [
          {
            "Fn::Sub": "arn:aws:s3:::${MediaS3Bucket}"
          },
          {
            "Fn::Sub": "arn:aws:s3:::${MediaS3Bucket}/*"
          }
        ]
      }
    ]
  }
}
```
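The statement lists two Resource entries because bucket-level and object-level actions authorize against different ARNs: s3:ListBucket is evaluated against the bucket ARN, while s3:GetObject, s3:PutObject, and s3:DeleteObject are evaluated against the object ARN. A small sketch of what Fn::Sub renders for each, using a hypothetical bucket name:

```python
# Hypothetical resolved name; in the real template, Fn::Sub replaces
# ${MediaS3Bucket} with the generated bucket name.
bucket_name = "example-media-bucket"

bucket_arn = f"arn:aws:s3:::{bucket_name}"     # matched by s3:ListBucket
objects_arn = f"arn:aws:s3:::{bucket_name}/*"  # matched by object actions

print(bucket_arn)
print(objects_arn)
```

Omitting either entry would silently break half of the actions in the statement.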
# Adding An Output Value
Navigate to the Outputs section of the template and merge the following JSON with the existing outputs:
```json
{
  "MediaBucketName": {
    "Value": {
      "Ref": "MediaS3Bucket"
    }
  }
}
```
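Once the stack update finishes, this output can also be read back from the CLI; STACK_NAME is a placeholder for the main stack located earlier:

```shell
aws cloudformation describe-stacks \
  --stack-name STACK_NAME \
  --query "Stacks[0].Outputs[?OutputKey=='MediaBucketName'].OutputValue" \
  --output text
```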
# Add Metadata To Display Info In AutoPilot
Navigate to the Metadata section of the template and merge the following JSON into both the JetRails::Gui::Output section and the JetRails::Gui::OutputGroup section.
For the JetRails::Gui::Output section merge the following:
```json
{
  "MediaBucketName": {
    "Component": "ListItem",
    "FriendlyName": "Bucket Name"
  }
}
```
For the JetRails::Gui::OutputGroup section merge the following:
```json
{
  "S3Buckets": {
    "FriendlyName": "Media S3 Bucket",
    "Variant": "ValueTable",
    "Outputs": [
      "MediaBucketName"
    ]
  }
}
```
# Applying the Changes
- Validate the updated template.
- Update the stack with the new template.
- Use the wizard to apply the changes.
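If you saved a copy of the edited template locally, the validate and update steps can also be done from the CLI; template.json and STACK_NAME are placeholders:

```shell
# Validate the template syntax before updating.
aws cloudformation validate-template \
  --template-body file://template.json

# Update the stack with the new template. An IAM capability is
# required because the template modifies the instance profile role.
aws cloudformation update-stack \
  --stack-name STACK_NAME \
  --template-body file://template.json \
  --capabilities CAPABILITY_IAM
```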
Once the update is complete, the new S3 bucket's name will be displayed in the AutoPilot interface under the Overview tab. The web nodes will have read access to the S3 bucket while it remains publicly inaccessible. The jump host user will also have access to the S3 bucket to manage its contents.
# Verify Changes
First, SSH into the jump host and run the following command to verify that you can list the contents of the S3 bucket:
```shell
aws s3 ls s3://BUCKET_NAME
```
If all goes well, you should see no output, since the bucket is empty; otherwise, you'll see an error. If you do see an error, try detaching and reattaching the instance profile to the instance from the EC2 dashboard, though in most cases this will not be necessary.
Next you can upload a dummy file to the bucket:
```shell
date > test.txt
aws s3 cp ./test.txt s3://BUCKET_NAME
rm test.txt
```
Now you can list it again to ensure it's there:
```shell
aws s3 ls s3://BUCKET_NAME
```
Next, let's verify that the web nodes are able to read the bucket. If you are on a cluster deployment, SSH into one of the web nodes and run the following curl command:
```shell
curl https://BUCKET_NAME.s3.us-east-1.amazonaws.com/test.txt
```
Finally, clean up your test files:
```shell
aws s3 rm s3://BUCKET_NAME/test.txt
```