# Add S3 Bucket to Existing Deployment

# Prerequisites

Before starting, gather the following information:

  1. AWS Account ID: Locate this in the deployment organization's settings page.
  2. Deployment ID: Found in the deployment's settings page.

Log in to the production SSO account, and assume the ProdAutoPilotSupportLevelTwo role in the correct AWS account.
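
If you prefer to work from a terminal, the same session can be set up with the AWS CLI. This is a minimal sketch; PROFILE_NAME is a placeholder for whatever SSO profile is configured for the production account:

# Log in through SSO (profile name is hypothetical, use your configured one)
aws sso login --profile PROFILE_NAME

# Confirm you are in the correct account and role before continuing
aws sts get-caller-identity --profile PROFILE_NAME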

# Locating The Correct Stack

  1. Open the CloudFormation service in the AWS console.
  2. Use the Deployment ID to filter and locate the main stack.
  3. To simplify the search, toggle off the "View nested" switch so only top-level stacks are shown.
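
If you would rather search from the command line, a hedged equivalent is below; DEPLOYMENT_ID is a placeholder, and this assumes the Deployment ID appears in the stack name:

# List stacks whose name contains the Deployment ID (nested stacks may also match)
aws cloudformation describe-stacks \
	--query "Stacks[?contains(StackName, 'DEPLOYMENT_ID')].{Name:StackName, Status:StackStatus}"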

Once the main stack is identified, proceed to update it.

# Editing The Template

The recommended method for editing the stack's template is "Infrastructure Composer", since it lets you edit the template directly in the browser. Once in the editor, follow these steps:

  1. Navigate to the Template tab within the editor.
  2. Ensure the template remains in JSON format.
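
If you would rather edit locally instead of in the browser, you can pull the current template down first. A minimal sketch, assuming STACK_NAME is the main stack found earlier:

# Download the stack's current template to a local file
aws cloudformation get-template --stack-name STACK_NAME \
	--query TemplateBody --output json > template.json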

Proceed to update the template.

# Add S3 Bucket Resources

Locate the end of the Resources section in the template and merge the following JSON with the existing resources:

{
	"MediaS3Bucket": {
		"DeletionPolicy": "Retain",
		"Type": "AWS::S3::Bucket",
		"Properties": {
			"AccessControl": "Private",
			"VersioningConfiguration": {
				"Status": "Enabled"
			},
			"PublicAccessBlockConfiguration": {
				"BlockPublicAcls": true,
				"BlockPublicPolicy": true,
				"IgnorePublicAcls": true,
				"RestrictPublicBuckets": true
			}
		}
	},
	"MediaS3BucketPolicy": {
		"Type": "AWS::S3::BucketPolicy",
		"Properties": {
			"Bucket": {
				"Ref": "MediaS3Bucket"
			},
			"PolicyDocument": {
				"Version": "2012-10-17",
				"Statement": [
					{
						"Sid": "AllowReadFromWebNodesNginx",
						"Effect": "Allow",
						"Principal": "*",
						"Action": [
							"s3:GetObject"
						],
						"Resource": [
							{
								"Fn::Sub": "arn:aws:s3:::${MediaS3Bucket}/*"
							}
						],
						"Condition": {
							"IpAddress": {
								"aws:SourceIp": {
									"Fn::GetAtt": [
										"LayerJrcNetwork",
										"Outputs.ElasticIpPrimary"
									]
								}
							}
						}
					}
				]
			}
		}
	}
}
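
Before moving on, it is worth confirming that the merged template is still well-formed JSON, since a stray trailing comma will break the update. A quick check, assuming you saved a local copy as template.json (note that validate-template only accepts inline bodies up to roughly 51 KB; larger templates must be uploaded to S3 and checked via --template-url):

# Catch JSON syntax errors such as trailing commas
python -m json.tool template.json > /dev/null && echo "JSON OK"

# Ask CloudFormation to validate the template structure
aws cloudformation validate-template --template-body file://template.json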

# Modify Instance Profile Role

Navigate to the InstanceProfileRole resource in the template and append the following statement to the role's Policies list:

{
	"PolicyName": "AllowS3AccessForMediaBucket",
	"PolicyDocument": {
		"Version": "2012-10-17",
		"Statement": [
			{
				"Effect": "Allow",
				"Action": [
					"s3:GetObject",
					"s3:PutObject",
					"s3:DeleteObject",
					"s3:ListBucket"
				],
				"Resource": [
					{
						"Fn::Sub": "arn:aws:s3:::${MediaS3Bucket}"
					},
					{
						"Fn::Sub": "arn:aws:s3:::${MediaS3Bucket}/*"
					}
				]
			}
		]
	}
}
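
After the stack update completes (see "Applying the Changes" below), you can confirm the inline policy landed on the role. A sketch; STACK_NAME and ROLE_NAME are placeholders:

# Find the physical name of the instance profile role
aws cloudformation describe-stack-resource --stack-name STACK_NAME \
	--logical-resource-id InstanceProfileRole \
	--query "StackResourceDetail.PhysicalResourceId" --output text

# List the role's inline policies; AllowS3AccessForMediaBucket should appear
aws iam list-role-policies --role-name ROLE_NAME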

# Adding An Output Value

Navigate to the Outputs section of the template and merge the following JSON with the existing outputs:

{
	"MediaBucketName": {
		"Value": {
			"Ref": "MediaS3Bucket"
		}
	}
}
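
Once the update is applied, this output can also be read back from the CLI rather than the console; a minimal sketch with STACK_NAME as a placeholder:

# Print the bucket name from the stack's outputs
aws cloudformation describe-stacks --stack-name STACK_NAME \
	--query "Stacks[0].Outputs[?OutputKey=='MediaBucketName'].OutputValue" --output text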

# Add Metadata To Display Info In AutoPilot

Navigate to the Metadata section of the template and merge the following JSON into the JetRails::Gui::Output section as well as the JetRails::Gui::OutputGroup section.

For the JetRails::Gui::Output section merge the following:

{
	"MediaBucketName": {
		"Component": "ListItem",
		"FriendlyName": "Bucket Name"
	}
}

For the JetRails::Gui::OutputGroup section merge the following:

{
	"S3Buckets": {
		"FriendlyName": "Media S3 Bucket",
		"Variant": "ValueTable",
		"Outputs": [
			"MediaBucketName"
		]
	}
}
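
This metadata has no effect on the provisioned resources; it only drives how the output is displayed in AutoPilot. If you want to confirm it was saved after the update, one option is to pull the template's Metadata section back out (returned as a JSON string):

# Print the template's Metadata section for the deployed stack
aws cloudformation get-template-summary --stack-name STACK_NAME --query Metadata --output text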

# Applying the Changes

  1. Validate the updated template.
  2. Update the stack with the new template.
  3. Use the update wizard to apply the changes (a CLI equivalent is sketched below).
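
If you edited the template locally, the same update can be driven from the CLI. A sketch, assuming the template.json from earlier; the capability flag is required because the template changes IAM resources:

# Update the stack with the locally edited template
aws cloudformation update-stack --stack-name STACK_NAME \
	--template-body file://template.json \
	--capabilities CAPABILITY_NAMED_IAM

# Block until the update finishes (or fails)
aws cloudformation wait stack-update-complete --stack-name STACK_NAME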

Once the update is complete, the new S3 bucket name will be shown in the Overview tab of the AutoPilot interface. The web nodes will have read access to the bucket while it remains inaccessible to the public, and the jump host user will be able to manage its contents.

# Verify Changes

First, SSH into the jump host and run the following command to verify that you can list the contents of the S3 bucket:

aws s3 ls s3://BUCKET_NAME

If all goes well, the command prints nothing, since the bucket is empty. If you get an error instead, you can go to the EC2 dashboard and detach and reattach the instance profile so the instance picks up fresh credentials; in most cases this will not be necessary.

Next you can upload a dummy file to the bucket:

date > test.txt
aws s3 cp ./test.txt s3://BUCKET_NAME
rm test.txt

Now list the bucket again to confirm the test file is there:

aws s3 ls s3://BUCKET_NAME

Next, let's verify that the web nodes can read the bucket. If you are on a cluster, SSH into one of the web nodes and run the following curl command:

curl https://BUCKET_NAME.s3.us-east-1.amazonaws.com/test.txt

Finally, clean up your test files:

aws s3 rm s3://BUCKET_NAME/test.txt