guest@ctrl-alt-secure: ~$

After my AWS CLI setup project, I decided to host my Eleventy blog on AWS, using S3 for static hosting. It felt like the natural next step: a practical way to go deeper on both the AWS CLI and site hosting.

Why AWS S3 for Static Hosting?

AWS S3 is cost-effective for static sites like my blog: no servers to manage, just files uploaded to a bucket configured for public access. I used the AWS CLI for everything to keep the workflow command-line based.

Step 1: Create an S3 Bucket

The first step was creating a bucket, but several naming attempts resulted in InvalidBucketName errors. After prompting AI with "Why am I getting InvalidBucketName errors when creating an S3 bucket?", the rules became clear: lowercase only, hyphens allowed, globally unique, and 3-63 characters. With a valid name chosen:

aws s3 mb s3://my-bucket-name --region us-east-1

A quick verification with aws s3 ls confirmed the bucket was successfully created.
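Those naming rules can be captured in a small shell helper (a hypothetical sketch, not part of the AWS CLI; it checks the core rules but skips some edge cases like IP-address-shaped names and consecutive dots):

```shell
#!/bin/sh
# Validate an S3 bucket name against the core rules:
# 3-63 characters, lowercase letters, digits, hyphens, and dots,
# starting and ending with a letter or digit.
valid_bucket_name() {
    printf '%s' "$1" | grep -Eq '^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$'
}

valid_bucket_name "my-bucket-name" && echo "ok: my-bucket-name"
valid_bucket_name "My_Bucket"      || echo "rejected: My_Bucket"
```

Running a candidate name through a check like this locally is faster than round-tripping InvalidBucketName errors from the API.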

Step 2: Configure for Static Website Hosting

Enabling website hosting required a configuration file. The AWS S3 documentation explained that index and error documents needed to be specified. Using vim to create the config:

vim website-config.json

With the contents:

{
    "IndexDocument": { "Suffix": "index.html" },
    "ErrorDocument": { "Key": "error.html" }
}

Researching the AWS CLI documentation for S3 static hosting led to asking AI "How do I enable static website hosting on an S3 bucket using the AWS CLI?" The response pointed to the aws s3api put-bucket-website command. A follow-up question — "What does aws s3api put-bucket-website do and what are the required parameters?" — clarified that it enables static website hosting and requires the bucket name and config file path. Time to apply it:

aws s3api put-bucket-website --bucket my-bucket-name --website-configuration file://website-config.json

Verifying with aws s3api get-bucket-website --bucket my-bucket-name showed the config was applied. The endpoint was now live: http://my-bucket-name.s3-website-us-east-1.amazonaws.com.

Step 3: Set Bucket Permissions

Making the bucket public required a bucket policy. Referencing AWS IAM policy documentation, the policy was created in vim:

vim site-bucket-policy.json

With the policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-bucket-name/*"
        }
    ]
}

Attempting to apply it resulted in an AccessDenied error. Prompting AI with "Why am I getting AccessDenied when applying a public bucket policy?" revealed that Block Public Access settings were the culprit. Another config file was needed to disable these protections:

vim public-access-block.json

With all settings set to false:

{
    "BlockPublicAcls": false,
    "IgnorePublicAcls": false,
    "BlockPublicPolicy": false,
    "RestrictPublicBuckets": false
}

Before applying, asking AI "What are the options for aws s3api put-public-access-block?" helped understand what each setting controlled:

aws s3api put-public-access-block --bucket my-bucket-name --public-access-block-configuration file://public-access-block.json

With that in place, the bucket policy applied successfully. Verification with aws s3api get-bucket-policy --bucket my-bucket-name confirmed everything was set correctly.
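As a local sanity check, the same config file can be verified with jq before it's ever sent to AWS (assuming jq is installed; this just confirms every flag is false):

```shell
# Recreate the config (for illustration) and verify every flag is false.
cat > public-access-block.json <<'EOF'
{
    "BlockPublicAcls": false,
    "IgnorePublicAcls": false,
    "BlockPublicPolicy": false,
    "RestrictPublicBuckets": false
}
EOF

# jq: collect all values into an array and check each one is false
jq -e '[.[]] | all(. == false)' public-access-block.json && echo "all flags disabled"
```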

Step 4: Upload Site Files

With permissions configured, the site files were ready to upload:

aws s3 sync _site s3://my-bucket-name

All files transferred successfully. Testing the endpoint confirmed the site was live.

Step 5: Add a Custom Domain

A custom domain would make the site more professional. Checking availability first:

aws route53domains check-domain-availability --domain-name ctrlaltsecure.io

Asking AI "What parameters are required to register a domain via AWS CLI?" revealed that contact information in JSON format was necessary. The file was created in vim:

vim contact.json

Before registering, researching the register-domain command options and asking AI "Why does domain registration require admin, registrant, and tech contacts?" clarified the ICANN requirements. With that understanding, the registration proceeded:

aws route53domains register-domain --domain-name ctrlaltsecure.io --duration-in-years 1 --admin-contact file://contact.json --registrant-contact file://contact.json --tech-contact file://contact.json
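The contact.json file isn't shown above; a minimal sketch with placeholder values (field names follow the route53domains ContactDetail structure, but every value here is hypothetical) might look like:

```shell
# Hypothetical contact.json with placeholder values only.
cat > contact.json <<'EOF'
{
    "FirstName": "Jane",
    "LastName": "Doe",
    "ContactType": "PERSON",
    "AddressLine1": "123 Example St",
    "City": "Anytown",
    "State": "VA",
    "CountryCode": "US",
    "ZipCode": "12345",
    "PhoneNumber": "+1.5555550100",
    "Email": "jane@example.com"
}
EOF

# Confirm the file parses and the contact type is set
jq -r '.ContactType' contact.json
```

Route 53 phone numbers use the "+country.number" format shown above.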

Step 6: Set Up CloudFront and DNS

For better performance and HTTPS support, CloudFront was the next step. Consulting AWS CloudFront documentation, a distribution config was created in vim with the S3 origin details:

vim cloudfront-config.json

Asking AI "What parameters does aws cloudfront create-distribution need and what does each option control?" helped understand the distribution config structure before creating it:

aws cloudfront create-distribution --distribution-config file://cloudfront-config.json

The CloudFront domain came back: d123.cloudfront.net.

Pointing the domain to CloudFront required the hosted zone ID. Prompting AI with "How do I get my Route 53 hosted zone ID via CLI?" provided the query command:

aws route53 list-hosted-zones --query "HostedZones[?Name=='ctrlaltsecure.io.'].Id" --output text

Researching the change-resource-record-sets command and asking AI "What's the structure for a Route 53 change batch to point a domain to CloudFront?" gave the information needed to create the DNS change batch file in vim with the alias record:

aws route53 change-resource-record-sets --hosted-zone-id Z1D633PJN98FT9 --change-batch file://dns-change.json
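The dns-change.json file itself isn't shown above; a sketch of the alias record looks like this (the distribution domain is the placeholder from earlier, and Z2FDTNDATAQYW2 is the fixed hosted zone ID Route 53 requires for CloudFront alias targets):

```shell
# Sketch of dns-change.json: an alias A record pointing the apex domain
# at the CloudFront distribution.
cat > dns-change.json <<'EOF'
{
    "Changes": [
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "ctrlaltsecure.io",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "Z2FDTNDATAQYW2",
                    "DNSName": "d123.cloudfront.net",
                    "EvaluateTargetHealth": false
                }
            }
        }
    ]
}
EOF

# Confirm the change batch parses and uses an alias A record
jq -r '.Changes[0].ResourceRecordSet.Type' dns-change.json
```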

With DNS configured, it was time to test the site. This is where things got interesting.

Troubleshooting Issue #1: CloudFront 403 Error

After initial setup, accessing the site resulted in a CloudFront 403 error. This one stumped me, so I leaned heavily on AI to debug it. I provided the error message and asked "Why am I getting a 403 error from CloudFront when accessing my site?" After AI provided the solution, I didn't just run the commands blindly. I asked follow-up questions: "Why does CloudFront need the website endpoint instead of the bucket endpoint?" and "Break down what each step in the fix does and why it's necessary." Understanding the root cause was crucial.

The issue? CloudFront was pointing to the wrong S3 endpoint. When using S3 for static website hosting, CloudFront needs the website endpoint (bucket-name.s3-website-region.amazonaws.com), not the bucket endpoint (bucket-name.s3.amazonaws.com). The bucket endpoint expects S3 API calls, while the website endpoint serves static content with proper index document handling.

Checking the current origin:

aws cloudfront list-distributions --query "DistributionList.Items[0].[Id,Origins.Items[0].DomainName]" --output text

This showed CloudFront was using ctrlaltsecuresite.s3.amazonaws.com instead of ctrlaltsecuresite.s3-website-us-east-1.amazonaws.com. To fix it, the distribution config needed updating:

  1. Get the current config and ETag:
aws cloudfront get-distribution-config --id DISTRIBUTION_ID --output json > cloudfront-config.json
  2. Extract just the DistributionConfig and update the origin:
jq '.DistributionConfig' cloudfront-config.json > cloudfront-dist-only.json
jq '.Origins.Items[0].DomainName = "ctrlaltsecuresite.s3-website-us-east-1.amazonaws.com" | del(.Origins.Items[0].S3OriginConfig) | .Origins.Items[0].CustomOriginConfig = {"HTTPPort": 80, "HTTPSPort": 443, "OriginProtocolPolicy": "http-only", "OriginSslProtocols": {"Quantity": 1, "Items": ["TLSv1.2"]}, "OriginReadTimeout": 30, "OriginKeepaliveTimeout": 5}' cloudfront-dist-only.json > updated-config.json
  3. Apply the update:
aws cloudfront update-distribution --id DISTRIBUTION_ID --distribution-config file://updated-config.json --if-match ETAG_VALUE

The key changes: switching to the website endpoint and changing from S3OriginConfig to CustomOriginConfig with http-only protocol (S3 website endpoints don't support HTTPS origin). After 5-15 minutes for CloudFront to deploy globally, testing the CloudFront URL directly (d2qz9dtvjut0ti.cloudfront.net) worked perfectly. But accessing via the custom domain still failed.
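The jq rewrite can also be exercised locally on a stripped-down config before touching the real distribution (a sketch: only the origin fields the filter manipulates are included, and the CustomOriginConfig is trimmed for brevity):

```shell
# Minimal stand-in for cloudfront-dist-only.json: just the origin
# fields the jq filter touches.
cat > cloudfront-dist-only.json <<'EOF'
{
    "Origins": {
        "Quantity": 1,
        "Items": [
            {
                "Id": "S3-origin",
                "DomainName": "ctrlaltsecuresite.s3.amazonaws.com",
                "S3OriginConfig": { "OriginAccessIdentity": "" }
            }
        ]
    }
}
EOF

# Swap the domain to the website endpoint, drop S3OriginConfig,
# and attach a custom-origin config (trimmed here).
jq '.Origins.Items[0].DomainName = "ctrlaltsecuresite.s3-website-us-east-1.amazonaws.com"
    | del(.Origins.Items[0].S3OriginConfig)
    | .Origins.Items[0].CustomOriginConfig = {"HTTPPort": 80, "HTTPSPort": 443, "OriginProtocolPolicy": "http-only"}' \
    cloudfront-dist-only.json > updated-config.json

# The origin should now be the website endpoint with no S3OriginConfig left
jq -r '.Origins.Items[0].DomainName' updated-config.json
jq '.Origins.Items[0] | has("S3OriginConfig")' updated-config.json
```

Dry-running the filter like this makes it obvious whether the S3OriginConfig/CustomOriginConfig swap happened before risking a bad update-distribution call.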

Troubleshooting Issue #2: Custom Domain SSL Certificate Required

Accessing https://ctrlaltsecure.io still resulted in a 403 error, even though the CloudFront URL worked. Checking the CloudFront configuration revealed the problem:

aws cloudfront get-distribution --id DISTRIBUTION_ID --query "Distribution.DistributionConfig.Aliases"

The aliases were empty — CloudFront didn't know about ctrlaltsecure.io. Attempting to add the domain as an alias resulted in an InvalidViewerCertificate error: CloudFront requires a trusted SSL certificate to use custom domains.

The solution required AWS Certificate Manager (ACM). Importantly, certificates for CloudFront must be created in the us-east-1 region, regardless of where your resources are located. Here's the process:

  1. Request the certificate:
aws acm request-certificate --domain-name ctrlaltsecure.io --validation-method DNS --region us-east-1
  2. Get the DNS validation record:
aws acm describe-certificate --certificate-arn CERTIFICATE_ARN --region us-east-1 --query "Certificate.DomainValidationOptions[0].ResourceRecord"
  3. Add the CNAME validation record to Route 53:
aws route53 change-resource-record-sets --hosted-zone-id HOSTED_ZONE_ID --change-batch file://cert-validation.json
  4. Wait for certificate validation (usually 2-10 minutes):
aws acm describe-certificate --certificate-arn CERTIFICATE_ARN --region us-east-1 --query "Certificate.Status"
  5. Once the certificate status shows "ISSUED", update CloudFront with both the certificate and domain alias:
# Get current config and ETag
aws cloudfront get-distribution-config --id DISTRIBUTION_ID --output json > cloudfront-config.json

# Update config with jq to add alias and certificate
jq '.DistributionConfig.Aliases.Quantity = 1 | .DistributionConfig.Aliases.Items = ["ctrlaltsecure.io"] | .DistributionConfig.ViewerCertificate = {"ACMCertificateArn": "CERTIFICATE_ARN", "SSLSupportMethod": "sni-only", "MinimumProtocolVersion": "TLSv1.2_2021", "CertificateSource": "acm"}' cloudfront-config.json | jq '.DistributionConfig' > updated-config.json

# Apply update
ETAG=$(jq -r '.ETag' cloudfront-config.json) && aws cloudfront update-distribution --id DISTRIBUTION_ID --distribution-config file://updated-config.json --if-match $ETAG
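The cert-validation.json file from step 3 isn't shown above; a sketch would look like this, where the record Name and Value are placeholders standing in for whatever describe-certificate actually returns:

```shell
# Sketch of cert-validation.json: a CNAME UPSERT using the Name/Value
# pair from `aws acm describe-certificate` (placeholder values here).
cat > cert-validation.json <<'EOF'
{
    "Changes": [
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "_abc123.ctrlaltsecure.io.",
                "Type": "CNAME",
                "TTL": 300,
                "ResourceRecords": [
                    { "Value": "_xyz456.acm-validations.aws." }
                ]
            }
        }
    ]
}
EOF

# Confirm the change batch parses as a CNAME upsert
jq -r '.Changes[0].ResourceRecordSet.Type' cert-validation.json
```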

After another 5-15 minutes for CloudFront to deploy globally, https://ctrlaltsecure.io was finally live with full HTTPS support!

Final Thoughts

Hosting on AWS turned out to be a great learning experience. Using AI to ask targeted questions about errors and AWS services made it possible to understand what each command did instead of blindly copying solutions. Total cost: the domain (~$70 for the first year) plus minimal S3/CloudFront usage.

A Note on Using AI for Learning

While AI is great for understanding concepts and commands, it's vital to validate its responses. There were several times when the AI hallucinated or provided incorrect information, requiring clarifying questions to get it back on track. Cross-referencing with AWS whitepapers, Stack Overflow threads, and Reddit discussions helped verify what it was telling me. Don't blindly trust AI responses; always validate against official documentation and community knowledge. This approach helped me actually learn instead of just copying commands.