I’ve had this plan for a while, but finally spent the time to actually write it down.
The plan is to have the following:
- A GitHub repository that will automatically build the site and push it to S3 on every push.
- Obviously, host the site in an S3 bucket.
- Use CloudFront for the site, with an AWS-supplied TLS certificate.
- Redirect www to the naked domain name.
Initial requirements:
- GitHub repository set up. You might want to make it private.
- Create two S3 buckets: one called `www.sitename.xxx` and another called `sitename.xxx`. Remember, S3 bucket names are global, but names based on your DNS names should be unique unless someone else made a bucket with the same name for some reason.
- Set up Hugo to build the site locally, with your GitHub repo as the remote. This is beyond the scope of this guide; Hugo has some complexity and it can take a while to get it to the point where you'll be happy.
- Set up the domain you are hosting on Route53. You must use Route53 to use CloudFront ALIAS record types. This guide won't work if you don't host your DNS on Route53.
Set up the S3 buckets
The region of the S3 bucket doesn't really matter, but I recommend you do all work in us-east-1, as some things cause trouble when they are not in us-east-1 (for one, the ACM certificate used by CloudFront must be in us-east-1).
sitename.xxx bucket
This will be the bucket holding all the content.
- Properties: Set up Static website hosting, with `index.html` and `404.html` as the index and error documents.
- Permissions: Turn off "Block all public access".
- Permissions: Bucket Policy should be configured with the following:
```json
{
  "Version": "2012-10-17",
  "Id": "Policy1594427255725",
  "Statement": [
    {
      "Sid": "Stmt1594427253085",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::sitename.xxx/*"
    }
  ]
}
```
Obviously, replace `sitename.xxx` with your domain name.
This will allow all objects in the bucket to be publicly read. Obviously, this is for a public website, so this is what you want.
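If you prefer the command line, the same setup looks roughly like this with the AWS CLI; this is a sketch assuming us-east-1, the example domain, and the policy above saved as `policy.json`:

```sh
# Create the content bucket (us-east-1 needs no LocationConstraint).
aws s3api create-bucket --bucket sitename.xxx --region us-east-1

# Enable static website hosting with the index and error documents.
aws s3 website s3://sitename.xxx \
  --index-document index.html --error-document 404.html

# Disable the public-access block, then attach the public-read policy.
aws s3api put-public-access-block --bucket sitename.xxx \
  --public-access-block-configuration \
  BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false
aws s3api put-bucket-policy --bucket sitename.xxx --policy file://policy.json
```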
www.sitename.xxx bucket
This bucket does not need all of the configuration of the other bucket, but you do need to configure static website hosting and set it to Redirect requests to the base domain and protocol https.
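Again, a hedged CLI equivalent, assuming the example domain:

```sh
# Create the www bucket and redirect every request to the naked domain over HTTPS.
aws s3api create-bucket --bucket www.sitename.xxx --region us-east-1
aws s3api put-bucket-website --bucket www.sitename.xxx \
  --website-configuration '{"RedirectAllRequestsTo":{"HostName":"sitename.xxx","Protocol":"https"}}'
```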
At this point you will have two S3 websites that you should be able to access directly over HTTP at URLs like
`http://BUCKET-NAME.s3-website-us-east-1.amazonaws.com`
Set up a Certificate for your domain
Using AWS Certificate Manager, create a certificate request for your domain.
You should add `DOMAIN` and `www.DOMAIN` to the certificate; you can use `*.DOMAIN` if you wish to use that certificate elsewhere.
As you have configured the domain to be hosted on Route53, you can use the button to automatically create the DNS records that validate the request.
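The equivalent request from the CLI would look something like this, again assuming the example domain (the certificate must be requested in us-east-1 for CloudFront to use it):

```sh
# Request a DNS-validated certificate covering the naked and www domains.
aws acm request-certificate \
  --region us-east-1 \
  --domain-name sitename.xxx \
  --subject-alternative-names www.sitename.xxx \
  --validation-method DNS
```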
Set up a CloudFront Distribution for each bucket
This is fairly straightforward; simply create a distribution for the `DOMAIN` bucket and another for the `www.DOMAIN` bucket, using the certificate you created earlier. Use the bucket's website endpoint (the `s3-website` URL above) as the origin, and do NOT set the "Default Root Object" field; the website endpoint already serves index documents, including in subdirectories. Do enable both HTTP and HTTPS, and redirect HTTP to HTTPS.
Security Policy should be set to `TLSv1.2_2018` unless you have a good reason.
Also, enable IPv6. LOOK AT YOU ALL IPv6 ENABLED!
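For reference, the CLI can create a bare distribution from the website endpoint, though the alias, certificate, and redirect settings are easier to finish in the console (or via a full DistributionConfig JSON):

```sh
# A minimal sketch; this only wires up the origin.
aws cloudfront create-distribution \
  --origin-domain-name sitename.xxx.s3-website-us-east-1.amazonaws.com
```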
Set up a role and account that can upload to S3 and can invalidate the CloudFront cache.
In the IAM management console, set up a policy that allows writing to the S3 bucket and invalidating the CloudFront cache; something like this:
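There's no one right policy here, but a sketch along these lines should work; `hugo-site-deploy` is a made-up name, and you should substitute your own bucket:

```sh
# Allow syncing the bucket and invalidating CloudFront; scope down further if you like.
aws iam create-policy --policy-name hugo-site-deploy --policy-document '{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": ["arn:aws:s3:::sitename.xxx", "arn:aws:s3:::sitename.xxx/*"]
    },
    {
      "Effect": "Allow",
      "Action": "cloudfront:CreateInvalidation",
      "Resource": "*"
    }
  ]
}'
```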
You then need to add a user for managing the site; it should not have console login access. Instead, generate an `ACCESS_KEY_ID` and `SECRET_ACCESS_KEY` for the user.
You may need to tweak the permissions for the S3 stuff. I had some issues.
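For example, using the hypothetical policy above (substitute your own account ID and preferred user name):

```sh
# Create the deploy user, attach the policy, and generate its access key.
aws iam create-user --user-name hugo-site-deployer
aws iam attach-user-policy --user-name hugo-site-deployer \
  --policy-arn arn:aws:iam::ACCOUNT_ID:policy/hugo-site-deploy
aws iam create-access-key --user-name hugo-site-deployer
```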
Set up GitHub secrets for the AWS keys
`AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` should be what you just generated. `DISTRIBUTION_ID` should be set to the CloudFront Distribution ID.
Set up GitHub to build your Hugo site and publish to S3
In your Hugo `config.toml` you will need the following:
```toml
[deployment]
order = [".png$", ".jpg$", ".gif$", ".svg$"]

[[deployment.targets]]
URL = "s3://SITE_NAME?region=us-east-1"

[[deployment.matchers]]
# Cache static assets for 20 years.
pattern = "^.+\\.(js|css|png|jpg|gif|svg|ttf)$"
cacheControl = "max-age=630720000, no-transform, public"
gzip = true

[[deployment.matchers]]
pattern = "^.+\\.(html|xml|json)$"
gzip = true
```
where `SITE_NAME` is the domain.
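Assuming you have working AWS credentials in your environment, you can test the deployment locally before wiring up GitHub:

```sh
# Build the site into public/, then sync it to the S3 target from config.toml.
hugo
hugo deploy --maxDeletes -1
```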
You will need to add a `.github/workflows/build.yml` file in the repo:
```yaml
name: Build and Deploy
on:
  push:
    branches: [master]
jobs:
  build:
    name: Build and Deploy
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Repository
        uses: actions/checkout@v1
      - name: Checkout submodules
        uses: textbook/git-checkout-submodule-action@master
        with:
          remote: true
      - name: Install Hugo
        run: |
          HUGO_DOWNLOAD=hugo_extended_${HUGO_VERSION}_Linux-64bit.tar.gz
          wget https://github.com/gohugoio/hugo/releases/download/v${HUGO_VERSION}/${HUGO_DOWNLOAD}
          tar xvzf ${HUGO_DOWNLOAD} hugo
          mv hugo $HOME/hugo
        env:
          HUGO_VERSION: 0.73.0
      - name: Hugo Build
        run: $HOME/hugo -v
      - name: Deploy to S3
        if: github.ref == 'refs/heads/master'
        run: $HOME/hugo -v deploy --maxDeletes -1
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      - name: Invalidate Cloudfront Cache
        uses: awact/cloudfront-action@master
        env:
          SOURCE_PATH: "/*"
          AWS_REGION: "us-east-1"
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          DISTRIBUTION_ID: ${{ secrets.DISTRIBUTION_ID }}
```
Test that updating the site works
So, at this point the DNS is not configured, but you should be able to access the site via the CloudFront and S3 URLs after you push an update to GitHub.
Push to master, then go check the Actions tab on GitHub; you will see the Build and Deploy run for that commit and can monitor its progress.
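If the deploy or invalidation step fails, you can sanity-check the generated credentials locally; `DISTRIBUTION_ID` here is a placeholder for your real distribution ID:

```sh
# Issue an invalidation by hand with the deploy user's keys.
AWS_ACCESS_KEY_ID=... AWS_SECRET_ACCESS_KEY=... \
  aws cloudfront create-invalidation --distribution-id DISTRIBUTION_ID --paths "/*"
```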
Finally set up DNS
Add an ALIAS record pointing to the `cloudfront.net` address of the CloudFront distribution for each of the two sites.
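From the CLI, the naked-domain record looks roughly like this; `YOUR_ZONE_ID` and the `dxxxxxxxxxxxxx.cloudfront.net` address are placeholders, while `Z2FDTNDATAQYW2` is the fixed hosted-zone ID used for all CloudFront aliases:

```sh
# Upsert an A ALIAS record pointing the naked domain at the distribution.
aws route53 change-resource-record-sets --hosted-zone-id YOUR_ZONE_ID --change-batch '{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "sitename.xxx",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "Z2FDTNDATAQYW2",
        "DNSName": "dxxxxxxxxxxxxx.cloudfront.net",
        "EvaluateTargetHealth": false
      }
    }
  }]
}'
```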
Conclusion
I've yet to analyse the costs, but I think you're looking at:
- $0.50 per month for the Route53 hosted zone. Queries for ALIAS records pointing at CloudFront are free.
- 8.5c per GB for CloudFront; it really depends how much traffic your site gets, but for a site like mine that gets very little traffic I should be looking at about $10-20 a month.
- S3 usage will be small enough to fit within the free tier.
I think ultimately it’s going to be worth it, and not running any servers or anything is a plus. And the site is basically bulletproof.
…