Static Site with Hugo on AWS

I’ve had this plan for a while, but finally spent the time to actually write it down.

The plan is to have the following:

  • A GitHub repository that automatically builds the site and pushes it to S3 on every push.
  • Obviously, host the site in an S3 bucket.
  • Use CloudFront in front of the site, with an AWS-supplied TLS certificate.
  • Redirect www to the naked domain name.

Initial requirements:

  • A GitHub repository, set up. You might want to make it private.
  • Two S3 buckets: one called www.sitename.xxx and another called sitename.xxx. Remember that S3 bucket names are global, but since these are your own DNS names they should be free unless someone else has created a bucket with the same name for some reason.
  • Hugo set up to build the site locally, with your GitHub repo as the remote. This is beyond the scope of this guide; Hugo has some complexity and it can take a while to get it to the point where you’ll be happy.
  • The domain you are hosting set up on Route53. You must use Route53 to use CloudFront ALIAS record types; this guide won’t work if you don’t host your DNS on Route53.

Set up the S3 buckets

The region of the S3 bucket doesn’t really matter, but I recommend you do all work in us-east-1, as some things must live there anyway; in particular, ACM certificates used with CloudFront have to be issued in us-east-1.

sitename.xxx bucket

This will be the bucket holding all the content.

  1. Properties: Set up Static website hosting, with index.html and 404.html for the index and error documents.
  2. Permissions: Turn off “Block all public access”.
  3. Permissions: Bucket Policy should be configured with the following:
{
    "Version": "2012-10-17",
    "Id": "Policy1594427255725",
    "Statement": [
        {
            "Sid": "Stmt1594427253085",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::sitename.xxx/*"
        }
    ]
}

Obviously, replace sitename.xxx with your domain name.

This allows all objects in the bucket to be publicly read. Obviously, this is for a public website, so it is exactly what you want.
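
If you prefer the command line, here’s a rough sketch of the same setup with the AWS CLI (assuming your credentials are already configured and the policy above is saved as policy.json):

# Create the bucket and enable static website hosting.
aws s3api create-bucket --bucket sitename.xxx --region us-east-1
aws s3 website s3://sitename.xxx --index-document index.html --error-document 404.html

# Make sure public access isn't blocked, then attach the bucket policy.
aws s3api put-public-access-block --bucket sitename.xxx \
    --public-access-block-configuration BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false
aws s3api put-bucket-policy --bucket sitename.xxx --policy file://policy.json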

www.sitename.xxx bucket

This bucket does not need all of the configuration of the other bucket, but you do need to configure static website hosting and set it to Redirect requests to the base domain and protocol https.
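
For reference, the same redirect can be set with a single CLI call; a sketch, again with sitename.xxx standing in for your domain:

# Redirect every request on the www bucket to https://sitename.xxx
aws s3api put-bucket-website --bucket www.sitename.xxx \
    --website-configuration '{"RedirectAllRequestsTo": {"HostName": "sitename.xxx", "Protocol": "https"}}'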

At this point you will have two S3 website endpoints that you should be able to access directly over plain HTTP, at URLs like

http://BUCKET-NAME.s3-website-us-east-1.amazonaws.com
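
A quick sanity check of both endpoints (the bucket names are yours; the www one should answer with a 301 to https://sitename.xxx):

curl -I http://sitename.xxx.s3-website-us-east-1.amazonaws.com
curl -I http://www.sitename.xxx.s3-website-us-east-1.amazonaws.com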

Set up a Certificate for your domain

Using AWS Certificate Manager, create a certificate request for your domain.

You should add DOMAIN and www.DOMAIN to the certificate; you can use *.DOMAIN if you wish to use that certificate elsewhere.

As you have the domain hosted on Route53, you can use the console button to create the DNS validation records automatically.
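
The equivalent CLI request, if you want it (note the region: CloudFront only uses certificates from us-east-1):

aws acm request-certificate --region us-east-1 \
    --domain-name sitename.xxx \
    --subject-alternative-names www.sitename.xxx \
    --validation-method DNS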

Set up a CloudFront Distribution for each bucket

This is fairly straightforward; simply create a distribution for the DOMAIN bucket and another for the www.DOMAIN bucket, using the certificate you created earlier, with each bucket’s S3 website endpoint as the origin. Do NOT set the “Default Root Object” field; the website endpoint already serves index.html for every directory, which that field cannot do. Do enable HTTP and HTTPS, and redirect HTTP to HTTPS.

Security Policy should be set to TLSv1.2_2018 unless you have a good reason not to.

Also, enable IPv6. LOOK AT YOU ALL IPv6 ENABLED!

Set up a role and account that can upload to S3 and invalidate the CloudFront cache

In the IAM management console, set up something like this:

[Screenshot: IAM role summary]

You then need to add a user for managing the site; it should not have console login access. Instead, generate an ACCESS_KEY_ID and SECRET_ACCESS_KEY for it.

You may need to tweak the permissions for the S3 side of things; I had some issues getting them right.
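
For what it’s worth, a minimal policy along these lines should cover both the Hugo deploy and the cache invalidation; treat it as a sketch, and scope the resources to your own bucket (and ideally your own distribution ARN):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::sitename.xxx"
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
            "Resource": "arn:aws:s3:::sitename.xxx/*"
        },
        {
            "Effect": "Allow",
            "Action": "cloudfront:CreateInvalidation",
            "Resource": "*"
        }
    ]
}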

Set up GitHub secrets for the AWS keys

  • AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY should be set to the key pair you just generated.
  • DISTRIBUTION_ID should be set to the ID of the CloudFront distribution for the naked domain; that’s the one serving content and thus the one that needs invalidating.
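
If you’ve lost track of the Distribution ID, you can look it up by its aliases:

aws cloudfront list-distributions \
    --query "DistributionList.Items[].{Id: Id, Aliases: Aliases.Items}"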

Set up GitHub to build your Hugo site and publish to S3

In your hugo config.toml you will need the following:

[deployment]
order = [".png$", ".jpg$", ".gif$", ".svg$"]

[[deployment.targets]]
URL = "s3://SITE_NAME?region=us-east-1"

[[deployment.matchers]]
# Cache static assets for 20 years.
pattern = "^.+\\.(js|css|png|jpg|gif|svg|ttf)$"
cacheControl = "max-age=630720000, no-transform, public"
gzip = true

[[deployment.matchers]]
pattern = "^.+\\.(html|xml|json)$"
gzip = true

where SITE_NAME is the domain.
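
Before wiring up GitHub, you can sanity-check the deployment config locally; hugo deploy has a dry-run flag that lists what it would upload without touching the bucket:

hugo
hugo deploy --dryRun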

You will need to add a .github/workflows/build.yml file in the repo:

name: Build and Deploy

on:
  push:
    branches: [master]

jobs:
  build:
    name: Build and Deploy
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Repository
        uses: actions/checkout@v1
      - name: Checkout submodules
        uses: textbook/git-checkout-submodule-action@master
        with:
          remote: true
      - name: Install Hugo
        run: |
          HUGO_DOWNLOAD=hugo_extended_${HUGO_VERSION}_Linux-64bit.tar.gz
          wget https://github.com/gohugoio/hugo/releases/download/v${HUGO_VERSION}/${HUGO_DOWNLOAD}
          tar xvzf ${HUGO_DOWNLOAD} hugo
          mv hugo $HOME/hugo
        env:
          HUGO_VERSION: 0.73.0
      - name: Hugo Build
        run: $HOME/hugo -v
      - name: Deploy to S3
        if: github.ref == 'refs/heads/master'
        run: $HOME/hugo -v deploy --maxDeletes -1
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      - name: Invalidate Cloudfront Cache
        uses: awact/cloudfront-action@master
        env:
          SOURCE_PATH: "/*"
          AWS_REGION: "us-east-1"
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          DISTRIBUTION_ID: ${{ secrets.DISTRIBUTION_ID }}

Test that updating the site works

So, at this point the DNS is not configured, but once the workflow has run you should be able to reach the site via its cloudfront.net address after each push to GitHub.

Push to master, then check the Actions tab on GitHub; you will see the Build and Deploy workflow running for that commit and can monitor its progress.

[Screenshot: GitHub Actions run]

Finally set up DNS

For each of the two names, add an ALIAS record pointing to the cloudfront.net address of its CloudFront distribution. Since you enabled IPv6, add both an A and an AAAA ALIAS record.
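
If you’d rather script it, an ALIAS record is an UPSERT with an AliasTarget; here’s a sketch for the naked domain’s A record (YOUR_ZONE_ID and the dXXXXXXXXXXXXX.cloudfront.net address are placeholders, while Z2FDTNDATAQYW2 is CloudFront’s fixed hosted zone ID). Repeat for AAAA and for www.sitename.xxx:

aws route53 change-resource-record-sets --hosted-zone-id YOUR_ZONE_ID --change-batch '{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "sitename.xxx",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "Z2FDTNDATAQYW2",
        "DNSName": "dXXXXXXXXXXXXX.cloudfront.net",
        "EvaluateTargetHealth": false
      }
    }
  }]
}'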

Conclusion

I’ve yet to analyse the costs, but I think you’re looking at

  • $0.50 per month for Route53. ALIAS queries for CloudFront addresses are free.
  • $0.085 per GB for CloudFront; it really depends how much traffic your site gets, but for a site like mine that gets very little traffic I should be looking at about $10-20 a month.
  • S3 will be small enough to fit within the free tier.

I think ultimately it’s going to be worth it, and not running any servers or anything is a plus. And the site is basically bulletproof.

Zoom Needs to Fix Stuff

So, this was a concerning development regarding Zoom that hit my feed yesterday.

From https://objective-see.com/blog/blog_0x56.html:

Today, we uncovered two (local) security issues affecting Zoom’s macOS application. Given Zoom’s privacy and security track record this should surprise absolutely zero people.

First, we illustrated how unprivileged attackers or malware may be able to exploit Zoom’s installer to gain root privileges.

Following this, due to an ‘exception’ entitlement, we showed how to inject a malicious library into Zoom’s trusted process context. This affords malware the ability to record all Zoom meetings, or, simply spawn Zoom in the background to access the mic and webcam at arbitrary times! 😱

The former is problematic as many enterprises (now) utilize Zoom for (likely) sensitive business meetings, while the latter is problematic as it affords malware the opportunity to surreptitiously access either the mic or the webcam, with no macOS alerts and/or prompts.

Given most companies don’t really have a choice right now but to run meetings remotely, Zoom needs to fix its shit. This is not acceptable.

Using Drone, Gitea and Docker with Hugo

So I recently went on a little adventure to dockerify my personal website. Among other things, I wanted to be able to run Hugo to generate the site and have it publish using a CI system.

My website used to be published via WordPress. I was never super happy with WordPress, mostly because I really felt it was overkill for a personal blog that didn’t see much traffic. And of course, it requires a lot of maintenance: security issues regularly crop up and you need to tend to it.

Hugo was attractive because, in theory, you can keep everything in a git repo and have a CI system build the HTML and push it to a server on each successful build.

So, I started going down a path that led me to the following:

  • RancherOS for running docker in a Virtual Machine

    I currently pay for a dedicated server on which I run KVM virtual machines. I intend to get rid of this at some point.

    In the meantime I wanted a place to run my public website, a Gitea instance for my git repos, and a Drone instance for CI.

    RancherOS seems ideal as it gives you a very minimal Linux environment for running Docker applications; the intent is that you just install RancherOS then manage it remotely via Portainer. Obviously, they want you to use Rancher, but I’m familiar with Portainer.

  • Portainer for managing my Docker installations.

    I have a server in my homelab which runs a lot of Docker containers for my home stuff; it already runs a Portainer container which I use to manage my Docker stacks. I use this interface to spin up stacks based on Docker Compose files.

    The above gives me a Poor Man’s Kubernetes: if you don’t need a fully fledged Kubernetes installation with the whole fault-tolerance thing, and all you want is to “run a bunch of containers”, this works well.

  • Gitea for my git repo management.

    Lightweight and definitely acceptable for my needs.

  • Drone for CI

    I use this to build my Hugo site into HTML files that are then shipped into the container hosting the site.
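
I haven’t documented the full pipeline yet, but a minimal .drone.yml along these lines is the idea (the klakegg/hugo image and the step layout are assumptions, not my exact setup):

kind: pipeline
type: docker
name: publish

steps:
  - name: build
    # Any image with the Hugo binary will do; this is a commonly used one.
    image: klakegg/hugo:ext-alpine
    commands:
      - hugo --minify
  # ...followed by a step that ships public/ into the container hosting the site.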

I’ve also got an nginx proxy container with a letsencrypt companion that provides me with a way to load applications up on my RancherOS instance and proxy them all behind a single IP but with dedicated certificates for each service.
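
The proxy setup is the well-known nginx-proxy plus letsencrypt-companion pattern; a minimal sketch of the Compose file (the images are the public jwilder/jrcs ones, and the hostname and email are placeholders):

version: "3"

services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - certs:/etc/nginx/certs
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - /var/run/docker.sock:/tmp/docker.sock:ro
    labels:
      # Lets the companion find the proxy container.
      - com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy

  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    volumes:
      - certs:/etc/nginx/certs
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
      - /var/run/docker.sock:/var/run/docker.sock:ro

  website:
    # The container serving the Hugo-generated HTML.
    image: nginx:alpine
    environment:
      - VIRTUAL_HOST=sitename.xxx
      - LETSENCRYPT_HOST=sitename.xxx
      - LETSENCRYPT_EMAIL=me@sitename.xxx

volumes:
  certs:
  vhost:
  html: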

All in all, it’s pretty cool. When I get a moment I’ll document how I did it.

edit: Holy shit, it loads so much faster than WordPress :-)

20 years

So, I was looking at the whois data for stupendous.net and I realised that I’ve owned this domain for nearly 20 years.

   Domain Name: STUPENDOUS.NET
   Registry Domain ID: 2627376_DOMAIN_NET-VRSN
   Registrar WHOIS Server: whois.dyndns.com
   Registrar URL: http://www.oracle.com
   Updated Date: 2017-08-06T16:04:49Z
   Creation Date: 1998-05-08T04:00:00Z

I thought that was pretty cool, though it made me sad to realise Oracle has subsumed yet another company and now owns dyndns.com, which I’ve used for at least 10 years.

On another note, I’ve added Facebook login for commenting on this site. I think pretty much everyone has a Facebook login these days.

Obviously, it’s going to send me your email address. I promise I won’t spam you. I won’t do anything with the email address other than use it as your primary key in the database.