I’ve been struggling to keep this blog up to date with the things I’m learning, and I end up posting small things that don’t show off anything I’m actually doing (the 6-sided die, for example). I think this is because I’ve been learning a lot, but not doing anything with it, so it’s difficult to put into any useful format. I can’t say “hey, here’s a cool thing I made”, or show step-by-step what I’ve been up to in any given week.
Even when I do useful things, it’s difficult to explain them while abstracting them from the products we make at my company. I don’t want to write about the place I work at specifically, since I’m afraid I wouldn’t be a good voice/advocate for them due to my lack of experience. Anything I say or misunderstand could reflect badly on them.
In the interest of keeping this up to date with my learner journey, I’m going to write a few more posts simply explaining concepts I’ve been learning about.
So what did I learn about Docker?
Good thing you asked!
This week, we had an impromptu mob-coding workshop on contract testing – I wish I had something useful to report about this, but none of us got very far. Hopefully I’ll be pairing with one of the developers next week to complete what we started, though. A side-effect of this was that I used Docker for the first time. I’ve been aware of Docker, and have had it installed on my Mac since I first started here 7 months ago, but I never really had a concept of why it was there, other than ‘dev stuff’.
Docker uses things called Containers. These are essentially boxes containing the minimum required things you need to run an application. A container does the same job as a Virtual Machine, except in a much lighter and more stackable way – it has only the essential code you need for running the app, leaving out heavy stuff like a full operating system (containers share the host machine’s kernel, which is the core brainy bit). Containers are shareable, and having all your dependencies defined in Docker instead of installed directly on your local machine means that everything still works when you run the app on another machine that might have different versions of software packages, or a different OS.
To launch a Docker container, you need to define what’s called an Image. Docker builds the image for you, based on instructions you lay out in a Dockerfile. The image is basically all the layers of instructions you give Docker in order to build the environment you want the application to run in – what OS and language you’re using, versioned dependencies such as nodejs, and a copy of your application files. These make up the Image Layers, which are read-only and can’t be messed around with once your container is running. There’s an extra layer put on top, called the Container Layer, in which you make changes such as creating, changing and deleting files. Sharing an Image with someone else allows them to go through the same process of building the exact same environment you’re running the application in, avoiding the classic “it can’t be broken, it’s working fine on my machine” scenario.
In our company, we use Docker for development and for implementing our CircleCI system for release processes and the like. All the steps in our CircleCI flow are individual Docker instances (an instance is just a single running occurrence of an image, ie it gets realised and used).
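To sketch what that looks like in practice – this is a hypothetical, heavily simplified .circleci/config.yml, with the image name and steps made up for illustration, not our actual setup:

```yaml
version: 2
jobs:
  build:
    docker:
      # the job runs inside a fresh container started from this image
      - image: circleci/node:8.11
    steps:
      - checkout        # pull the repo into the container
      - run: yarn install
      - run: yarn test
```

Each job gets its own throwaway container, so the build environment is identical every time.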
Writing a Dockerfile
As I said, the Image is built from a set of instructions on how to construct your app and the environment it runs nicely in. These instructions are written in the Dockerfile. The format for writing this is simple:
Instructions are keywords that define what happens to the arguments passed to them (e.g. RUN yarn dev, WORKDIR /usr/projects/appFolder, COPY myapp.js .) and are written in uppercase to distinguish them from the arguments.
You start with the most basic requirements, then work your way up through the parts that depend on what came before. Therefore, the first line specifies what Image you’re building from, ie the language package. You don’t want to go building that from scratch though, and luckily there’s a great community at https://hub.docker.com/ who have built base images you can reference in your Dockerfile to do this for you.
Docker Hub lists the official Node images you can use. They’re tagged for different versions, so you can choose the one most suited to you.
For example, if I wanted carbon-alpine, I’d start the Dockerfile with the line:

FROM node:carbon-alpine
On running, Docker will find the carbon-alpine file and run through all the instructions inside it. Looking at that file, you can see that it starts with FROM alpine:3.8, so Docker would find that file and run through the instructions inside it, then go back to carbon-alpine to finish up there, then on to the next line in the Dockerfile on your machine.
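To make that chaining concrete, the base image is itself defined by just another Dockerfile. A heavily simplified sketch of what a Node alpine image’s Dockerfile might contain (the real one is much longer and compiles Node from source):

```dockerfile
# the base image's own base: a minimal Alpine Linux filesystem
FROM alpine:3.8

# illustration only: put Node and Yarn into the image
RUN apk add --no-cache nodejs yarn
```

Your FROM line pulls in everything those instructions produced, so you start with Node already installed.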
After this, you can copy any files you need using COPY, set (and create) a working folder with WORKDIR, run stand-alone commands using RUN etc. In its most basic form, a Dockerfile looks like this:
FROM <base image>
COPY <application files> <destination>
CMD <command to start your application>
Of course, applications are generally more complex and have more dependencies than that, so you’d have instructions for yarn commands, exposing ports so you can view your application, and so on.
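Putting those pieces together, a slightly more realistic Dockerfile for a small Node app might look like this – the folder name, port and scripts are hypothetical, not from a real project:

```dockerfile
# base image with Node preinstalled
FROM node:carbon-alpine

# create and switch to the app's folder inside the container
WORKDIR /usr/projects/myapp

# copy the dependency manifests first, so this layer is cached between builds
COPY package.json yarn.lock ./
RUN yarn install

# copy the rest of the application files
COPY . .

# document the port the app listens on
EXPOSE 3000

# command run when the container starts (not at build time)
CMD ["yarn", "start"]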
I plan to write another post that builds on this one, where I’ll be talking about how to run Docker containers from the terminal, as well as docker-compose for running a bunch of small services together (each in its own container), what volumes are, and the .dockerignore file too. Again, I don’t have practical experience implementing these things yet, so this isn’t a set of instructions for anyone else – I’m simply logging my own progress. I also hope to get time to build my nodejs project this weekend, but I’m packing to move house so we’ll have to see.