Building Docker images on small cloud instances can result in failing builds. Let's look at how to work around this issue.
Many of my hobby projects run on tiny cloud instances, and one issue I face frequently is that building a Docker image directly on such a machine fails because the instance simply doesn't have enough resources (typically memory) to complete the build.
Luckily there's a cheap and easy way to work around this issue.
The easiest way to work around this problem is to build the image locally and then transfer it to the VPS. This is where Docker's save and load commands come in handy.
docker save <imagename> | gzip > <name>.tar.gz
allows us to save a locally built image into a .tar.gz file, which in turn can be loaded on another machine via:
docker load -i <path/to/<name>.tar.gz>
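Conveniently, docker load understands gzip-compressed archives directly, so there's no need to decompress the file first. A quick way to confirm the transfer worked (the image name my-app is a placeholder):

```shell
# load the compressed image archive on the target machine
docker load -i /tmp/my-app.tar.gz

# the loaded image should now appear in the local image list
docker images my-app
```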
Therefore, for simple cases where I don't want to use additional external services to build an application, I've set up a script which is called from a git-hook:
```shell
#!/bin/bash

# build locally
docker build -t sample-app .

# export to .tar.gz
docker save sample-app | gzip > sample-app.tar.gz

# transfer .tar.gz to remote machine
scp sample-app.tar.gz username@smallinstance:/tmp

# invoke load on remote machine
ssh username@smallinstance docker load -i /tmp/sample-app.tar.gz
```
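To wire this into git, the script can be invoked from a hook. A minimal sketch, assuming the script above is saved as deploy.sh in the repository root (the file name and hook choice are assumptions, not part of the original setup):

```shell
#!/bin/bash
# .git/hooks/pre-push -- runs automatically before every git push
set -e

# build, export, transfer and load the image via the deploy script
./deploy.sh
```

The hook file needs to be executable (chmod +x .git/hooks/pre-push); a post-receive hook on the server side would be an equally valid place, depending on the workflow.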
While the process described above is probably the simplest approach that requires no additional services or installations, there are alternatives that offer advantages at the cost of added complexity.
Using a docker registry:
Instead of transferring the image as a .tar.gz file, it's usually a better idea to use a Docker registry. For private repositories, we need to either pay a small monthly fee for this service on docker.com or self-host a Docker registry.
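With a registry, the save/scp/load steps collapse into a push on the build machine and a pull on the VPS. A sketch using Docker Hub (the username and image name are placeholders):

```shell
# tag the locally built image with the registry namespace
docker tag sample-app username/sample-app:latest

# push it from the build machine to the registry
docker push username/sample-app:latest

# on the VPS: pull the image instead of scp + docker load
docker pull username/sample-app:latest
```

Besides saving the manual transfer, this also gives us versioned tags and makes the image available to any machine with registry access.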
Using an external build server: There's a multitude of services available for building our applications, and the point where the advantages of such a service outweigh the extra complexity is reached very quickly. When we use docker.com to host our images, there's also the possibility to build the image on their servers.