Dockerized Apps on $2.50/mo Vultr IPv6 instances
Published: Jan 16, 2023 Updated: Mar 23, 2023
The VPS provider Vultr offers $2.50/mo servers. You can avoid all of the issues below by paying an extra $1/mo for an IPv4 address, but in pursuit of cheap hosting, I’ve troubleshot my way through using one of the $2.50/mo instances as-is. For a quick and dirty list of tasks, skip to the checklist at the end of this post.
Currently, the specs are as follows:
- 1 vCPU core. The most info I could find on this is “previous generation intel”
- 0.5 GB of RAM
- 10 GB of SSD storage, which is fairly limiting. For some debugging workloads, I had to grab additional spinning-disk storage at the rate of 40 GB for $1/mo
- 0.5 TB of outbound bandwidth, accrued hourly, with a $0.01/GB overage charge. I have no idea how reasonable this value is
With the following caveats:
- The instances are IPv6 only
- You may only have 2 of these cheap instances
- There are some image limitations, e.g. you can’t use Ubuntu as your starting image. I don’t really understand this one… who cares what source image I use?
- The instances are limited to certain regions. At the time of writing, they are only available in the New York (NJ) and Atlanta US regions
- …and more!
We’re going to set up a fairly complex docker-compose system: a Nakama instance linked to a containerized database and monitoring system, as defined in this docker-compose file. Maybe this system will be resource-starved, but time will tell.
Some caveats of the implementation are:
- We will have to modify the docker-compose.yml, though the change is a fixed-size addition.
- We have to disable our firewall. I don’t know enough about ufw and iptables to properly route queries to the internal container DNS server.
- I do not use a rootless implementation of podman.
Overall, is this worth it? Not at all. Anywho, here’s an affiliate link to Vultr.
Spin up a Debian 11 instance because it feels homely like Ubuntu, but gives you that server software vibe.
After logging into your new machine, you’ll find that you can’t reach half the internet. This includes github.com, so no git clone-ing of anything.
To resolve this we’ll need NAT64/DNS64.
A DNS64 server embeds IPv4 addresses into IPv6 addresses using one of the RFC 6052 schemes, chosen based on the desired prefix length.
These addresses route to a NAT64 server which forwards the request to the target
IPv4 server.
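For example, with the simplest scheme - the well-known /96 prefix 64:ff9b:: - the mapping is just the four octets hex-encoded into the low 32 bits (public NAT64 services like the one below usually advertise their own prefix instead):
# Embed 140.82.112.4 (one of github.com's A records) into 64:ff9b::/96;
# each octet becomes two hex digits in the last 32 bits
printf '64:ff9b::%02x%02x:%02x%02x\n' 140 82 112 4
# 64:ff9b::8c52:7004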
The ugly Cisco-esque diagram on the NAT64 wiki page is a decent summary.
The first server from nat64.xyz works for me. Add the server into resolvconf and update /etc/resolv.conf.
echo "nameserver 2602:fc23:18::7" >> /etc/resolvconf/resolv.conf.d/head
resolvconf -u
ping github.com and git clone should now work.
If all you want is a VPS to run single programs and such, this should be enough to make the server functional.
However, if you try to use docker now, you’ll find that containers are unable to communicate with the outside world. This is because, despite using DNS64, the DNS server still returns any existing A records (IPv4 addresses) alongside the synthesized AAAA records.
root@lounge3:~# nslookup github.com
Server: 2602:fc23:18::7
Address: 2602:fc23:18::7#53
Non-authoritative answer:
Name: github.com
Address: 140.82.112.4
Name: github.com
Address: 2602:fbf6:800:0:8c:5270:400:0
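You can watch the synthesis happen - and discover the resolver’s NAT64 prefix - by querying a name that is guaranteed (per RFC 7050) to only have A records:
# ipv4only.arpa has no real AAAA records, so any AAAA answer here
# was synthesized by the DNS64 server
nslookup -type=AAAA ipv4only.arpa 2602:fc23:18::7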
Docker networking by default creates an IPv4 network for internal networking. This comes with a default route into the host system for NAT. As a result, when attempting to connect to github.com, the container tries to route the traffic through the host’s IPv4 NAT, which expectedly fails because the host has no outbound IPv4 connectivity.
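You can see this v4-only default on any machine with docker installed; something like:
# Print the subnet of docker's default bridge network (typically 172.17.0.0/16)
docker network inspect bridge --format '{{(index .IPAM.Config 0).Subnet}}'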
I couldn’t find a way around this. No hackery with Linux’s name resolution à la resolvconf or gai.conf worked, and I couldn’t find a dns server with an option to filter IPv4 responses. I did find this interesting IETF presentation on the topic though. It seems they used a hacky patch on the existing AAAA record filtering in Bind to test this out. Which is great but… I’m sorry, I’m just not interested in patching and compiling a custom build of Bind.
There might be a solution through some iptables hackery, but I have an irrational aversion to iptables. I think you’d have to reimplement NAT64 within iptables, which sounds a bit worse than death. Maybe I’ll come around to the whole iptables thing like I did with docker though.
The solution I landed on was to completely remove IPv4 networking within containers. Unfortunately, docker doesn’t support IPv6-only networks. However, podman - a daemonless docker alternative - does.
For an assortment of reasons, we need podman v4+.
To do this we’ll need to update from Debian stable to testing.
Convert from Debian bullseye to bookworm by replacing all instances of “bullseye” with “bookworm” in /etc/apt/sources.list.
Then update the system:
root@lounge3:~# sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list
root@lounge3:~# apt update
root@lounge3:~# apt full-upgrade
root@lounge3:~# reboot
Install podman.
root@lounge3:~# apt install podman
root@lounge3:~# systemctl enable --now podman
root@lounge3:~# podman --version
podman version 4.3.1
For a fresh install of podman v4+, podman should use the netavark network backend. We can double check like so:
root@lounge3:~# podman system info | grep networkBackend
networkBackend: netavark
We need this backend as it has better IPv6 support.
If you ran an older podman version before, you can clear all containers and networks with podman system reset and reboot. Podman should then automatically use the netavark system.
The next step is temporary - I’ll update this post when it’s resolved - but we have to compile a more recent version of aardvark-dns rather than installing from the apt repos.
apt install cargo
git clone https://github.com/containers/aardvark-dns
cd aardvark-dns
cargo build --release
cp target/release/aardvark-dns /usr/lib/podman/
Container hostname resolution was only working with 1.4.1-dev. Debian used to ship a grossly outdated version of the package, but it’s since been updated to a more recent - though still outdated - version. See the debian aardvark-dns changelogs.
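A quick sanity check that the binary podman will pick up is the one you just built (aardvark-dns is a clap-based CLI, so --version should be available, though I haven’t checked every build):
/usr/lib/podman/aardvark-dns --version
# expect something like: aardvark-dns 1.4.1-dev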
With podman and aardvark-dns set up, we can move on to creating an IPv6-only container network:
podman network create podmanv6 --subnet fd00:1::/112
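If you want to double check what was created (exact field names vary between podman versions):
# Should report the fd00:1::/112 subnet with IPv6 enabled
podman network inspect podmanv6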
Note that we are using the fd00:1::/112 subnet, which is within the private IPv6 address space - similar to 10.0.0.0/8 in IPv4. Specifically, it’s a Unique Local Address (fc00::/7, RFC 4193) and not part of the deprecated site-local address space (fec0::/10). I don’t know the practical difference. From here we can do a quick test that networking is working:
root@lounge3:~# podman run --cap-add=NET_RAW --network podmanv6 --rm -it --name test debian
root@2c208c0e6e65:/# apt update
root@2c208c0e6e65:/# apt install iputils-ping -y
root@2c208c0e6e65:/# ping github.com
PING github.com(lb-140-82-112-3-iad.github.com (2602:fbf6:800:0:8c:5270:300:0)) 56 data bytes
64 bytes from lb-140-82-112-3-iad.github.com (2602:fbf6:800:0:8c:5270:300:0): icmp_seq=1 ttl=46 time=55.4 ms
Whilst the container is running, a name server lookup for the container hostname from another terminal should work. Podman creates an instance of aardvark-dns which serves the hostname under the dns.podman domain. I couldn’t figure out how to properly configure ufw to route dns queries, so I’m disabling it instead:
root@lounge3:~# ufw disable
root@lounge3:~# ps aux | grep aardvark
root 11377 0.0 0.0 276272 468 ? Ssl 02:32 0:00 /usr/lib/podman/aardvark-dns --config /run/containers/networks/aardvark-dns -p 53 run
root@lounge3:~# nslookup test.dns.podman fd00:1::1
Server: fd00:1::1
Address: fd00:1::1#53
Non-authoritative answer:
Name: test.dns.podman
Address: fd00:1::5
Next we’ll install docker-compose. We need the --no-install-recommends flag here; otherwise, apt will install docker.
root@lounge3:~# apt install docker-compose --no-install-recommends
docker-compose communicates with the docker daemon through the socket at /var/run/docker.sock. For compatibility with docker, podman creates a similar socket at /var/run/podman/podman.sock. We’ll just symlink these in a systemd unit so it occurs at startup:
/etc/systemd/system/docker-podman-sock.service
[Unit]
Description=Create symlink for docker.sock
Requires=podman.socket
After=podman.socket
[Service]
ExecStart=/bin/ln -sf /var/run/podman/podman.sock /var/run/docker.sock
Type=oneshot
RemainAfterExit=yes
User=root
[Install]
WantedBy=multi-user.target
Enable it with systemctl enable --now docker-podman-sock.service. If you aren’t a fan of 11 lines to run a single command at boot, you can instead add export DOCKER_HOST=unix:///var/run/podman/podman.sock to your .bashrc equivalent.
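For bash, that’s just:
echo 'export DOCKER_HOST=unix:///var/run/podman/podman.sock' >> ~/.bashrc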
docker-compose should be functional now. Install Nakama:
git clone https://github.com/heroiclabs/nakama
cd nakama
We have to make a slight modification to the docker-compose.yml to make it use our IPv6 network. Append the following to the end of the docker-compose.yml:
networks:
default:
external: true
name: podmanv6
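Before starting anything, you can confirm the override took; docker-compose config prints the fully resolved file:
# The resolved config should show our external podmanv6 network
docker-compose config | grep -A 3 '^networks'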
You can run docker-compose up now, although the latest yml throws errors. For my version, docker-compose version 1.29.2, build unknown, I had to remove the following lines:
links:
- 'cockroachdb:db'
From here everything should work. Because we are using recent versions of podman, port forwarding should just work without mucking about with iptables. In this case, the public address of your server should be hosting a Nakama web interface on port 7351.
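A quick check from your own machine - substitute your instance’s public IPv6 address (the -g flag stops curl from treating the brackets as a glob):
# 2001:db8::1 is a placeholder documentation address
curl -g 'http://[2001:db8::1]:7351/'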
Checklist
#!/bin/sh
# Use NAT64/DNS64
echo "nameserver 2602:fc23:18::7" >> /etc/resolvconf/resolv.conf.d/head
resolvconf -u
# Use Debian Testing
sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list
apt update
apt full-upgrade -y
# Install podman
apt install -y podman
systemctl enable --now podman
# Compile a recent version of aardvark-dns
# TODO remove this once the apt package catches up
apt install -y cargo
git clone https://github.com/containers/aardvark-dns
cd aardvark-dns
cargo build --release
cp target/release/aardvark-dns /usr/lib/podman
# Install docker-compose
apt install -y docker-compose --no-install-recommends
# Disable firewall
ufw disable
# Link /var/run/docker.sock to /var/run/podman/podman.sock
ln -sf /var/run/podman/podman.sock /var/run/docker.sock
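# NB: /var/run is tmpfs, so this symlink won't survive a reboot;
# use the systemd unit (or DOCKER_HOST) approach described above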
Bonus tasks
Set PasswordAuthentication no in /etc/ssh/sshd_config and restart sshd.
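On a stock Debian sshd_config, where the option ships commented out, something like this should do it:
sed -i -E 's/^#?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
systemctl restart ssh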
Disable syslog because 10 GB is so little space.
systemctl disable --now syslog