On cloud hosting it is usually possible to connect machines with a VPN, so that it looks as if all of them were in one physical subnet. Amazon and Google call this Virtual Private Cloud (VPC), and Microsoft calls it Virtual Network (VNet). Google also tells us that their implementation is based on Andromeda. I’m currently trying to replicate this setup in a VM playground. The idea is to do a fun hobby project and to learn more about networking and Linux tooling.
Since I cannot start and stop physical servers automatically, I limited myself to VMs. The goal is to be able to automatically boot VMs on different machines and connect them to different VPNs, possibly spanning multiple machines. For the user of the VM this should all be transparent: no VPN client has to be installed inside the VM.
My solution is quite similar to the one from my last post about public IP addresses inside QEMU VMs. The VMs will again use a TAP interface. In order for this to work with a VPN we need a Layer-2-capable VPN client, e.g. tinc.
tinc is actually quite simple to set up; it only needs a specific folder structure and a few files:
```
/etc/tinc/
|-- netname/
    |-- hosts/
    |   |-- myhost
    |   `-- remotehost
    |-- rsa_key.priv
    |-- tinc.conf
    |-- tinc-down
    `-- tinc-up
```
netname is a name for the VPN that you want to use. It can be chosen according to your liking, and it’s possible to have more than one VPN on the same host (using different names and ports). When starting tincd, this name has to be passed via tincd -n netname.
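As a sketch of how two independent VPNs can coexist on one host (the names vpn-a and vpn-b are made up for illustration):

```shell
# Each VPN gets its own directory under /etc/tinc/ and must listen
# on its own port (set via `Port` in tinc.conf; the default is 655).
tincd -n vpn-a   # reads /etc/tinc/vpn-a/tinc.conf
tincd -n vpn-b   # reads /etc/tinc/vpn-b/tinc.conf, e.g. with Port = 656
```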
In tinc.conf we can define the basic configuration of our tinc VPN, e.g. that we want to use Layer-2 mode (switch) and the name of the created interface. This is an example of a tinc.conf:

```
# It's possible to let tinc read the hostname with $HOST
# or you can specify another name (must be unique within
# one VPN)
Name = $HOST

# Layer-2 VPN
Mode = switch

# You can specify a custom name for the created interface
Interface = tinc-vpn

ConnectTo = remotehost
```
tinc-up and tinc-down are scripts that are executed when starting or stopping tinc. Most commonly they are used to set up the interfaces correctly, e.g. to assign an IP address.
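For illustration, a minimal tinc-up script; tinc exports the name of the created interface in the $INTERFACE environment variable, and the address 10.0.0.1/24 is just a placeholder:

```shell
#!/bin/sh
# /etc/tinc/netname/tinc-up -- tinc sets $INTERFACE to the name of
# the created interface (tinc-vpn in our config above)
ip link set "$INTERFACE" up
ip addr add 10.0.0.1/24 dev "$INTERFACE"
```

The corresponding tinc-down would run `ip link set "$INTERFACE" down`. Note that in the bridged setup described later, the VPN interface itself gets no IP address, so tinc-up only needs to bring the interface up.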
hosts/ contains configuration files for the local machine as well as for the remote machines from the ConnectTo directives. Each of these files contains the public key of the respective machine. If we use a host for ConnectTo, its configuration file must also contain the host’s IP address in an Address entry. Subnet entries are not needed for a Layer-2 VPN; the routing table will be constructed dynamically.
```
# Address is needed if we use a host in `ConnectTo`, it must be the
# real public IP of the host
# Address =

-----BEGIN RSA PUBLIC KEY-----
LoremIpsum
-----END RSA PUBLIC KEY-----
```
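Both sides need each other’s host file, so after key generation the files have to be exchanged. A sketch, assuming SSH access and the names myhost/remotehost from the folder structure above:

```shell
# Copy our host file (containing our public key) to the peer ...
scp /etc/tinc/netname/hosts/myhost \
    remotehost:/etc/tinc/netname/hosts/myhost

# ... and fetch the peer's host file in return
scp remotehost:/etc/tinc/netname/hosts/remotehost \
    /etc/tinc/netname/hosts/remotehost
```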
The public and private keys can be generated with tincd -n netname -K. This command runs in interactive mode if stdin is available, otherwise it runs in non-interactive mode. For an automated install we can force non-interactive mode by reading stdin from /dev/null:

```
tincd -n netname -K </dev/null
```
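The whole setup can then be scripted. A sketch of a non-interactive provisioning script (the config values are the ones from above with remotehost as a placeholder; for a real install set TINC_ROOT=/etc/tinc, here it defaults to a temporary directory so the sketch can be tried without root):

```shell
#!/bin/sh
# Where to put the config; defaults to a temp dir for experimenting.
TINC_ROOT="${TINC_ROOT:-$(mktemp -d)}"
NETNAME=netname
NETDIR="$TINC_ROOT/$NETNAME"

# Create the folder structure tinc expects
mkdir -p "$NETDIR/hosts"

# Write the basic configuration (quoted heredoc keeps $HOST literal,
# so tinc itself reads the hostname at startup)
cat > "$NETDIR/tinc.conf" <<'EOF'
Name = $HOST
Mode = switch
Interface = tinc-vpn
ConnectTo = remotehost
EOF

# Generate the key pair; </dev/null forces non-interactive mode.
# (Guarded so the sketch still runs where tinc is not installed.)
if command -v tincd >/dev/null 2>&1; then
    tincd -c "$NETDIR" -K </dev/null
fi
```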
After we have successfully set up tinc on multiple machines we need a software bridge. The setup is analogous to the bridging setup from the previous article.
```
              o br0-priv
             / \
            /   \
tinc-vpn   o     o tap0-priv
                 |
- - - - - - - - - - - - - - -  Host / VM boundary
                 |
                 o ens1 (inside VM)
```
The commands are also quite analogous, except that we do not need to assign an IP to the VPN interface. It’s enough if the VMs have IP addresses (at least this worked for me when I connected VMs on my PC to a remote root server, both inside a tinc VPN).
```
tincd -n netname

ip link add br0-priv type bridge
ip link set br0-priv up

ip link set tinc-vpn up
ip link set tinc-vpn master br0-priv
ip addr flush dev tinc-vpn  # delete IP in case one was assigned

ip tuntap add dev tap0-priv mode tap user $YOUR_USER
ip link set dev tap0-priv up
ip link set tap0-priv master br0-priv
```
tap0-priv can now be connected to the VM as a secondary network interface via the -netdev and -device arguments in QEMU.
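For example (a sketch; the disk image, memory size, and MAC address are placeholders):

```shell
# script=no/downscript=no, because we already bridged tap0-priv manually
qemu-system-x86_64 \
  -m 1024 -enable-kvm \
  -netdev tap,id=vpn0,ifname=tap0-priv,script=no,downscript=no \
  -device virtio-net-pci,netdev=vpn0,mac=52:54:00:12:34:56 \
  disk.img
```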