On cloud hosting it is usually possible to connect machines with a VPN, so that it looks as if all of them were in a real physical subnet. At Amazon and Google this is called Virtual Private Cloud (VPC) and Microsoft calls it Virtual Network (VNet). Google also tells us that in their case it is implemented using Andromeda. I’m currently trying to replicate this idea on a VM playground. The goal is a fun hobby project and learning more about networking and Linux tooling.

Since I cannot start and stop physical servers automatically, I limited myself to VMs. The goal is to be able to automatically boot VMs on different machines and connect them to different VPNs, possibly spanning multiple machines. For the user of the VM this should all be transparent, i.e. no VPN client has to be installed inside the VM.

My solution is quite similar to my last post about public IP addresses inside QEMU VMs. The VMs will again use a TAP interface. In order for this to work with a VPN we need a Layer-2-capable VPN client, e.g. tinc.

## tinc Setup

tinc is actually quite simple to set up; it only needs a specific folder structure and some files:

```
/etc/tinc/
`-- netname/
    |-- hosts/
    |   |-- myhost
    |   `-- remotehost
    |-- rsa_key.priv
    |-- tinc.conf
    |-- tinc-down
    `-- tinc-up
```


netname is the name of the VPN. It can be chosen according to your liking and it’s possible to have more than one VPN on the same host (using different names and ports). When starting tincd, this name has to be passed via tincd -n netname.
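
For example, with the netname from above (note that these are just the two most common ways to run tincd; the -D flag keeps it in the foreground and -d3 raises the debug level, which is handy while testing):

```shell
# Run tincd for the VPN "netname" in the foreground with some debug output:
tincd -n netname -D -d3

# Or, on systemd-based distributions that ship tinc's templated unit,
# start it as a service (the instance name is the netname):
systemctl enable --now tinc@netname
```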

In tinc.conf we can define the basic configuration of our tinc VPN, e.g. that we want to use Layer-2 mode (switch) and the name of the created interface. This is an example of a tinc.conf:

```
# It's possible to let tinc read the hostname with $HOST
# or you can specify another name (must be unique within
# one VPN)
Name = $HOST
# Layer-2 VPN
Mode = switch
# You can specify a custom name for the created interface
Interface = tinc-vpn

ConnectTo = remotehost
```


tinc-up and tinc-down are scripts that are executed when starting or stopping tinc. Most commonly they are used to set up the interfaces correctly, e.g. assigning an IP address.
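
As a sketch, a minimal tinc-up for this setup could look like the following. tincd exports the interface name in the $INTERFACE environment variable; the address 10.0.0.1/24 is just an example value (and, as we will see later, optional in switch mode when only the VMs need addresses):

```shell
#!/bin/sh
# /etc/tinc/netname/tinc-up -- run by tincd when the VPN starts.
# $INTERFACE is set by tincd to the name from tinc.conf (tinc-vpn here).
ip link set "$INTERFACE" up
# Optional: give the host itself an address inside the VPN (example value)
ip addr add 10.0.0.1/24 dev "$INTERFACE"
```

tinc-down is typically the mirror image (ip link set "$INTERFACE" down). Both scripts must be executable.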

The folder hosts/ contains configuration files for the local machine as well as for the remote machines referenced by ConnectTo directives. Each of these files contains the public key of the respective machine. If a host is used in a ConnectTo in tinc.conf, its configuration file must also contain an Address entry with the host's IP address. Subnet entries are not needed for a Layer-2 VPN; the forwarding table is constructed dynamically, like in a switch.

```
# Address is needed if we use a host in ConnectTo, it must be the
# real public IP of the host (placeholder value here)
Address = 203.0.113.1

-----BEGIN RSA PUBLIC KEY-----
LoremIpsum
-----END RSA PUBLIC KEY-----
```


The public and private keys can be generated with tincd -n netname -K. This command runs in interactive mode if stdin is available, otherwise it runs non-interactively. For an automated install we can force non-interactive mode by redirecting stdin from /dev/null:

```
tincd -n netname -K </dev/null
```
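
Putting the pieces together, an automated node setup could look roughly like this (a sketch; the netname, the ConnectTo target and the config values are the examples used above):

```shell
#!/bin/sh
set -eu
NETNAME=netname
NODENAME=$(hostname)

# Create the folder structure from above
mkdir -p "/etc/tinc/$NETNAME/hosts"

# Write the basic Layer-2 configuration
cat > "/etc/tinc/$NETNAME/tinc.conf" <<EOF
Name = $NODENAME
Mode = switch
Interface = tinc-vpn
ConnectTo = remotehost
EOF

# Non-interactive key generation (stdin redirected from /dev/null)
tincd -n "$NETNAME" -K </dev/null
```

The generated public key ends up in /etc/tinc/$NETNAME/hosts/$NODENAME and still has to be distributed to the other nodes of the VPN.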


## Bridging

After we have successfully set up tinc on multiple machines, we need a software bridge. The setup is analogous to the bridging setup from the previous article.

```
            o br0-priv
           / \
          /   \
tinc-vpn o     o tap0-priv
               |
- - - - - - - -|- - - - - - - Host / VM boundary
               |
               o ens1 (inside VM)
```


The commands are also quite analogous, except that we do not need to assign an IP address to the VPN interface. It’s enough if the VMs themselves have IP addresses (at least this worked for me when I connected VMs on my PC to a remote root server, both joined to the same tinc VPN).
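
With the interface names from the diagram, the bridge setup could look like this (a sketch using iproute2; it assumes tincd is already running, so that tinc-vpn exists):

```shell
# Create the bridge and the TAP interface for the VM
ip link add br0-priv type bridge
ip tuntap add dev tap0-priv mode tap

# Enslave both the tinc interface and the TAP interface to the bridge
ip link set tinc-vpn master br0-priv
ip link set tap0-priv master br0-priv

# Bring everything up; note that no IP address is assigned to
# tinc-vpn or the bridge on the host
ip link set br0-priv up
ip link set tap0-priv up
ip link set tinc-vpn up
```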

```
tincd -n netname
```

The interface tap0-priv can now be attached to the VM as a secondary interface with the -device and -netdev arguments of QEMU.
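
For example (a sketch; the disk, memory and display arguments of the VM are omitted, and the MAC address is an arbitrary example):

```shell
# Attach tap0-priv as a secondary NIC; script=no/downscript=no prevents
# QEMU from running its own ifup/ifdown scripts, since we bridge manually.
qemu-system-x86_64 \
  -netdev tap,id=privnet,ifname=tap0-priv,script=no,downscript=no \
  -device virtio-net-pci,netdev=privnet,mac=52:54:00:12:34:56
```

Inside the VM, this interface then shows up as a regular NIC (ens1 in the diagram above) and can be configured with an address in the private subnet.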