VyOS as Kubernetes Pod
I spotted that Google Doc in my Twitter feed back in December, and it piqued my interest: what if we were to run VyOS as a Pod, i.e. basically run some network functionality (an IPsec tunnel, etc.) inside the cluster?
It turns out this works quite well, especially if you pair it with Multus, the framework for adding supplemental interfaces to pods via CNI plugins.
Multus
First, we deploy multus on our k8s homelab cluster:
kubectl apply -f https://raw.githubusercontent.com/k8snetworkplumbingwg/multus-cni/master/deployments/multus-daemonset-thick-plugin.yml
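Before going further, it is worth checking that the Multus DaemonSet pods came up. The exact pod names depend on the manifest version, but something along these lines should show them in kube-system:

kubectl get pods -n kube-system | grep multus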
We then create a Multus NetworkAttachmentDefinition custom resource:
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: vyos
namespace: default
spec:
config: '{
"cniVersion": "0.3.0",
"plugins": [{
"type": "macvlan",
"master": "eth0",
"mode": "bridge"
}]
}'
Once the proper annotation is added to a pod, this gives it a second interface bridged (via macvlan) onto the node's main eth0 interface.
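For reference, this is the form of the annotation used later in the Deployment; the interface field renames the attachment from the Multus default net1 to eth1:

metadata:
  annotations:
    k8s.v1.cni.cncf.io/networks: '[{"name":"vyos","interface":"eth1"}]'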
VyOS configuration file
We can store the VyOS configuration in a ConfigMap object:
apiVersion: v1
kind: ConfigMap
metadata:
  name: vyos
  namespace: default
data:
  config.boot: |
    interfaces {
        ethernet eth1 {
            address dhcp
        }
    }
    service {
        ssh {
            disable-host-validation
        }
    }
    system {
        config-management {
            commit-revisions 100
        }
        host-name vyosk8s
        login {
            user vyos {
                authentication {
                    encrypted-password $6$QxPS.uk6mfo$9QBSo8u1FkH16gMyAVhus6fU3LOzvLR9Z9.82m3tiHFAxTtIkhaZSWssSgzt4v4dGAL8rhVQxTg0oAG9/q11h/
                    plaintext-password ""
                }
                level admin
            }
        }
        name-server 10.96.0.10
        time-zone UTC
    }
    /* Warning: Do not remove the following line. */
    /* === vyatta-config-version: "wanloadbalance@3:ntp@1:webgui@1:dhcp-server@5:pptp@1:webproxy@2:quagga@6:qos@1:firewall@5:ssh@1:dhcp-relay@2:config-management@1:l2tp@1:dns-forwarding@1:ipsec@5:cluster@1:system@9:conntrack-sync@1:conntrack@1:snmp@1:mdns@1:nat@4:webproxy@1:broadcast-relay@1:vrrp@2:zone-policy@1" === */
    /* Release version: 1.3.0 */
  .vyatta_config: |
Note that the stored password hash is the default vyos one. The rest matches the notes from the documentation in the Google document, with the addition of a DHCP client on the eth1 interface, which will be our Multus one.
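If you want to swap in your own password, the encrypted-password value is a standard SHA-512 crypt hash; as a sketch, assuming a recent OpenSSL (1.1.1 or later) on your workstation, you can generate one like this and paste it into the ConfigMap:

# generate a SHA-512 crypt hash suitable for the encrypted-password field
openssl passwd -6 'my-new-password'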
VyOS pod
Now let's write a minimal Deployment manifest for the VyOS pod. I came up with this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vyos
  namespace: default
  labels:
    app: vyos
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vyos
  template:
    metadata:
      name: vyos
      labels:
        app: vyos
      annotations:
        k8s.v1.cni.cncf.io/networks: '[{"name":"vyos","interface":"eth1"}]'
    spec:
      initContainers:
      - name: fix-all-the-things
        image: busybox
        command:
        - /bin/sh
        - -c
        - |
          mount -o remount,rw /sys
        securityContext:
          privileged: true
      containers:
      - name: vyos
        image: vyos/image:1.3
        ports:
        - containerPort: 22
        command:
        - /sbin/init
        volumeMounts:
        - name: config-boot
          mountPath: /config/config.boot
          subPath: config.boot
          readOnly: false
        - name: empty-file
          mountPath: /config/.vyatta_config
          subPath: .vyatta_config
          readOnly: false
        - name: host-modules
          mountPath: /lib/modules
          readOnly: true
        securityContext:
          privileged: true
      volumes:
      - name: host-modules
        hostPath:
          path: /lib/modules
      - name: config-boot
        configMap:
          name: vyos
          items:
          - key: config.boot
            path: config.boot
      - name: empty-file
        configMap:
          name: vyos
          items:
          - key: .vyatta_config
            path: .vyatta_config
Several interesting things to note here:
- Multus interfaces are normally named net1, net2, and so on; we override this because VyOS expects eth* interface names, so our Multus interface runs as eth1 (we verify this below once the pod is running).
- Host network mode could be interesting (to run VyOS on public nodes, maybe?) but would require the kernel to be booted with legacy interface names. This might be hard to do without control over the boot process.
- I was hitting a read-only /sys mount even with privileged mode. This might be related to how the mount namespaces are set up, so the remount in the init container is actually necessary in that particular case.
Nevertheless, after a kubectl apply of those two YAMLs, we have a nice VyOS container up and running:
❯ kubectl get pods | grep vyos
vyos-58579d958-nr8lt 1/1 Running 0 3m10s
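As a quick sanity check (not part of the original output), we can confirm that Multus actually wired up eth1, either via the network-status annotation that Multus adds to the pod or by listing the interface inside the container:

# the annotations include a k8s.v1.cni.cncf.io/network-status entry describing eth1
kubectl describe pod -l app=vyos | grep -A 3 network-status
# or check the interface directly inside the container
kubectl exec deploy/vyos -- ip addr show eth1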
We can now log in using kubectl stdin access, and then pivot to the vyos user:
❯ kubectl exec -it vyos-58579d958-nr8lt -- /bin/bash
Defaulted container "vyos" out of: vyos, fix-all-the-things (init)
root@vyosk8s:/# su vyos
vyos@vyosk8s:/$ show version
Version: VyOS 1.3.0-epa3
Release train: equuleus
Built by: Sentrium S.L.
Built on: Sun 31 Oct 2021 17:38 UTC
Build UUID: 383e45ad-b32a-4359-8183-9baacc8e69d9
Build commit ID: bb511522cc3bb2-dirty
Architecture: x86_64
Boot via: installed image
System type: bare metal
Neat: we can run configure on the VyOS instance and use commit, but the save command will not work because config.boot is backed by the ConfigMap mounted in the pod; a PVC could do the trick, but this is totally fine for my lab CI/CD usage.
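If persistence mattered, a rough, untested sketch would be to mount a PVC over /config instead of the ConfigMap (the claim name here is made up, it assumes a default StorageClass, and you would still need to seed the initial config.boot into the volume, for example via an init container):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vyos-config
  namespace: default
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi

In the pod spec, the config-boot volume would then be replaced by a persistentVolumeClaim volume referencing vyos-config and mounted at /config, so that save has somewhere durable to write.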
As an example, let's enable the LLDP daemon on our Multus interface:
vyos@vyosk8s:/$ configure
[edit]
vyos@vyosk8s# set service lldp interface eth1
[edit]
vyos@vyosk8s# commit
[edit]
vyos@vyosk8s# exit
Warning: configuration changes have not been saved.
We can check that the interface is sending LLDP frames:
vyos@vyosk8s:~$ sudo tcpdump -i eth1 not port 22 | grep -i lldp
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes
14:34:44.685761 LLDP, length 227: vyosk8s
These LLDP advertisements are also picked up by my UniFi network controller at home.
We can also access the instance via SSH:
❯ ssh IP -l vyos
Welcome to VyOS
vyos@IP's password:
Linux vyosk8s 5.11.0-43-generic #47~20.04.2-Ubuntu SMP Mon Dec 13 11:06:56 UTC 2021 x86_64
Welcome to VyOS!
Check out project news at https://blog.vyos.io
and feel free to report bugs at https://phabricator.vyos.net
Visit https://support.vyos.io to create a support ticket.
You can change this banner using "set system login banner post-login" command.
VyOS is a free software distribution that includes multiple components,
you can check individual component licenses under /usr/share/doc/*/copyright
Use of this pre-built image is governed by the EULA you can find at
/usr/share/vyos/EULA
Last login: Sun Jan 9 14:34:48 2022 from IP
vyos@vyosk8s:~$
Great! We can build more complex things from here, like testing VyOS configurations, announcing BGP routes in a declarative way, interconnecting k8s services with VPN-based peers, etc.
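As a taste of the declarative BGP idea, here is a minimal sketch of what could be baked into the config.boot ConfigMap or applied in configure mode; the ASNs, neighbor address and announced prefix are placeholders, and this assumes VyOS 1.3 syntax:

set protocols bgp 65001 neighbor 192.0.2.1 remote-as '65000'
set protocols bgp 65001 neighbor 192.0.2.1 address-family ipv4-unicast
set protocols bgp 65001 address-family ipv4-unicast network '10.244.0.0/16'
commit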
Public cloud and future ideas
Could we leverage this in a public cloud environment? Well, some early thoughts:
- you would probably need a cloud that supports multiple network interfaces per instance, like AWS ENIs.
- if there is only one interface, it would mean running in host networking mode, and the host would need ethX interface names.
- host networking mode probably means a high risk of messing up the node if the configuration is broken.
Nevertheless, I find the whole exercise interesting: VyOS could be used in an operator pattern, with the following approach:
- a user defines some high-level CRD formats, like “BGPpeers” or “VPNEndpoint” (a hypothetical example follows this list).
- a set of network templates with VyOS config covering the firewall, routing and service entries matching those CRDs.
- the operator generates a minimal VyOS config from the templates and the CRDs, injects the Deployment into the cluster, and tracks the deployment lifecycle.
- optionally, it tests/analyses the configuration with Batfish.
- some interfacing at the BGP level with the underlying CNI driver (like Calico) could lead to interesting automations with the rest of the k8s cluster.
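Purely as an illustration of that first point, a hypothetical BGPPeer custom resource could look like this; nothing here exists today, and the API group, kind and fields are all made up:

apiVersion: vyos.example.com/v1alpha1   # hypothetical API group, does not exist
kind: BGPPeer
metadata:
  name: upstream-router
spec:
  localAS: 65001          # made-up fields: the operator would render these
  peerAS: 65000           # into a VyOS config template and roll out the pod
  peerAddress: 192.0.2.1
  announce:
    - 10.244.0.0/16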
So yes, Kubernetes is still exciting (reference to that tweet), at least for network engineers.
Lots of things to build.