Timotej Lazar
e7f9132571
Firewall policy is set in NetBox as cluster services¹. For Proxmox we have to manually allow communication between nodes when using L3, since the default management ipset does not get populated correctly. We also need to open VTEP communication between nodes, which the default rules don't cover. We allow all inter-node traffic, since passwordless SSH must be permitted between nodes anyway. This also adds some helper filters that are spectacularly annoying to implement purely in templates.

¹ There is actually no such thing as a cluster service (yet?), so instead we create a fake VM for the cluster, define services for it, and then add the same services to a custom field on the cluster. The alternative would be to tie the services to a specific node, but that could be problematic if that node is replaced.
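The resulting policy is conceptually similar to the cluster firewall file sketched below. This is only an illustration: the ipset name and addresses are invented, and the real rules are rendered from NetBox data by the templates in this repository.

```
# /etc/pve/firewall/cluster.fw — illustrative sketch, not the rendered output
[IPSET nodes] # all cluster node addresses, maintained from NetBox
10.0.0.1
10.0.0.2

[RULES]
# allow all inter-node traffic: SSH, corosync, VXLAN/VTEP, migration, ...
IN ACCEPT -source +nodes
```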
Repository layout: `filter_plugins/`, `roles/`, `templates/`, `ansible.cfg`, `inventory.yml`, `README.md`, `setup.yml`
These Ansible roles set up servers running various Linux distributions to participate in BGP routing. Device and IP address data are pulled from NetBox. A separate VRF `mgmt` is configured for an L2 management interface.
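On each node the outcome resembles the following routing-daemon configuration. This is a hedged sketch only: it assumes FRR as the daemon, and the ASN, router ID, and interface names are invented; the actual values are rendered from NetBox.

```
router bgp 65000
 bgp router-id 192.0.2.1                      ! taken from the lo address
 neighbor lan0 interface remote-as external   ! BGP unnumbered on a lan* link
 address-family ipv4 unicast
  network 192.0.2.1/32                        ! announce the lo address
```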
## Setup
Each server should have the following information recorded in NetBox:

- network interfaces `mgmt*`: used for management (Ansible) access; must define MAC and IP address
- network interfaces `lan*`: used for BGP routing; must define MAC address
- network interface `lo`: must define the IP address to announce over BGP, which also serves as the router ID

For the management IP address, another address in the same prefix should be defined with the tag `gateway`.
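As a hedged illustration of how that `gateway` tag can be consumed, the snippet below filters a NetBox-style API response for the tagged address. The file and its contents are made up for the example; the field names follow NetBox's JSON schema.

```sh
# Illustrative sketch: pick out the address tagged "gateway" from a NetBox
# API response. The data below is invented; only the field names are real.
cat > addresses.json <<'EOF'
{"results": [
  {"address": "10.0.0.1/24", "tags": [{"slug": "gateway"}]},
  {"address": "10.0.0.5/24", "tags": []}
]}
EOF

# keep only addresses whose tag list contains the "gateway" slug
jq -r '.results[] | select(any(.tags[]; .slug == "gateway")) | .address' addresses.json
# prints 10.0.0.1/24
```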
## Run
Create a read-only token in NetBox. Define the required variables:

```sh
# one for nb_inventory and one for nb_lookup
export NETBOX_API_KEY=<token>
export NETBOX_TOKEN="${NETBOX_API_KEY}"
# one for both
export NETBOX_API=<netbox API endpoint>
```
Run one-off tasks with (add `--key-file` or other options as necessary):

```sh
ansible -i inventory.yml -m ping 'server-*'
```
Run a playbook with:

```sh
ansible-playbook setup.yml -i inventory.yml -l 'server-*'
```