Version | Date | Status | Notes
--- | --- | --- | ---
1.0 | 14 Dec 2021 | First Draft | Needs Checking

According to its website, NetBox is an infrastructure resource modeling (IRM) application designed to empower network automation. Initially conceived by the network engineering team at DigitalOcean, NetBox was developed specifically to address the needs of network and infrastructure engineers. NetBox is made available as open source under the Apache 2 license. It encompasses the following aspects of network management:
- IP address management (IPAM) - IP networks and addresses, VRFs, and VLANs
- Equipment racks - Organized by group and site
- Devices - Types of devices and where they are installed
- Connections - Network, console, and power connections among devices
- Virtualization - Virtual machines and clusters
- Data circuits - Long-haul communications circuits and providers
What NetBox Is Not
While NetBox strives to cover many areas of network management, the scope of its feature set is necessarily limited. This ensures that development focuses on core functionality and that scope creep is reasonably contained. To that end, it might help to provide some examples of functionality that NetBox does not provide:
- Network monitoring
- DNS server
- RADIUS server
- Configuration management
- Facilities management
At its heart, NetBox is a CMDB and IPAM tool. NetBox was started by Jeremy Stretch while he was building networks at DigitalOcean.
What is this post about?
I've created this post to go over the installation of Netbox, its initial setup using Ansible, and importing devices with Ansible.
There is a fully functional Netbox Ansible module available, and while its documentation is not great if you are new to Ansible, it is usable.
I will create a follow-up post covering IPAM, IP addresses, port assignment and other low-level features of Netbox; they are not covered in this post.
Disclaimer
The internet being what it is, not everyone is happy, so I will state (in true dev style) that this works for me. If it doesn't work for you, or you feel using the API or other methods is better, good for you. I will make some spelling mistakes; I'll try not to. As with everything on this site, it's not for you, it's for me: a reminder, a debrief of sorts, that I create to ensure I have understood something and can come back to it later.
Setup
I'm running Netbox on Ubuntu 20.04 on VirtualBox with 4 GB RAM, a 40 GB HDD and 2 vCPUs.
For ease of use, I've also created a Jumpbox running CentOS 7 with 2 GB RAM, a 30 GB HDD and 1 vCPU, which the Ansible code runs on.
On the Jumpbox I have set up the following. Remember, this is a development setup; in production, additional considerations around security and setup would need to be taken into account.
Passwordless SSH
The Jumpbox is able to SSH to the Ubuntu Netbox server using SSH keys, which are set up without a passphrase.
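A minimal sketch of that key setup, run from the Jumpbox; the username dave and the host netbox.lan are placeholders for your own:

```shell
# Generate a key pair with no passphrase (lab use only -- protect keys in production)
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519
# Copy the public key to the Netbox server (prompts for the password once)
ssh-copy-id dave@netbox.lan
# Verify: this should run without any password prompt
ssh dave@netbox.lan hostname
```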

Sudo no passwords
On the Ubuntu box, I've set up sudo to not require a password.
DO NOT DO THIS IN PRODUCTION!!!!
In a development environment, it makes things much easier to use.
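For reference, the passwordless sudo is just a drop-in sudoers fragment; the username dave is a placeholder for the account you SSH in as:

```shell
# Edit safely with: sudo visudo -f /etc/sudoers.d/dave
# Contents of /etc/sudoers.d/dave:
dave ALL=(ALL) NOPASSWD:ALL
```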

Hosts Entry
There is no DNS server in my test environment, so the CentOS 7 Jumpbox has an entry in
/etc/hosts
which maps the IP and DNS name of the Ubuntu box:
10.10.100.22 netbox.lan netbox
Obviously you'll need to change the IP to reflect your Netbox server.
Ansible Hosts
In the Ansible hosts file on the Jumpbox,
/etc/ansible/hosts
I've added the entry:
[netbox]
10.10.100.22
Obviously you'll need to change the IP to reflect your Netbox server.
Ansible
I've installed Python 3 (3.8), python3-pip, and Ansible (2.10) on the CentOS 7 box (2.9 will also work; anything lower gave me problems). Those install instructions are out of scope for this blog.
There are plenty of places online which explain how to install Ansible.
Virtualbox Nat Network
Both VirtualBox machines share the same NAT Network; details of the setup for this can be found below.

Virtualbox Port Forwarding to Nat
Because a NAT Network is being used, you won't be able to SSH from your host machine to the two virtual machines, or reach the HTTP server we set up for Netbox. To resolve this you can make use of VirtualBox port forwarding.

Create a port forward for:
- port 22 on the CentOS 7 box, using 2222 as the host port
- port 80 on the Ubuntu box, using 8080 as the host port
Run the command to test.
ssh username@127.0.0.1 -p 2222
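The same rules can also be created from the host's command line. This sketch uses the rule syntax from the VirtualBox manual; the network name NatNetwork and the Jumpbox guest IP 10.10.100.21 are placeholders you'd adjust to your setup:

```shell
# Rule format: <name>:<protocol>:[<host ip>]:<host port>:[<guest ip>]:<guest port>
VBoxManage natnetwork modify --netname NatNetwork \
  --port-forward-4 "ssh-jumpbox:tcp:[]:2222:[10.10.100.21]:22"
VBoxManage natnetwork modify --netname NatNetwork \
  --port-forward-4 "http-netbox:tcp:[]:8080:[10.10.100.22]:80"
```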
If you can ping the two servers from each other, SSH from the Jumpbox to the Ubuntu server, and run sudo with no password, you're ready to go.
What have we done?
- Installed Centos 7 on Virtualbox
- Installed Ubuntu 20.04 on Virtualbox
- Setup a NAT Network on Virtualbox
- Setup port forwarding from the host to access the virtual machines.
- Installed Ansible on Centos 7
- Setup Passwordless SSH on Centos 7
- Setup Sudo with no Password on Ubuntu 20.04
- Setup port forwarding for SSH and HTTP
Netbox Installation
Netbox can be installed either as a fully distributed system for production environments, or as a single-server system for testing. For this guide, I will be installing the single-server version.
This install could be done with Ansible or Puppet; however, for the sake of these instructions I think it's important to understand the manual process.
Official Instructions here:
Install Postgresql
sudo apt install postgresql libpq-dev -y
Start the database server.
sudo systemctl start postgresql
Enable the database server to start automatically on reboot.
sudo systemctl enable postgresql
Set a password for the postgres system user.
sudo passwd postgres
Switch to the postgres user.
su - postgres
Log in to PostgreSQL.
psql
Create the netbox database.
CREATE DATABASE netbox;
Create the netbox user with the password my_strong_password. Use a strong password in place of my_strong_password.
DON'T USE my_strong_password IN PRODUCTION!!!
CREATE USER netbox WITH ENCRYPTED password 'my_strong_password';
Grant all privileges on the netbox database to the netbox user.
GRANT ALL PRIVILEGES ON DATABASE netbox to netbox;
Exit PostgreSQL.
\q
Return to your non-root sudo user account.
exit
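Before moving on, you can optionally confirm the new role can actually authenticate (the official NetBox install guide uses the same check); enter the netbox password when prompted:

```shell
psql --username netbox --password --host localhost netbox
# A "netbox=>" prompt means authentication worked; \q to quit
```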
Install Redis.
sudo apt install -y redis-server
Install Netbox Prerequisites
On the Ubuntu Netbox Server
sudo apt install -y python3 python3-pip python3-venv python3-dev build-essential libxml2-dev libxslt1-dev libffi-dev libpq-dev libssl-dev zlib1g-dev
Before continuing, check that your installed Python version is at least 3.7:
python3 -V
Upgrade pip to the latest version
sudo pip3 install --upgrade pip
Create the base directory for the NetBox installation. For this guide, we'll use /opt/netbox.
sudo mkdir -p /opt/netbox/
cd /opt/netbox/
If git is not already installed, install it:
sudo apt install -y git
Install Netbox using Git
Clone the master branch of the NetBox GitHub repository into the current directory. (This branch always holds the current stable release.)
sudo git clone -b master --depth 1 https://github.com/netbox-community/netbox.git .
The git clone command should generate output similar to the following:
Cloning into '.'...
remote: Enumerating objects: 996, done.
remote: Counting objects: 100% (996/996), done.
remote: Compressing objects: 100% (935/935), done.
remote: Total 996 (delta 148), reused 386 (delta 34), pack-reused 0
Receiving objects: 100% (996/996), 4.26 MiB | 9.81 MiB/s, done.
Resolving deltas: 100% (148/148), done.
Setup users, groups and permissions
Create a system user named netbox.
sudo adduser --system --group netbox
Grant the netbox user ownership of /opt/netbox/netbox/media/.
sudo chown --recursive netbox /opt/netbox/netbox/media/
Create Netbox Configuration file
Browse to the /opt/netbox/netbox/netbox/ directory.
cd /opt/netbox/netbox/netbox/
Copy the example configuration file configuration.example.py to configuration.py, which we will use to configure the project.
sudo cp configuration.example.py configuration.py
Create a symbolic link to the Python 3 binary.
sudo ln -s /usr/bin/python3 /usr/bin/python
Generate a random SECRET_KEY of at least 50 alphanumeric characters.
sudo /opt/netbox/netbox/generate_secret_key.py
You will get a random secret similar to the example below. Copy it and save it somewhere; you will need it in the configuration file.
-^%YEl*Q2etCR6$kNG70H=&sM(45XvJaBWdf3O)inZ@L9j8_w1
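If you are curious what the script is doing, it is essentially this; an illustrative Python sketch, not the script's literal code, and the charset here is an assumption for demonstration:

```python
import secrets

# Illustrative charset; NetBox's generate_secret_key.py uses its own mix
# of letters, digits and punctuation.
CHARSET = ("abcdefghijklmnopqrstuvwxyz"
           "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
           "0123456789!@#$%^&*(-_=+)")

def make_secret_key(length: int = 50) -> str:
    """Pick `length` cryptographically random characters from CHARSET."""
    return "".join(secrets.choice(CHARSET) for _ in range(length))

print(make_secret_key())
```

The key is only used for internal hashing and signing, so it never needs to be memorable, just unpredictable.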
Open and edit the configuration file configuration.py.
sudo nano /opt/netbox/netbox/netbox/configuration.py
The final file should have the following configurations.
ALLOWED_HOSTS = ['*']
DATABASE = {
    'NAME': 'netbox',                    # Database name you created
    'USER': 'netbox',                    # PostgreSQL username you created
    'PASSWORD': 'my_strong_password',    # PostgreSQL password you set
    'HOST': 'localhost',                 # Database server
    'PORT': '',                          # Database port (leave blank for default)
}
SECRET_KEY = '-^%YEl*Q2etCR6$kNG70H=&sM(45XvJaBWdf3O)inZ@L9j8_w1'
Upgrade
Run the upgrade script.
sudo /opt/netbox/upgrade.sh
Enter the Python virtual environment.
source /opt/netbox/venv/bin/activate
Go to the /opt/netbox/netbox directory.
cd /opt/netbox/netbox
Create a superuser account.
python3 manage.py createsuperuser
This is the user you will later use to log in to Netbox.
Reboot the system to apply the changes.
sudo reboot
Configure Gunicorn
Copy /opt/netbox/contrib/gunicorn.py to /opt/netbox/gunicorn.py.
sudo cp /opt/netbox/contrib/gunicorn.py /opt/netbox/gunicorn.py
Configure Systemd
Copy contrib/netbox.service and contrib/netbox-rq.service to the /etc/systemd/system/ directory.
sudo cp /opt/netbox/contrib/*.service /etc/systemd/system/
Reload daemon to enable the Systemd changes.
sudo systemctl daemon-reload
Start the netbox and netbox-rq services.
sudo systemctl start netbox netbox-rq
Enable the services to initiate at boot time.
sudo systemctl enable netbox netbox-rq
Configure Nginx Web Server
Install Nginx web server.
sudo apt install -y nginx
Copy the NetBox Nginx configuration file nginx.conf to /etc/nginx/sites-available/netbox.
sudo cp /opt/netbox/contrib/nginx.conf /etc/nginx/sites-available/netbox
Edit the netbox file.
sudo nano /etc/nginx/sites-available/netbox
Replace the file's contents with the code below, modifying the server_name value with your server's IP address:
server {
    listen 80;

    # CHANGE THIS TO YOUR SERVER'S NAME
    server_name 192.0.2.10;

    client_max_body_size 25m;

    location /static/ {
        alias /opt/netbox/netbox/static/;
    }

    location / {
        proxy_pass http://127.0.0.1:8001;
        proxy_set_header X-Forwarded-Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Delete /etc/nginx/sites-enabled/default.
sudo rm /etc/nginx/sites-enabled/default
Create a symlink in the sites-enabled directory to the netbox configuration file.
sudo ln -s /etc/nginx/sites-available/netbox /etc/nginx/sites-enabled/netbox
Restart Nginx service to enable the new configurations.
sudo systemctl restart nginx
What have we done?
- Installed and setup Postgres
- Installed Redis
- Installed Python 3
- Installed Git
- Installed Netbox using Git
- Created and Edited Config files
- Installed Gunicorn
- Installed NGINX
- Ran the Netbox upgrade script
Test
You should now be able to access Netbox from your host using the URL below, IF the VirtualBox port forwarding was set up properly:
http://127.0.0.1:8080
You can now log in with the username and password you set while creating the superuser account.
Thoughts
At this point, there should be a working Netbox installation. However, this is just a shell: it needs your infrastructure set up inside it, and then devices ingested into it.
Setup the netbox.netbox Ansible Module
Before Ansible will work, some Python packages need to be installed and some files pulled down from Ansible Galaxy.
The modules have a Python requirement on the pynetbox package.
Install pynetbox
To install the latest version of pynetbox, execute the following:
pip install pynetbox --upgrade
Installation - NetBox Collection
The primary method of installing the collection is via Ansible Galaxy. You can also install the collection manually from GitHub, but the Galaxy method is preferred.
ansible-galaxy collection install netbox.netbox --force
Adding --force makes Ansible Galaxy install the latest version on top of whatever you may already have; without it, Galaxy will not overwrite an existing installation of the collection.
To verify that you have installed the NetBox Ansible Collection, you can run ansible-doc to pull up the current documentation. This is done as follows with the netbox_device module, to verify that the docs load:
ansible-doc netbox.netbox.netbox_device
If the module is not installed properly you will see a warning like the following, with the key information on the first line:
[WARNING]: module netbox.netbox.netbox_inventory not found in:
~/.local/lib/python3.7/site-packages/ansible/modules
When I sent stdout to a file, the output was:
> NETBOX.NETBOX.NETBOX_DEVICE (/Users/joshvanderaa/.ansible/collections/ansible_collections/netbox/netbox/plugins/modules/netbox_device.py)
Creates, updates or removes devices from Netbox
OPTIONS (= is mandatory):
= data
Defines the device configuration
...
Generate Netbox API Token
For Ansible to work, an API token must be generated. This can be done by logging in as a user, selecting the user icon in the top right to manage the account, and then generating an API token.
A token is a unique identifier mapped to a NetBox user account. Each user may have one or more tokens which he or she can use for authentication when making REST API requests. To create a token, navigate to the API tokens page under your user profile.
Note
The creation and modification of API tokens can be restricted per user by an administrator. If you don't see an option to create an API token, ask an administrator to grant you access.
Each token contains a 160-bit key represented as 40 hexadecimal characters. When creating a token, you'll typically leave the key field blank so that a random key will be automatically generated. However, NetBox allows you to specify a key in case you need to restore a previously deleted token to operation.
By default, a token can be used to perform all actions via the API that a user would be permitted to do via the web UI. Deselecting the "write enabled" option will restrict API requests made with the token to read operations (e.g. GET) only.
Additionally, a token can be set to expire at a specific time. This can be useful if an external client needs to be granted temporary access to NetBox.
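Every REST request then authenticates with an "Authorization: Token <key>" header. A minimal stdlib sketch of building such a request; the URL and token below are the placeholders used elsewhere in this post, not real credentials:

```python
import urllib.request

NETBOX_URL = "http://netbox.lan"                     # placeholder server
TOKEN = "dfe756822ff5055b0e9d8bf26fe0b2d10f5c58a0"   # placeholder token

def netbox_request(path: str) -> urllib.request.Request:
    """Build an authenticated GET request for the NetBox REST API."""
    return urllib.request.Request(
        f"{NETBOX_URL}/api/{path}",
        headers={
            "Authorization": f"Token {TOKEN}",
            "Accept": "application/json",
        },
    )

req = netbox_request("dcim/devices/")
# urllib.request.urlopen(req) would return the device list as JSON;
# not called here since it needs a live NetBox server.
```

This is exactly what pynetbox and the Ansible collection do under the hood with the token you pass them.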
What have we done?
- Installed the Python API for Netbox
- Installed the Ansible Galaxy Netbox Module
- Generated a Netbox API Token
- Tested this works.
Setting it Up
Taking a Netbox installation from the basic framework to working for you needs data. Focusing only on importing devices in this post, the requirements can be imported using Ansible.
Ansible Setup File
To set up the business-specific requirements for Netbox I've used
00-setup-prereq.yaml
which is created using the netbox.netbox Ansible modules; references for these are found at this website.
When setting up the playbook there are some considerations that should be taken into account:
- Data needs to be set up in the right order: if a Netbox module that needs tenant information, for example, runs BEFORE the tenants exist, the play will fail.
- If you are not using a publicly trusted certificate (for example, a self-signed certificate) to connect to your Netbox server, you MUST set validate_certs: no or the requests to Netbox will fail.
- The error messages are terrible and really don't help.
With this in mind, the following is the playbook:
---
- name: "PLAY 1: SETUP DEVICES WITHIN NETBOX"
  hosts: netbox
  connection: local
  vars:
    install_state: present
    NETBOX_URL: http://netbox.lan
    NETBOX_TOKEN: dfe756822ff5055b0e9d8bf26fe0b2d10f5c58a0
  vars_files:
    - vars/setup.yml

  tasks:
    - name: "TASK 0: ADD TAGS"
      netbox.netbox.netbox_tag:
        netbox_url: "{{ NETBOX_URL }}"
        netbox_token: "{{ NETBOX_TOKEN }}"
        validate_certs: no
        data:
          name: "{{ tag.tag }}"
          description: "{{ tag.description }}"
        state: "{{ install_state }}"
      register: site_setup
      loop: "{{ tags }}"
      loop_control:
        loop_var: tag
        label: "{{ tag['tag'] }}"
      tags: [ sites, devices ]

    - name: "TASK 1a: SETUP SITES DC1"
      netbox.netbox.netbox_site:
        netbox_url: "{{ NETBOX_URL }}"
        netbox_token: "{{ NETBOX_TOKEN }}"
        validate_certs: no
        data: "{{ site }}"
        state: "{{ install_state }}"
      register: site_setup
      loop: "{{ sites1 }}"
      loop_control:
        loop_var: site
        label: "{{ site['name'] }}"
      tags: [ sites, devices ]

    - name: "TASK 2: Add Locations"
      netbox.netbox.netbox_location:
        netbox_url: "{{ NETBOX_URL }}"
        netbox_token: "{{ NETBOX_TOKEN }}"
        validate_certs: no
        data:
          name: "{{ location.name }}"
          site: "{{ location.site }}"
          slug: "{{ location.slug }}"
        state: "{{ location.state }}"
      loop: "{{ locations }}"
      loop_control:
        loop_var: location

    - name: "TASK 3: SETUP MANUFACTURERS"
      netbox.netbox.netbox_manufacturer:
        netbox_url: "{{ NETBOX_URL }}"
        netbox_token: "{{ NETBOX_TOKEN }}"
        validate_certs: no
        data:
          name: "{{ manufacturer }}"
        state: "{{ install_state }}"
      loop: "{{ manufacturers }}"
      loop_control:
        loop_var: manufacturer
      tags: [ devices ]

    - name: "TASK 4: SETUP DEVICE TYPES"
      netbox.netbox.netbox_device_type:
        netbox_url: "{{ NETBOX_URL }}"
        netbox_token: "{{ NETBOX_TOKEN }}"
        validate_certs: no
        data:
          model: "{{ device_type.model }}"
          manufacturer: "{{ device_type.manufacturer }}"
          slug: "{{ device_type.slug }}"
          tags: "{{ device_type.tags }}"
        state: "{{ install_state }}"
      loop: "{{ device_types }}"
      loop_control:
        loop_var: device_type
        label: "{{ device_type['model'] }}"
      tags: [ devices ]

    - name: "TASK 5: SETUP PLATFORMS"
      netbox.netbox.netbox_platform:
        netbox_url: "{{ NETBOX_URL }}"
        netbox_token: "{{ NETBOX_TOKEN }}"
        validate_certs: no
        data:
          name: "{{ platform.name }}"
          slug: "{{ platform.slug }}"
        state: "{{ install_state }}"
      loop: "{{ platforms }}"
      loop_control:
        loop_var: platform
        label: "{{ platform['name'] }}"
      tags: [ devices ]

    - name: "TASK 6: SETUP DEVICE ROLES"
      netbox.netbox.netbox_device_role:
        netbox_url: "{{ NETBOX_URL }}"
        netbox_token: "{{ NETBOX_TOKEN }}"
        validate_certs: no
        data:
          name: "{{ device_role.name }}"
          color: "{{ device_role.color }}"
          vm_role: "{{ device_role.vmrole }}"
        state: "{{ install_state }}"
      loop: "{{ device_roles }}"
      loop_control:
        loop_var: device_role
        label: "{{ device_role['name'] }}"
      tags: [ devices ]

    - name: "TASK 7: SETUP VLANS"
      netbox.netbox.netbox_vlan:
        netbox_url: "{{ NETBOX_URL }}"
        netbox_token: "{{ NETBOX_TOKEN }}"
        validate_certs: no
        data:
          name: "VLAN{{ vlan.vid }}"
          vid: "{{ vlan.vid }}"
          site: "h1"
          description: "{{ vlan.desc }}"
        state: "{{ install_state }}"
      register: result
      loop: "{{ vlans }}"
      loop_control:
        loop_var: vlan
        label: "{{ vlan['vid'] }}"
      tags: [ ipam ]

    - name: "TASK 8: SETUP RFC1918 RIR"
      netbox.netbox.netbox_rir:
        netbox_url: "{{ NETBOX_URL }}"
        netbox_token: "{{ NETBOX_TOKEN }}"
        validate_certs: no
        data: "{{ rir }}"
        state: "{{ install_state }}"
      loop: "{{ rirs }}"
      loop_control:
        loop_var: rir
        label: "{{ rir['name'] }}"
      tags: [ ipam ]

    - name: "TASK 9: SETUP AGGREGATES"
      netbox.netbox.netbox_aggregate:
        netbox_url: "{{ NETBOX_URL }}"
        netbox_token: "{{ NETBOX_TOKEN }}"
        validate_certs: no
        data:
          prefix: "{{ aggregate.name }}"
          description: "{{ aggregate.desc }}"
          rir: "{{ aggregate.rir }}"
        state: "{{ install_state }}"
      loop: "{{ aggregates }}"
      loop_control:
        loop_var: aggregate
        label: "{{ aggregate['name'] }}"
      tags: [ ipam ]

    - name: "TASK 10: SETUP PREFIXES"
      netbox.netbox.netbox_prefix:
        netbox_url: "{{ NETBOX_URL }}"
        netbox_token: "{{ NETBOX_TOKEN }}"
        validate_certs: no
        data:
          family: 4
          prefix: "{{ prefix.prefix }}"
          site: "{{ prefix.site | default(omit) }}"
          status: "{{ prefix.status | default('Active') }}"
          description: "{{ prefix.desc }}"
          is_pool: "{{ prefix.ispool }}"
        state: "{{ install_state }}"
      loop: "{{ prefixes }}"
      loop_control:
        loop_var: prefix
        label: "{{ prefix['prefix'] }}"
      tags: [ ipam ]

    - name: "TASK 11: SETUP CIRCUIT PROVIDER"
      netbox.netbox.netbox_provider:
        netbox_url: "{{ NETBOX_URL }}"
        netbox_token: "{{ NETBOX_TOKEN }}"
        validate_certs: no
        data: "{{ circuit_provider }}"
        state: "{{ install_state }}"
      loop: "{{ circuit_providers }}"
      loop_control:
        loop_var: circuit_provider
        label: "{{ circuit_provider['name'] }}"
      tags: [ circuit ]

    - name: "TASK 12: SETUP CIRCUIT TYPE"
      netbox.netbox.netbox_circuit_type:
        netbox_url: "{{ NETBOX_URL }}"
        netbox_token: "{{ NETBOX_TOKEN }}"
        validate_certs: no
        data: "{{ circuit_type }}"
        state: "{{ install_state }}"
      loop: "{{ circuit_types }}"
      loop_control:
        loop_var: circuit_type
        label: "{{ circuit_type['name'] }}"
      tags: [ circuit ]

    - name: "TASK 13: CREATE LOCAL CIRCUIT"
      netbox.netbox.netbox_circuit:
        netbox_url: "{{ NETBOX_URL }}"
        netbox_token: "{{ NETBOX_TOKEN }}"
        validate_certs: no
        data: "{{ circuit }}"
        state: "{{ install_state }}"
      loop: "{{ circuits }}"
      loop_control:
        loop_var: circuit
        label: "{{ circuit['cid'] }}"
      tags: [ circuit ]

    - name: "TASK 14: SETUP TENANT GROUPS"
      netbox.netbox.netbox_tenant_group:
        netbox_url: "{{ NETBOX_URL }}"
        netbox_token: "{{ NETBOX_TOKEN }}"
        validate_certs: no
        data:
          name: "{{ tenant_group }}"
        state: "{{ install_state }}"
      loop: "{{ tenantgroups }}"
      loop_control:
        loop_var: tenant_group
      tags: [ organization ]

    - name: "TASK 15: SETUP TENANTS"
      netbox.netbox.netbox_tenant:
        netbox_url: "{{ NETBOX_URL }}"
        netbox_token: "{{ NETBOX_TOKEN }}"
        validate_certs: no
        data:
          name: "{{ tenant.tenant }}"
          tenant_group: "{{ tenant.tenant_group }}"
          tags: "{{ tenant.tags | default('Active') }}"
        state: "{{ install_state }}"
      loop: "{{ tenants }}"
      loop_control:
        loop_var: tenant
        label: "{{ tenant['tenant'] }}"
      tags: [ organization ]

    - name: "TASK 16: CLUSTER TYPES"
      netbox.netbox.netbox_cluster_type:
        netbox_url: "{{ NETBOX_URL }}"
        netbox_token: "{{ NETBOX_TOKEN }}"
        validate_certs: no
        data:
          name: "{{ clustertype.name }}"
        state: "{{ install_state }}"
      loop: "{{ cluster_types }}"
      loop_control:
        loop_var: clustertype
        label: "{{ clustertype['name'] }}"
      tags: [ virtualization ]

    - name: "TASK 17: CLUSTER GROUPS"
      netbox.netbox.netbox_cluster_group:
        netbox_url: "{{ NETBOX_URL }}"
        netbox_token: "{{ NETBOX_TOKEN }}"
        validate_certs: no
        data:
          name: "{{ clustergroup.name }}"
          slug: "{{ clustergroup.description }}"
        state: "{{ install_state }}"
      loop: "{{ cluster_groups }}"
      loop_control:
        loop_var: clustergroup
        label: "{{ clustergroup['name'] }}"
      tags: [ virtualization ]

    - name: "TASK 18: RACK_ROLES"
      netbox.netbox.netbox_rack_role:
        netbox_url: "{{ NETBOX_URL }}"
        netbox_token: "{{ NETBOX_TOKEN }}"
        validate_certs: no
        data:
          name: "{{ rackrole.name }}"
          color: "{{ rackrole.color }}"
          slug: "{{ rackrole.slug }}"
        state: "{{ rackrole.state }}"
      loop: "{{ rack_roles }}"
      loop_control:
        loop_var: rackrole
        label: "{{ rackrole['name'] }}"
      tags: [ racks ]

    - name: "TASK 19: RACKS"
      netbox.netbox.netbox_rack:
        netbox_url: "{{ NETBOX_URL }}"
        netbox_token: "{{ NETBOX_TOKEN }}"
        validate_certs: no
        data:
          name: "{{ rack.name }}"
          tenant: "{{ rack.tenant }}"
          u_height: "{{ rack.u_height }}"
          type: "{{ rack.type }}"
          width: "{{ rack.width }}"
          tags: "{{ rack.tags }}"
          site: "{{ rack.site }}"
          location: "{{ rack.location }}"
        state: "{{ rack.state }}"
      loop: "{{ racks }}"
      loop_control:
        loop_var: rack
        label: "{{ rack['name'] }}"
      tags: [ racks ]
Almost all of the tasks are based on loops, as in the example below.
- name: "TASK 0: ADD TAGS"
  netbox.netbox.netbox_tag:
    netbox_url: "{{ NETBOX_URL }}"
    netbox_token: "{{ NETBOX_TOKEN }}"
    validate_certs: no
    data:
      name: "{{ tag.tag }}"
      description: "{{ tag.description }}"
    state: "{{ install_state }}"
  register: site_setup
  loop: "{{ tags }}"
  loop_control:
    loop_var: tag
    label: "{{ tag['tag'] }}"
  tags: [ sites, devices ]
A looped task is broken down into the following sections.
Netbox Connection
netbox.netbox.netbox_tag:
  netbox_url: "{{ NETBOX_URL }}"
  netbox_token: "{{ NETBOX_TOKEN }}"
  validate_certs: no
Each different Netbox item you import needs to know where to connect to and how to authenticate.
The Data to be imported
data:
  name: "{{ tag.tag }}"
  description: "{{ tag.description }}"
state: "{{ install_state }}"
Each Netbox module has its own list of data fields; the netbox_tag module, for example, documents the fields available for import in its module documentation.

In this example, under data we are using the name and description fields. If you scroll through the full YAML, you will see that each of the Netbox import items has a lot of import options.
I've looked at the GUI on each of the setup screens; the must-have items are marked with a * next to them. If these items are not supplied, the play will fail.
The loop creation
register: site_setup
loop: "{{ tags }}"
loop_control:
  loop_var: tag
  label: "{{ tag['tag'] }}"
tags: [ sites, devices ]
In the accompanying YAML data file, the play will look for the variable named tags and loop through its contents, exposing each item as the loop variable tag.
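The tag loop above is essentially doing this; a plain-Python sketch of the data flow, not Ansible internals:

```python
# The vars file provides a list under the "tags" key...
tags = [
    {"tag": "dc1", "description": "Site dc1 - DrPepper Slough"},
    {"tag": "switch", "description": "physical layer 2 device"},
]

# ...and the loop renders one "data" payload per item, just as
# "{{ tag.tag }}" / "{{ tag.description }}" do in the task.
payloads = [
    {"name": tag["tag"], "description": tag["description"], "state": "present"}
    for tag in tags
]
# Each payload becomes one API call made by the netbox_tag module.
```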
Ansible YAML File
In the Ansible play
00-setup-prereq.yaml
the data is held in the file
vars/setup.yml
Something to remember with this data file: the entries don't need to be in the same order as the tasks in 00-setup-prereq.yaml; each task parses the setup YAML for its own variable name.
---
ntp_servers:
- 162.159.200.1
- 162.159.200.123
dns_servers:
- 192.168.22.35
- 192.168.22.1
sites1:
- name: "dc1"
time_zone: Europe/London
status: active
description: dc1 Datacenter
contact_name: David Field
contact_email: operations@mysite.co.uk
sites2:
- name: "dc2"
time_zone: Europe/London
status: active
description: dc2 Datacenter
contact_name: David Field
contact_email: operations@mysite.co.uk
sites3:
- name: "dc3"
time_zone: Europe/London
status: active
description: dc3 Datacenter
contact_name: David Field
contact_email: operations@mysite.co.uk
manufacturers:
- Brocade
- Cisco
- Juniper
- Opengear
- Dell
- HP
- Arbor
- F5
- Sunfire
- DELLEMC
- None
- solaris
tenantgroups:
- dc1
- dc2
- dc3
- London
tenants:
- { tenant : dc1mysite, tenant_group: dc1, tags: [ dc1, DC ]}
- { tenant : dc2mysite, tenant_group: dc2, tags: [ dc2, DC ]}
- { tenant : dc3mysite, tenant_group: dc3, tags: [ dc3, DC ]}
- { tenant : london_mysite, tenant_group: London, tags: [ London, Office ]}
device_types:
- { model : brocadevdx, manufacturer: Brocade, slug: brocade, tags: [ switch, hardware, network, layer2 ], full_depth: False}
- { model : ciscoswitch, manufacturer: Cisco, slug: cisco, tags: [ switch, hardware, network, layer2 ], full_depth: False}
- { model : loadbalancer, manufacturer: F5, slug: f5, tags: [ firewall, hardware, network, layer3 ], full_depth: False}
- { model : DellServer, manufacturer: Dell, slug: dell, tags: [ server, hardware ], full_depth: False}
- { model : HPServer, manufacturer: HP, slug: hp, tags: [ server, hardware ], full_depth: False}
- { model : SolarisServer, manufacturer: Solaris, slug: sun, tags: [ server, hardware ], full_depth: False}
- { model : juniper, manufacturer: Juniper, slug: junp, tags: [ router, hardware, network, layer3 ], full_depth: False}
- { model : climatemanagement, manufacturer: None, slug: climate, tags: [ hardware ], full_depth: False}
- { model : oob, manufacturer: Opengear, slug: oob, tags: [ hardware ], full_depth: False}
- { model : ddos, manufacturer: Arbor, slug: ddos, tags: [ hardware, network, layer3 ], full_depth: False}
- { model : patchpanel, manufacturer: None, slug: patch, tags: [ hardware ], full_depth: False}
- { model : cablemgmt, manufacturer: None, slug: cmgmt, tags: [ hardware ], full_depth: False}
platforms:
- { name: Server, slug: svr }
- { name: Storage, slug: store }
- { name: Network, slug: net }
- { name: Vxrail, slug: vxr }
device_roles:
- { name: vxrail, color: FF0000, vmrole: false }
- { name: switch, color: FF0000, vmrole: true }
- { name: router, color: FF0000, vmrole: false }
- { name: firewall, color: FF0000, vmrole: false }
- { name: database, color: FF0000, vmrole: true }
- { name: loadbalancer, color: FF0000, vmrole: false }
- { name: docker, color: FF0000, vmrole: true }
- { name: virtualmachne, color: FF0000, vmrole: true }
cluster_types:
- { name: database, description: Database Cluster }
- { name: elastisearch, description: ELK Cluster }
- { name: docker, description: Docker Swarm Cluster }
- { name: storage, description: Storage Cluster }
- { name: network, description: Network Cluster }
- { name: server, description: Server Cluster }
- { name: switch, description: Switch Cluster }
cluster_groups:
- { name: dc1, description: dc1 DC Cluster}
- { name: dc2, description: dc2 DC Cluster}
- { name: dc3, description: dc3 DC Cluster}
- { name: QA, description: QA Cluster}
- { name: PREPROD, description: PreProduction Cluster}
- { name: STORAGE, description: iSCSI Storage Cluster}
- { name: NETWORK, description: IP Cluster}
- { name: DATABASE, description: Database}
- { name: ELASTIC, description: Elsasticsearch Cluster}
clusters:
- { name: vxrail, cluster_group: dc1, cluster_type: server, tenant: 2A_C15, tags: [ dc1, server, hardware ]}
vlans:
- { vid: 100, desc: Primary VLAN, site: dc1, tags: [ safewebbox, onsite, network ] }
- { vid: 300, desc: Secondary VLAN, site: dc1, tags: [ safewebbox, onsite, network ] }
rirs:
- name: RFC1918
is_private: True
aggregates:
- { name: "10.0.0.0/8", desc: RFC1918 - 10, rir: RFC1918, tags: network }
- { name: "172.16.0.0/12", desc: RFC1918 - 172, rir: RFC1918, tags: network }
- { name: "192.168.0.0/16", desc: RFC1918 - 192, rir: RFC1918, tags: network }
prefixes:
- { prefix: 10.10.0.0/24, desc: Microtik Lan, ispool: true, tags: [ hardware, safewebbox, onsite, network ] }
- { prefix: 172.16.0.0/24, desc: DOCKER Lan, ispool: false, tags: [ docker, safewebbox, onsite, network ] }
- { prefix: 192.168.86.0/24, desc: Home Lan, ispool: true, tags: [ virtual, safewebbox, onsite, network ] }
circuit_providers:
- name: Zen Internet
asn: 7843
account: in_good_standing
portal_url: http://zen.com
noc_contact: support@zeninternet.com
comments: "Internet Provider"
circuit_types:
- name: FTTP
circuits:
- cid: home_1g_fttp
provider: Zen Internet
circuit_type: FTTP
status: Active
install_date: "2018-06-01"
commit_rate: 100000000
description: Home Internet 1G line
comments: "Delivered"
tags:
- { tag: "dc1", description: "Site dc1 - DrPepper Slough" }
- { tag: "dc2", description: "Site dc2 - DrPepper Enfield" }
- { tag: "dc3", description: "Site dc3 - Quattro" }
- { tag: "ANO", description: "Provider AN Other" }
- { tag: "switch", description: "physical layer 2 device" }
- { tag: "server", description: "physical server" }
- { tag: "linux", description: "running linux os" }
- { tag: "storage", description: "storage device" }
- { tag: "solaris", description: "running solaris" }
- { tag: "router", description: "physical layer 3 device" }
- { tag: "dns", description: "DNS Server" }
- { tag: "docker", description: "Docker Server" }
- { tag: "firewall", description: "Firewall Server" }
- { tag: "puppet", description: "managed by puppet" }
- { tag: "web", description: "Webserver" }
- { tag: "crosssite", description: "part of a cross site interconnect" }
- { tag: "vxrail", description: "vmware Server" }
- { tag: "hardware", description: "Physcial device" }
- { tag: "virtual", description: "Virtual device" }
- { tag: "network", description: "Networking Device" }
- { tag: "rack", description: "Comms Rack" }
- { tag: "Pepsi", description: "Pepsi Comms Rack" }
- { tag: "layer2", description: "Layer2 Network Device" }
- { tag: "layer3", description: "Layer3 Network Device" }
- { tag: "DC", description: "Datacenter" }
- { tag: "London", description: "London Office" }
- { tag: "Office", description: "London Office" }
rack_groups:
- { name: dc1Pepsi, site: dc1, state: present, tags: [ dc1, rack, Pepsi ]}
- { name: dc2Pepsi, site: dc2, state: present, tags: [ dc1, rack, Pepsi ]}
- { name: dc3Pepsi, site: dc3, state: present, tags: [ dc1, rack, Pepsi ]}
- { name: dc1ANO, site: dc1, state: present, tags: [ dc1, rack, ANO ]}
rack_roles:
- { name: active, color: 00FF00, slug: activerack, state: present, tags: [ rack ]}
- { name: unused, color: CC0000, slug: inactiverack, state: present, tags: [ rack ]}
locations:
- { name: 2a, site: dc1, slug: 2a, state: present, tags: [ dc1, Pepsi, rack ]}
- { name: 2, site: dc1, slug: 2, state: present, tags: [ dc1, ANO, rack ]}
- { name: 4, site: dc2, slug: 4, state: present, tags: [ dc2, Pepsi, rack ]}
- { name: 445, site: dc3, slug: 445, state: present, tags: [ dc3, Pepsi, rack ]}
- { name: 450, site: dc3, slug: 450, state: present, tags: [ dc3, Pepsi, rack ]}
racks:
- { name: C15, rack_group: dc1Pepsi, location: 2a, rack_role: active , tenant: dc1mysite, u_height: 47, type: 4-post cabinet, width: 19, state: present, site: dc1, tags: [ dc1, rack, Pepsi ]}
- { name: C16, rack_group: dc1Pepsi, location: 2a, rack_role: active, tenant: dc1mysite, u_height: 47, type: 4-post cabinet, width: 19, state: present, site: dc1, tags: [ dc1, rack, Pepsi ]}
- { name: C17, rack_group: dc1Pepsi, location: 2a, rack_role: active, tenant: dc1mysite, u_height: 47, type: 4-post cabinet, width: 19, state: present, site: dc1, tags: [ dc1, rack, Pepsi ]}
- { name: C18, rack_group: dc1Pepsi, location: 2a, rack_role: active, tenant: dc1mysite, u_height: 47, type: 4-post cabinet, width: 19, state: present, site: dc1, tags: [ dc1, rack, Pepsi ]}
- { name: D14, rack_group: dc1Pepsi, location: 2a, rack_role: active, tenant: dc1mysite, u_height: 47, type: 4-post cabinet, width: 19, state: present, site: dc1, tags: [ dc1, rack, Pepsi ]}
- { name: D16, rack_group: dc1Pepsi, location: 2a, rack_role: active, tenant: dc1mysite, u_height: 47, type: 4-post cabinet, width: 19, state: present, site: dc1, tags: [ dc1, rack, Pepsi ]}
- { name: G06, rack_group: dc1ANO, location: 2, rack_role: active, tenant: dc1mysite, u_height: 47, type: 4-post cabinet, width: 19, state: present, site: dc1, tags: [ dc1, rack, ANO ]}
- { name: G07, rack_group: dc1ANO, location: 2, rack_role: active, tenant: dc1mysite, u_height: 47, type: 4-post cabinet, width: 19, state: present, site: dc1, tags: [ dc1, rack, ANO ]}
- { name: I11, rack_group: dc2Pepsi, location: 4, rack_role: active, tenant: dc2mysite, u_height: 47, type: 4-post cabinet, width: 19, state: present, site: dc2, tags: [ dc2, rack, Pepsi ]}
- { name: I12, rack_group: dc2Pepsi, location: 4, rack_role: active, tenant: dc2mysite, u_height: 47, type: 4-post cabinet, width: 19, state: present, site: dc2, tags: [ dc2, rack, Pepsi ]}
- { name: I13, rack_group: dc2Pepsi, location: 4, rack_role: active, tenant: dc2mysite, u_height: 47, type: 4-post cabinet, width: 19, state: present, site: dc2, tags: [ dc2, rack, Pepsi ]}
- { name: I14, rack_group: dc2Pepsi, location: 4, rack_role: active, tenant: dc2mysite, u_height: 47, type: 4-post cabinet, width: 19, state: present, site: dc2, tags: [ dc2, rack, Pepsi ]}
- { name: J11, rack_group: dc2Pepsi, location: 4, rack_role: active, tenant: dc2mysite, u_height: 47, type: 4-post cabinet, width: 19, state: present, site: dc2, tags: [ dc2, rack, Pepsi ]}
- { name: J12, rack_group: dc2Pepsi, location: 4, rack_role: active, tenant: dc2mysite, u_height: 47, type: 4-post cabinet, width: 19, state: present, site: dc2, tags: [ dc2, rack, Pepsi ]}
- { name: J13, rack_group: dc2Pepsi, location: 4, rack_role: active, tenant: dc2mysite, u_height: 47, type: 4-post cabinet, width: 19, state: present, site: dc2, tags: [ dc2, rack, Pepsi ]}
- { name: J14, rack_group: dc2Pepsi, location: 4, rack_role: active, tenant: dc2mysite, u_height: 47, type: 4-post cabinet, width: 19, state: present, site: dc2, tags: [ dc2, rack, Pepsi ]}
- { name: C3B1, rack_group: dc3Pepsi, location: 445, rack_role: active, tenant: dc3mysite, u_height: 47, type: 4-post cabinet, width: 19, state: present, site: dc3, tags: [ dc3, rack, Pepsi ]}
- { name: C3B2, rack_group: dc3Pepsi, location: 445, rack_role: active, tenant: dc3mysite, u_height: 47, type: 4-post cabinet, width: 19, state: present, site: dc3, tags: [ dc3, rack, Pepsi ]}
- { name: C3B3, rack_group: dc3Pepsi, location: 445, rack_role: active, tenant: dc3mysite, u_height: 47, type: 4-post cabinet, width: 19, state: present, site: dc3, tags: [ dc3, rack, Pepsi ]}
- { name: C3B4, rack_group: dc3Pepsi, location: 445, rack_role: active, tenant: dc3mysite, u_height: 47, type: 4-post cabinet, width: 19, state: present, site: dc3, tags: [ dc3, rack, Pepsi ]}
- { name: C3B5, rack_group: dc3Pepsi, location: 445, rack_role: active, tenant: dc3mysite, u_height: 47, type: 4-post cabinet, width: 19, state: present, site: dc3, tags: [ dc3, rack, Pepsi ]}
- { name: C3B6, rack_group: dc3Pepsi, location: 445, rack_role: active, tenant: dc3mysite, u_height: 47, type: 4-post cabinet, width: 19, state: present, site: dc3, tags: [ dc3, rack, Pepsi ]}
- { name: C3B7, rack_group: dc3Pepsi, location: 445, rack_role: active, tenant: dc3mysite, u_height: 47, type: 4-post cabinet, width: 19, state: present, site: dc3, tags: [ dc3, rack, Pepsi ]}
- { name: C3B8, rack_group: dc3Pepsi, location: 445, rack_role: active, tenant: dc3mysite, u_height: 47, type: 4-post cabinet, width: 19, state: present, site: dc3, tags: [ dc3, rack, Pepsi ]}
- { name: C3E1, rack_group: dc3Pepsi, location: 450, rack_role: active, tenant: dc3mysite, u_height: 47, type: 4-post cabinet, width: 19, state: present, site: dc3, tags: [ dc3, rack, Pepsi ]}
- { name: C3E2, rack_group: dc3Pepsi, location: 450, rack_role: active, tenant: dc3mysite, u_height: 47, type: 4-post cabinet, width: 19, state: present, site: dc3, tags: [ dc3, rack, Pepsi ]}
With the Ansible playbook and YAML file in place to populate NetBox for your setup, run:
ansible-playbook 00_setup_prereq.yaml
The play will run through, and your NetBox instance will be ready to import devices.
What have we done?
- Created a setup Ansible playbook to customise NetBox for our environment.
- Created a setup.yaml with the settings we want to use.
- Ran the playbook to customise the NetBox install.
Populating Devices
Once NetBox has been customised, we can run a second Ansible playbook to import the device data.
Getting the Device Data
The data I was originally working from was an Excel spreadsheet with the following fields:
- site
- location
- rack
- face
- position
- name
- hostname
- serial
- devicerole
- devicetype
- platform
- tenant
- status
I exported this from Excel as a CSV, did a find-and-replace of , to ;, and then ran this script:
#!/bin/bash
INPUT=data.csv
OUTPUT=data.yaml

[ ! -f "$INPUT" ] && { echo "$INPUT file not found"; exit 99; }

# Delete the first line of the input, which is the exported heading row
sed -i '1d' "$INPUT"

echo "devices:" > "$OUTPUT"

OLDIFS=$IFS
IFS=';'
while read -r site location rack face position name hostname serial devicerole devicetype platform tenant status
do
  # One device per line; the trailing \n is needed so the sed fix-ups below
  # can anchor on line starts and ends
  printf 'site: %s, location: %s, rack: %s, face: %s, position: %s, name: %s, comments: %s, serial: %s, devicerole: %s, devicetype: %s, platform: %s, tenant: %s, status: %s\n' \
    "$site" "$location" "$rack" "$face" "$position" "$name" "$hostname" "$serial" "$devicerole" "$devicetype" "$platform" "$tenant" "$status" >> "$OUTPUT"
done < "$INPUT"
IFS=$OLDIFS

# I had a problem adding the "- { " and " }" at the start and end of each
# line in the loop, so I used sed as a workaround
sed -i 's/^site:/- { site:/' "$OUTPUT"
sed -i 's/Active$/Active }/' "$OUTPUT"
This produced data.yaml in the appropriate format.
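As a quick sanity check on the generated file, counting the lines that open a device entry tells you how many rows made it across from the CSV (this assumes the one-entry-per-line format the script produces):

```shell
# Each converted device is one "- { site: ... }" line in data.yaml,
# so counting those lines gives the number of devices imported from the CSV.
grep -c '^- { site:' data.yaml
```

If the number doesn't match the row count of the spreadsheet, a stray ; in a field has probably split a record.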
Device Input fields
The netbox_device module has a huge number of options for importing devices, so it's possible to go into as much or as little detail as you need.
data.yaml
This is an example of the format needed for the YAML file with the fields I needed.
devices:
- { site: dc1, location: 2A, rack: C15, face: front, position: 46, name: 48 Port - Copper Patch Panel RJ45C1546, comments: 48 Port - Copper Patch Panel RJ45, serial: unknown, devicerole: unknown, devicetype: patchpanel, platform: unknown, tenant: dc1mysite, status: Active }
- { site: dc1, location: 2A, rack: C15, face: front, position: 45, name: Cable MgmtC1545, comments: Cable Mgmt, serial: unknown, devicerole: unknown, devicetype: cablemgmt, platform: unknown, tenant: dc1mysite, status: Active }
- { site: dc1, location: 2A, rack: C15, face: front, position: 44, name: 48 Port - Copper Patch Panel RJ45C1544, comments: 48 Port - Copper Patch Panel RJ45, serial: unknown, devicerole: unknown, devicetype: patchpanel, platform: unknown, tenant: dc1mysite, status: Active }
- { site: dc1, location: 2A, rack: C15, face: front, position: 43, name: Cable MgmtC1543, comments: Cable Mgmt, serial: unknown, devicerole: unknown, devicetype: cablemgmt, platform: unknown, tenant: dc1mysite, status: Active }
- { site: dc1, location: 2A, rack: C15, face: front, position: 42, name: 48 Port - Copper Patch Panel RJ45C1542, comments: 48 Port - Copper Patch Panel RJ45, serial: unknown, devicerole: unknown, devicetype: patchpanel, platform: unknown, tenant: dc1mysite, status: Active }
- { site: dc1, location: 2A, rack: C15, face: front, position: 41, name: Cable MgmtC1541, comments: Cable Mgmt, serial: unknown, devicerole: unknown, devicetype: cablemgmt, platform: unknown, tenant: dc1mysite, status: Active }
- { site: dc1, location: 2A, rack: C15, face: front, position: 40, name: 48 Port - Copper Patch Panel RJ45C1540, comments: 48 Port - Copper Patch Panel RJ45, serial: unknown, devicerole: unknown, devicetype: patchpanel, platform: unknown, tenant: dc1mysite, status: Active }
- { site: dc1, location: 2A, rack: C15, face: front, position: 39, name: Cable MgmtC1539, comments: Cable Mgmt, serial: unknown, devicerole: unknown, devicetype: cablemgmt, platform: unknown, tenant: dc1mysite, status: Active }
- { site: dc1, location: 2A, rack: C15, face: front, position: 38, name: 48 Port - Copper Patch Panel RJ45C1538, comments: 48 Port - Copper Patch Panel RJ45, serial: unknown, devicerole: unknown, devicetype: patchpanel, platform: unknown, tenant: dc1mysite, status: Active }
- { site: dc1, location: 2A, rack: C15, face: front, position: 37, name: Cable MgmtC1537, comments: Cable Mgmt, serial: unknown, devicerole: unknown, devicetype: cablemgmt, platform: unknown, tenant: dc1mysite, status: Active }
- { site: dc1, location: 2A, rack: C15, face: front, position: 36, name: 48 Port - Copper Patch Panel RJ45C1536, comments: 48 Port - Copper Patch Panel RJ45, serial: unknown, devicerole: unknown, devicetype: patchpanel, platform: unknown, tenant: dc1mysite, status: Active }
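Before running the import, it can be worth checking that entries in this format parse cleanly and carry the fields the playbook references. A minimal sketch, assuming python3 with PyYAML available (Ansible itself depends on PyYAML); the inline sample can be swapped for the contents of your real data.yaml:

```shell
python3 - <<'EOF'
# Parse a sample entry in the data.yaml format and confirm it has the keys
# the add-devices playbook uses. Replace `sample` with open("data.yaml").read()
# to validate the real file.
import yaml

REQUIRED = {"site", "rack", "face", "position", "name", "serial", "comments",
            "devicerole", "devicetype", "platform", "tenant", "status"}

sample = """
devices:
- { site: dc1, location: 2A, rack: C15, face: front, position: 45, name: Cable MgmtC1545, comments: Cable Mgmt, serial: unknown, devicerole: unknown, devicetype: cablemgmt, platform: unknown, tenant: dc1mysite, status: Active }
"""

devices = yaml.safe_load(sample)["devices"]
for dev in devices:
    missing = REQUIRED - dev.keys()
    assert not missing, f"{dev.get('name')}: missing {missing}"
print(f"{len(devices)} devices OK")
EOF
```

A missing key here fails fast with the device name, which is much easier to debug than a templating error halfway through an Ansible run.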
Ansible Playbook
The 02_add_devices.yaml playbook uses the same loop format as 00_setup_prereq.yaml; it loops over data.yaml and imports all the devices, assigning each to the right location, rack, tenant, etc.
- name: "PLAY 1: SETUP DEVICES WITHIN NETBOX"
  # importing devices
  hosts: netbox
  connection: local
  vars:
    install_state: present
    NETBOX_URL: http://netbox.lan
    NETBOX_TOKEN: dfe756822ff5055b0e9d8bf26fe0b2d10f5c58a0
    null_var: null
  vars_files:
    - vars/data.yaml
  tasks:
    - name: "TASK 0: ADD Devices"
      netbox.netbox.netbox_device:
        netbox_url: "{{ NETBOX_URL }}"
        netbox_token: "{{ NETBOX_TOKEN }}"
        validate_certs: no
        data:
          site: "{{ device.site }}"
          rack: "{{ device.rack }}"
          face: "{{ device.face }}"
          position: "{{ device.position }}"
          name: "{{ device.name }}"
          serial: "{{ device.serial }}"
          comments: "{{ device.comments }}"
          device_role: "{{ device.devicerole }}"
          device_type: "{{ device.devicetype }}"
          platform: "{{ device.platform }}"
          status: "{{ device.status }}"
          tenant: "{{ device.tenant }}"
        state: "{{ install_state }}"
      loop: "{{ devices }}"
      loop_control:
        loop_var: device
This will populate NetBox, providing views such as rack views and all the essential tags needed to make it a source of truth for your infrastructure.
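Once the play completes, the import can be confirmed via the NetBox REST API: the device list endpoint returns a JSON object whose count field is the total number of devices. A sketch, assuming the NETBOX_URL and NETBOX_TOKEN values from the playbook above:

```shell
# Query the device list endpoint and pull the total device count out of the
# JSON response. The API returns {"count": N, "next": ..., "results": [...]}.
curl -s -H "Authorization: Token dfe756822ff5055b0e9d8bf26fe0b2d10f5c58a0" \
  "http://netbox.lan/api/dcim/devices/" |
  python3 -c 'import json, sys; print(json.load(sys.stdin)["count"])'
```

If the count matches the number of entries in data.yaml, every device made it in.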
Thoughts
I had a fair number of issues setting this up; I didn't find the NetBox documentation well written, and more examples are needed.
I like NetBox, and it is being actively developed.
Part 2
When I get to the next stage, I'll look at setting up NetBox as an IPAM source of truth for networks and devices.
References and Links