AI Powered Network Automation Series
What You'll Learn
This series takes you from zero to hero in network automation. Build a complete virtual lab in EVE-NG, automate firewalls, deploy NetBox for infrastructure management, and integrate everything with modern DevOps tools and AI.
Tech Stack
Video 1: EVE-NG Installation & First Virtual Lab
Overview
Install EVE-NG on VMware Workstation and build your first virtual network topology.
What You'll Learn
- Download and install EVE-NG Community Edition
- Configure VMware Workstation network settings
- Initial EVE-NG configuration
- Add network device images
- Create your first topology
Commands
1. VMware Network Configuration
# Open Virtual Network Editor
# Edit > Virtual Network Editor > Change Settings
# Configure VMnet0 - Bridge to physical adapter
# This allows EVE-NG to use your home network IP range
2. EVE-NG VM Settings
# Recommended VM Settings:
# - Processors: 2 processors x 2 cores = 4 vCPUs
# - Memory: 8GB minimum (16GB recommended)
# - Hard Disk: 80GB
# - Network Adapter: Bridged (VMnet0)
# - Guest OS: Linux > Ubuntu 64-bit
3. EVE-NG Initial Setup
# Default credentials after installation
# Username: root
# Password: eve
# Update EVE-NG
apt update && apt upgrade -y
# Fix permissions (run after adding images)
/opt/unetlab/wrappers/unl_wrapper -a fixpermissions
4. Key EVE-NG Directories
# Lab files location
/opt/unetlab/labs/
# QEMU images (routers, firewalls)
/opt/unetlab/addons/qemu/
# IOL images (Cisco IOS)
/opt/unetlab/addons/iol/bin/
# Docker images
/opt/unetlab/addons/docker/
# Running lab configurations
/opt/unetlab/tmp/
5. Add Network Device Images
# Upload images via WinSCP/SCP to appropriate directory
# Example: Cisco IOL images
cd /opt/unetlab/addons/iol/bin/
# After uploading, fix permissions
/opt/unetlab/wrappers/unl_wrapper -a fixpermissions
# Verify images
ls -la /opt/unetlab/addons/qemu/
ls -la /opt/unetlab/addons/iol/bin/
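Uploads that land in the wrong directory are a common source of "image not found" errors. As a quick sketch, a small Python helper (hypothetical, not part of EVE-NG) can map an image type to the destination directory listed above:

```python
# Map an image type to the EVE-NG directory it must be uploaded to.
# Paths are the standard EVE-NG locations listed in section 4 above.
EVE_NG_IMAGE_DIRS = {
    "qemu": "/opt/unetlab/addons/qemu/",      # routers, firewalls (QCOW2)
    "iol": "/opt/unetlab/addons/iol/bin/",    # Cisco IOL images
    "docker": "/opt/unetlab/addons/docker/",  # Docker images
}

def upload_target(image_type: str, image_name: str) -> str:
    """Return the full destination path for an image upload."""
    try:
        return EVE_NG_IMAGE_DIRS[image_type] + image_name
    except KeyError:
        raise ValueError(f"unknown image type: {image_type!r}")

print(upload_target("iol", "i86bi-linux-l3.bin"))
```

Remember to run `fixpermissions` after any upload, whichever directory it lands in.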
Resources
Video 2: Build Automation Environment Inside EVE-NG
Overview
Deploy a Linux node in EVE-NG, install Ansible, and integrate VS Code for a professional development workflow.
What You'll Learn
- Add Linux node to EVE-NG topology
- Install Python and Ansible
- Configure VS Code Remote SSH
- Set up virtual environment best practices
Commands
1. Download & Install Ubuntu Cloud Image
# On EVE-NG server - download Ubuntu cloud image
mkdir -p /tmp/ubuntu-download
cd /tmp/ubuntu-download
wget https://cloud-images.ubuntu.com/releases/22.04/release/ubuntu-22.04-server-cloudimg-amd64.img
# Rename to EVE-NG format
mv ubuntu-22.04-server-cloudimg-amd64.img virtioa.qcow2
# Move to EVE-NG directory
mkdir -p /opt/unetlab/addons/qemu/linux-ubuntu-22.04
mv virtioa.qcow2 /opt/unetlab/addons/qemu/linux-ubuntu-22.04/
# Fix permissions
/opt/unetlab/wrappers/unl_wrapper -a fixpermissions
# Verify installation
ls -la /opt/unetlab/addons/qemu/linux-ubuntu-22.04/
2. Linux Node Initial Setup
# Default credentials for EVE-NG Linux images
# Username: root
# Password: eve
# Update system
sudo apt update && sudo apt upgrade -y
# Install essential tools
sudo apt install -y python3 python3-pip python3-venv git curl wget vim
3. Create Python Virtual Environment
# Create virtual environment
python3 -m venv ~/ansible-venv
# Activate virtual environment
source ~/ansible-venv/bin/activate
# Upgrade pip
pip install --upgrade pip
# Create activation alias (add to ~/.bashrc)
echo 'alias netdev="source ~/ansible-venv/bin/activate"' >> ~/.bashrc
source ~/.bashrc
# Now you can use 'netdev' to activate
netdev
4. Install Ansible
# Activate virtual environment first
source ~/ansible-venv/bin/activate
# Install Ansible
pip install ansible
# Install additional useful packages
pip install paramiko netmiko napalm
# Verify installation
ansible --version
python3 --version
pip list
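With netmiko installed, a device connection is described by a plain parameter dictionary. A minimal sketch (the IP and credentials are lab placeholders, and no connection is opened here; passing the dict to `netmiko.ConnectHandler(**device)` would start the SSH session):

```python
# Sketch: the connection parameters netmiko expects for a lab device.
device = {
    "device_type": "cisco_ios",  # netmiko platform string
    "host": "192.168.1.10",      # example lab device IP
    "username": "admin",
    "password": "admin",         # store real secrets in Ansible Vault
}

def as_netmiko_kwargs(d: dict) -> dict:
    """Fail fast if a required connection key is missing."""
    required = {"device_type", "host", "username", "password"}
    missing = required - d.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    return d

print(sorted(as_netmiko_kwargs(device)))
```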
5. VS Code Remote SSH Setup
# On Linux node - Install OpenSSH Server
sudo apt install openssh-server -y
# Start and enable SSH service
sudo systemctl start ssh
sudo systemctl enable ssh
# Check SSH status
sudo systemctl status ssh
# Configure firewall (if enabled)
sudo ufw allow ssh
# Get IP address for VS Code connection
ip addr show
In VS Code:
- Install the "Remote - SSH" extension
- Press F1 > "Remote-SSH: Connect to Host"
- Enter: username@<linux-node-ip>
- Open folder: /home/username/
- Install the Python extension in the remote environment
Resources
Video 3: Git Workflow for Network Engineers
Overview
A complete Git workflow for network engineers: version control for Ansible projects.
What You'll Learn
- Git fundamentals for network automation
- Create GitHub repository
- Push Ansible playbooks to GitHub
- Best practices for version control
Commands
1. Install Git
# Install Git
sudo apt install git -y
# Verify installation
git --version
# Configure Git (use your details)
git config --global user.name "Your Name"
git config --global user.email "your.email@example.com"
# Verify configuration
git config --list
2. Initialize Local Repository
# Navigate to your project directory
cd ~/ansible-projects
# Initialize Git repository
git init
# Check repository status
git status
# Create .gitignore file
cat << 'EOF' > .gitignore
# Python
__pycache__/
*.py[cod]
*.venv/
venv/
# Ansible
*.retry
*.log
# Sensitive files
inventory/hosts
group_vars/vault.yml
*vault*
# IDE
.vscode/
.idea/
EOF
# View .gitignore
cat .gitignore
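To sanity-check the patterns above, here is a rough approximation of Git's matching using Python's `fnmatch`. Real .gitignore semantics have more rules (negation, anchoring, `**`); this helper is illustrative only:

```python
import fnmatch

# Patterns from the .gitignore above. Directory patterns are checked
# against each path component as a simplification.
PATTERNS = ["__pycache__", "*.py[cod]", "venv", "*.retry", "*.log",
            "inventory/hosts", "group_vars/vault.yml", "*vault*",
            ".vscode", ".idea"]

def is_ignored(path: str) -> bool:
    parts = path.split("/")
    return any(
        fnmatch.fnmatch(path, pat) or any(fnmatch.fnmatch(p, pat) for p in parts)
        for pat in PATTERNS
    )

print(is_ignored("playbook.retry"))        # True  (*.retry)
print(is_ignored("group_vars/vault.yml"))  # True  (explicit + *vault*)
print(is_ignored("site.yml"))              # False
```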
3. Stage and Commit Files
# Add all files to staging
git add .
# Or add specific files
git add playbook.yml inventory/
# Check staged files
git status
# Commit with message
git commit -m "Initial commit: Ansible project structure"
# View commit history
git log
git log --oneline
4. Create GitHub Repository
# On GitHub.com:
# 1. Click "+" (top right) > "New repository"
# 2. Repository name: ansible-network-automation
# 3. Description: Network automation with Ansible
# 4. Choose: Private or Public
# 5. Do NOT initialize with README
# 6. Click "Create repository"
# 7. Copy the repository URL
5. Connect Local to GitHub
# Add remote repository (use HTTPS or SSH)
git remote add origin https://github.com/yourusername/ansible-network-automation.git
# Verify remote
git remote -v
# Push to GitHub
git push -u origin main
# If your local branch is still named 'master', rename it to 'main' first
git branch -M main
git push -u origin main
6. Daily Git Workflow
# Check current status
git status
# Pull latest changes (if working in team)
git pull
# Make changes to your files...
# Check what changed
git diff
# Stage changes
git add .
# Commit changes
git commit -m "Add VLAN configuration playbook"
# Push to GitHub
git push
# View commit history
git log --oneline --graph
7. GitHub SSH Authentication (Recommended)
# Generate SSH key
ssh-keygen -t ed25519 -C "your.email@example.com"
# Press Enter for default location
# Enter passphrase (optional)
# Copy SSH public key
cat ~/.ssh/id_ed25519.pub
# On GitHub.com:
# 1. Settings > SSH and GPG keys
# 2. Click "New SSH key"
# 3. Paste your public key
# 4. Click "Add SSH key"
# Test SSH connection
ssh -T git@github.com
# Change remote to SSH
git remote set-url origin git@github.com:yourusername/ansible-network-automation.git
# Verify
git remote -v
8. Branching Strategy
# Create new branch
git branch feature/new-playbook
# Switch to branch
git checkout feature/new-playbook
# Or create and switch in one command
git checkout -b feature/new-playbook
# List all branches
git branch
# Make changes and commit
git add .
git commit -m "Add new feature"
# Push branch to GitHub
git push origin feature/new-playbook
# Switch back to main
git checkout main
# Merge branch
git merge feature/new-playbook
# Delete branch
git branch -d feature/new-playbook
Resources
Video 4: Deploy FortiGate Firewall on EVE-NG
Overview
Deploy a FortiGate virtual firewall in EVE-NG for security automation testing.
What You'll Learn
- Download FortiGate VM image
- Add FortiGate to EVE-NG
- Initial FortiGate configuration
- Network connectivity setup
Commands
1. Download FortiGate VM Image
# Visit: https://support.fortinet.com/
# Login with account (free trial available)
# Download: FortiGate-VM64-KVM (QCOW2 format)
# Version: Latest 7.x
# On your computer, upload to EVE-NG via SCP
scp FGT_VM64_KVM-v7.x.x-build.out root@<eve-ng-ip>:/tmp/
2. Install FortiGate in EVE-NG
# On EVE-NG server
cd /tmp
# Create the FortiGate directory (EVE-NG expects the 'fortinet-' folder prefix)
mkdir -p /opt/unetlab/addons/qemu/fortinet-FGT-v7.4.1
# Rename the image (the fortinet template boots from virtioa.qcow2)
mv FGT_VM64_KVM-v7-build*.out /opt/unetlab/addons/qemu/fortinet-FGT-v7.4.1/virtioa.qcow2
# Fix permissions
/opt/unetlab/wrappers/unl_wrapper -a fixpermissions
# Verify installation
ls -la /opt/unetlab/addons/qemu/fortinet-FGT-v7.4.1/
3. FortiGate Initial Configuration
# Default credentials
# Username: admin
# Password: (blank - press Enter)
# Initial setup via console
# Set admin password
config system admin
edit admin
set password YourStrongPassword
end
# Configure hostname
config system global
set hostname FortiGate-VM
end
# Configure management interface
config system interface
edit port1
set mode static
set ip 192.168.1.99/24
set allowaccess ping https ssh http
end
# Configure default gateway
config router static
edit 1
set gateway 192.168.1.1
set device port1
next
end
# Configure DNS
config system dns
set primary 8.8.8.8
set secondary 8.8.4.4
end
# FortiOS saves the configuration automatically when you exit with 'end'
# (only with 'set cfg-save manual' would you run: execute cfg save)
4. Verify Connectivity
# Check interface status
get system interface physical
# Test ping
execute ping 8.8.8.8
# Check routes
get router info routing-table all
# Access via GUI
# https://192.168.1.99
# Username: admin
# Password: YourStrongPassword
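When several FortiGate VMs need the same bootstrap, the interface stanza from the console session above can be rendered from variables. A sketch (the function name and indentation are illustrative; FortiOS accepts the commands with or without leading whitespace):

```python
# Render the 'config system interface' stanza shown above from variables.
def fortigate_interface_config(port: str, ip_cidr: str, access: list) -> str:
    lines = [
        "config system interface",
        f"    edit {port}",
        "        set mode static",
        f"        set ip {ip_cidr}",
        f"        set allowaccess {' '.join(access)}",
        "    end",
    ]
    return "\n".join(lines)

print(fortigate_interface_config("port1", "192.168.1.99/24",
                                 ["ping", "https", "ssh", "http"]))
```

The same idea scales to a Jinja2 template once the playbooks from the next video are in place.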
Resources
Video 5: FortiGate Automation Using Ansible
Overview
Automate FortiGate firewalls using Ansible with secure credential storage via Ansible Vault.
What You'll Learn
- Install Ansible Collections for FortiGate
- Ansible Vault Setup
- Demo Playbook 1 - System Information Check
- Demo Playbook 2 - Deploy Firewall Policy
Project Structure
ansible-project/
├── ansible.cfg
├── inventory/
│   └── hosts
├── host_vars/
│   └── Forti-FW-1.yml (encrypted)
└── playbooks/
    ├── fortigate_system_info.yml
    └── fortigate_create_policy.yml
Commands
1. Install FortiGate Collection
# Check Ansible version
ansible --version
# Install FortiGate collection
ansible-galaxy collection install fortinet.fortios
# Verify installation
ansible-galaxy collection list | grep fortinet
2. Ansible Vault Commands
# Create encrypted vault file
ansible-vault create host_vars/Forti-FW-1.yml
# View encrypted file (shows encrypted text)
cat host_vars/Forti-FW-1.yml
# View decrypted content
ansible-vault view host_vars/Forti-FW-1.yml
# Edit encrypted file
ansible-vault edit host_vars/Forti-FW-1.yml
# Change vault password
ansible-vault rekey host_vars/Forti-FW-1.yml
3. Inventory Verification
# List all inventory with vault decryption
ansible-inventory --list -i inventory/hosts --ask-vault-pass
# Check specific host variables
ansible-inventory --host Forti-FW-1 --ask-vault-pass
# View inventory in YAML format
ansible-inventory --list -i inventory/hosts --ask-vault-pass --yaml
4. Run Playbooks
# Run system info playbook
ansible-playbook playbooks/fortigate_system_info.yml --ask-vault-pass
# Run with verbose output
ansible-playbook playbooks/fortigate_system_info.yml --ask-vault-pass -vvv
# Create firewall policy
ansible-playbook playbooks/fortigate_create_policy.yml --ask-vault-pass
# Dry run (check mode)
ansible-playbook playbooks/fortigate_create_policy.yml --ask-vault-pass --check
5. Inventory File (inventory/hosts)
[fortigates]
Forti-FW-1 ansible_host=192.168.1.111
[fortigates:vars]
ansible_network_os=fortinet.fortios.fortios
ansible_connection=httpapi
ansible_httpapi_use_ssl=yes
ansible_httpapi_validate_certs=no
ansible_httpapi_port=443
Resources
Video 6: FortiGate Automation Using REST API
Overview
Automate FortiGate using the REST API with Postman and VS Code integration.
What You'll Learn
- FortiGate's API structure
- Postman Collection Setup
- VS Code API Query Setup
- Postman - GitHub Integration
FortiGate API Structure
/api/v2/
├── cmdb/     -> Configuration (Create, Read, Update, Delete)
├── monitor/  -> Status & Monitoring (Read-only)
└── log/      -> Logs & Events (Read-only)
Postman Environment Variables
| Variable | Value |
|---|---|
| base_url | https://your-fortigate-ip |
| api_token | your-api-token |
| vdom | root |
API Endpoints & Commands
1. GET - System Monitoring
Endpoint:
{{base_url}}/api/v2/monitor/system/status?vdom={{vdom}}
cURL:
curl -k -X GET "https://192.168.1.111/api/v2/monitor/system/status?vdom=root" \
-H "Authorization: Bearer YOUR_API_TOKEN"
Headers:
Authorization: Bearer {{api_token}}
2. POST - Create Firewall Address
Endpoint:
{{base_url}}/api/v2/cmdb/firewall/address?vdom={{vdom}}
cURL:
curl -k -X POST "https://192.168.1.111/api/v2/cmdb/firewall/address?vdom=root" \
-H "Authorization: Bearer YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"name": "API_Demo_Server",
"subnet": "10.0.5.100 255.255.255.255",
"type": "ipmask",
"comment": "Created via Postman API"
}'
Request Body (JSON):
{
"name": "API_Demo_Server",
"subnet": "10.0.5.100 255.255.255.255",
"type": "ipmask",
"comment": "Created via Postman API"
}
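The same POST can be assembled with only the Python standard library. This sketch builds the request without sending it (the base URL and token are placeholders; `urllib.request.urlopen(req)` would fire it after you add an SSL context that accepts the lab's self-signed certificate, the `curl -k` equivalent):

```python
import json
import urllib.request

base_url = "https://192.168.1.111"  # placeholder lab FortiGate
token = "YOUR_API_TOKEN"            # placeholder API token

payload = {
    "name": "API_Demo_Server",
    "subnet": "10.0.5.100 255.255.255.255",
    "type": "ipmask",
    "comment": "Created via API",
}

# Assemble (but do not send) the POST shown above.
req = urllib.request.Request(
    url=f"{base_url}/api/v2/cmdb/firewall/address?vdom=root",
    data=json.dumps(payload).encode(),
    headers={"Authorization": f"Bearer {token}",
             "Content-Type": "application/json"},
    method="POST",
)
print(req.get_method(), req.full_url)
```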
3. PUT - Update Interface
Endpoint:
{{base_url}}/api/v2/cmdb/system/interface/port2?vdom={{vdom}}
cURL:
curl -k -X PUT "https://192.168.1.111/api/v2/cmdb/system/interface/port2?vdom=root" \
-H "Authorization: Bearer YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"alias": "LAN-Internal",
"description": "Updated via Postman API"
}'
Request Body (JSON):
{
"alias": "LAN-Internal",
"description": "Updated via Postman API"
}
4. GET - Firewall Addresses & Policies
# Get all firewall addresses
curl -k -X GET "https://192.168.1.111/api/v2/cmdb/firewall/address?vdom=root" \
-H "Authorization: Bearer YOUR_API_TOKEN"
# Get all firewall policies
curl -k -X GET "https://192.168.1.111/api/v2/cmdb/firewall/policy?vdom=root" \
-H "Authorization: Bearer YOUR_API_TOKEN"
# Get system interfaces
curl -k -X GET "https://192.168.1.111/api/v2/cmdb/system/interface?vdom=root" \
-H "Authorization: Bearer YOUR_API_TOKEN"
5. POST - Create Firewall Policy
Endpoint:
{{base_url}}/api/v2/cmdb/firewall/policy?vdom={{vdom}}
Request Body (JSON):
{
"name": "Allow-Web-Traffic",
"srcintf": [{"name": "port1"}],
"dstintf": [{"name": "port2"}],
"srcaddr": [{"name": "all"}],
"dstaddr": [{"name": "API_Demo_Server"}],
"service": [{"name": "HTTP"}, {"name": "HTTPS"}],
"action": "accept",
"status": "enable"
}
6. Common API Endpoints Reference
System Monitoring (GET only):
/api/v2/monitor/system/status
/api/v2/monitor/system/interface
/api/v2/monitor/system/resource/usage
/api/v2/monitor/firewall/session
Configuration (GET, POST, PUT, DELETE):
/api/v2/cmdb/system/interface
/api/v2/cmdb/system/global
/api/v2/cmdb/firewall/address
/api/v2/cmdb/firewall/addrgrp
/api/v2/cmdb/firewall/policy
/api/v2/cmdb/firewall/service/custom
/api/v2/cmdb/router/static
Logs (GET only):
/api/v2/log/memory/filter
/api/v2/log/disk/filter
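The read-only vs. full-CRUD split across the three API trees can be encoded so a script fails fast before hitting the firewall. A hypothetical helper:

```python
# Encode the method rules described above: monitor/ and log/ are
# read-only, cmdb/ supports full CRUD.
def allowed_methods(endpoint: str) -> set:
    if endpoint.startswith(("/api/v2/monitor/", "/api/v2/log/")):
        return {"GET"}                            # read-only trees
    if endpoint.startswith("/api/v2/cmdb/"):
        return {"GET", "POST", "PUT", "DELETE"}   # configuration tree
    raise ValueError(f"unknown API tree: {endpoint}")

print(allowed_methods("/api/v2/monitor/system/status"))  # {'GET'}
```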
Resources
Video 7: NetBox Installation Using Docker
Overview
Install and configure NetBox using Docker Compose for network infrastructure management.
What You'll Learn
- Deploy NetBox with Docker Compose
- Initial NetBox configuration
- Create sites, devices, and IP addresses
- NetBox data modeling best practices
Commands
1. Install Docker & Docker Compose
# Update system
sudo apt update && sudo apt upgrade -y
# Install Docker
sudo apt install -y docker.io docker-compose
# Add user to docker group
sudo usermod -aG docker $USER
# Log out and back in for group changes to take effect
# Verify
docker --version
docker-compose --version
2. Clone NetBox Docker Repository
# Create directory for NetBox
mkdir -p ~/netbox-discovery
cd ~/netbox-discovery
# Clone official NetBox Docker repository
git clone -b release https://github.com/netbox-community/netbox-docker.git
cd netbox-docker
# Verify files
ls -la
3. Configure NetBox
# Copy example configuration
cp docker-compose.override.yml.example docker-compose.override.yml
# Generate secret key
SECRET_KEY=$(python3 -c 'import secrets; print(secrets.token_urlsafe(50))')
echo "SECRET_KEY=$SECRET_KEY"
# Create environment file
cat << EOF > .env
SUPERUSER_EMAIL=admin@example.com
SUPERUSER_PASSWORD=admin
SUPERUSER_API_TOKEN=$(python3 -c 'import secrets; print(secrets.token_hex(20))')
SECRET_KEY=$SECRET_KEY
EOF
# View configuration
cat .env
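The key generation performed by the shell one-liners above can also be done as a standalone Python script: a URL-safe SECRET_KEY from 50 random bytes and a 40-hex-character API token, emitted in .env format (the email and password values are the same placeholders used above):

```python
import secrets

# Generate the secrets the .env file above needs.
secret_key = secrets.token_urlsafe(50)  # SECRET_KEY from 50 random bytes
api_token = secrets.token_hex(20)       # 40 hex characters

env = (
    "SUPERUSER_EMAIL=admin@example.com\n"
    "SUPERUSER_PASSWORD=admin\n"  # change after first login
    f"SUPERUSER_API_TOKEN={api_token}\n"
    f"SECRET_KEY={secret_key}\n"
)
print(env)
```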
4. Start NetBox
# Pull Docker images
docker-compose pull
# Start NetBox
docker-compose up -d
# Check container status
docker-compose ps
# View logs
docker-compose logs -f netbox
# Wait for startup (about 2 minutes)
# Press Ctrl+C to stop following logs
5. Access NetBox Web UI
# Get NetBox IP
ip addr show
# Access via browser:
# http://<your-ip>:8000
# Default credentials:
# Username: admin
# Password: admin
# IMPORTANT: Change the password after first login!
6. NetBox Initial Setup
In the NetBox UI:
1. Create Site:
   - Organization > Sites > Add
   - Name: HQ-DataCenter
   - Status: Active
2. Create Device Role:
   - Devices > Device Roles > Add
   - Name: Router
   - Color: Blue
3. Create Manufacturer:
   - Devices > Manufacturers > Add
   - Name: Cisco
4. Create Device Type:
   - Devices > Device Types > Add
   - Manufacturer: Cisco
   - Model: CSR1000v
5. Create Device:
   - Devices > Devices > Add
   - Name: vIOS-R1
   - Device Role: Router
   - Device Type: Cisco CSR1000v
   - Site: HQ-DataCenter
   - Status: Active
7. NetBox CLI Commands
# Enter NetBox container
docker-compose exec netbox bash
# Inside container - Django shell
python manage.py shell
# Create superuser
python manage.py createsuperuser
# Collect static files
python manage.py collectstatic --no-input
# Run migrations
python manage.py migrate
# Exit container
exit
8. NetBox Backup
# Backup NetBox database
docker-compose exec -T postgres pg_dump -U netbox netbox > netbox_backup_$(date +%Y%m%d).sql
# Backup media files
docker-compose exec netbox tar -czf /tmp/media_backup.tar.gz /opt/netbox/netbox/media
docker cp netbox-docker_netbox_1:/tmp/media_backup.tar.gz ./media_backup_$(date +%Y%m%d).tar.gz
# List backups
ls -lh netbox_backup* media_backup*
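The dated file names used above can be generated in Python when wrapping the backup in a scheduled job. A small sketch:

```python
from datetime import date

# Build the dated backup names used above, so a cron/systemd job can
# create (and later prune) backups consistently.
def backup_name(prefix: str, d: date, ext: str = "sql") -> str:
    return f"{prefix}_{d.strftime('%Y%m%d')}.{ext}"

print(backup_name("netbox_backup", date(2024, 1, 31)))  # netbox_backup_20240131.sql
```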
Resources
Video 8: NetBox Dynamic Inventory - Core Concepts Explained
Overview
Understand the architecture of NetBox Auto-Discovery using DIODE and ORB Agent.
What You'll Learn
- NetBox Auto-Discovery architecture
- DIODE Server components
- ORB Agent functionality
- Data flow and OAuth authentication
Architecture Overview
+-----------------+      +-----------------+      +-----------------+
|    ORB Agent    |----->|  DIODE Server   |----->|  NetBox Plugin  |
|   (Discovery)   |      |  (Processing)   |      |    (Storage)    |
+-----------------+      +-----------------+      +-----------------+
         |                        |                        |
         v                        v                        v
     SSH/SNMP                PostgreSQL                NetBox DB
      Devices                  Hydra
                              OAuth 2.0
Key Components
1. ORB Agent - Discovery Engine
Purpose: Discovers network devices and collects configuration data
How it works:
- Runs as Docker container
- Uses NAPALM drivers for multi-vendor support
- Connects to devices via SSH/SNMP/NETCONF
- Scheduled discovery (configurable interval)
- Sends data to DIODE Server
Supported Vendors:
- Cisco (IOS, IOS-XE, NXOS, IOS-XR)
- Juniper (JunOS)
- Arista (EOS)
- And more via NAPALM
2. DIODE Server - Processing Layer
Purpose: Processes and validates network data
Components:
- Ingester: Receives data from ORB Agent
- PostgreSQL: Stores raw network data
- Reconciler: Creates NetBox changesets
- Hydra: OAuth 2.0 authorization server
- Nginx: Reverse proxy
Data Flow:
ORB Agent -> Ingester -> PostgreSQL -> Reconciler -> NetBox
                 ^
            Hydra (Auth)
3. NetBox DIODE Plugin - Integration
Purpose: Receives data from DIODE and updates NetBox
Functions:
- Pulls changesets from DIODE API
- Creates/updates devices
- Manages interfaces and IP addresses
- Handles OAuth authentication
- Provides UI for credential management
4. OAuth 2.0 Authentication Flow
+-------------+                          +-------------+
|  ORB Agent  |                          |    Hydra    |
+------+------+                          +------+------+
       |                                        |
       |  1. Request Token                      |
       |--------------------------------------->|
       |                                        |
       |  2. Validate Credentials               |
       |<---------------------------------------|
       |     Return JWT Token                   |
       |                                        |
       |  3. Send Data + Token                  |
       |--------------------------------------->|
       |                                        |
       |  4. Verify Token                       |
       |<---------------------------------------|
       |     Accept Data                        |
       |                                        |
Three OAuth Clients:
- orb-agent: ORB Agent -> DIODE
- netbox-to-diode: NetBox Plugin -> DIODE
- diode-to-netbox: DIODE -> NetBox (future use)
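The token request each of these clients performs is a standard OAuth 2.0 client_credentials grant. A sketch that only builds the form body (the ID and secret are placeholders; POSTing this body to the server's /oauth2/token endpoint returns a JWT access token):

```python
from urllib.parse import urlencode

# Build the form body of the client_credentials grant sent to Hydra.
def token_request_body(client_id: str, client_secret: str) -> str:
    return urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    })

print(token_request_body("netbox-to-diode", "example-secret"))
```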
Resources
Video 9: NetBox Auto-Discovery - Complete Setup Guide
Overview
Complete step-by-step setup of NetBox Auto-Discovery with DIODE Server and ORB Agent.
What You'll Learn
- Deploy DIODE Server with Docker
- Install NetBox DIODE Plugin
- Configure ORB Agent
- Troubleshoot common issues
- Live auto-discovery demonstration
Commands
1. Deploy DIODE Server
# Create directory for DIODE
mkdir -p ~/netbox-discovery/diode
cd ~/netbox-discovery/diode
# Download DIODE quick-start script
curl -o diode-quickstart.sh https://raw.githubusercontent.com/netboxlabs/diode/main/diode-quickstart.sh
# Make executable
chmod +x diode-quickstart.sh
# Run quick-start (replace with your NetBox URL)
./diode-quickstart.sh --netbox-url http://192.168.1.120:8000
# This script will:
# - Download docker-compose.yml
# - Create .env file
# - Generate OAuth credentials
# - Start all DIODE services
# View generated OAuth credentials
cat oauth2/client/client-credentials.json
# Start DIODE services
docker-compose up -d
# Check status
docker-compose ps
2. Install NetBox DIODE Plugin
# Navigate to NetBox directory
cd ~/netbox-docker
# Enter NetBox container as root
docker-compose exec -u root netbox bash
# Inside container - Update package list
apt update
# Install pip (if not already installed)
apt install -y python3-pip
# Install NetBox DIODE plugin
pip3 install --target=/opt/netbox/venv/lib/python3.12/site-packages \
--break-system-packages \
netboxlabs-diode-netbox-plugin
# Verify installation
ls -la /opt/netbox/venv/lib/python3.12/site-packages/ | grep -i diode
# Test import
/opt/netbox/venv/bin/python3 -c "import netbox_diode_plugin; print('Plugin installed successfully')"
# Exit container
exit
# Commit the container to make the plugin permanent
docker commit netbox-docker_netbox_1 netbox-with-diode:latest
# Point docker-compose.override.yml at the new image - edit the existing
# 'services:' block rather than appending a second top-level 'services:' key
nano docker-compose.override.yml
# services:
#   netbox:
#     image: netbox-with-diode:latest
3. Configure NetBox DIODE Plugin
# Edit NetBox plugins configuration
cd ~/netbox-docker
nano configuration/plugins.py
# Add at the end of the file:
PLUGINS = [
'netbox_diode_plugin',
]
PLUGINS_CONFIG = {
'netbox_diode_plugin': {
'diode_target_override': 'grpc://192.168.1.120:8080/diode',
'netbox_to_diode_client_secret': 'YOUR_SECRET_FROM_CLIENT_CREDENTIALS_JSON',
'hydra_admin_url': 'http://diode-hydra-1:4445',
}
}
# Save and exit (Ctrl+O, Enter, Ctrl+X)
# Get the actual secret from DIODE credentials
cd ~/netbox-discovery/diode
cat oauth2/client/client-credentials.json | grep -A 3 "netbox-to-diode"
# Copy the client_secret value and update plugins.py
# Restart NetBox
cd ~/netbox-docker
docker-compose restart netbox netbox-worker
# Run migrations
docker-compose exec netbox python /opt/netbox/netbox/manage.py migrate
# Restart again
docker-compose restart netbox netbox-worker
4. Fix Network Connectivity (Critical!)
# Check if NetBox and DIODE can communicate
cd ~/netbox-docker
docker-compose exec netbox hostname -i
# Note the IP (e.g., 172.18.0.5)
cd ~/netbox-discovery/diode
docker-compose exec hydra hostname -i
# Note the IP (e.g., 172.19.0.3)
# If IPs are on different subnets, add network configuration
# Edit DIODE docker-compose.yml
nano docker-compose.yml
# Add to the BOTTOM of the file:
networks:
default:
external: true
name: netbox-docker_default
# Save and exit (Ctrl+O, Enter, Ctrl+X)
# Restart DIODE services
docker-compose down
docker-compose up -d
# Verify connectivity
cd ~/netbox-docker
docker-compose exec netbox curl http://diode-hydra-1:4445/health/ready
# Should return: {"status":"ok"}
5. Deploy ORB Agent
# Create directory for ORB Agent
mkdir -p ~/netbox-discovery/orb-agent
cd ~/netbox-discovery/orb-agent
# Create config.yaml (use OAuth credentials from NetBox UI)
# First, create OAuth credentials in NetBox:
# NetBox UI > Plugins > DIODE > Client Credentials > Add
# Create config.yaml
cat << 'EOF' > config.yaml
discovery:
device_credentials:
- username: cisco
password: cisco
driver: ios
interfaces:
enabled: true
mac_addresses:
enabled: true
ip_addresses:
enabled: true
vlans:
enabled: true
devices:
- hostname: 192.168.1.10
driver: ios
- hostname: 192.168.1.11
driver: ios
- hostname: 192.168.1.12
driver: ios
oauth:
client_id: your-client-id-from-netbox
client_secret: your-client-secret-from-netbox
token_url: http://192.168.1.120:8080/oauth2/token
diode:
target: 192.168.1.120:8080
schedule:
interval: 900 # 15 minutes
EOF
# Create docker-compose.yml for ORB Agent
cat << 'EOF' > docker-compose.yml
services:
orb-agent:
image: netboxlabs/orb-agent:latest
container_name: orb-agent
volumes:
- /home/user/netbox-discovery/orb-agent/config.yaml:/app/config.yaml
restart: unless-stopped
EOF
# IMPORTANT: Update path to absolute path
# Replace /home/user with your actual home directory path
# Start ORB Agent
docker-compose up -d
# Check logs
docker logs -f orb-agent
6. Verify Auto-Discovery
# Check ORB Agent logs
cd ~/netbox-discovery/orb-agent
docker logs orb-agent | grep "Successful ingestion"
# Check DIODE ingester logs
cd ~/netbox-discovery/diode
docker-compose logs diode-ingester | grep -i "success"
# Check DIODE reconciler logs
docker-compose logs diode-reconciler | grep "applied successfully"
# Check NetBox UI
# Navigate to: Devices > Devices
# You should see discovered devices!
7. Troubleshooting Commands
# Check all container status
cd ~/netbox-docker && docker-compose ps
cd ~/netbox-discovery/diode && docker-compose ps
cd ~/netbox-discovery/orb-agent && docker-compose ps
# Check NetBox plugin configuration
cd ~/netbox-docker
docker-compose exec netbox python /opt/netbox/netbox/manage.py shell
>>> from django.conf import settings
>>> print(settings.PLUGINS_CONFIG.get('netbox_diode_plugin'))
>>> exit()
# List OAuth clients in Hydra
cd ~/netbox-discovery/diode
docker-compose exec hydra hydra list clients --endpoint http://localhost:4445
# Test OAuth token generation
curl -X POST http://192.168.1.120:8080/oauth2/token \
-d "grant_type=client_credentials" \
-d "client_id=YOUR_CLIENT_ID" \
-d "client_secret=YOUR_CLIENT_SECRET"
# Check DIODE API health
curl http://192.168.1.120:8080/health
# Restart all services
cd ~/netbox-docker && docker-compose restart
cd ~/netbox-discovery/diode && docker-compose restart
cd ~/netbox-discovery/orb-agent && docker-compose restart
Common Issues & Solutions
Issue 1: "Missing netbox to diode client secret"
Error Message:
Please update the plugin configuration to access this feature.
Missing netbox to diode client secret.
Root Cause: The NetBox DIODE plugin configuration is missing the OAuth client secret parameter, or uses the wrong parameter name.
Why This Happens:
- Incorrect parameter name used in plugins.py (diode_client_secret instead of netbox_to_diode_client_secret)
- Plugin configuration not updated after initial installation
- NetBox services not restarted after configuration changes
- Client secret doesn't match the value in DIODE's client-credentials.json
STEP-BY-STEP FIX:
1. Verify DIODE has the Client Credentials
# Navigate to DIODE directory
cd ~/netbox-discovery/diode
# Check if client credentials file exists
cat oauth2/client/client-credentials.json
Expected output:
[
{
"client_id": "diode-ingest",
"client_secret": "tcgnPFpZ3qVtxyqR+sGXIexpxAk8wul2S8yu7Duans=",
"grant_types": ["client_credentials"],
"scope": "diode:ingest"
},
{
"client_id": "netbox-to-diode",
"client_secret": "GdS63SRQ4I0G15I0V35uQYD7V+qnNUTjZCCD10yQvQ=",
"grant_types": ["client_credentials"],
"scope": "diode:read diode:write"
},
{
"client_id": "diode-to-netbox",
"client_secret": "NypFsBMV1rQTRDX5jO7Utez57DwF503gk8e2QKADLU=",
"grant_types": ["client_credentials"],
"scope": "netbox:read netbox:write"
}
]
Copy the netbox-to-diode client_secret value - you'll need this!
2. Verify Hydra Has the Client
# Check Hydra container is running
cd ~/netbox-discovery/diode
docker-compose ps hydra
# List all OAuth clients in Hydra
docker-compose exec hydra hydra list clients --endpoint http://localhost:4445
Expected output should include:
CLIENT ID GRANT TYPES RESPONSE TYPES
netbox-to-diode client_credentials token
diode-to-netbox client_credentials token
diode-ingest client_credentials token
If you see netbox-to-diode, the client exists in Hydra.
3. Check NetBox Plugin Configuration
# Navigate to NetBox directory
cd ~/netbox-docker
# Check the plugins configuration file
cat configuration/plugins.py | grep -A 10 "netbox_diode_plugin"
WRONG configuration (causes the error):
PLUGINS_CONFIG = {
    'netbox_diode_plugin': {
        'diode_target_override': 'grpc://192.168.1.20:8080/diode',
        'diode_client_id': 'netbox-to-diode',      # WRONG - the plugin doesn't use this
        'diode_client_secret': 'GdS63SRQ4I0G...',  # WRONG parameter name
    }
}
CORRECT configuration:
PLUGINS_CONFIG = {
    'netbox_diode_plugin': {
        'diode_target_override': 'grpc://192.168.1.20:8080/diode',
        'netbox_to_diode_client_secret': 'GdS63SRQ4I0G15I0V35uQYD7V+qnNUTjZCCD10yQvQ=',  # CORRECT
        'hydra_admin_url': 'http://diode-hydra-1:4445',  # REQUIRED
    }
}
Key Differences:
- Remove diode_client_id - the plugin doesn't use this parameter
- Change diode_client_secret to netbox_to_diode_client_secret
- Add the hydra_admin_url parameter (required for the plugin to function)
4. Update NetBox Configuration
cd ~/netbox-docker
# Edit the plugins configuration
nano configuration/plugins.py
Update to the CORRECT configuration:
PLUGINS = [
'netbox_diode_plugin',
]
PLUGINS_CONFIG = {
'netbox_diode_plugin': {
'diode_target_override': 'grpc://192.168.1.20:8080/diode',
'netbox_to_diode_client_secret': 'GdS63SRQ4I0G15I0V35uQYD7V+qnNUTjZCCD10yQvQ=', # Use YOUR actual secret
'hydra_admin_url': 'http://diode-hydra-1:4445',
}
}
Replace the secret with YOUR actual value from client-credentials.json.
Save: Ctrl+O, Enter, Ctrl+X
5. Restart NetBox Services
cd ~/netbox-docker
# Restart NetBox and NetBox worker
docker-compose restart netbox netbox-worker
# Wait 30 seconds for services to fully restart
sleep 30
# Check if services are running
docker-compose ps
Expected: Both netbox and netbox-worker should show status "Up"
6. Verify Configuration Was Applied
# Enter NetBox container Python shell
docker-compose exec netbox python /opt/netbox/netbox/manage.py shell
Inside Python shell, run:
from django.conf import settings
config = settings.PLUGINS_CONFIG.get('netbox_diode_plugin')
print(config)
Expected output:
{
'diode_target_override': 'grpc://192.168.1.20:8080/diode',
'netbox_to_diode_client_secret': 'GdS63SRQ4I0G15I0V35uQYD7V+qnNUTjZCCD10yQvQ=',
'hydra_admin_url': 'http://diode-hydra-1:4445'
}
Verify that netbox_to_diode_client_secret is NOT None.
Exit the Python shell: exit()
7. Test NetBox UI
- Open browser: http://192.168.1.20:8000
- Login as admin
- Navigate to: Plugins > DIODE > Client Credentials
- Click "+ Add a Credential"
The error should be gone!
Complete Verification Checklist:
# Run all these commands to verify everything:
# 1. Check DIODE credentials file exists
cat ~/netbox-discovery/diode/oauth2/client/client-credentials.json
# 2. Check Hydra has the client
cd ~/netbox-discovery/diode
docker-compose exec hydra hydra list clients --endpoint http://localhost:4445
# 3. Check NetBox plugin config syntax
cd ~/netbox-docker
grep -A 5 "netbox_diode_plugin" configuration/plugins.py
# 4. Verify NetBox can reach Hydra
docker-compose exec netbox curl http://diode-hydra-1:4445/health/ready
# Should return: {"status":"ok"}
# 5. Check NetBox logs for plugin errors
docker-compose logs netbox | grep -i diode | tail -20
π― Quick Fix Summary:
If you see the "Missing netbox to diode client secret" error:
# 1. Get the secret from DIODE
cd ~/netbox-discovery/diode
cat oauth2/client/client-credentials.json | grep -A 3 "netbox-to-diode"
# 2. Update NetBox config with CORRECT parameter name
cd ~/netbox-docker
nano configuration/plugins.py
# Make sure it says:
# 'netbox_to_diode_client_secret': 'YOUR_SECRET'
# NOT: 'diode_client_secret'
# And includes:
# 'hydra_admin_url': 'http://diode-hydra-1:4445'
# 3. Restart NetBox
docker-compose restart netbox netbox-worker
# 4. Wait 30 seconds and test
sleep 30
# Open browser and check Plugins > DIODE > Client Credentials
Additional Notes:
- The plugin expects the parameter name netbox_to_diode_client_secret (not diode_client_secret)
- The hydra_admin_url parameter is required for the plugin to manage OAuth clients
- The client ID is hardcoded as netbox-to-diode in the plugin - no need to specify it
- Ensure the NetBox and DIODE containers are on the same Docker network for connectivity
This configuration error is one of the most common issues during DIODE plugin setup. Following these exact steps will resolve it!
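These checks can also be scripted. A minimal sketch in Python - the key names come from the steps above, while the validate_diode_config helper itself is ours for illustration, not part of the plugin:

```python
def validate_diode_config(config: dict) -> list:
    """Return a list of problems found in a netbox_diode_plugin config dict."""
    problems = []
    if 'diode_client_secret' in config:
        # the most common mistake: the wrong parameter name
        problems.append("wrong key: use 'netbox_to_diode_client_secret', not 'diode_client_secret'")
    if not config.get('netbox_to_diode_client_secret'):
        problems.append("'netbox_to_diode_client_secret' is missing or None")
    if 'hydra_admin_url' not in config:
        problems.append("'hydra_admin_url' is required for the plugin to manage OAuth clients")
    return problems

# A config using the wrong key name trips all three checks:
print(validate_diode_config({'diode_client_secret': 'abc'}))
```

Run it against the dict you paste into configuration/plugins.py before restarting NetBox.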
Issue 2: Network Connectivity - "Could not resolve host"
Solution:
cd ~/netbox-discovery/diode
nano docker-compose.yml
# Add to bottom:
networks:
  default:
    external: true
    name: netbox-docker_default
# Restart
docker-compose down && docker-compose up -d
Issue 3: ORB Agent Config Not Found
Solution:
# Use absolute path in docker-compose.yml
volumes:
  - /home/username/netbox-discovery/orb-agent/config.yaml:/app/config.yaml
# Not relative path like:
# - ./config.yaml:/app/config.yaml
π Resources
Video 10: pyATS + NetBox Integration
π Overview
Integrate Cisco pyATS testing framework with NetBox for automated network validation.
π― What You'll Learn
- Install and configure pyATS
- Create testbed from NetBox data
- Run automated network tests
- Generate test reports
π» Commands
1. Install pyATS and Genie
# Activate virtual environment
source ~/ansible-venv/bin/activate
# Install pyATS with all optional components (quote the brackets so the shell doesn't expand them)
pip install "pyats[full]"
# Install Genie
pip install genie
# Verify installation
pyats version
genie --version
2. Create pyATS Testbed
# Create testbed directory
mkdir -p ~/pyats-netbox && cd ~/pyats-netbox
# Create testbed.yaml
cat << 'EOF' > testbed.yaml
testbed:
  name: Network_Lab
  credentials:
    default:
      username: cisco
      password: cisco

devices:
  vIOS-R1:
    os: ios
    type: router
    connections:
      cli:
        protocol: ssh
        ip: 192.168.1.10
  vIOS-R2:
    os: ios
    type: router
    connections:
      cli:
        protocol: ssh
        ip: 192.168.1.11
  vIOS-R3:
    os: ios
    type: router
    connections:
      cli:
        protocol: ssh
        ip: 192.168.1.12
EOF
# Verify testbed
pyats validate testbed testbed.yaml
3. Test Device Connectivity
# Connect to device
pyats shell --testbed-file testbed.yaml
# Inside pyATS shell
>>> devices['vIOS-R1'].connect()
>>> devices['vIOS-R1'].execute('show version')
>>> devices['vIOS-R1'].disconnect()
>>> exit()
4. Parse Show Commands
# Create parser script
cat << 'EOF' > parse_devices.py
#!/usr/bin/env python3
from pyats import topology

# Load testbed
testbed = topology.loader.load('testbed.yaml')

# Connect to device
device = testbed.devices['vIOS-R1']
device.connect()

# Parse show version (Genie returns structured data instead of raw text)
output = device.parse('show version')
print(f"Hostname: {output['version']['hostname']}")
print(f"Version: {output['version']['version']}")
print(f"Uptime: {output['version']['uptime']}")

# Parse show interfaces
interfaces = device.parse('show interfaces')
for intf in interfaces['interfaces']:
    print(f"Interface: {intf}")
    print(f"  Status: {interfaces['interfaces'][intf]['oper_status']}")

device.disconnect()
EOF
chmod +x parse_devices.py
python3 parse_devices.py
5. Learn Device Features
# Learn interface configuration
pyats learn interface --testbed-file testbed.yaml --devices vIOS-R1 --output interface_state/
# Learn routing table
pyats learn routing --testbed-file testbed.yaml --devices vIOS-R1 --output routing_state/
# Learn OSPF
pyats learn ospf --testbed-file testbed.yaml --devices vIOS-R1 --output ospf_state/
# Learn all features
pyats learn all --testbed-file testbed.yaml --output all_features/
6. Create Test Script
# Create test script
cat << 'EOF' > test_network.py
#!/usr/bin/env python3
from pyats import aetest


class CommonSetup(aetest.CommonSetup):
    @aetest.subsection
    def connect_to_devices(self, testbed):
        for device in testbed.devices.values():
            device.connect()


class InterfaceTest(aetest.Testcase):
    @aetest.setup
    def setup(self, testbed):
        self.device = testbed.devices['vIOS-R1']

    @aetest.test
    def test_interfaces_up(self):
        output = self.device.parse('show interfaces')
        failed_interfaces = []
        for intf in output['interfaces']:
            status = output['interfaces'][intf]['oper_status']
            if status != 'up':
                failed_interfaces.append(intf)
        if failed_interfaces:
            self.failed(f"Interfaces down: {failed_interfaces}")
        else:
            self.passed("All interfaces are up")


class CommonCleanup(aetest.CommonCleanup):
    @aetest.subsection
    def disconnect_from_devices(self, testbed):
        for device in testbed.devices.values():
            device.disconnect()


if __name__ == '__main__':
    import argparse
    from pyats.topology import loader

    parser = argparse.ArgumentParser()
    parser.add_argument('--testbed', dest='testbed')
    args, unknown = parser.parse_known_args()
    testbed = loader.load(args.testbed)
    aetest.main(testbed=testbed)
EOF
chmod +x test_network.py
python3 test_network.py --testbed testbed.yaml
7. Compare Network States
# Learn "before" state
pyats learn interface --testbed-file testbed.yaml --output before/
# Make changes on devices...
# Learn "after" state
pyats learn interface --testbed-file testbed.yaml --output after/
# Compare states
pyats diff before/ after/
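Conceptually, the diff walks both snapshots key by key and reports what changed. A simplified stdlib illustration of that idea (not the actual pyATS implementation):

```python
def diff_state(before: dict, after: dict) -> dict:
    """Return the keys whose values differ between two learned-state dicts."""
    changes = {}
    for key in before.keys() | after.keys():  # union of both key sets
        old, new = before.get(key), after.get(key)
        if old != new:
            changes[key] = {'before': old, 'after': new}
    return changes

before = {'GigabitEthernet0/0': 'up', 'GigabitEthernet0/1': 'up'}
after = {'GigabitEthernet0/0': 'up', 'GigabitEthernet0/1': 'down'}
print(diff_state(before, after))
# -> {'GigabitEthernet0/1': {'before': 'up', 'after': 'down'}}
```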
8. Integration with NetBox
# Create script to sync pyATS results to NetBox
cat << 'EOF' > sync_to_netbox.py
#!/usr/bin/env python3
import requests
from pyats import topology

# NetBox configuration
NETBOX_URL = "http://192.168.1.120:8000"
NETBOX_TOKEN = "your-netbox-api-token"
DEVICE_ID = 1  # NetBox device ID for vIOS-R1 (look it up under DCIM > Devices)

headers = {
    "Authorization": f"Token {NETBOX_TOKEN}",
    "Content-Type": "application/json"
}

# Load testbed
testbed = topology.loader.load('testbed.yaml')

# Connect and collect data
device = testbed.devices['vIOS-R1']
device.connect()

# Parse interfaces
interfaces = device.parse('show interfaces')

# Update NetBox
for intf_name in interfaces['interfaces']:
    intf_data = interfaces['interfaces'][intf_name]
    # Create interface in NetBox (the API requires the parent device ID)
    data = {
        "device": DEVICE_ID,
        "name": intf_name,
        "type": "other",
        "enabled": intf_data['enabled'],
        "mtu": intf_data.get('mtu', 1500),
        "description": intf_data.get('description', '')
    }
    # API call to NetBox
    response = requests.post(
        f"{NETBOX_URL}/api/dcim/interfaces/",
        headers=headers,
        json=data
    )
    print(f"Updated interface: {intf_name} - Status: {response.status_code}")

device.disconnect()
EOF
chmod +x sync_to_netbox.py
python3 sync_to_netbox.py
9. Automated Testing with Job File
# Create job file
cat << 'EOF' > network_test_job.py
from pyats.easypy import run


def main(runtime):
    """Main job execution"""
    # Run the test script as a task
    run(
        testscript='test_network.py',
        runtime=runtime,
        taskid='InterfaceTests'
    )
EOF
# Run job
pyats run job network_test_job.py --testbed-file testbed.yaml
# View results
# Results are in: ./archive/
ls -la archive/
π Resources
Video 11: MCP Fundamentals - Quick Breakdown
π Overview
Understanding Model Context Protocol (MCP) - the universal standard that connects AI assistants to external tools. This video covers theory and concepts only - hands-on setup is in Video 12.
π― What You'll Learn
- What is MCP and why it matters
- MCP architecture (Host, Server, Resources)
- Transport methods (STDIO vs HTTP)
- FastMCP framework basics
ποΈ MCP Architecture
MCP ARCHITECTURE

+-------------+     JSON-RPC       +-------------------+
|             |  request/response  |    MCP SERVER     |
|  MCP HOST   |<------------------>|                   |
|             |                    |  TOOLS            |   API calls
|   Claude    |                    |  -----            | ------------>
|   Code      |                    |  * func1()        |
|             |                    |  * func2()        |
+-------------+                    +---------+---------+
                                             |
                                             v
                                     +--------------+
                                     |  RESOURCES   |
                                     |  ---------   |
                                     |  * APIs      |
                                     |  * Databases |
                                     |  * Files     |
                                     +--------------+
π MCP Components
| Component | Description | Examples |
|---|---|---|
| MCP Host | AI application interface | Claude Code, Claude Desktop, VS Code |
| MCP Server | Bridge between AI and tools | NetBox MCP, Custom servers |
| Resources | Actual data sources | APIs, Databases, Files |
| Tools | Functions AI can execute | get_devices(), get_ips() |
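Under the hood, host and server exchange JSON-RPC 2.0 messages. A rough sketch of one tool call - the envelope is standard JSON-RPC, but the payload fields here are simplified for illustration, not the exact MCP schema:

```python
import json

# Host -> server: ask the server to execute one of its tools
request = json.dumps({
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "get_devices", "arguments": {"site": "Main-DC"}},
})

# Server -> host: the result comes back carrying the same id
response = json.dumps({
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"devices": ["vIOS-R1", "vIOS-R2", "vIOS-R3"]},
})

msg = json.loads(response)
print(msg["result"]["devices"])  # the host hands this data to the model
```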
π Transport Methods
| Transport | Use Case | Pros | Cons |
|---|---|---|---|
| STDIO | Local development | Simple, Secure, No ports | Same machine only |
| HTTP/SSE | Remote/Team | Multi-user, Docker-friendly | Network config needed |
π Transport Decision Guide
| Scenario | Use |
|---|---|
| Local development | STDIO |
| Same machine | STDIO |
| Remote server | HTTP |
| Docker deployment | HTTP |
| Team shared | HTTP |
| Just starting? | STDIO |
π‘ Start with STDIO, then graduate to HTTP
π Resources
Video 12: NetBox + MCP Hands-On Setup
π Overview
Complete hands-on guide to connect Claude AI to NetBox using Model Context Protocol (MCP) for natural language infrastructure queries.
π― What You'll Learn
- Install Claude Code CLI on Linux
- Setup UV Python package manager
- Clone and configure NetBox MCP Server
- Create NetBox API token
- Query NetBox using natural language
ποΈ Lab Architecture
LAB TOPOLOGY

+-------------------+     API     +-------------------+
|   Ansible Node    |<----------->|      NetBox       |
|   (Claude Host)   |             |   192.168.1.20    |
|                   |             |                   |
|   * Claude Code   |             |   [ 3 Devices ]   |
|   * NetBox MCP    |             |                   |
|   * UV / Python   |             +-------------------+
+-------------------+

Devices in NetBox:
+---------+   +---------+   +---------+
| vIOS-R1 |   | vIOS-R2 |   | vIOS-R3 |
|  .201   |   |  .202   |   |  .203   |
+---------+   +---------+   +---------+

Note: Actual routers are NOT required for MCP queries!
π Key Concepts
| Term | Description |
|---|---|
| Node.js | JavaScript runtime for running CLI tools like Claude Code |
| NPM | Node Package Manager - like pip for JavaScript packages |
| Claude Code | CLI interface for Claude AI (Linux option) |
| Claude Desktop | GUI application (Windows/macOS only - no Linux) |
| UV | Fast Python package manager built in Rust (10-100x faster than pip) |
| MCP Host | Where the AI runs (Claude Code in our case) |
| MCP Server | Bridge that translates AI requests to API calls |
| Tools | Functions the AI can call (e.g., netbox_get_objects) |
| Pontification | Claude's "thinking out loud" reasoning process |
| DCIM | Data Center Infrastructure Management (devices, racks, sites) |
| IPAM | IP Address Management (IPs, prefixes, VLANs) |
π» Commands
1. Install Node.js 20+
# Check if Node.js is installed
node --version
# If not installed, add NodeSource repository
curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
# Install Node.js
sudo apt install -y nodejs
# Verify installation
node --version # Should show v20.x.x
npm --version # Should show 10.x.x
What is Node.js? JavaScript runtime that lets you run JavaScript outside a browser. Many modern CLI tools are built with it.
What is NPM? Node Package Manager - like pip for Python or apt for Ubuntu.
2. Install Claude Code CLI
# Install Claude Code globally
sudo npm install -g @anthropic-ai/claude-code
# Verify installation
claude --version
# Check available commands
claude --help
Claude Offerings:
| Option | Platform | Description |
|---|---|---|
| Claude.ai | Web | Browser-based interface |
| Claude Desktop | Windows/macOS | GUI application (No Linux!) |
| Claude Code | All platforms | CLI interface β Our choice for Linux |
3. Install UV (Python Package Manager)
# Install UV
curl -LsSf https://astral.sh/uv/install.sh | sh
# Add to PATH (current session)
source $HOME/.local/bin/env
# Add to PATH permanently
echo 'source $HOME/.local/bin/env' >> ~/.bashrc
source ~/.bashrc
# Verify installation
uv --version
Why UV? 10-100x faster than pip, automatically creates virtual environments, handles all dependency resolution.
4. Clone NetBox MCP Server
# Create MCP servers directory
mkdir -p ~/mcp-servers
cd ~/mcp-servers
# Clone official NetBox MCP server from NetBox Labs
# Find at: NetBox Labs Docs > MCP Section > GitHub Repository
git clone https://github.com/netboxlabs/netbox-mcp-server.git
# Verify clone
ls -l
# Enter the directory
cd netbox-mcp-server
# View contents
ls -la
5. Create NetBox API Token
# In NetBox UI:
# 1. Login to NetBox: http://192.168.1.20:8000
# 2. Click your username (top right corner)
# 3. Select "API Tokens"
# 4. Click "+ Add a token"
# 5. Description: "mcp-setup"
# 6. Write enabled: Leave UNCHECKED (read-only is safer)
# 7. Click "Create"
# 8. COPY THE TOKEN IMMEDIATELY - shown only once!
6. Test MCP Server Standalone
# Navigate to MCP server directory
cd ~/mcp-servers/netbox-mcp-server
# Test MCP server connection (replace with your values)
NETBOX_URL=http://192.168.1.20:8000/ \
NETBOX_TOKEN=your-netbox-api-token \
uv run netbox-mcp-server
# What happens behind the scenes:
# 1. UV creates virtual environment (.venv folder)
# 2. Reads dependencies from pyproject.toml
# 3. Downloads packages (FastMCP, httpx, pydantic, etc.)
# 4. Runs the MCP server
# You should see:
# - FastMCP banner
# - Server name: NetBox
# - Message: "Starting MCP server 'NetBox' with transport 'stdio'"
# Press Ctrl+C to stop the server
What does this prove? MCP server can connect to NetBox API and is functioning correctly - but running standalone, not connected to Claude yet.
7. Configure Claude Code with MCP Server
# Register MCP server with Claude Code
claude mcp add netbox \
-e NETBOX_URL=http://192.168.1.20:8000/ \
-e NETBOX_TOKEN=your-netbox-api-token \
-- uv --directory ~/mcp-servers/netbox-mcp-server run netbox-mcp-server
# Command breakdown:
# claude mcp add netbox β Add server named "netbox"
# -e NETBOX_URL=... β Environment variable for URL
# -e NETBOX_TOKEN=... β Environment variable for token
# -- uv --directory ... run β Command to start the server
# Verify MCP server is registered
claude mcp list
# You should see:
# "Checking MCP server health..."
# "netbox β Connected"
Difference between Step 6 and Step 7:
| Step 6 | Step 7 |
|---|---|
| Manual standalone test | Register with Claude Code |
| We run the server | Claude manages the server |
| Verify it works | Enable Claude to use it |
8. Start Claude Code
# Launch Claude Code
claude
# First-time setup:
# 1. Choose text editor style (6 options) - pick default
# 2. Select login method:
# - Claude Subscription (Pro/Max) β Recommended
# - API Usage (pay per token)
# 3. Browser opens for authentication
# 4. Login and authorize
# 5. Press Enter to continue
# 6. Configure terminal settings (use default)
# Welcome screen shows:
# - Subscription details
# - Model: Claude Opus 4.5
# - Available MCP servers including "netbox"
Login Options:
| Option | Description | Pricing |
|---|---|---|
| Claude Subscription | Pro/Max monthly | Fixed monthly fee |
| API Usage | Pay per token | See anthropic.com/pricing |
9. Demo Queries
# Inside Claude Code, try these natural language queries:
# Query 1: List all devices
List all devices in my NetBox
# Triggers: netbox_get_objects tool
# Returns: Device ID, Name, Status, Type, Site, Primary IP
# Query 2: Site and role query
Show all devices from Main-DC with their role
# May show "Pontification" (Claude thinking out loud)
# Query 3: IP address query (IPAM)
List all IP addresses assigned to each device
# Queries ipam.ipaddresses object
# Query 4: MAC address query
List all MAC addresses available in NetBox
# Returns: MAC addresses for each interface
10. MCP Server Management
# List all registered MCP servers
claude mcp list
# Remove an MCP server
claude mcp remove netbox
# Re-add with different settings
claude mcp add netbox \
-e NETBOX_URL=http://new-ip:8000/ \
-e NETBOX_TOKEN=new-token \
-- uv --directory ~/mcp-servers/netbox-mcp-server run netbox-mcp-server
# View MCP server configuration
cat ~/.config/claude-code/mcp.json
11. Troubleshooting
# If UV times out during package install
UV_HTTP_TIMEOUT=120 uv sync
# If MCP server won't start - reset virtual environment
cd ~/mcp-servers/netbox-mcp-server
rm -rf .venv
uv sync
# If Claude can't connect to MCP server
claude mcp list
# Check for red X marks or "Disconnected" status
# Verify NetBox API is accessible
curl -s http://192.168.1.20:8000/api/ \
-H "Authorization: Token YOUR_TOKEN" | head -20
# Debug MCP server with logging
NETBOX_URL=http://192.168.1.20:8000/ \
NETBOX_TOKEN=your-token \
uv run netbox-mcp-server 2>&1 | tee mcp-debug.log
π― Demo Queries Reference
| Query | Tool Called | Data Returned |
|---|---|---|
| "List all devices" | netbox_get_objects | Device ID, Name, Status, Type, Site, IP |
| "Show devices from Main-DC with role" | netbox_get_objects | Devices filtered by site with roles |
| "List all IP addresses" | netbox_get_objects (ipam) | IPs, interfaces, assignment status |
| "List all MAC addresses" | netbox_get_objects | MAC addresses per interface |
β οΈ Important Notes
- READ-ONLY ACCESS: the NetBox MCP server only queries data. It CANNOT modify your NetBox, so it is safe for production environments.
- NO SSH TO DEVICES: we are querying NetBox (the source of truth), NOT the actual network devices. For device access, see Video 13: Custom MCP Server.
π Resources
- NetBox MCP Server GitHub
- NetBox Labs Documentation
- Claude Code Download
- Anthropic Pricing
- UV Package Manager
- FastMCP Documentation
- MCP Protocol
Video 13: Custom MCP Server - Network Device Automation
π Overview
Build a custom MCP server using FastMCP and Netmiko to SSH into network devices and run commands via Claude CLI.
π― What You'll Build
- Custom Python MCP server with FastMCP
- SSH connectivity using Netmiko
- Device inventory from YAML file
- Natural language to live device commands
ποΈ Architecture
+--------------+  STDIO  +----------------+   SSH   +-----------+
|  Claude CLI  |<------->| device_mcp.py  |<------->|  Routers  |
|              |         |                |         |           |
|  "Show       |         |  Python        |         |  vIOS-R1  |
|   version    |         |  FastMCP       |         |  vIOS-R2  |
|   on R1"     |         |  Netmiko       |         |  vIOS-R3  |
+--------------+         +-------+--------+         +-----------+
                                 ^
                                 |
                           devices.yaml
π» Commands
1. Project Setup
# Navigate to MCP servers directory
cd ~/mcp-servers
# Create project directory
mkdir device-mcp
cd device-mcp
# Initialize Python project with UV
uv init
# Add dependencies
uv add fastmcp netmiko pyyaml
# Verify installation
uv pip list | grep -E "fastmcp|netmiko"
2. devices.yaml - Device Inventory
vIOS-R1:
  host: 192.168.1.101
vIOS-R2:
  host: 192.168.1.102
vIOS-R3:
  host: 192.168.1.103
3. device_mcp.py - Custom MCP Server
#!/usr/bin/env python3
from fastmcp import FastMCP
from netmiko import ConnectHandler
import yaml

# Initialize MCP server
mcp = FastMCP("DeviceCommands")

# Credentials (for demo - use env vars in production)
CREDENTIALS = {
    "username": "ansible",
    "password": "ansible@123",
    "device_type": "cisco_ios"
}

# Load device inventory
def load_inventory(path: str = "devices.yaml"):
    with open(path, "r") as f:
        return yaml.safe_load(f)

DEVICES = load_inventory()

@mcp.tool()
def list_devices() -> str:
    """List all available network devices"""
    return "\n".join(DEVICES.keys())

@mcp.tool()
def run_command(device_name: str, command: str) -> str:
    """Run a show command on a network device via SSH.

    Args:
        device_name: Name of the device (e.g., vIOS-R1)
        command: The show command to run
    """
    if device_name not in DEVICES:
        return f"Error: Device {device_name} not found"
    device_info = DEVICES[device_name]
    connection_params = {
        "host": device_info["host"],
        **CREDENTIALS
    }
    try:
        with ConnectHandler(**connection_params) as conn:
            return conn.send_command(command)
    except Exception as e:
        return f"Error: {str(e)}"

@mcp.tool()
def get_device_info(device_name: str) -> dict:
    """Get connection info for a device"""
    if device_name not in DEVICES:
        return {"error": f"Device {device_name} not found"}
    return DEVICES[device_name]

if __name__ == "__main__":
    mcp.run()
4. Test MCP Server Locally
# Test the server runs without errors
uv run python device_mcp.py
# You should see FastMCP banner with "DeviceCommands"
# Press Ctrl+C to stop
5. Register MCP Server with Claude CLI
# Add MCP server to Claude CLI
claude mcp add device-commands -- uv --directory ~/mcp-servers/device-mcp run python device_mcp.py
# Verify registration
claude mcp list
# Expected output shows device-commands as connected
6. Launch Claude CLI and Test
# Start Claude CLI
claude
# Demo Query 1: List devices
"List the available devices"
# Demo Query 2: Show version
"Show me the IOS version on vIOS-R1"
# Demo Query 3: Interface info
"What interfaces are configured on vIOS-R2?"
# Demo Query 4: Multi-device query
"Show the routing table on all three routers"
7. MCP Server Management
# List all registered MCP servers
claude mcp list
# Remove an MCP server
claude mcp remove device-commands
# Re-add with different path
claude mcp add device-commands -- uv --directory /new/path run python device_mcp.py
# View MCP logs (for troubleshooting)
# Check ~/.claude/mcp-logs/
π¦ Example Queries
# List available devices
"What devices can you connect to?"
# Run show commands
"Show the version on vIOS-R1"
"What interfaces are on vIOS-R2?"
"Show me the routing table on vIOS-R3"
# Multi-device queries
"Show version on all routers"
"Check OSPF neighbors on R1 and R2"
"Compare the interface status across all routers"
π Resources
Video 14: Ansible Dynamic Inventory with NetBox
π Overview
Configure Ansible to pull device inventory directly from NetBox instead of static files. Add a device in NetBox and it's automatically available in Ansible. No more manual inventory updates!
π― What You'll Learn
- How dynamic inventory actually works
- Setting up the NetBox Ansible plugin
- Auto-grouping devices by role, site, and platform
- Running playbooks against the live inventory
- Troubleshooting common issues (including a Redis Docker bug)
ποΈ Architecture
STATIC INVENTORY (Old Way)             DYNAMIC INVENTORY (New Way)
--------------------------             ---------------------------

+----------------------+               +-----------------+
| inventory/hosts      |               |     NetBox      |
|                      |               |   (Source of    |
| [routers]            |               |     Truth)      |
| R1 ansible_host=...  |               +--------+--------+
| R2 ansible_host=...  |                        | API Call
| R3 ansible_host=...  |                        v
+----------+-----------+               +-----------------+
           |                           |   netbox.yml    |
           |                           | (Plugin Config) |
           v                           +--------+--------+
+----------------------+                        |
|       Ansible        |                        |
|   (Runs Playbook)    |<-----------------------+
+----------------------+

[X] Manual updates required            [OK] Always current
[X] Config drift risk                  [OK] Single source of truth
[X] Duplicate effort                   [OK] Auto-grouping by role/site
π How Dynamic Inventory Works
$ ansible-inventory --graph                 <-- You run this command
                |
                v
Ansible reads inventory/netbox.yml          <-- Plugin config file
                |
                v
GET /api/dcim/devices/                      <-- Automatic API call to NetBox
                |
                v
NetBox returns the device list (JSON)       <-- Real-time device data
                |
                v
@all:                                       <-- Dynamic inventory graph
|--@device_roles_router:
|  |--vIOS-R1
|  |--vIOS-R2
|  |--vIOS-R3
Key Concept: The netbox.yml file is NOT the inventory itself - it's the CONFIG that tells Ansible HOW to get inventory from NetBox. Every time you run an Ansible command, it queries NetBox in real-time!
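To make that flow concrete, here is a stdlib-only sketch of the grouping step: take the device records NetBox returns and build an inventory with device_roles_* groups. The sample data and the build_inventory name are ours, for illustration only:

```python
def build_inventory(devices):
    """Group NetBox-style device records the way group_by: device_roles does."""
    inventory = {'_meta': {'hostvars': {}}}
    for dev in devices:
        group = f"device_roles_{dev['role']}"
        inventory.setdefault(group, {'hosts': []})['hosts'].append(dev['name'])
        inventory['_meta']['hostvars'][dev['name']] = {
            'ansible_host': dev['primary_ip'].split('/')[0],  # strip the mask
        }
    return inventory

devices = [
    {'name': 'vIOS-R1', 'role': 'router', 'primary_ip': '192.168.1.101/24'},
    {'name': 'vIOS-R2', 'role': 'router', 'primary_ip': '192.168.1.102/24'},
]
inv = build_inventory(devices)
print(inv['device_roles_router']['hosts'])  # -> ['vIOS-R1', 'vIOS-R2']
```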
π Dynamic Inventory Structure
This section shows the JSON inventory structure that the dynamic inventory plugin produces.
1. π Host Metadata (_meta)
Contains individual device details like Serial Numbers, Manufacturer, and Hardware Roles.
{
    "_meta": {
        "hostvars": {
            "spine-01.dc1": {
                "ansible_host": "10.10.1.1",
                "serial_number": "SN-CISCO-123",
                "manufacturer": "Cisco",
                "model": "Nexus 9300",
                "site": "Chicago-DC",
                "device_role": "spine"
            },
            "leaf-01.dc1": {
                "ansible_host": "10.10.2.1",
                "serial_number": "SN-ARISTA-999",
                "manufacturer": "Arista",
                "model": "DCS-7050",
                "site": "Chicago-DC",
                "device_role": "leaf"
            }
        }
    }
}
ποΈ 2. Group Definitions
These groups allow you to target devices based on their function, location, or OS platform. Each group contains its own specific variables.
{
    "role_spine": {
        "hosts": ["spine-01.dc1"]
    },
    "role_leaf": {
        "hosts": ["leaf-01.dc1"]
    },
    "site_chicago": {
        "hosts": ["spine-01.dc1", "leaf-01.dc1"],
        "vars": {
            "ntp_server": "10.0.0.5",
            "timezone": "CST",
            "dns_primary": "8.8.8.8"
        }
    },
    "platform_ios": {
        "hosts": ["spine-01.dc1"],
        "vars": {
            "ansible_network_os": "cisco.ios.ios"
        }
    },
    "platform_eos": {
        "hosts": ["leaf-01.dc1"],
        "vars": {
            "ansible_network_os": "arista.eos.eos"
        }
    }
}
π 3. Global Hierarchy (all)
This is the "Parent" group. It lists all other groups as children and sets the global variables used for the entire fleet.
{
    "all": {
        "children": [
            "role_spine",
            "role_leaf",
            "site_chicago",
            "platform_ios",
            "platform_eos"
        ],
        "vars": {
            "ansible_connection": "ansible.netcommon.network_cli",
            "ansible_user": "netadmin",
            "ansible_become": true,
            "ansible_become_method": "enable"
        }
    }
}
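When one host sits in several of these groups, Ansible merges the variables, with more specific sources overriding less specific ones and host vars winning last. A simplified sketch of the effective variables for spine-01.dc1, using the data above (real Ansible precedence has more layers than this):

```python
# Group and host vars for spine-01.dc1, taken from the inventory above (subset)
all_vars = {'ansible_connection': 'ansible.netcommon.network_cli', 'ansible_user': 'netadmin'}
site_chicago = {'ntp_server': '10.0.0.5', 'timezone': 'CST'}
platform_ios = {'ansible_network_os': 'cisco.ios.ios'}
host_vars = {'ansible_host': '10.10.1.1'}

# Later dicts override earlier ones - host vars get the final say
effective = {**all_vars, **site_chicago, **platform_ios, **host_vars}
print(effective['ansible_network_os'])  # -> cisco.ios.ios
```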
π Home LAB Demo Setup
+---------------+                    +-----------------+
|   Fortinet    |                    |   NetBox-SOT    |
|   Firewall    |                    | (Inventory Src) |
| 192.168.1.211 |                    |  192.168.1.120  |
+-------+-------+                    +--------+--------+
        | port1                               | e0
        |                                     |
+----------+       +--------------------------+------+      +-------------+
|   Net    |-------|         Mgmt_Switch             |------|   Ansible   |
| (Cloud)  | Gi0/3 |                                 |      |    Node     |
+----------+       +-----+----------+----------+-----+      | 192.168.1.  |
                         |Gi0/0     |Gi0/1     |Gi0/2       |     119     |
                         |          |          |            +-------------+
                    +----+----+ +---+-----+ +--+------+
                    | vIOS-R1 |-| vIOS-R2 |-| vIOS-R3 |
                    | Router  | | Router  | | Router  |
                    | 192.168 | | 192.168 | | 192.168 |
                    | .1.101  | | .1.102  | | .1.103  |
                    +---------+ +---------+ +---------+

COMPONENTS
> NetBox-SOT (192.168.1.120) --- Source of Truth / Inventory Source
> Ansible Node (192.168.1.119) - Runs playbooks with dynamic inventory
> vIOS-R1/R2/R3 ---------------- Target network devices
> Fortinet Firewall ------------ Additional managed device
π Prerequisites
| Requirement | Details |
|---|---|
| NetBox | Running with devices configured |
| Devices | Primary IP assigned, Status = Active |
| Platform | Set for each device |
| API Token | NetBox API token ready |
| Ansible | Installed in virtual environment |
π» Commands
1. Verify NetBox is Running
# Access NetBox via VS Code Remote SSH or terminal
# Start NetBox Docker if not running
cd ~/netbox-docker
docker-compose up -d
# Test NetBox is accessible
curl -I http://192.168.1.20:8000
# Expected: HTTP/1.1 302 Found
# Test API with your token
curl -s -H "Authorization: Token YOUR_NETBOX_TOKEN" \
http://192.168.1.20:8000/api/dcim/devices/ | jq
# This returns full device details including:
# - Device names (vIOS-R1, vIOS-R2, vIOS-R3)
# - Primary IPs (192.168.1.101, .102, .103)
# - Device roles, platforms, sites
# - Status (active)
2. Prepare Ansible Environment
# Activate your ansible virtual environment
netdev
# Check our project
ls -l
cd ~/ansible-project
# Look at current static inventory
cat ./inventory/hosts
# Backup old inventory
mv ./inventory/hosts ./inventory/hosts_backup
# Verify backup created
ls -l ./inventory/
# hosts_backup should now exist
3. Install NetBox Ansible Collection
# Check if the NetBox plugin is available in the virtual environment
ansible-galaxy collection list | grep netbox
# If nothing is installed, install the latest version below
# If an older version exists, remove it first (current plugin version is 3.22)
rm -rf ~/.ansible/collections/ansible_collections/netbox/netbox
rm -rf ~/ansible-project/ansible-venv/lib/python3.10/site-packages/ansible_collections/netbox/netbox
# Install NetBox collection
ansible-galaxy collection install netbox.netbox
# If already installed, force update to latest
ansible-galaxy collection install netbox.netbox --force
# Verify installation
ansible-galaxy collection list | grep netbox
# Expected: netbox.netbox 3.20.0 (or newer)
# Verify plugin exists
ansible-doc -t inventory netbox.netbox.nb_inventory
# Install required Python dependency
pip install pytz
pip install ansible-pylibssh
# Verify pytz installed
pip show pytz
# Expected: Version: 2025.x
pip show ansible-pylibssh
4. Create NetBox Inventory File
# Create the inventory file
code ./inventory/netbox.yml
inventory/netbox.yml:
---
plugin: netbox.netbox.nb_inventory

# NetBox connection settings
api_endpoint: http://192.168.1.20:8000
token: YOUR_NETBOX_TOKEN_HERE
validate_certs: false

# Group devices automatically by these attributes
group_by:
  - device_roles   # Creates groups like: device_roles_router
  - platforms      # Creates groups like: platforms_cisco_ios
  - sites          # Creates groups like: sites_main_dc
  - tags           # Creates groups based on device tags

# Only fetch active devices
query_filters:
  - status: active

# Map NetBox fields to Ansible variables
compose:
  # Extract IP without subnet mask (192.168.1.101/24 -> 192.168.1.101)
  ansible_host: primary_ip4.address | split('/') | first
  # Derive the network OS from the platform name (Jinja rewrites the value based on the platform info from NetBox, all automatic)
  ansible_network_os: platform.name | regex_replace('.*IOS.*', 'cisco.ios.ios', ignorecase=True) | default('unknown')

# Only include devices that have a primary IP assigned
device_query_filters:
  - has_primary_ip: true
Configuration Breakdown:
| Setting | Purpose |
|---|---|
| plugin | Tells Ansible to use the NetBox inventory plugin |
| api_endpoint | Your NetBox server URL |
| token | API authentication token |
| validate_certs | Skip SSL verification (for lab) |
| group_by | Auto-create groups from device attributes |
| query_filters | Only fetch devices with status=active |
| compose | Transform NetBox data into Ansible variables |
| device_query_filters | Only devices with IP addresses |
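The two `compose` expressions above are plain Jinja2 filters. A small Python mirror (an illustration, not the plugin's actual code) shows what they produce for a typical NetBox device record:

```python
import re

def to_ansible_host(primary_ip4_address):
    # Mirror of: primary_ip4.address | split('/') | first
    return primary_ip4_address.split("/")[0]

def to_network_os(platform_name):
    # Mirror of: platform.name | regex_replace('.*IOS.*', 'cisco.ios.ios', ignorecase=True)
    # The greedy pattern matches the whole string, so any "IOS" platform maps to cisco.ios.ios
    return re.sub(r".*IOS.*", "cisco.ios.ios", platform_name, flags=re.IGNORECASE)

print(to_ansible_host("192.168.1.201/24"))  # 192.168.1.201
print(to_network_os("Cisco IOS"))           # cisco.ios.ios
```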
5. Update ansible.cfg
# Edit ansible configuration
code ./ansible.cfg
Change inventory path:
[defaults]
# OLD: inventory = inventory/hosts
inventory = inventory/netbox.yml # NEW: Point to NetBox plugin
host_key_checking = False
timeout = 30
6. Test Dynamic Inventory
# View inventory as a tree graph
ansible-inventory --graph
# Expected output:
# @all:
# |--@ungrouped:
# |--@sites_main_dc:
# | |--vIOS-R1
# | |--vIOS-R2
# | |--vIOS-R3
# |--@device_roles_router:
# | |--vIOS-R1
# | |--vIOS-R2
# | |--vIOS-R3
# |--@platforms_cisco_ios:
# | |--vIOS-R1
# | |--vIOS-R2
# | |--vIOS-R3
# View full inventory as JSON
ansible-inventory --list
# Check variables for specific host
ansible-inventory --host vIOS-R1
ansible-inventory -i inventory/netbox.yml --host vIOS-R1 --vars --yaml
7. Run Playbook with Dynamic Inventory
# Create test playbook
cat << 'EOF' > playbooks/show_version.yml
---
- name: IOS Version Report
hosts: all
gather_facts: no
vars:
ansible_user: ansible
ansible_ssh_password: ansible@123
tasks:
- name: Get version
raw: show version
register: version
- name: Show version
debug:
msg: "{{ inventory_hostname }}: {{ (version.stdout_lines | select('match', '.*IOS.*') | first) }}"
EOF
# Run on all devices
ansible-playbook playbooks/show_version.yml
# Run on specific group (by device role)
ansible-playbook playbooks/show_version.yml --limit device_roles_router
# Run on specific site
ansible-playbook playbooks/show_version.yml --limit sites_main_dc
# Run on single device
ansible-playbook playbooks/show_version.yml --limit vIOS-R1
8. Demo: Add Device in NetBox -> Auto Appears in Ansible
# THE MAGIC OF DYNAMIC INVENTORY:
# Step 1: Check current inventory
ansible-inventory --graph
# Shows: vIOS-R1, vIOS-R2, vIOS-R3
# Step 2: Add new device in NetBox UI
# - Go to NetBox -> Devices -> Add
# - Name: vIOS-R4
# - Assign Primary IP
# - Set Status = Active
# - Set Device Role = Router
# - Set Platform = cisco-ios
# Step 3: Run inventory again (NO FILE CHANGES NEEDED!)
ansible-inventory --graph
# Step 4: New device appears automatically!
# @all:
# |--@device_roles_router:
# | |--vIOS-R1
# | |--vIOS-R2
# | |--vIOS-R3
# | |--vIOS-R4 <-- NEW DEVICE!
# Step 5: Run playbook - includes new device
ansible-playbook playbooks/show_version.yml
β οΈ Troubleshooting
Redis Connection Error (NetBox Docker 4.4.5+)
Error Message:
[WARNING]: Failed to parse inventory/netbox.yml with auto plugin:
{"error": "Error -3 connecting to redis:6379. Temporary failure in name resolution.",
"exception": "ConnectionError", "netbox_version": "4.4.5-Docker-3.4.1", "python_version": "3.12.3"}
Root Cause Analysis:
| Step | What Happens |
|---|---|
| 1 | NetBox Docker exposes /api/status/ endpoint |
| 2 | Ansible plugin calls self._fetch_information(api_endpoint + "/api/status/") |
| 3 | Response contains: "netbox-version": "4.4.5-Docker-3.4.1" |
| 4 | Plugin detects "Docker" in version string |
| 5 | Plugin tries internal Redis probe to redis:6379 |
| 6 | Your Ansible host can't resolve "redis" (Docker-internal hostname) |
| 7 | Plugin CRASHES before cache: false or group_by config loads |
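In other words, the crash hinges on a simple substring check. This simplified Python mirror of the detection step (not the plugin's verbatim source) illustrates why a Docker-flavoured version string triggers the Redis probe:

```python
def is_docker_netbox(status):
    # Simplified mirror of the plugin's check: a "Docker" marker in the
    # /api/status/ version string switches on the internal Redis probe
    return "Docker" in status.get("netbox-version", "")

print(is_docker_netbox({"netbox-version": "4.4.5-Docker-3.4.1"}))  # True
print(is_docker_netbox({"netbox-version": "4.4.5"}))               # False
```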
The Fix - Modify NetBox Inventory Plugin:
# Find the plugin file location
find ~/.ansible -name "nb_inventory.py" 2>/dev/null
# Typical location:
# ~/.ansible/collections/ansible_collections/netbox/netbox/plugins/inventory/nb_inventory.py
# Edit the file
code ~/.ansible/collections/ansible_collections/netbox/netbox/plugins/inventory/nb_inventory.py
Replace the _fetch_information call with a direct mock:
Find the section that calls _fetch_information for status and replace with:
# ORIGINAL CODE (causes Redis error):
# status = self._fetch_information(self.api_endpoint + "/api/status/")
# FIXED CODE (bypasses Redis probe):
status = {
"netbox-version": "4.4.5",
"netbox-version-docker": "3.4.1",
"python-version": "3.12.3"
}
netbox_api_version = "4.4"
Result After Fix:
ansible-inventory --graph
# [WARNING]: Invalid characters were found in group names but not replaced
# [WARNING]: Unable to load the facts cache plugin ().
# @all:
# |--@ungrouped:
# |--@sites_main_dc:
# | |--vIOS-R1
# | |--vIOS-R2
# | |--vIOS-R3
# |--@device_roles_router:
# | |--vIOS-R1
# | |--vIOS-R2
# | |--vIOS-R3
# |--@platforms_ios_iosv_software...:
# | |--vIOS-R1
# | |--vIOS-R2
# | |--vIOS-R3
- Bypasses Redis probe entirely
- Sets correct NetBox version (4.4) for schema selection
- group_by, compose, and all config options work correctly
No Hosts Found
# Check 1: Devices have Primary IP in NetBox
# Go to NetBox -> Devices -> Check "Primary IPv4" column
# Check 2: Devices Status = Active
# Go to NetBox -> Devices -> Check "Status" column
# Check 3: Test API directly
curl -s -H "Authorization: Token YOUR_TOKEN" \
http://192.168.1.20:8000/api/dcim/devices/ | jq '.results[].name'
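If you'd rather script these checks, a small stdlib-only helper (illustrative; it assumes the standard NetBox DCIM device payload shape) flags devices the plugin will silently skip:

```python
def devices_missing_primary_ip(api_payload):
    """Names of devices that lack a primary IPv4 - the inventory plugin
    drops these when device_query_filters requires has_primary_ip."""
    return [d["name"] for d in api_payload.get("results", [])
            if not d.get("primary_ip4")]

def devices_not_active(api_payload):
    """Names of devices whose status is not 'active' (query_filters skips them)."""
    return [d["name"] for d in api_payload.get("results", [])
            if d.get("status", {}).get("value") != "active"]

# Fabricated sample payload mimicking /api/dcim/devices/ output
sample = {"results": [
    {"name": "vIOS-R1", "primary_ip4": {"address": "192.168.1.201/24"},
     "status": {"value": "active"}},
    {"name": "vIOS-R4", "primary_ip4": None, "status": {"value": "planned"}},
]}
print(devices_missing_primary_ip(sample))  # ['vIOS-R4']
print(devices_not_active(sample))          # ['vIOS-R4']
```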
Authentication Failed
# Verify group_vars/all.yml has correct credentials
cat group_vars/all.yml
# Test SSH manually
ssh ansible@192.168.1.101
# Password: ansible@123
Plugin Not Found
# Reinstall NetBox collection
ansible-galaxy collection install netbox.netbox --force
# Verify installation
ansible-galaxy collection list | grep netbox
π Final Project Structure
ansible-project/
βββ ansible.cfg # Updated: inventory = inventory/netbox.yml
βββ inventory/
β βββ hosts.backup # Old static inventory (backup)
β βββ netbox.yml # NEW: Dynamic inventory config
βββ group_vars/
β βββ all.yml # Connection credentials
βββ playbooks/
β βββ show_version.yml
β βββ backup_configs.yml
βββ backups/ # Created by backup playbook
π Resources
Video 15: Ansible MCP Integration
π Overview
Integrate MCP with Ansible to trigger playbooks via natural language using Claude CLI.
π― What You'll Learn
- Build MCP server that runs Ansible playbooks
- Natural language to Ansible automation
- Integration with NetBox dynamic inventory
ποΈ Architecture
"Backup all routers"
β
βΌ
βββββββββββββββββββ
β Claude CLI β
ββββββββββ¬βββββββββ
β MCP Protocol
βΌ
βββββββββββββββββββ
β Ansible MCP β
β Server β
β β
β - list_playbooksβ
β - run_playbook β
β - get_inventory β
ββββββββββ¬βββββββββ
β
ββββββββββββββββββββΌβββββββββββββββββββ
βΌ βΌ βΌ
ββββββββββββββ ββββββββββββββ ββββββββββββββ
β backup.yml β β show_ver.ymlβ β config.yml β
ββββββββββββββ ββββββββββββββ ββββββββββββββ
β
βΌ
βββββββββββββββββββ
β Ansible Core β
β (NetBox Inv) β
ββββββββββ¬βββββββββ
β
ββββββββββββββββββββΌβββββββββββββββββββ
βΌ βΌ βΌ
ββββββββββ ββββββββββ ββββββββββ
βvIOS-R1 β βvIOS-R2 β βvIOS-R3 β
ββββββββββ ββββββββββ ββββββββββ
π Home LAB Setup
ββββββββββββββββββββ ββββββββββββββββββββ ββββββββββββββββββββ
β Ansible Node β β NetBox β β EVE-NG β
β 192.168.1.119 β β 192.168.1.120 β β 192.168.1.100 β
β β β β β β
β - Claude CLI β β - Device Data β β - vIOS-R1 (.201) β
β - Ansible MCP β β β β - vIOS-R2 (.202) β
β - Ansible Core β β β β - vIOS-R3 (.203) β
ββββββββββββββββββββ ββββββββββββββββββββ ββββββββββββββββββββ
π Project Structure
~/mcp-servers/
βββ device-mcp/ # From Video 13
βββ netbox-mcp-server/ # From Video 12
βββ ansible-mcp/ # NEW - This video
βββ .venv/ # Virtual environment
βββ ansible_mcp.py # MCP server script
π» Commands
1. Prerequisites - Verify Video 14 Setup
# Activate ansible environment
netdev
cd ~/ansible-project
# Verify dynamic inventory works
ansible-inventory --graph
# Expected: Shows vIOS-R1, R2, R3 from NetBox
2. Create Ansible MCP Folder
# Create folder (consistent with other MCPs)
mkdir -p ~/mcp-servers/ansible-mcp
cd ~/mcp-servers/ansible-mcp
3. Create Virtual Environment
# Create venv
python3 -m venv .venv
# Activate it
source .venv/bin/activate
# Verify you're in venv
which python
# Should show: /home/user/mcp-servers/ansible-mcp/.venv/bin/python
4. Install FastMCP
# Install FastMCP (inside venv)
pip install fastmcp
# Verify
pip show fastmcp
5. Create Ansible MCP Server
# Create the MCP server file
cat << 'EOF' > ansible_mcp.py
#!/usr/bin/env python3
"""Ansible MCP Server - Run playbooks via natural language"""
from mcp.server.fastmcp import FastMCP
import subprocess
from pathlib import Path
mcp = FastMCP("Ansible MCP Server")
ANSIBLE_DIR = Path.home() / "ansible-project"
PLAYBOOK_DIR = ANSIBLE_DIR / "playbooks"
@mcp.tool()
def list_playbooks() -> str:
"""List all available Ansible playbooks"""
playbooks = list(PLAYBOOK_DIR.glob("*.yml"))
if not playbooks:
return "No playbooks found"
result = "Available playbooks:\n"
for pb in playbooks:
result += f" - {pb.name}\n"
return result
@mcp.tool()
def run_playbook(playbook: str, limit: str | None = None) -> str:
"""
Run an Ansible playbook
Args:
playbook: Playbook filename (e.g., backup_config.yml)
limit: Limit to hosts/groups (e.g., sites_main-dc, vIOS-R1)
"""
playbook_path = PLAYBOOK_DIR / playbook
if not playbook_path.exists():
return f"Error: Playbook '{playbook}' not found"
# IMPORTANT: Use absolute path to ansible-playbook from your ansible venv
cmd = ["/home/user/ansible-project/ansible-venv/bin/ansible-playbook", str(playbook_path)]
if limit:
cmd.extend(["--limit", limit])
try:
result = subprocess.run(
cmd, cwd=ANSIBLE_DIR,
capture_output=True, text=True, timeout=300
)
return result.stdout + result.stderr
except Exception as e:
return f"Error: {str(e)}"
@mcp.tool()
def get_inventory() -> str:
"""Get current Ansible inventory from NetBox"""
result = subprocess.run(
# IMPORTANT: Use absolute path to ansible-inventory
["/home/user/ansible-project/ansible-venv/bin/ansible-inventory", "--graph"],
cwd=ANSIBLE_DIR, capture_output=True, text=True
)
return result.stdout
if __name__ == "__main__":
mcp.run()
EOF
# Make executable
chmod +x ansible_mcp.py
6. Create Backup Playbook
# Create backup playbook in ansible-project
cat << 'EOF' > ~/ansible-project/playbooks/backup_config.yml
---
- name: Backup Device Configuration
hosts: all
gather_facts: no
vars:
ansible_user: ansible
ansible_ssh_password: ansible@123
backup_dir: "{{ playbook_dir }}/../backups"
tasks:
- name: Create backup directory
delegate_to: localhost
file:
path: "{{ backup_dir }}"
state: directory
run_once: true
- name: Backup running config
raw: show running-config
register: config
- name: Save config
delegate_to: localhost
copy:
content: "{{ config.stdout }}"
dest: "{{ backup_dir }}/{{ inventory_hostname }}.cfg"
EOF
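After a run, each host's config lands in `backups/<hostname>.cfg`. A tiny helper (illustrative) lists what was captured so you can verify the backup succeeded:

```python
from pathlib import Path

def list_backups(backup_dir):
    # Return the backup filenames, sorted alphabetically
    return sorted(p.name for p in Path(backup_dir).glob("*.cfg"))
```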
7. Add MCP Server to Claude CLI
# Add to Claude CLI (use full paths)
claude mcp add ansible-mcp \
/home/user/mcp-servers/ansible-mcp/.venv/bin/python \
/home/user/mcp-servers/ansible-mcp/ansible_mcp.py
# Verify connection
claude mcp list
# Expected: ansible-mcp: ... - β Connected
8. Test with Claude
# Start Claude CLI
claude
# Try these queries:
# "List available playbooks"
# "Show me the current inventory"
# "Run backup_config playbook on all routers"
# "Run backup on vIOS-R1 only"
π§ Troubleshooting
Failed to Connect Error
# 1. Verify paths exist
ls -la ~/mcp-servers/ansible-mcp/
ls -la ~/mcp-servers/ansible-mcp/.venv/bin/python
# 2. Test import manually
cd ~/mcp-servers/ansible-mcp
source .venv/bin/activate
python -c "from mcp.server.fastmcp import FastMCP; print('OK')"
# 3. Remove and re-add
claude mcp remove ansible-mcp
claude mcp add ansible-mcp \
/home/user/mcp-servers/ansible-mcp/.venv/bin/python \
/home/user/mcp-servers/ansible-mcp/ansible_mcp.py
Playbook Not Found Error
# Check playbook directory
ls -la ~/ansible-project/playbooks/
# Verify ANSIBLE_DIR in script matches your setup
π¦ Example Queries
# List playbooks
"What Ansible playbooks are available?"
# Run on all devices
"Run the backup playbook on all routers"
# Run on specific group (from NetBox)
"Backup configs for devices in sites_main-dc"
# Run on single device
"Execute show_version playbook on vIOS-R1 only"
Video 16: Gemini CLI + Remote MCP
π Overview
Use Google's FREE Gemini CLI with remote MCP servers. Access your network automation from anywhere - no paid subscription required!
π― What You'll Learn
- Build unified MCP server with SSE transport
- Install and configure Gemini CLI on Windows
- Generate and configure FREE Google API key
- Connect remote MCP server from any machine
- Understand rate limits and token usage
π Why Gemini CLI?
| Feature | Claude CLI | Gemini CLI |
|---|---|---|
| Cost | Subscription | β FREE |
| MCP Support | Yes | β Yes |
| Remote Access | Local (stdio) | β Remote (SSE) |
| Auth | Claude account | Google API key |
π€ Gemini Models Overview
| Model | Best For | Rate Limit | Speed |
|---|---|---|---|
| gemini-2.5-flash | General use | ~15 req/min | Fast |
| gemini-2.5-flash-lite | High volume | Higher | Faster |
| gemini-2.5-pro | Complex tasks | ~2 req/min | Slower |
Tip: Use gemini-2.5-flash for network automation - a good balance of speed and capability.
π Understanding Token Usage
Model Usage Reqs Input Tokens Cache Reads Output Tokens
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
gemini-2.5-flash-lite 8 9,707 2,505 420
gemini-2.5-flash 12 16,213 59,036 111
| Term | Meaning |
|---|---|
| Reqs | Number of API requests made |
| Input Tokens | Tokens sent TO the model (your prompts) |
| Cache Reads | Cached tokens reused (saves quota) |
| Output Tokens | Tokens returned FROM the model (responses) |
Rate Limit vs Daily Quota:
- Rate Limit (429): Too many requests per minute β Wait 1-2 minutes
- Daily Quota: Exceeded daily limit β Wait until midnight PST
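Per-minute 429s are transient, so a client-side retry with exponential backoff usually absorbs them. A minimal sketch (the `RateLimitError` stand-in is hypothetical; map it to whatever 429 exception your client raises):

```python
import time

class RateLimitError(Exception):
    """Stand-in for an HTTP 429 'Too Many Requests' response."""

def call_with_backoff(fn, max_retries=4, base_delay=2.0):
    # Exponential backoff: wait base_delay, then 2x, 4x ... between retries
    for attempt in range(max_retries):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

This only helps with rate limits; a daily-quota error will fail every retry until the quota resets.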
ποΈ Architecture
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β GEMINI CLI + REMOTE MCP ARCHITECTURE β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
βββββββββββββββββββββββββββββββ βββββββββββββββββββββββββββββββββββ
β WINDOWS LAPTOP/PC β β ANSIBLE NODE (.119) β
β (Anywhere) β β β
β β β βββββββββββββββββββββββββββββ β
β βββββββββββββββββββββββββ β β β Unified MCP Server β β
β β Gemini CLI β β HTTP/SSE β β (Port 8080) β β
β β β β ββββββββ> β β β β
β β "Backup all routers" β β Port 8080 β β βββββββββββββββββββββββ β β
β βββββββββββββββββββββββββ β β β β NetBox Tools β β β
β β β β β - list_devices β β β
β Google API Key (FREE) β β β βββββββββββββββββββββββ β β
β β β β β β
βββββββββββββββββββββββββββββββ β β βββββββββββββββββββββββ β β
β β β Ansible Tools β β β
β β β - list_playbooks β β β
β β β - run_playbook β β β
β β β - get_inventory β β β
β β βββββββββββββββββββββββ β β
β βββββββββββββββ¬ββββββββββββββ β
β βΌ β
β Network Devices β
βββββββββββββββββββββββββββββββββββ
π Local vs Remote MCP
VIDEOS 12-15: LOCAL MCP (stdio)
ββββββββββββββββββββββββββββββββββββββββββββββ
β Same Machine Only β
β ββββββββββββ stdio ββββββββββββ β
β β Claude β ββββββββΊ β MCP β β
β β CLI β β Server β β
β ββββββββββββ ββββββββββββ β
ββββββββββββββββββββββββββββββββββββββββββββββ
VIDEO 16: REMOTE MCP (SSE)
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β Access from ANYWHERE β
β ββββββββββββ HTTP/SSE ββββββββββββ β
β β Gemini β βββββββββββββΊ β MCP β β
β β CLI β Internet β Server β β
β β(Windows) β β (Linux) β β
β ββββββββββββ ββββββββββββ β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
π§ Single Agent + Multi-Tool
βββββββββββββββββββ
β Gemini CLI β
β (ONE Agent) β
ββββββββββ¬βββββββββ
β
βΌ
βββββββββββββββββββ
β MCP Server β
β (Multi-Tool) β
ββββββββββ¬βββββββββ
β
ββββββββββββββββββββββββΌβββββββββββββββββββββββ
βΌ βΌ βΌ
βββββββββββββββββββ βββββββββββββββββββ βββββββββββββββββββ
β NetBox Tools β β Ansible Tools β β (Future Tools) β
βββββββββββββββββββ βββββββββββββββββββ βββββββββββββββββββ
NOTE: This is NOT multi-agent. It's ONE AI with MULTIPLE tools.
π Project Structure
~/mcp-servers/
βββ netbox-mcp-server/ # Video 12 (Local)
βββ device-mcp/ # Video 13 (Local)
βββ ansible-mcp/ # Video 15 (Local)
βββ unified-mcp-sse/ # Video 16 (Remote!) β NEW
βββ .venv/
βββ unified_mcp_sse.py
π₯οΈ PART 1: Server Setup (Ansible Node)
π» Step 1.1: Create Project Folder
# On Ansible Node (192.168.1.119)
cd ~/mcp-servers
mkdir -p unified-mcp-sse
cd unified-mcp-sse
# Verify
pwd
# Expected: /home/user/mcp-servers/unified-mcp-sse
π» Step 1.2: Create Virtual Environment
# Create venv
python3 -m venv .venv
# Activate
source .venv/bin/activate
# Verify - should show venv path
which python
# Expected: /home/user/mcp-servers/unified-mcp-sse/.venv/bin/python
π» Step 1.3: Install Dependencies
# Install required packages
pip install fastmcp requests uvicorn
# Verify installations
pip show fastmcp | grep Version
pip show uvicorn | grep Version
# Expected output:
# Version: 2.x.x (fastmcp)
# Version: 0.x.x (uvicorn)
π» Step 1.4: Get NetBox API Token
# If you don't know your token, get it from NetBox UI:
# 1. Login to NetBox: http://192.168.1.120:8000
# 2. Go to: Admin β API Tokens β Add Token
# 3. Copy the token
# Test NetBox API connectivity
curl -s http://192.168.1.120:8000/api/dcim/devices/ \
-H "Authorization: Token YOUR-NETBOX-TOKEN" | head -100
# Should return JSON with device list
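The same request can be built from Python's standard library. Note that NetBox's auth scheme is `Token <key>`, not `Bearer` (the URL and token below are placeholders):

```python
import urllib.request

def netbox_request(url, token):
    # Build (but don't send) an authenticated NetBox API request
    return urllib.request.Request(url, headers={"Authorization": f"Token {token}"})

req = netbox_request("http://192.168.1.120:8000/api/dcim/devices/", "abc123")
print(req.get_header("Authorization"))  # Token abc123
```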
π» Step 1.5: Create Unified MCP Server
# Create the server file
cat << 'EOF' > unified_mcp_sse.py
#!/usr/bin/env python3
"""Unified MCP Server with SSE Transport for Remote Access"""
from mcp.server.fastmcp import FastMCP
import subprocess
import requests
from pathlib import Path
import uvicorn
mcp = FastMCP("Network Automation MCP")
# ============================================
# CONFIGURATION - UPDATE THESE VALUES!
# ============================================
ANSIBLE_DIR = Path.home() / "ansible-project"
ANSIBLE_BIN = Path.home() / "ansible-project/ansible-venv/bin"
NETBOX_URL = "http://192.168.1.120:8000"
NETBOX_TOKEN = "YOUR-NETBOX-TOKEN-HERE" # β UPDATE THIS!
# ============================================
# NETBOX TOOLS
# ============================================
@mcp.tool()
def netbox_list_devices() -> str:
"""List devices with IPs from NetBox (direct API - fast query)"""
try:
headers = {"Authorization": f"Token {NETBOX_TOKEN}"}
r = requests.get(f"{NETBOX_URL}/api/dcim/devices/", headers=headers, timeout=10)
r.raise_for_status()
devices = r.json().get('results', [])
if not devices:
return "No devices found in NetBox"
result = "Devices in NetBox:\n"
for d in devices:
ip = d.get('primary_ip4', {})
ip_addr = ip.get('address', 'No IP') if ip else 'No IP'
result += f" - {d['name']} ({ip_addr})\n"
return result
except Exception as e:
return f"Error connecting to NetBox: {str(e)}"
# ============================================
# ANSIBLE TOOLS
# ============================================
@mcp.tool()
def ansible_list_playbooks() -> str:
"""List all available Ansible playbooks"""
playbooks = list((ANSIBLE_DIR / "playbooks").glob("*.yml"))
if not playbooks:
return "No playbooks found in " + str(ANSIBLE_DIR / "playbooks")
result = "Available playbooks:\n"
for p in playbooks:
result += f" - {p.name}\n"
return result
@mcp.tool()
def ansible_run_playbook(playbook: str, limit: str | None = None) -> str:
"""Run an Ansible playbook with optional host limit"""
playbook_path = ANSIBLE_DIR / "playbooks" / playbook
if not playbook_path.exists():
return f"Error: Playbook '{playbook}' not found at {playbook_path}"
# IMPORTANT: Use absolute path to ansible-playbook from your venv
ansible_cmd = str(ANSIBLE_BIN / "ansible-playbook")
cmd = [ansible_cmd, str(playbook_path)]
if limit:
cmd.extend(["--limit", limit])
try:
result = subprocess.run(
cmd, cwd=ANSIBLE_DIR,
capture_output=True, text=True, timeout=300
)
output = result.stdout + result.stderr
return output if output else "Playbook executed (no output)"
except subprocess.TimeoutExpired:
return "Error: Playbook execution timed out (5 min limit)"
except Exception as e:
return f"Error running playbook: {str(e)}"
@mcp.tool()
def ansible_get_inventory() -> str:
"""Get Ansible inventory groups and hosts (via NetBox dynamic inventory)"""
ansible_cmd = str(ANSIBLE_BIN / "ansible-inventory")
try:
result = subprocess.run(
[ansible_cmd, "--graph"],
cwd=ANSIBLE_DIR, capture_output=True, text=True, timeout=30
)
output = result.stdout
return f"Ansible Inventory:\n{output}" if output else "No inventory data"
except Exception as e:
return f"Error getting inventory: {str(e)}"
# ============================================
# HOST HEADER FIX FOR REMOTE ACCESS
# ============================================
class HostFixMiddleware:
"""Middleware to fix host header for remote SSE connections"""
def __init__(self, app):
self.app = app
async def __call__(self, scope, receive, send):
if scope["type"] == "http":
headers = []
for name, value in scope.get("headers", []):
if name == b"host":
value = b"localhost:8080"
headers.append((name, value))
scope["headers"] = headers
await self.app(scope, receive, send)
# ============================================
# MAIN - START SERVER
# ============================================
if __name__ == "__main__":
print("=" * 50)
print("Starting Unified MCP Server")
print("=" * 50)
print(f"NetBox URL: {NETBOX_URL}")
print(f"Ansible Dir: {ANSIBLE_DIR}")
print(f"Listening on: http://0.0.0.0:8080")
print("=" * 50)
# Wrap SSE app with host fix middleware
app = HostFixMiddleware(mcp.sse_app())
# Start uvicorn server
uvicorn.run(app, host="0.0.0.0", port=8080)
EOF
# Make executable
chmod +x unified_mcp_sse.py
π» Step 1.6: Update Configuration
# Edit the file to add your NetBox token
nano unified_mcp_sse.py
# Find this line and update:
NETBOX_TOKEN = "YOUR-NETBOX-TOKEN-HERE" # β Put your actual token
# Save: Ctrl+O, Enter, Ctrl+X
π» Step 1.7: Start MCP Server
# Start the server
python unified_mcp_sse.py
# Expected output:
# ==================================================
# Starting Unified MCP Server
# ==================================================
# NetBox URL: http://192.168.1.120:8000
# Ansible Dir: /home/user/ansible-project
# Listening on: http://0.0.0.0:8080
# ==================================================
# INFO: Started server process [xxxxx]
# INFO: Waiting for application startup.
# INFO: Application startup complete.
# INFO: Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)
π» Step 1.8: Verify Port is Listening (New Terminal)
# Open new terminal, check port
ss -tlnp | grep 8080
# Expected:
# LISTEN 0 2048 0.0.0.0:8080 0.0.0.0:* users:(("python",pid=xxxx,fd=6))
π» Step 1.9: Test SSE Endpoint Locally
# Test the SSE endpoint (will hang - that's normal for SSE)
curl http://localhost:8080/sse
# Press Ctrl+C to stop
# If you see "Invalid Host header" - the middleware isn't working
# If it hangs waiting - that's correct! SSE keeps connection open
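SSE is just a long-lived HTTP response whose body is `field: value` lines. A tiny parser (illustrative; the MCP SSE transport typically opens with an `endpoint` event carrying the message URL, shown here with a made-up session id) demonstrates the framing you should see from curl:

```python
def parse_sse_frame(raw):
    # Parse one SSE frame ("event: ...\ndata: ...") into a dict of fields
    fields = {}
    for line in raw.strip().splitlines():
        name, _, value = line.partition(":")
        fields[name.strip()] = value.strip()
    return fields

frame = parse_sse_frame("event: endpoint\ndata: /messages/?session_id=abc123")
print(frame["event"])  # endpoint
print(frame["data"])   # /messages/?session_id=abc123
```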
π» Step 1.10: Check Firewall
# Allow port 8080 through firewall
sudo ufw allow 8080/tcp
# Check status
sudo ufw status
# If firewall inactive, that's fine - port is open
π» PART 2: Client Setup (Windows)
π» Step 2.1: Install Node.js (if not installed)
# Check if Node.js is installed
node --version
# If not installed:
# 1. Go to https://nodejs.org
# 2. Download LTS version
# 3. Run installer (all defaults)
# 4. Restart PowerShell
π» Step 2.2: Install Gemini CLI
# Install Gemini CLI globally
npm install -g @google/gemini-cli
# Verify installation
gemini --version
# Expected: 0.24.x or higher
π» Step 2.3: Generate Google API Key (FREE)
1. Open browser: https://aistudio.google.com/apikey
2. Sign in with Google account
3. Click "Create API key in new project"
(This auto-creates a Google Cloud project)
4. Copy the API key (starts with "AIza...")
5. Keep this key safe - you'll need it next!
π» Step 2.4: Configure API Key (Permanent)
# Create PowerShell profile (if doesn't exist)
New-Item -Path $PROFILE -ItemType File -Force
# Open profile in notepad
notepad $PROFILE
# Add this line (replace with YOUR key):
$env:GEMINI_API_KEY = "AIzaSy...your-key-here"
# Save and close notepad
# Restart PowerShell for changes to take effect
π» Step 2.5: Verify API Key is Set
# After restarting PowerShell, verify:
echo $env:GEMINI_API_KEY
# Should display your API key
# If empty - profile didn't load, try again
π» Step 2.6: Test Connectivity to Server
# Test if you can reach the MCP server (use curl.exe, not PowerShell alias)
curl.exe http://192.168.1.119:8080/sse
# Expected: Connection stays open (SSE stream)
# Press Ctrl+C to stop
# If "Connection refused" - check server is running
# If timeout - check firewall/network
π» Step 2.7: Register MCP Server with Gemini CLI
Option A: Use Gemini CLI command
# Add remote MCP server (single line - no backslashes in PowerShell)
gemini mcp add network-automation --transport sse --url http://192.168.1.119:8080/sse --scope user
# Verify
gemini mcp list
Option B: Manually edit settings file (if Option A doesn't save URL)
# Open settings file
notepad $env:USERPROFILE\.gemini\settings.json
# Add this content:
{
"mcpServers": {
"network-automation": {
"url": "http://192.168.1.119:8080/sse",
"type": "sse"
}
}
}
# Save and close notepad
# NOTE: Gemini CLI has a bug that sometimes doesn't save the URL
# If 'gemini mcp list' shows empty URL, use Option B
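Because of that save bug, it's worth validating the settings file programmatically. This stdlib sketch (assuming the `mcpServers`/`url` layout shown above) flags the empty-URL case:

```python
import json

def check_mcp_settings(text):
    """Return a list of problems found in a Gemini CLI settings.json blob."""
    problems = []
    for name, server in json.loads(text).get("mcpServers", {}).items():
        if not server.get("url"):
            problems.append(f"{name}: missing or empty url")
    return problems

good = '{"mcpServers": {"network-automation": {"url": "http://192.168.1.119:8080/sse", "type": "sse"}}}'
bad  = '{"mcpServers": {"network-automation": {"url": "", "type": "sse"}}}'
print(check_mcp_settings(good))  # []
print(check_mcp_settings(bad))   # ['network-automation: missing or empty url']
```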
π» Step 2.8: Verify MCP Connection
# Check MCP server status
gemini mcp list
# Expected output:
# Configured MCP servers:
# β network-automation: http://192.168.1.119:8080/sse (sse) - Connected
# If shows "Disconnected":
# 1. Check server is running on Ansible node
# 2. Check URL in settings.json is correct
# 3. Check firewall allows port 8080
π§ͺ PART 3: Testing
π» Step 3.1: Start Gemini CLI
# Start Gemini
gemini
# You should see the Gemini banner and prompt
# Look for "1 MCP server" message at bottom
π» Step 3.2: Test Queries
# Query 1: List NetBox devices
> List all devices in NetBox
# Expected: Shows devices from NetBox
# Look for: β netbox_list_devices (network-automation MCP Server)
# Query 2: List playbooks
> Show available Ansible playbooks
# Expected: Shows .yml files from ~/ansible-project/playbooks
# Query 3: Get inventory
> Show the Ansible inventory
# Expected: Shows host groups from NetBox dynamic inventory
# Query 4: Run playbook (be careful!)
> Run backup_config playbook on vIOS-R1
π» Step 3.3: Check Token Usage
# Exit Gemini CLI (Ctrl+C or type /exit)
# Check usage at:
# https://aistudio.google.com/apikey
# Click on your key β View metrics
# Understanding the metrics:
# - Reqs: Number of API requests
# - Input Tokens: Your prompts sent to model
# - Cache Reads: Reused tokens (saves quota)
# - Output Tokens: Model responses
π§ Troubleshooting
β "Disconnected" in gemini mcp list
# 1. Check server is running (on Ansible Node)
ps aux | grep unified_mcp
# 2. Check port is open
ss -tlnp | grep 8080
# 3. Test connectivity from Windows
curl.exe http://192.168.1.119:8080/sse
# 4. Verify settings.json has correct URL
cat $env:USERPROFILE\.gemini\settings.json
β "Invalid Host header" error
# The HostFixMiddleware should fix this
# Make sure unified_mcp_sse.py has the middleware class
# Restart server after any changes
pkill -f unified_mcp
python unified_mcp_sse.py
β "Rate limit exceeded" or "429 TooManyRequests"
This is per-minute rate limiting, NOT daily quota.
Solution: Wait 1-2 minutes and try again.
To avoid:
- Don't spam requests quickly
- Wait a few seconds between queries
β "Daily quota exhausted"
Free tier daily limits reached.
Solutions:
1. Wait until midnight PST (quota resets)
2. Switch to different model: /model β select gemini-2.5-flash-lite
3. Create new API key with different Google account
β "No devices found in NetBox"
# 1. Check NetBox token in script
grep NETBOX_TOKEN ~/mcp-servers/unified-mcp-sse/unified_mcp_sse.py
# 2. Test NetBox API directly
curl http://192.168.1.120:8000/api/dcim/devices/ \
-H "Authorization: Token YOUR-TOKEN"
# 3. Restart server after fixing token
pkill -f unified_mcp
python unified_mcp_sse.py
β GEMINI_API_KEY not set
# Check if variable is set
echo $env:GEMINI_API_KEY
# Set for current session
$env:GEMINI_API_KEY = "your-key-here"
# Or check profile loaded
cat $PROFILE
β Gemini CLI not saving URL (mcp add bug)
# Check what was saved
cat $env:USERPROFILE\.gemini\settings.json
# If URL is empty "", manually edit:
notepad $env:USERPROFILE\.gemini\settings.json
# Replace content with:
{
"mcpServers": {
"network-automation": {
"url": "http://192.168.1.119:8080/sse",
"type": "sse"
}
}
}
# Save and verify
gemini mcp list
π Security Recommendations
| Risk | Mitigation |
|---|---|
| Open port 8080 | Use VPN or SSH tunnel for production |
| API key in profile | Don't share your PowerShell profile |
| NetBox token in script | Use environment variables in production |
| HTTP unencrypted | Use nginx + SSL for production |
π¦ Example Queries
# NetBox queries
"What devices do we have in NetBox?"
"List all network devices"
# Ansible queries
"Show me available playbooks"
"What's in the Ansible inventory?"
# Automation (careful - these make changes!)
"Run backup_config playbook on all routers"
"Deploy NTP configuration on vIOS-R1"
"Run show_version playbook on device_roles_router"
π AWX Series - Coming Next!
The next phase of our Network Automation journey - Enterprise-grade automation with Ansible AWX!
Video 17: AWX Installation on K3s
π Overview
Deploy Ansible AWX on lightweight K3s Kubernetes inside your EVE-NG lab. Transform your Ansible CLI workflows into enterprise-grade automation with a web UI.
β Why AWX in 2025?
| Challenge | Ansible CLI | AWX Solution |
|---|---|---|
| Who ran what? | Check bash history | β Complete audit trail |
| Team access | Share SSH keys | β Role-based access (RBAC) |
| Scheduling | Cron jobs | β Built-in scheduler |
| Credentials | Plain text files | β Encrypted credential store |
| Visibility | Terminal only | β Web dashboard |
| GitOps | Manual git pull | β Auto-sync from repos |
Real-world value: NOC teams can run playbooks without CLI access. Junior engineers get guardrails. Management gets visibility.
ποΈ Architecture
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ β
β EVE-NG SERVER β
β 192.168.1.100 β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ β€
β β
β βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ β
β β AWX NODE (Ubuntu Linux) β β
β β 192.168.1.121 β β
β β β β
β β βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ β β
β β β K3s Kubernetes Cluster β β β
β β β β β
β β β βββββββββββββ βββββββββββββ βββββββββββββ β β β
β β β β AWX Web β β AWX Task β β AWX EE β β β β
β β β β (nginx) β β (celery) β β(container)β β β β
β β β β Port 30052 β β β β β β β β
β β β βββββββββββββ βββββββββββββ βββββββββββββ β β β
β β β β β β
β β β βββββββββββββ βββββββββββββ β β β
β β β βPostgreSQL β β Redis β β β β
β β β β DB β β Cache β β β β
β β β βββββββββββββ βββββββββββββ β β β
β β β β β β
β β βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ β β
β β β β
β β Requirements: 4 vCPU | 8GB RAM | 50GB Disk β β
β βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ β
β β β
β β SSH/API β
β βΌ β
β βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ β
β β Network Devices β β
β β vIOS-R1 (.201) vIOS-R2 (.202) vIOS-R3 (.203) β β
β βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ β
β β
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ β
π¦ What Gets Installed
| Component | Purpose |
|---|---|
| K3s | Lightweight Kubernetes (single binary, ~512MB RAM) |
| AWX Operator | Manages AWX lifecycle in Kubernetes |
| AWX | Web UI, API, task engine, database |
| Helm | Kubernetes package manager |
π» Commands
1. Prepare Ubuntu Linux Node in EVE-NG
# Clone from your golden image (Video 2) or create new node
# Specs: 4 vCPU, 8GB RAM, 50GB disk (see requirements above)
# Set hostname
sudo hostnamectl set-hostname awx-server
# Verify resources
free -h # Should show 8GB+ RAM
nproc # Should show 4+ CPUs
df -h # Should show 50GB+ disk
# Update system (Ubuntu uses apt)
sudo apt update && sudo apt upgrade -y
# Disable firewall (lab only - enable in production)
sudo ufw disable
# Note: on RHEL-based nodes (Rocky/Alma) use the equivalents instead;
# stock Ubuntu does not enforce SELinux:
# sudo systemctl disable firewalld --now
# sudo setenforce 0
# sudo sed -i 's/SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
# Get IP address
ip addr show eth0 | grep inet
# Note: 192.168.1.121 (example)
2. Install K3s (Lightweight Kubernetes)
# Install K3s - single command!
curl -sfL https://get.k3s.io | sh -
# If that fails, download the install script manually
wget -O k3s-install.sh https://get.k3s.io
# Make it executable
chmod +x k3s-install.sh
# Run the script (set INSTALL_K3S_SKIP_DOWNLOAD=true if you have
# already placed the k3s binary in /usr/local/bin)
sudo ./k3s-install.sh
# Wait for K3s to start (30-60 seconds)
sleep 60
# Verify K3s is running
sudo systemctl status k3s
# Check node is ready
sudo kubectl get nodes
# Expected: awx-server Ready control-plane,master 1m v1.28.x
# Setup kubectl for regular user (optional but recommended)
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config
# Add KUBECONFIG to your profile so it persists
echo 'export KUBECONFIG=~/.kube/config' >> ~/.bashrc
# Reload
source ~/.bashrc
# Verify kubectl works without sudo
kubectl get nodes
kubectl get pods -A
3. Install Helm Package Manager
# Download and install Helm
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
# Verify Helm installation
helm version
# Add AWX Operator Helm repository
helm repo add awx-operator https://ansible-community.github.io/awx-operator-helm/
helm repo update
# Verify repo added
helm search repo awx-operator
4. Create AWX Namespace
# Create dedicated namespace for AWX
kubectl create namespace awx
# Verify namespace
kubectl get namespaces | grep awx
5. Install AWX Operator
# Install AWX Operator via Helm (do NOT use --wait flag - it times out)
helm install awx-operator awx-operator/awx-operator -n awx
# Watch operator pod come up
kubectl get pods -n awx -w
If pod shows ImagePullBackOff or ErrImagePull:
# Manually pull the image
sudo k3s crictl pull quay.io/ansible/awx-operator:2.19.1
# Delete stuck pod - K8s will recreate it
kubectl delete pod -n awx -l control-plane=controller-manager
# Watch again
kubectl get pods -n awx -w
Wait until you see:
awx-operator-controller-manager-xxxxx 2/2 Running
Press Ctrl+C when Running.
6. Create AWX Instance Custom Resource
# Create AWX instance definition
cat <<EOF | kubectl apply -f -
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
name: awx
namespace: awx
spec:
service_type: NodePort
nodeport_port: 30052
EOF
# Verify CR created
kubectl get awx -n awx
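One detail worth checking before applying the CR: the nodeport_port in the spec (30052 above) must sit inside Kubernetes' default NodePort range of 30000-32767, or the service creation fails. A tiny sanity check:

```shell
# Returns success only if the port is usable as a NodePort
# (Kubernetes default range: 30000-32767).
valid_nodeport() {
  [ "$1" -ge 30000 ] && [ "$1" -le 32767 ]
}

valid_nodeport 30052 && echo "30052 is a valid NodePort"
valid_nodeport 8080  || echo "8080 is outside the NodePort range"
```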
7. Wait for AWX Deployment
# Watch all pods come up - this can take a long time depending on your connection and resources!
kubectl get pods -n awx -w
# Expected final state (all Running):
# awx-operator-controller-manager-xxxxx 2/2 Running
# awx-postgres-15-0 1/1 Running
# awx-web-xxxxx 3/3 Running
# awx-task-xxxxx 4/4 Running
# Check deployment progress
kubectl logs -f deployment/awx-operator-controller-manager -n awx -c awx-manager
# Quick status check
kubectl get pods -n awx
kubectl get svc -n awx
8. Get AWX Admin Password
# Admin password is stored in a Kubernetes secret
kubectl get secret awx-admin-password -n awx -o jsonpath='{.data.password}' | base64 --decode
# Example output: kT9xPqL2mNvR5wYz
# Save this password! You'll need it to login
# Username: admin
# Password: <output from above command>
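The secret is only base64-encoded, not encrypted, which is why the decode above works. A quick round-trip demo using the example password from this guide (not a real secret):

```shell
# Encode the example password the way Kubernetes stores it in the secret,
# then decode it back - demonstrating that base64 is reversible encoding,
# not encryption.
encoded=$(printf 'kT9xPqL2mNvR5wYz' | base64)
echo "stored in the secret as: $encoded"
printf '%s' "$encoded" | base64 --decode
echo
```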
9. Access AWX Web UI
# Get NodePort service details
kubectl get svc -n awx
# Look for: awx-service NodePort ... 80:30052/TCP
# AWX URL
echo "AWX URL: http://$(hostname -I | awk '{print $1}'):30052"
# Example: http://192.168.1.121:30052
# Open in browser:
# URL: http://192.168.1.121:30052
# Username: admin
# Password: <from step 8>
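The echo command above can be wrapped in a reusable one-liner. A hypothetical helper (the function name is ours, not from the video) for building NodePort service URLs:

```shell
# Build a URL from an IP and a NodePort.
awx_url() { echo "http://$1:$2"; }

awx_url 192.168.1.121 30052
# On the AWX node itself you could pass the live address:
#   awx_url "$(hostname -I | awk '{print $1}')" 30052
```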
10. Quick AWX UI Tour
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β AWX DASHBOARD β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ€
β β
β π Dashboard - Job status, recent activity β
β β
β π Resources β
β βββ Templates - Job templates (playbook + inventory) β
β βββ Credentials - SSH keys, API tokens, vault passwords β
β βββ Projects - Git repos with playbooks β
β βββ Inventories - Static or dynamic (NetBox!) β
β βββ Hosts - Managed devices β
β β
β βοΈ Administration β
β βββ Execution Environments - Container images for jobs β
β βββ Instance Groups - Where jobs run β
β βββ Users/Teams - RBAC configuration β
β β
β π Jobs - Running and completed jobs β
β β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β Verification Checklist
| Step | Verification Command | Expected Result |
|---|---|---|
| K3s installed | kubectl get nodes | Node shows "Ready" |
| Helm installed | helm version | Version displayed |
| Operator running | kubectl get pods -n awx | Operator pod "Running" |
| AWX deployed | kubectl get pods -n awx | All pods "Running" |
| UI accessible | Browser: http://IP:30052 | Login page appears |
| Login works | Enter admin credentials | Dashboard loads |
π§ Troubleshooting
β K3s fails to start
# Check K3s logs
sudo journalctl -u k3s -f
# Common fix: Reset and reinstall
sudo /usr/local/bin/k3s-uninstall.sh
curl -sfL https://get.k3s.io | sh -
β Pods stuck in Pending/ContainerCreating
# Check pod events
kubectl describe pod <pod-name> -n awx
# Check node resources
kubectl describe node | grep -A 5 "Allocated resources"
# Common issue: Not enough memory
free -h
# Solution: Increase VM RAM to 8GB+
β Cannot access UI on port 30052
# Verify service is running
kubectl get svc -n awx
# Check if port is listening
ss -tlnp | grep 30052
# Test locally first
curl -I http://localhost:30052
# Check firewall (if enabled)
sudo firewall-cmd --add-port=30052/tcp --permanent
sudo firewall-cmd --reload
β Forgot admin password
# Password is in Kubernetes secret
kubectl get secret awx-admin-password -n awx -o jsonpath='{.data.password}' | base64 --decode
# Or reset password via container
kubectl exec -it deployment/awx-task -n awx -- awx-manage changepassword admin
π Resources
Video 18: AWX Execution Environments
π Overview
Build custom Execution Environments (EE) with network automation collections. EEs are container images that include Ansible, Python dependencies, and collections - ensuring consistent playbook execution.
β Why Execution Environments?
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β THE EXECUTION ENVIRONMENT PROBLEM β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
OLD WAY (AWX < 18.0)                   NEW WAY (AWX 18.0+)
βββββββββββββββββββ ββββββββββββββββββ
AWX Server AWX Server
β β
βΌ βΌ
Python installed Launch Container
on AWX host β
β βΌ
βΌ βββββββββββββββββββ
Collections installed β Execution Env β
globally β (Container) β
β β β
βΌ β β’ Ansible 2.15 β
Version conflicts! β β’ Python 3.11 β
Dependency hell! β β’ netbox.netbox β
β β’ cisco.ios β
β "Works on my machine" β β’ pynetbox β
βββββββββββββββββββ
β
Consistent everywhere!
π― What We'll Build
| Component | Purpose |
|---|---|
| execution-environment.yml | EE definition file |
| requirements.yml | Ansible collections to include |
| requirements.txt | Python packages to include |
| Custom EE Image | Container with everything we need |
ποΈ EE Architecture
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β EXECUTION ENVIRONMENT β
β network-ee:1.0 β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ€
β β
β Base Image: quay.io/ansible/awx-ee:latest β
β β
β βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ β
β β ANSIBLE COLLECTIONS β β
β β β’ ansible.netcommon - Network resource modules β β
β β β’ cisco.ios - Cisco IOS modules β β
β β β’ netbox.netbox - NetBox inventory & modules β β
β β β’ ansible.utils - Filters and utilities β β
β β β’ fortinet.fortios - FortiGate modules (optional) β β
β βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ β
β β
β βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ β
β β PYTHON PACKAGES β β
β β β’ pynetbox - NetBox API client β β
β β β’ netaddr - IP address manipulation β β
β β β’ paramiko - SSH library β β
β β β’ netmiko - Network device SSH β β
β β β’ jmespath - JSON query language β β
β βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ β
β β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
π» Commands
1. Create EE Project Directory
# Create and enter project directory
mkdir -p ~/custom-ee && cd ~/custom-ee
2. Setup UV & install ansible-builder
# Install UV (if not installed) & add its bin directory to PATH
curl -LsSf https://astral.sh/uv/install.sh | sh
export PATH="$HOME/.local/bin:$PATH"
# Verify installation
uv --version
# Initialize project & add dependencies
uv init
uv add ansible-builder ansible-navigator
# Verify installation
ansible-builder --version
# Install Docker or Podman (if not installed)
sudo apt update && sudo apt install podman -y
# Or for Fedora/CentOS:
sudo dnf install -y podman
# Or for Docker:
sudo apt update && sudo apt install docker.io -y
sudo usermod -aG docker $USER
newgrp docker
3. Create requirements.yml (Collections)
Note: We are using ansible.netcommon 6.0.0+ to ensure the latest connection plugins for Cisco and Fortinet are available.
cat <<'EOF' > requirements.yml
---
collections:
# Network automation essentials
- name: ansible.netcommon
version: ">=6.0.0"
- name: ansible.utils
version: ">=3.0.0"
# Cisco support
- name: cisco.ios
version: ">=6.0.0"
# NetBox integration (for dynamic inventory)
- name: netbox.netbox
version: ">=3.19.0"
# FortiGate support (optional)
- name: fortinet.fortios
version: ">=2.3.0"
EOF
cat requirements.yml
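ansible-builder resolves the ">=6.0.0"-style constraints above at build time via ansible-galaxy. If you ever need to reproduce a minimum-version check locally, GNU sort -V can do it; this helper function is our own illustration, not part of the toolchain:

```shell
# True if version $1 satisfies ">=$2": after a version-aware sort,
# the required version must come first (i.e. be the smaller of the two).
meets_min() {
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

meets_min 6.1.0 6.0.0 && echo "6.1.0 satisfies >=6.0.0"
meets_min 5.9.9 6.0.0 || echo "5.9.9 does not satisfy >=6.0.0"
```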
4. Create requirements.txt (Python Packages)
Note: pynetbox is required for the NetBox inventory, while netmiko and paramiko handle the SSH connections to your routers and firewalls.
cat <<'EOF' > requirements.txt
# NetBox API client
pynetbox>=7.3.3
# IP address utilities
netaddr>=1.3.0
# Network device connectivity
paramiko>=3.4.0
netmiko>=4.4.0
# JSON parsing
jmespath>=1.0.0
# HTTP requests
requests>=2.32.0
EOF
cat requirements.txt
5. Pull Base Image
# Download base image first (saves time during build)
docker pull quay.io/ansible/awx-ee:24.6.1
# Verify image downloaded
docker images | grep awx-ee
6. Create execution-environment.yml
---
version: 3
images:
base_image:
name: quay.io/ansible/awx-ee:24.6.1
dependencies:
galaxy: requirements.yml
python: requirements.txt
7. Build the Execution Environment
# Set registry auth path to user home
export REGISTRY_AUTH_FILE=$HOME/.docker/config.json
# Build the EE image
uv run ansible-builder build --tag localhost/network-ee:latest --container-runtime podman --verbosity 3
# If using Docker instead of Podman:
uv run ansible-builder build --tag localhost/network-ee:latest --container-runtime docker --verbosity 3
# Verify image was created
podman images | grep network-ee
# Or: docker images | grep network-ee
8. Verify the EE Locally
Verify the build locally before shipping it to the AWX server.
# Check Ansible version
podman run --rm localhost/network-ee:latest ansible --version
# Check collections
podman run --rm localhost/network-ee:latest ansible-galaxy collection list
# Check Python packages
podman run --rm localhost/network-ee:latest pip list | grep pynetbox
# (Use docker run instead if Docker is your runtime)
9. Transfer EE to the AWX Node (Option A: Manual Import)
In our setup, the EE image is built on the Ansible node while AWX runs on a different node, so we ship the image over manually.
# Save image to a tar file
podman save localhost/network-ee:latest -o network-ee.tar
# If using Docker:
docker save localhost/network-ee:latest -o network-ee.tar
# Ship the EE to the AWX node
ls -l
scp network-ee.tar user@192.168.1.129:/tmp/
# SSH to the AWX node and confirm the file arrived
ls -l /tmp
# Import into K3s containerd (run on the AWX node)
sudo k3s ctr images import /tmp/network-ee.tar
# Verify in K3s
sudo k3s ctr images list | grep network-ee
10. Push EE to Registry (Option B: Docker Hub)
# Login to Docker Hub (create free account at hub.docker.com)
podman login docker.io
# Tag for Docker Hub
podman tag localhost/network-ee:latest docker.io/YOUR_USERNAME/network-ee:latest
# Push to Docker Hub
podman push docker.io/YOUR_USERNAME/network-ee:latest
# Verify on Docker Hub
# https://hub.docker.com/r/YOUR_USERNAME/network-ee
11. Add EE to AWX UI
1. Register the Execution Environment
Log into your AWX UI.
On the left sidebar, go to Administration β Execution Environments.
Click the Add button.
Fill in the details:
Name: Network-Automation-EE (or whatever you prefer).
Image: localhost/network-ee:latest
Note: Use the exact name you saw in your ctr images import output earlier.
Pull: Select Never (This is crucial! It tells AWX to use the image already on the K3s node instead of trying to download it from the internet).
Organization: Select your organization (e.g., Default).
Click Save.
2. Launch the Test
Go to Resources β Templates β Add β Add job template.
Name: EE Smoke Test
Inventory: Local Test Inventory (the one you just made).
Project: Your Git Repo.
Playbook: ee-check.yml.
Execution Environment: Select your Network-Automation-EE.
Save and Launch.
12. Verify EE in AWX
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β AWX > Administration > Execution Environments β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ€
β β
β Name β Image β Pull β
β βββββββββββββββββββββββββΌβββββββββββββββββββββββββββΌββββββββββ β
β AWX EE (default) β quay.io/ansible/awx-ee β Always β
β β
β  Network Automation EE  β localhost/network-ee     β Never    β
β β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
When creating Job Templates, select "Network Automation EE"
to use your custom collections!
π Project Structure
~/custom-ee/
βββ execution-environment.yml # Main EE definition
βββ requirements.yml # Ansible collections
βββ requirements.txt # Python packages
βββ context/ # Auto-generated by builder
βββ _build/
βββ Containerfile
β Verification Checklist
| Step | Check | Expected |
|---|---|---|
| ansible-builder installed | ansible-builder --version | Version shown |
| EE files created | ls ~/custom-ee/ | 3 files present |
| Image built | podman images, grep for network-ee | Image listed |
| Collections present | Run container, ansible-galaxy collection list | netbox.netbox, cisco.ios shown |
| Added to AWX | AWX UI > Execution Environments | Network Automation EE listed |
π§ Troubleshooting
β ansible-builder build fails
# Check build logs
ansible-builder build --tag localhost/network-ee:latest --verbosity 3 2>&1 | tee build.log
# Common issues:
# 1. Network timeout - retry build
# 2. Collection not found - check spelling in requirements.yml
# 3. Python package conflict - check versions in requirements.txt
# Clean and retry
podman system prune -f
ansible-builder build --tag localhost/network-ee:latest --no-cache
β K3s can't find local image
# Verify image is imported to K3s containerd
sudo k3s ctr images list | grep network-ee
# If not, re-import
podman save localhost/network-ee:latest -o /tmp/network-ee.tar
sudo k3s ctr images import /tmp/network-ee.tar
# In AWX, set Pull policy to "Never" for local images
β AWX job fails with "collection not found"
# Verify the job is using correct EE
# AWX > Jobs > [Your Job] > Details > Execution Environment
# Should show: Network Automation EE
# If it shows: AWX EE (default) - edit your Job Template
# Job Template > Edit > Execution Environment > Select "Network Automation EE"
π Resources
Video 19: AWX GitHub + NetBox Integration
π Overview
Connect AWX to GitHub for GitOps-style automation and NetBox for dynamic inventory. Changes pushed to Git automatically sync to AWX. Devices added to NetBox automatically appear in AWX inventory.
π― What We'll Configure
| Component | Purpose |
|---|---|
| GitHub Project | Sync playbooks from Git repository |
| Machine Credential | SSH key for network devices |
| NetBox Credential | API token for NetBox |
| NetBox Inventory Source | Dynamic inventory from NetBox |
| Job Template | Tie it all together |
ποΈ Integration Architecture
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β AWX GITOPS + NETBOX ARCHITECTURE β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
ββββββββββββββββββββ
β GitHub β
β β
β π playbooks/ β
β βββ backup.yml β
β βββ config.yml β
β βββ verify.yml β
ββββββββββ¬ββββββββββ
β
β Webhook / Sync
βΌ
ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β AWX β
β β
β βββββββββββββββ βββββββββββββββ βββββββββββββββββββββββββββββββ β
β β Project β β Inventory β β Job Template β β
β β β β β β β β
β β GitHub βββββΌββββββ NetBox βββββΌββββββ Project: Network Playbooks β β
β β Playbooks β β Dynamic β β Inventory: NetBox Dynamic β β
β β β β Inventory β β Playbook: backup.yml β β
β βββββββββββββββ ββββββββ¬βββββββ β Credentials: SSH + NetBox β β
β β β EE: Network Automation EE β β
β β βββββββββββββββββββββββββββββββ β
β β β
βββββββββββββββββββββββββββββββΌβββββββββββββββββββββββββββββββββββββββββββββββ
β
β API Query
βΌ
ββββββββββββββββββββ
β NetBox β
β 192.168.1.120 β
β β
β π± vIOS-R1 β
β π± vIOS-R2 β
β π± vIOS-R3 β
ββββββββββββββββββββ
π¦ Prerequisites
- β Video 17: AWX installed and running
- β Video 18: Custom EE with netbox.netbox collection
- β Video 7: NetBox running with devices
- β GitHub account with repository
π» Commands
1. Prepare GitHub Repository
# Create a new repo on GitHub or use existing
# Structure your repo like this:
network-automation/
βββ playbooks/
β βββ backup_config.yml
β βββ show_version.yml
β βββ deploy_ntp.yml
βββ inventory/
β βββ netbox_inv.yml # Optional: used when sourcing inventory from this project (step 7)
βββ group_vars/
β βββ all.yml
βββ README.md
# Example: show_version.yml
cat <<'EOF'
---
- name: Get Device Versions
hosts: all
gather_facts: no
connection: ansible.netcommon.network_cli
tasks:
- name: Run show version
cisco.ios.ios_command:
commands:
- show version
register: version_output
- name: Display version
debug:
msg: "{{ inventory_hostname }}: {{ version_output.stdout_lines[0] | first }}"
EOF
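The "stdout_lines[0] | first" expression in the debug task picks the first line of the first command's output. The equivalent operation on raw command output in plain shell:

```shell
# Grab only the first line of a multi-line command output,
# mirroring what the Jinja2 "| first" filter does in the playbook.
printf 'Cisco IOS Software, IOSv Software ...\nROM: Bootstrap program\n' | head -n1
```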
2. Create Machine Credential (SSH)
AWX UI Steps:
βββββββββββββ
1. Navigate to: Resources > Credentials
2. Click: Add
3. Fill in:
- Name: Network SSH Credential
- Credential Type: Machine
- Username: ansible
- Password: ansible@123 (or use SSH key)
4. Click: Save
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β Add Credential β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ€
β Name: [Network SSH Credential ] β
β Credential Type: [Machine ] βΌ β
β Organization: [Default ] βΌ β
β β
β βββ Type Details βββ β
β Username: [ansible ] β
β Password: [β’β’β’β’β’β’β’β’β’β’ ] β
β SSH Private Key: [ ] β
β Privilege Escalation: β
β Method: [sudo ] βΌ β
β Password: [ ] β
β β
β [Cancel] [Save] β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
3. Create NetBox Credential
AWX UI Steps:
βββββββββββββ
1. Navigate to: Resources > Credentials
2. Click: Add
3. Fill in:
- Name: NetBox API Token
- Credential Type: NetBox (if available) or Custom Credential
- NetBox URL: http://192.168.1.120:8000
- API Token: <your-netbox-token>
4. Click: Save
# Get your NetBox token:
# NetBox UI > Admin > API Tokens > Add Token
# Or via API:
curl -X POST http://192.168.1.120:8000/api/users/tokens/provision/ \
-H "Content-Type: application/json" \
-d '{"username":"admin","password":"admin"}'
4. Create Source Control Credential (GitHub)
AWX UI Steps:
βββββββββββββ
1. Navigate to: Resources > Credentials
2. Click: Add
3. Fill in:
- Name: GitHub Personal Access Token
- Credential Type: Source Control
- Username: <your-github-username>
- Password: <your-github-pat> (Personal Access Token)
4. Click: Save
# Create GitHub PAT:
# GitHub > Settings > Developer Settings > Personal Access Tokens > Generate
# Permissions: repo (full control)
5. Create Project (GitHub Sync)
AWX UI Steps:
βββββββββββββ
1. Navigate to: Resources > Projects
2. Click: Add
3. Fill in:
- Name: Network Automation Playbooks
- Organization: Default
- Execution Environment: Network Automation EE β Important!
- Source Control Type: Git
- Source Control URL: https://github.com/YOUR_USER/network-automation.git
- Source Control Credential: GitHub Personal Access Token
- Options:
βοΈ Clean
βοΈ Update Revision on Launch
4. Click: Save
5. Click: Sync (button with circular arrows)
# Watch sync status
# Should show: Successful
6. Create Inventory
AWX UI Steps:
βββββββββββββ
1. Navigate to: Resources > Inventories
2. Click: Add > Add Inventory
3. Fill in:
- Name: NetBox Dynamic Inventory
- Organization: Default
4. Click: Save
7. Add NetBox Inventory Source
AWX UI Steps:
βββββββββββββ
1. Open: NetBox Dynamic Inventory (from step 6)
2. Click: Sources tab
3. Click: Add
4. Fill in:
- Name: NetBox Source
- Source: Sourced from a Project β Select this!
- Project: Network Automation Playbooks
- Inventory File: inventory/netbox_inv.yml
OR (if using built-in):
- Source: NetBox
- Credential: NetBox API Token
- NetBox URL: http://192.168.1.120:8000
5. Source Variables (YAML):
# Source Variables for NetBox inventory
plugin: netbox.netbox.nb_inventory
api_endpoint: http://192.168.1.120:8000
token: "{{ lookup('env', 'NETBOX_TOKEN') }}"
validate_certs: false
# Group devices by these attributes
group_by:
- device_roles
- sites
- platforms
# Map NetBox platform to ansible_network_os
compose:
ansible_network_os: >-
{%- if platform and 'ios' in platform.slug | lower -%}
cisco.ios.ios
{%- elif platform and 'fortios' in platform.slug | lower -%}
fortinet.fortios.fortios
{%- else -%}
{{ platform.slug | default('unknown') }}
{%- endif -%}
ansible_host: primary_ip4.address | default('') | split('/') | first
6. Update Options:
βοΈ Overwrite
βοΈ Update on Launch
7. Click: Save
8. Click: Sync (circular arrows button)
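The compose expression above turns NetBox's primary_ip4.address (e.g. "192.168.1.201/24") into a bare IP for ansible_host by splitting on "/". The same strip-the-prefix step in plain shell, for checking values by hand:

```shell
# Strip the CIDR prefix length from an address, leaving the host IP -
# the same transformation the inventory compose applies to primary_ip4.
cidr='192.168.1.201/24'
echo "${cidr%/*}"
```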
8. Verify Inventory Sync
AWX UI Steps:
βββββββββββββ
1. Go to: Resources > Inventories > NetBox Dynamic Inventory
2. Click: Hosts tab
Expected:
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β Hosts β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ€
β Name β Description β Activity β
β ββββββββββββββΌβββββββββββββββββββββΌβββββββββββββββββββββββββββββ
β vIOS-R1 β Cisco IOS Router β β β
β vIOS-R2 β Cisco IOS Router β β β
β vIOS-R3 β Cisco IOS Router β β β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
3. Click: Groups tab
Expected Groups (auto-created from NetBox):
- device_roles_router
- sites_main_dc
- platforms_cisco_ios
9. Create Job Template
AWX UI Steps:
βββββββββββββ
1. Navigate to: Resources > Templates
2. Click: Add > Add Job Template
3. Fill in:
- Name: Show Version - All Routers
- Job Type: Run
- Inventory: NetBox Dynamic Inventory
- Project: Network Automation Playbooks
- Execution Environment: Network Automation EE β Important!
- Playbook: playbooks/show_version.yml
- Credentials:
- Network SSH Credential (Machine)
4. Options:
β Enable Privilege Escalation (not needed for network devices)
βοΈ Enable Concurrent Jobs (optional)
5. Click: Save
10. Launch Job and Verify
AWX UI Steps:
βββββββββββββ
1. Go to: Resources > Templates
2. Find: "Show Version - All Routers"
3. Click: π Launch button
Watch job output:
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β Job: Show Version - All Routers #1 β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ€
β  Status:   Successful                                           β
β Started: 2025-01-24 10:30:00 β
β Finished: 2025-01-24 10:30:45 β
β β
β PLAY [Get Device Versions] *** β
β β
β TASK [Run show version] *** β
β ok: [vIOS-R1] β
β ok: [vIOS-R2] β
β ok: [vIOS-R3] β
β β
β TASK [Display version] *** β
β ok: [vIOS-R1] => "Cisco IOS Software, IOSv ..." β
β ok: [vIOS-R2] => "Cisco IOS Software, IOSv ..." β
β ok: [vIOS-R3] => "Cisco IOS Software, IOSv ..." β
β β
β PLAY RECAP *** β
β vIOS-R1 : ok=2 changed=0 failed=0 β
β vIOS-R2 : ok=2 changed=0 failed=0 β
β vIOS-R3 : ok=2 changed=0 failed=0 β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
π GitOps Workflow
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
β GITOPS WORKFLOW β
βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ
Developer GitHub AWX
β β β
β 1. git push β β
β ββββββββββββββββββββββ>β β
β β β
β β 2. Webhook/Sync β
β β ββββββββββββββββββββββ>β
β β β
β β β 3. Update Project
β β β (git pull)
β β β
β β β 4. Launch Job
β β β (if configured)
β β β
β β β 5. Run playbook
β β β against NetBox
β β β inventory
β β β
β 6. View results in AWX UI β
β <βββββββββββββββββββββββββββββββββββββββββββββββββ
β Complete Integration Checklist
| Component | Verification | Status |
|---|---|---|
| GitHub Credential | Resources > Credentials | β |
| Network SSH Credential | Resources > Credentials | β |
| NetBox Credential | Resources > Credentials | β |
| Project synced | Resources > Projects > Sync successful | β |
| Inventory created | Resources > Inventories | β |
| NetBox source added | Inventory > Sources > Sync successful | β |
| Hosts appear | Inventory > Hosts > vIOS-R1,R2,R3 | β |
| Groups appear | Inventory > Groups > device_roles_router | β |
| Job Template created | Resources > Templates | β |
| Job runs successfully | Jobs > Successful | β |
π§ Troubleshooting
β Project sync fails - "Authentication failed"
# Check GitHub credential
# 1. Verify PAT has 'repo' permission
# 2. PAT might have expired - regenerate
# Test manually on AWX node:
git clone https://YOUR_TOKEN@github.com/YOUR_USER/network-automation.git
β Inventory sync shows 0 hosts
# Check NetBox has devices with primary IPs
# NetBox UI > Devices > Each device needs:
# - Primary IPv4 assigned
# - Platform set (e.g., cisco-ios)
# - Status: Active
# Test NetBox API:
curl -s http://192.168.1.120:8000/api/dcim/devices/ \
-H "Authorization: Token YOUR_TOKEN" | jq '.results[].name'
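If jq is not installed, python3 can do the same name extraction. The response body here is a canned sample we assume matches NetBox's list-endpoint shape (a "results" array of device objects):

```shell
# Extract device names from a NetBox-style JSON response without jq.
response='{"results":[{"name":"vIOS-R1"},{"name":"vIOS-R2"},{"name":"vIOS-R3"}]}'
printf '%s' "$response" | python3 -c '
import json, sys
for device in json.load(sys.stdin)["results"]:
    print(device["name"])'
```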
β Job fails - "No hosts matched"
# Check inventory has hosts
# AWX UI > Inventories > NetBox Dynamic Inventory > Hosts
# Check playbook 'hosts:' matches a group
# Should be: hosts: all or hosts: device_roles_router
# Sync inventory and retry
# Inventory > Sources > Sync button
β Job fails - "Connection refused"
# Check ansible_host variable
# AWX > Inventories > Hosts > vIOS-R1 > Variables
# Should show: ansible_host: 192.168.1.201
# Check network connectivity from AWX
kubectl exec -it deployment/awx-task -n awx -- ping 192.168.1.201
# Check SSH credentials are correct
# Try manual SSH from AWX node
β Collection not found in job
# Verify Job Template uses correct EE
# Resources > Templates > [Your Template] > Execution Environment
# Should be: Network Automation EE (from Video 18)
# NOT: AWX EE (default)
# If still failing, rebuild EE with correct collections
| Component | Purpose |
|---|---|
| GitHub Project | Sync playbooks from Git repository |
| Machine Credential | SSH key for network devices |
| NetBox Credential | API token for NetBox |
| NetBox Inventory Source | Dynamic inventory from NetBox |
| Job Template | Tie it all together |
Video 22: AWX MCP Integration
π Coming Soon
π Changelog
v25.0 (2025-01-18)
- β Video 16: Complete rewrite with step-by-step instructions
- All steps now collapsible (details/summary)
- Server setup: 10 detailed steps with verification
- Client setup: 8 detailed steps with verification
- Testing section with example queries
- Added gemini mcp add command + manual JSON fallback
- Comprehensive troubleshooting guide (7 issues)
- Gemini models and token usage explanation
- Rate limit vs daily quota explanation
- HostFixMiddleware for remote SSE connections
v24.0 (2025-01-17)
- β Video 16: Architecture diagrams and basic setup
- β Video 17: AWX preview added
v23.0 (2025-01-16)
- β Added AWX Series roadmap (Videos 17-23)
v22.0 (2025-01-16)
- β Video 15: Fixed ansible_mcp.py absolute paths
β If you find this helpful, please star the repo! β
