When you troubleshoot connectivity or service issues from the switch, you're seeing the network from the switch's perspective — not the user's. Latency measurements, DHCP behavior, and DNS resolution can all look different depending on where in the network you're standing. A Service Probe bridges that gap.
A Service Probe emulates a client device connected to a VLAN on your switch. It operates completely outside ExtremeXOS's normal networking domain, so it sees the network the same way a device plugged into a front panel port would. This makes it an effective tool for validating end-user experience, catching problems that don't show up from the switch itself, and debugging intermittent issues without needing a physical test device.
Once created, a Service Probe can request an address over DHCP, resolve names through its own DNS configuration, verify gateway reachability with ARP, ping hosts, and (in version 33.6.1 and later) run shell commands and Python scripts, all from the probe's point of view on the network.
Service Probe was introduced in version 33.5.1. Shell and Python code execution, enhanced DNS querying, and asynchronous command handling were added in version 33.6.1.
Each Service Probe is implemented as a Linux network namespace containing a single macvlan-type network device, called the service interface. This service interface is linked to the network device that represents the specified VLAN, but lives in a completely separate namespace from the OS itself.
From the network's perspective, a Service Probe looks and behaves like any other device connected to that VLAN — it has its own MAC address, responds to ARP, can send and receive traffic, and is visible to neighboring devices. Multiple probes can share the same VLAN; each still gets its own namespace and service interface.
The probe's MAC address is registered with the OS slow path using the VMAC infrastructure, and programmed into hardware on all stack nodes via the FDB filter mechanism so traffic is forwarded correctly to the CPU.
Each Service Probe is assigned a unique MAC address that is different from the switch MAC.
Probe MAC addresses are drawn from two locally administered prefixes (0A:xx:xx:xx:xx:xx and 0E:xx:xx:xx:xx:xx), which limits probes to two per VLAN. MAC addresses are re-used across VLANs: the same MAC can appear on probes attached to different VLANs at the same time.
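The two-prefix scheme can be sketched as a small allocator. This is an illustrative sketch, not ExtremeXOS code: the function name and the random tail are assumptions; only the two prefixes and the two-per-VLAN limit come from the text above.

```python
# Hypothetical sketch: pick a MAC for a new probe on a VLAN, using the two
# locally administered prefixes the feature reserves (0A and 0E). Because
# only two prefixes exist, a VLAN can host at most two probes.
import os

PROBE_PREFIXES = (0x0A, 0x0E)  # locally administered, unicast first octets

def allocate_probe_mac(existing_first_octets):
    """Return a MAC string for the next probe, or None if the VLAN is full."""
    free = [p for p in PROBE_PREFIXES if p not in existing_first_octets]
    if not free:
        return None  # two probes already on this VLAN
    tail = os.urandom(5)  # remaining five octets; any value works here
    return ":".join(f"{b:02x}" for b in (free[0], *tail))

mac1 = allocate_probe_mac(set())          # first probe: 0a:...
mac2 = allocate_probe_mac({0x0A})         # second probe: 0e:...
full = allocate_probe_mac({0x0A, 0x0E})   # third probe: refused
```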
A Service Probe gets its IP configuration in one of two ways:
Dynamic (DHCP): The probe runs a DHCP client (udhcpc) inside its namespace, requesting an address, subnet mask, gateway, and DNS servers from the network. This is a real DHCP exchange from the perspective of the DHCP server — it's an effective way to verify that DHCP is working for clients on that VLAN.
Static: An IP address, subnet mask, and gateway are assigned when the probe is created and remain fixed.
When a dynamic probe is created, it immediately attempts a synchronous DHCP request. To keep this initial delay short, the DHCP client uses the following timing:
| Parameter | Value | Description |
|---|---|---|
| Retries | 2 | Number of DHCP requests sent per attempt cycle |
| Timeout | 2 seconds | Time to wait for a response before retrying |
| Try-again interval | 30 seconds | Wait time before starting a new attempt cycle if no response was received |

Note
Service Probes cannot be associated with the Management VLAN.

Each probe namespace maintains its own /etc/resolv.conf, giving every probe independent DNS configuration. Up to three name servers can be configured — the maximum libc will use. Name servers can come from the DHCP exchange on dynamic probes or from static configuration.
In version 33.6.1 and later, you can direct a DNS query to a specific server (primary, secondary, or tertiary), all configured servers at once, or the default system resolver. The results of the most recent query to each server are stored and shown in show service-probe detail.
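The per-probe DNS model can be sketched as follows. The function names and the role labels are assumptions for illustration; the three-server cap and the primary/secondary/tertiary/all targeting come from the text above.

```python
# Hypothetical sketch of per-probe DNS bookkeeping: each probe's namespace
# gets its own resolv.conf with at most three name servers (libc's limit),
# and a query can target one server by role, or all of them.
MAX_NAMESERVERS = 3
ROLES = ("primary", "secondary", "tertiary")

def render_resolv_conf(servers):
    servers = servers[:MAX_NAMESERVERS]  # libc ignores anything past three
    return "".join(f"nameserver {s}\n" for s in servers)

def targets_for(servers, which):
    """Pick the servers a query goes to: a role name or 'all'."""
    servers = servers[:MAX_NAMESERVERS]
    if which == "all":
        return list(servers)
    return [servers[ROLES.index(which)]]

conf = render_resolv_conf(["11.100.100.1", "172.16.1.98"])
```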
The run service-probe query gateway command tests whether the probe's configured gateway is reachable by sending ARP requests from inside the probe's namespace using arping.
| Parameter | Value |
|---|---|
| Retries | 3 ARP requests |
| Timeout per request | 2 seconds |
This command runs synchronously and completes within about 6 seconds (3 requests × 2 seconds). The most recent result is stored and shown in show service-probe detail.
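The retry behavior can be sketched as a simple loop. This is illustrative only: `send_arp` is a hypothetical stand-in for running arping inside the probe's namespace; the retry count and timeout match the table above.

```python
# Illustrative retry loop matching the gateway check's parameters: up to
# 3 ARP requests with a 2-second wait each, so an unreachable gateway is
# reported after roughly 6 seconds. `send_arp` stands in for arping run
# inside the probe's namespace.
ARP_RETRIES = 3
ARP_TIMEOUT_S = 2

def query_gateway(send_arp):
    """Return (reachable, seconds_spent). send_arp(timeout) -> bool."""
    spent = 0
    for _ in range(ARP_RETRIES):
        spent += ARP_TIMEOUT_S          # worst case: full timeout per try
        if send_arp(ARP_TIMEOUT_S):
            return True, spent
    return False, spent

ok, t = query_gateway(lambda timeout: False)   # gateway never answers
```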
The run service-probe ping command sends ICMP echo requests to a hostname or IP address from within the probe's network context. The result is reported as Pass, Fail, or Not Completed.
| Parameter | Value |
|---|---|
| Count | 3 packets |
| Wait per packet | 2 seconds |

Note
A complete failure (all three pings timing out) takes up to 6 seconds. In version 33.6.1 and later, ping runs asynchronously so it does not block the CLI while waiting.

Introduced in version 33.6.1, the run service-probe shell and run service-probe python commands let you run arbitrary shell commands or Python scripts from inside the probe's network namespace. This is useful when you need to go beyond ping — for example, running traceroute, testing HTTP connectivity, or executing a custom diagnostic script that relies on the probe's IP stack and DNS configuration.
Results include:
| Field | Description |
|---|---|
| normalExit | Whether the process exited cleanly |
| exitStatus | The process exit code |
| output | Combined stdout/stderr (up to 7,000 bytes) |
| outputTruncated | True if output exceeded the 7,000-byte buffer |
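The truncation behavior can be sketched as follows. The field names mirror the table; the 7,000-byte cap is documented, while the function itself is an illustrative assumption.

```python
# Sketch of how the result fields in the table above could be populated
# from a finished process; the 7,000-byte cap on combined output is the
# documented limit, the rest is illustrative.
OUTPUT_LIMIT = 7000

def make_result(exit_code, combined_output, crashed=False):
    data = combined_output.encode()
    return {
        "normalExit": not crashed,
        "exitStatus": exit_code,
        "output": data[:OUTPUT_LIMIT].decode(errors="replace"),
        "outputTruncated": len(data) > OUTPUT_LIMIT,
    }

r = make_result(0, "x" * 8000)       # output larger than the buffer
small = make_result(1, "ok")         # small output, nonzero exit code
```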
To make scripts easier to write, the OS automatically sets environment variables describing the probe's current state at the time of execution:
```
SP_VLAN_NAME='uplink'
SP_VLAN_ID='3500'
SP_VLAN_UP='0'
SP_IP_ADDR='172.16.1.99'
SP_IP_ADDRMASK='172.16.1.99/24'
SP_IP_NETMASK='255.255.255.0'
SP_IP_GATEWAY='172.16.1.99'
SP_DNS='11.100.100.1,172.16.1.98'
SP_DYNAMIC='0'
SP_MAC='0a:11:88:fe:ec:36'
```
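A probe-side Python script can read its network state straight from these variables. A minimal sketch, assuming only the SP_* names shown above; the helper function is illustrative:

```python
# Read the probe's SP_* environment variables into a plain dict so a
# diagnostic script can branch on VLAN, address, DNS, and addressing mode.
import os

def probe_state(env=os.environ):
    dns = env.get("SP_DNS", "")
    return {
        "vlan": env.get("SP_VLAN_NAME"),
        "vlan_id": int(env["SP_VLAN_ID"]) if "SP_VLAN_ID" in env else None,
        "addr": env.get("SP_IP_ADDR"),
        "gateway": env.get("SP_IP_GATEWAY"),
        "dns": dns.split(",") if dns else [],        # comma-separated list
        "dynamic": env.get("SP_DYNAMIC") == "1",     # '1' = DHCP-assigned
    }

state = probe_state({"SP_VLAN_NAME": "uplink", "SP_VLAN_ID": "3500",
                     "SP_DNS": "11.100.100.1,172.16.1.98", "SP_DYNAMIC": "0"})
```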
In version 33.6.1 and later, ping, query DNS, shell, and Python commands run asynchronously: the CLI returns immediately, and the result of the most recent run is stored and shown in show service-probe detail.
Up to 10 actions can run concurrently by default.
Command timeouts:
| Command | Check Interval | Max Run Time |
|---|---|---|
| Query DNS | 1 second | 8 seconds |
| Ping | 1 second | 10 seconds |
| Shell | 1 second | 15 seconds (configurable per request) |
| Python | 1 second | 15 seconds (configurable per request) |
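The check-interval and max-run-time pairs in the table amount to a polling supervisor. A sketch under stated assumptions: the simulated clock and function names are illustrative, while the limits (and the fact that shell/Python limits are configurable per request) come from the table.

```python
# Illustrative polling loop for asynchronous actions: the supervisor checks
# the action every `interval` seconds and gives up once the max run time
# elapses. The (interval, max_run) pairs mirror the table; shell and python
# defaults can be overridden per request.
LIMITS = {"dns": (1, 8), "ping": (1, 10), "shell": (1, 15), "python": (1, 15)}

def supervise(kind, is_done, max_run=None):
    """Return ('done'|'timeout', elapsed). is_done(elapsed) -> bool."""
    interval, default_max = LIMITS[kind]
    deadline = max_run if max_run is not None else default_max
    elapsed = 0
    while elapsed < deadline:
        elapsed += interval            # simulated clock: one check per tick
        if is_done(elapsed):
            return "done", elapsed
    return "timeout", elapsed          # the action would be killed here

result = supervise("ping", lambda t: t >= 4)   # finishes on the 4th check
```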
All Service Probe commands run inside a dedicated SvcProbe cgroup, isolating their resource consumption from the rest of the system:
| Resource | Limit |
|---|---|
| CPU | 5% maximum (drawn from the Other cgroup allocation) |
| Memory | 5% maximum |
Commands run as the admin user, not root. Only OS accounts with admin privileges can issue run commands or make POST requests to the NOS-API for Service Probes, so the shell and Python execution features do not introduce new privilege escalation paths beyond what already exists.
Service Probes are not saved to the switch configuration and do not survive a reboot or a netTools process restart. After either event, probes must be recreated.
In stacking environments, a subset of probe data is checkpointed to the backup node — not for probe functionality, but so that FDB filters can be cleaned up correctly if the backup becomes primary. When a probe is created, the backup node receives just enough data to create the corresponding FDB filter. When a probe is deleted, the backup node removes it. On failover, the new primary uses this checkpointed data to remove any hardware programming left over from the previous primary.
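The checkpointing described above can be sketched as minimal bookkeeping on the backup node. The class and method names are illustrative, not ExtremeXOS internals; only the create/delete/failover behavior comes from the text.

```python
# Minimal sketch of probe checkpointing: the backup node tracks just enough
# state (probe MAC + VLAN) to remove stale FDB filters if it becomes primary.
class BackupNode:
    def __init__(self):
        self.fdb_filters = set()          # (mac, vlan) pairs checkpointed

    def on_probe_created(self, mac, vlan):
        self.fdb_filters.add((mac, vlan))

    def on_probe_deleted(self, mac, vlan):
        self.fdb_filters.discard((mac, vlan))

    def on_failover(self):
        """Became primary: return filters whose hardware state must go."""
        stale = sorted(self.fdb_filters)
        self.fdb_filters.clear()
        return stale

backup = BackupNode()
backup.on_probe_created("0a:11:88:fe:ec:36", 3500)
backup.on_probe_created("0e:22:33:44:55:66", 3500)
backup.on_probe_deleted("0e:22:33:44:55:66", 3500)
stale = backup.on_failover()              # leftover filters to clean up
```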