| Feature | Product | Release introduced |
|---|---|---|
| High Availability-CPU (HA-CPU) for a standalone switch | 5420 Series | Not Applicable |
| | 5520 Series | Not Applicable |
| | VSP 4450 Series | Not Applicable |
| | VSP 4900 Series | Not Applicable |
| | VSP 7200 Series | Not Applicable |
| | VSP 7400 Series | Not Applicable |
| | VSP 8200 Series | Not Applicable |
| | VSP 8400 Series | Not Applicable |
| | VSP 8600 Series | VSP 8600 4.5 |
| | XA1400 Series | Not Applicable |
| High Availability-CPU (HA-CPU) for Layer 2 with Simplified vIST | 5420 Series | Not Applicable |
| | 5520 Series | Not Applicable |
| | VSP 4450 Series | Not Applicable |
| | VSP 4900 Series | Not Applicable |
| | VSP 7200 Series | Not Applicable |
| | VSP 7400 Series | Not Applicable |
| | VSP 8200 Series | Not Applicable |
| | VSP 8400 Series | Not Applicable |
| | VSP 8600 Series | VSP 8600 6.3 |
| | XA1400 Series | Not Applicable |
| High Availability-CPU (HA-CPU) for Layer 3 with Simplified vIST | 5420 Series | Not Applicable |
| | 5520 Series | Not Applicable |
| | VSP 4450 Series | Not Applicable |
| | VSP 4900 Series | Not Applicable |
| | VSP 7200 Series | Not Applicable |
| | VSP 7400 Series | Not Applicable |
| | VSP 8200 Series | Not Applicable |
| | VSP 8400 Series | Not Applicable |
| | VSP 8600 Series | VSP 8600 6.3 |
| | XA1400 Series | Not Applicable |
The High Availability-CPU (HA-CPU) framework supports redundancy at the hardware and application levels. The CP software runs on an Input/Output control (IOC) module in both slot 1 and slot 2, and the HA-CPU feature activates the two CPUs simultaneously, one in the primary role and one in the standby role. The CPUs exchange topology data so that, if a failure occurs, either CPU can take over the operations of the other. You can configure the CPUs to operate in either HA mode or non-HA mode. In HA mode, the two CPUs synchronize configuration, protocol states, and tables. In non-HA mode, the two CPUs do not synchronize.
HA mode is disabled by default. To activate HA-CPU mode, use the boot config flags ha-cpu command. To deactivate HA-CPU mode, use the no boot config flags ha-cpu command.
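The following is a minimal CLI sketch for switching between the two modes. It assumes privileged EXEC and global configuration access on the switch; exact prompts and syntax can vary by release.

```
enable
configure terminal
# Enable HA-CPU (hot standby); the standby CP restarts in HA mode
boot config flags ha-cpu
save config
# To return to warm standby (non-HA) mode:
no boot config flags ha-cpu
save config
```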
If you switch from one mode to the other, the standby CP restarts in the specified HA mode (hot standby) or non-HA mode (warm standby). This does not impact the Input/Output process and there is no traffic loss on the physical slot of the card.
If a failure occurs and the chassis is configured for either HA mode (hot standby) or non-HA mode (warm standby), the CP software restarts and runs as standby. The system generates a trap to indicate the change from hot-standby mode to warm-standby mode.
Note
The HA-CPU feature provides node-level redundancy. Hot standby mode is not supported with fabric functionality, which provides network-level redundancy.
If your switch is in hot standby mode (the ha-cpu boot flag is set to true), you must disable the boot flag before you can configure SPBM or vIST on the switch. If the switch is in warm standby mode (the ha-cpu boot flag is set to false), you must disable SPBM and vIST before you can move to hot standby mode. You cannot enable hot standby mode while SPBM or vIST is still configured.
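As a quick check before changing the flag, the following show commands can confirm the current boot flag state and whether vIST or SPBM is configured. This is a sketch; command availability and output format vary by release.

```
# Display the current boot config flags, including ha-cpu
show boot config flags
# Confirm whether vIST and SPBM are configured before enabling hot standby
show virtual-ist
show isis spbm
```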
In HA mode, also called hot standby, the platform synchronizes the primary CPU information to the standby CPU. Any configuration changes or application table changes made on the primary CPU are propagated to the standby CPU by using bulk synchronization or incremental synchronization. After synchronization is complete, both CPUs contain the same configuration and application table information. Applications in HA mode support either full HA implementation or partial HA implementation. In full HA implementation, both the configuration and runtime application data tables exist on the primary CPU and the standby CPU.
If the primary CPU fails, the standby CPU quickly takes over the primary role and you do not see an impact on your network. The IOC and SF modules, as well as the full HA applications, continue to operate, and the full HA applications run consistency checks to verify their tables.
| Feature | Supported |
|---|---|
| Layer 1 | |
| Port configuration parameters | Yes |
| Layer 2 | |
| Media Access Control security (MACsec) | Yes |
| Multiple Spanning Tree Protocol parameters | Yes |
| Quality of Service (QoS) parameters | Yes |
| Rapid Spanning Tree Protocol parameters | Yes |
| VLAN parameters | Yes |
| Layer 3 | |
| ARP entries | Yes |
| Border Gateway Protocol (BGP) | Partial (configuration only) |
| Dynamic Host Configuration Protocol (DHCP) Relay | Partial (configuration only) |
| Internet Group Management Protocol (IGMP) | Yes |
| IPv6 | Partial (configuration only) |
| Access Control Lists | Yes |
| Open Shortest Path First (OSPF) | Yes |
| Protocol Independent Multicast (PIM) | Partial (configuration only) |
| Prefix lists and route policies | Yes |
| Routing Information Protocol | Yes |
| Router Discovery | Yes |
| Static and default routes | Yes |
| Virtual IP (VLANs) | Yes |
| Virtual Router Redundancy Protocol | Yes |
| Transport Layer | |
| Network Load Balancing (NLB) | Yes |
| Remote Access Dial-In User Services (RADIUS) | Yes |
| Terminal Access Controller Access-Control System plus (TACACS+) | Partial (configuration only) |
| UDP forwarding | Yes |
A few applications in HA mode have a partial HA implementation, where the system synchronizes user configuration data (including interfaces, IPv6 addresses, and static routes) from the primary CPU to the standby CPU. However, for a partial HA implementation, the platform does not synchronize dynamic data learned by protocols. After a failure, those applications restart and rebuild their tables, which causes an interruption to traffic that depends on a protocol or application with partial HA support.
The following applications support Partial HA:
- Layer 3
  - Border Gateway Protocol (BGP)
  - Dynamic Host Configuration Protocol (DHCP) Relay
  - Factory defaults flag behavior
  - IPv6
  - MACsec Key Agreement
  - Open Shortest Path First Version 3 for Loopback interfaces
  - Protocol Independent Multicast-Sparse Mode (PIM-SM)
  - Protocol Independent Multicast-Source Specific Mode (PIM-SSM)
  - SHA512 secure password hashing
- Transport Layer
  - Terminal Access Controller Access-Control System plus (TACACS+)
In non-HA mode, also called warm standby, the platform does not synchronize the configuration between the primary CPU and the standby CPU. When a failover occurs, the standby CPU switches to the primary role, and all the IOCs (except the new primary CPU) restart. The new primary CPU loads the configuration when all the cards are ready. These operations cause an interruption to traffic on all ports on the chassis.
Note
When a switchover occurs in warm standby mode, only the RWA access-level user can log in to the new primary CPU console. Other users can log in only after the primary CP module reloads the configuration and displays the new login prompt.
When the platform switches from the standby CPU to the primary CPU in warm standby mode, the platform always uses the previously saved primary configuration file to boot the chassis.
The runtime config file must be present on the flash drive during the boot-up of both the primary CPU and the standby CPU. If the config file that the primary CPU uses for booting is not available on the standby CPU, the standby CPU loads the default config file. You can run the save config command to synchronize the configuration settings, or copy the boot config file from the primary CPU to the standby CPU. You must reboot the standby CPU to load the desired config file.
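The following is a minimal sketch of synchronizing the saved configuration from the primary CPU. The boot config flags savetostandby form is an assumption based on the flag name used later in this section, so verify the exact syntax for your release.

```
configure terminal
# With savetostandby set, saving the configuration also makes the saved
# file available to the standby CPU (command form assumed)
boot config flags savetostandby
save config
# Reboot the standby CPU afterward so it loads the synchronized config file
```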
When the primary CPU is physically removed in warm-standby mode, all cards reboot, and the standby CPU switches to the primary role and loads the saved configuration. If the old primary CPU is not physically present during this time, the configuration for its slot is not loaded to memory even though the configuration exists in the config file. When the old primary CPU is re-inserted later, the system treats this as a first-time insertion and loads the default configuration on the inserted CP card. This is expected behavior in warm-standby mode. To load the configuration for the re-inserted standby CPU, ensure that the savetostandby boot-flag is set to true after re-inserting the removed CPU, and run the CLI command source <config-file> on the active CPU.
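For example, after re-inserting the removed CP module, a recovery sequence on the active CPU might look like the following sketch. The file name /intflash/config.cfg is a hypothetical placeholder for your actual saved config file, and command output varies by release.

```
# Verify the savetostandby boot flag is set to true
show boot config flags
# Replay the saved configuration so the re-inserted CP card loads its
# slot configuration (replace the file name with your saved config file)
source /intflash/config.cfg
```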