How to install SPC or SPC II modules in an SRX5000 chassis cluster
This article explains how to install SPC or SPC II modules in an existing SRX5000 chassis cluster using the minimum downtime procedure.
If the chassis cluster is operating in active-active mode, you must transition it to active-passive mode before using this procedure. You transition the cluster to active-passive mode by making one node primary for all redundancy groups.
root@srx5K> show chassis cluster status
Mar 13 21:32:41
Monitor Failure codes:
    CS  Cold Sync monitoring         FL  Fabric Connection monitoring
    GR  GRES monitoring              HW  Hardware monitoring
    IF  Interface monitoring         IP  IP monitoring
    LB  Loopback monitoring          MB  Mbuf monitoring
    NH  Nexthop monitoring           NP  NPC monitoring
    SP  SPU monitoring               SM  Schedule monitoring
    CF  Config Sync monitoring       RE  Relinquish monitoring

Cluster ID: 1
Node   Priority Status     Preempt Manual Monitor-failures

Redundancy group: 0 , Failover count: 1
node0  100      primary    no      no     None
node1  1        secondary  no      no     None

Redundancy group: 1 , Failover count: 1
node0  100      primary    yes     no     None
node1  1        secondary  yes     no     None

Redundancy group: 2 , Failover count: 1
node0  100      secondary  yes     no     None
node1  1        primary    yes     no     None
>request chassis cluster failover redundancy-group 2 node 0
To install first-generation SRX5K-SPC-2-10-40 SPCs, both of the services gateways in the cluster must be running Junos OS Release 11.4R2S1, 12.1R2, or later.
To install next-generation SRX5K-SPC-4-15-320 SPCs, both of the services gateways in the cluster must be running Junos OS Release 12.1X44-D10, or later.
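Before proceeding, you can confirm the running release on both nodes from the primary; in a chassis cluster, show version reports both nodes. The hostname, model, and release string below are illustrative only:

```
{primary:node0}
root@srx5K> show version
node0:
--------------------------------------------------------------
Hostname: srx5K
Model: srx5800
Junos: 12.1X44-D10.4

node1:
--------------------------------------------------------------
Hostname: srx5K
Model: srx5800
Junos: 12.1X44-D10.4
```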
You must install SPCs of the same type, and in the same slots, in both services gateways in the cluster. After the upgrade, both services gateways must have identical physical configurations, with the SPCs in the same slot locations.
If you are adding first-generation SRX5K-SPC-2-10-40 SPCs to an existing cluster that already contains next-generation SRX5K-SPC-4-15-320 SPCs, you must place the new SPCs so that an SRX5K-SPC-4-15-320 SPC remains in the lowest-numbered SPC slot. For example, if the chassis already has two SRX5K-SPC-4-15-320 SPCs installed in slots 2 and 3, you cannot install the first-generation SPCs in slots 0 or 1. Make sure that an SRX5K-SPC-4-15-320 SPC occupies the slot providing central point (CP) functionality (in this case, slot 2), so that the CP functionality is always performed by an SRX5K-SPC-4-15-320 SPC.
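You can check which slot is providing CP functionality with show chassis fpc pic-status: the CP runs on the SPU reported as "SPU Cp", which resides in the lowest-numbered SPC slot. The output below is abbreviated and illustrative, showing a mixed chassis where the SPC II in slot 2 hosts the CP:

```
root@srx5K> show chassis fpc pic-status
node0:
--------------------------------------------------------------
Slot 2   Online       SRX5k SPC II
  PIC 0  Online       SPU Cp         <<< CP on the lowest-numbered SPC slot
  PIC 1  Online       SPU Flow
  PIC 2  Online       SPU Flow
  PIC 3  Online       SPU Flow
Slot 3   Online       SRX5k SPC
  PIC 0  Online       SPU Flow
  PIC 1  Online       SPU Flow
```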
If you are installing next-generation SRX5K-SPC-4-15-320 SPCs, both services gateways must already be equipped with high-capacity power supplies and fan trays.
Console connections to both chassis cluster nodes are required, because each node needs its own configuration adjustments and because each device is powered off with the 'halt' method during this procedure.
NOTES:
This procedure assumes that node0 is primary for both the control plane (RG0) and the data plane (RG1+) and is configured with a higher priority than the secondary node. Open two separate console CLI sessions, one to each node, before proceeding with the steps below. After each reboot in the procedure, allow approximately 15 minutes for the node to come up with all of its modules online.
The interface names below are examples only; substitute the names used in your current configuration.
set interfaces xe-13/0/0 disable
set interfaces xe-13/1/0 disable
set security flow tcp-session no-syn-check
set security flow tcp-session no-sequence-check
deactivate chassis cluster redundancy-group 1 preempt
deactivate chassis cluster redundancy-group 2 preempt
deactivate chassis cluster redundancy-group 1 interface-monitor
deactivate chassis cluster redundancy-group 1 ip-monitoring
deactivate chassis cluster redundancy-group 2 interface-monitor
deactivate chassis cluster redundancy-group 2 ip-monitoring
deactivate chassis cluster control-link-recovery
{primary:node0}[edit]
root@srx5K# commit
Point the control and fabric links at dummy, unused ports so that the two nodes can no longer communicate as a cluster; this allows each node to be upgraded independently:
delete chassis cluster control-port
set chassis cluster control-ports fpc 10 port 0 (dummy SPC port)
set chassis cluster control-ports fpc 22 port 0 (dummy SPC port)
delete interface fab0
delete interface fab1
set interfaces fab0 fabric-options member-interfaces xe-1/4/0 (dummy unused port)
set interfaces fab1 fabric-options member-interfaces xe-13/4/0 (dummy unused port)
{primary:node0}[edit]
root@srx5K# commit
node0:
configuration check succeeds
error: remote commit configuration failed on node1
error: commit failed
error: Connection to node1 has been broken
This error is expected, because node1 can no longer be reached over the control link. Do not discard the uncommitted changes when exiting configuration mode; for example:
{primary:node0}[edit]
root@srx5K# exit
The configuration has been changed but not committed
Discard uncommitted changes? [yes,no] (yes) no <<< SHOULD be "no"
Exit aborted
{primary:node0}[edit]
root@srx5K# commit and-quit
node0:
commit complete
Exiting configuration mode
{primary:node1}
>request system power-off
>show version
>show chassis fpc pic-status
>show chassis cluster status (node0 should show "lost" status)
Node 0:
set interfaces xe-1/0/0 disable
set interfaces xe-1/1/0 disable
delete interfaces xe-13/0/0 disable
delete interfaces xe-13/1/0 disable
commit check
Node 1:
set interfaces xe-1/0/0 disable
set interfaces xe-1/1/0 disable
delete interfaces xe-13/0/0 disable
delete interfaces xe-13/1/0 disable
commit check
{primary:node0}[edit]
root@srx5K# commit
{primary:node1}[edit]
root@srx5K# commit
>show security flow session summary
("Sessions-in-use" counter should be incrementing)

{primary:node0}
>request system power-off
delete chassis cluster control-port
set chassis cluster control-ports fpc 0 port 0
set chassis cluster control-ports fpc 12 port 0
delete interface fab0
delete interface fab1
set interfaces fab0 fabric-options member-interfaces xe-1/3/0
set interfaces fab1 fabric-options member-interfaces xe-13/3/0
commit
{primary:node0}
>request system halt
Halt the system ? [yes,no] (no) yes
delete chassis cluster control-port
set chassis cluster control-ports fpc 0 port 0
set chassis cluster control-ports fpc 12 port 0
delete interface fab0
delete interface fab1
set interfaces fab0 fabric-options member-interfaces xe-1/3/0
set interfaces fab1 fabric-options member-interfaces xe-13/3/0
commit
>show chassis cluster status
>show chassis cluster interfaces
>show chassis cluster information detail
>show chassis fpc pic-status
delete interfaces xe-1/0/0 disable
delete interfaces xe-1/1/0 disable
commit
delete security flow tcp-session no-syn-check
delete security flow tcp-session no-sequence-check
activate chassis cluster redundancy-group 1 interface-monitor
activate chassis cluster redundancy-group 1 ip-monitoring
activate chassis cluster redundancy-group 2 interface-monitor
activate chassis cluster redundancy-group 2 ip-monitoring
>show chassis cluster status
>show chassis cluster ip-monitoring status
>show chassis cluster interfaces
activate chassis cluster redundancy-group 1 preempt
activate chassis cluster redundancy-group 2 preempt
>request chassis cluster failover redundancy-group 0 node 0
>request chassis cluster failover redundancy-group 1 node 0
>request chassis cluster failover reset redundancy-group 0
>request chassis cluster failover reset redundancy-group 1