
SRX Getting Started - Configure Chassis Cluster (High Availability) on a SRX100 device



Summary:

This article describes the basic setup of a Chassis Cluster (High Availability), also known as JSRP, on an SRX100 device.

For other topics, go to the SRX Getting Started main page.


Symptoms:

Configure SRX100 devices as a Chassis Cluster.

The following topology will be used for the configuration.

Topology notes:  

  • Both reth interfaces (reth0.0 and reth1.0) belong to Redundancy Group 1, the data plane. 
  • Redundancy Group 0 is the control plane.
  • fe-0/0/5 was selected for the fabric (data) link in this example.

For the Deployment Guide for SRX Series Services Gateways in Chassis Cluster Configuration, refer to TN260.

For other SRX devices, refer to:

  • SRX210: KB15505
  • SRX220: KB21312
  • SRX240: KB15504
  • SRX550: KB25889
  • SRX650: KB15503
  • SRX1400: TN10
  • SRX3000 series: TN10
  • SRX5000 series: TN10

Cause:

Solution:
This section contains the following:

  • Prerequisites
  • Configuration
  • Technical Documentation
  • Verification
  • Troubleshooting

Prerequisites

Before proceeding with configuring the device for a Chassis Cluster, complete these prerequisites:


a. In the SRX configuration, remove any existing configuration associated with the interfaces that will be transformed into fxp0 (out-of-band management) and fxp1 (control link) when the chassis cluster feature is enabled.

For the SRX100, these interfaces are fe-0/0/6 and fe-0/0/7. The fe-0/0/6 interface will be mapped to fxp0 (out-of-band management), and the fe-0/0/7 interface will be mapped to fxp1 (control). The interfaces that are mapped to fxp0 and fxp1 are device-specific. For more information, refer to KB15356 - How are interfaces assigned on J-Series and SRX platforms when the chassis cluster is enabled?

For help on removing the existing configuration on these interfaces, refer to KB27713 - How to remove references to the interfaces that will be used as fxp0 and fxp1.
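
For example, assuming fe-0/0/6 and fe-0/0/7 carry only simple interface configuration (removing other references, such as security zone or switching bindings, is covered in KB27713), a minimal cleanup in configuration mode might look like this:

root# delete interfaces fe-0/0/6
root# delete interfaces fe-0/0/7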

Important note:
If you do not perform this prerequisite, then your chassis cluster may not come up; it may go into a Hold/Lost state on both nodes.



b. Confirm that the HARDWARE on both devices is the same.

Verify using this command on both devices:
root> show chassis hardware detail
Hardware inventory:
Item             Version  Part number  Serial number     Description
Chassis                                AU4810AF1095      SRX100H
Routing Engine   REV 16   750-021773   AT4810AF1095      RE-SRX100H
  da0     999 MB  ST72682                                Nand Flash
  usb0 (addr 1)  DWC OTG root hub 0    vendor 0x0000     uhub0
  usb0 (addr 2)  product 0x005a 90     vendor 0x0409     uhub1
  usb0 (addr 3)  ST72682  High Speed Mode 64218 STMicroelectronics umass0
FPC 0                                                    FPC
  PIC 0                                                  8x FE Base PIC
Power Supply 0

For more information, refer to KB16141 - What are the minimum hardware and software requirements for a Chassis Cluster on SRX?



c. Confirm that the SOFTWARE on both standalone devices is the same Junos OS version.

Verify using this command on both devices:
root> show version
Model: srx100h
JUNOS Software Release [11.4R7.5]

d. Confirm that the LICENSE keys are the same on both devices.  

There is not a separate license for chassis cluster; however, both firewalls must have identical features and license keys enabled or installed. Note that license keys are not required to configure your chassis cluster, but they are required once your chassis cluster is in production and you need to use the licensed features on either device.

Verify the license keys by using this command on both devices:
root> show system license


e.  If running Junos 10.4 or earlier, Ethernet switching is not supported in chassis cluster mode, so any Ethernet switching configuration must be removed before enabling clustering; a minimal sketch follows.
For more information, refer to Disabling Switching on SRX100, SRX210, SRX220, and SRX240 Devices Before Enabling Chassis Clustering.
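
As an illustration only (the interface and VLAN names below assume the factory-default switching configuration; the document referenced above is the authoritative procedure), disabling switching typically involves configuration-mode commands such as:

root# delete interfaces fe-0/0/0 unit 0 family ethernet-switching
root# delete interfaces vlan
root# delete vlans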




Configuration

The following are the basic steps required for configuring a Chassis Cluster on SRX100 devices.    


Step 1.  Physically connect the two devices together to form the control and fabric (data) links. 

Control link: 
On the SRX100 device, connect fe-0/0/7 on device A to fe-0/0/7 on device B.  The fe-0/0/7 interface on device B will change to fe-1/0/7 after clustering is enabled in Step 2.
Note: It is strongly recommended that the interfaces used for the control link are connected directly with a cable (instead of a switch). If a switch must be used, then refer to KB25017.


Fabric (Data) link: 

On the SRX100 device, connect fe-0/0/5 on device A to fe-0/0/5 on device B. The fe-0/0/5 interface on device B will change to fe-1/0/5 after clustering is enabled in Step 2.  
Note:
The Fabric (Data) link can be any available open port, either onboard or on a gPIM, other than fe-0/0/6 and fe-0/0/7. 


It is helpful to know that, after Step 2, the following interface assignments will occur:
  • fe-0/0/6 will become fxp0 and will be used for individual management of each device.
  • fe-0/0/7 will become fxp1 and will be used as the control link between the two devices. (This is also documented in KB15356.)
  • The other interfaces are also renamed on the secondary device. For example, on an SRX100 device, the fe-0/0/0 interface is renamed to fe-1/0/0 on secondary node 1. Refer to the complete mapping for each SRX Series device: Node Interfaces on Active SRX Series Chassis Clusters.



Step 2.  Enable cluster mode and reboot the devices. Note that this is done with an operational mode command, not a configuration mode command.

     > set chassis cluster cluster-id <0-15> node <0-1> reboot
For example:
On device A:    > set chassis cluster cluster-id 1 node 0 reboot
On device B:    > set chassis cluster cluster-id 1 node 1 reboot
  • The cluster ID will be the same on both devices, but the node ID must be different, as one device is node 0 and the other device is node 1.
  • This command must be run on both devices.
  • The range for the cluster-id is 0-15. Setting it to 0 is the equivalent of disabling cluster mode, so only cluster IDs 1-15 are available for a working cluster, and a virtual MAC address can be derived only for those 15 cluster IDs. For more information, refer to KB13689 - How is the virtual MAC address derived for reth interfaces on J-Series and SRX?
After the reboot, note how the fe-0/0/6 and fe-0/0/7 interfaces are re-purposed to fxp0 and fxp1 respectively.
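
As a quick check (a suggestion, not part of the original procedure), after both nodes are back up, you can confirm from operational mode that cluster mode took effect and that the fxp interfaces now exist:

{primary:node0}
root> show chassis cluster status
root> show interfaces terse | match fxp

Both node0 and node1 should be listed in the cluster status output.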



NOTE: The following steps 3 - 8 can all be performed on the primary device (Device A), and they will be automatically copied over to the secondary device (Device B) when a commit is done.



Step 3.  Configure the device-specific configuration, such as host names and management IP addresses.
This is the only part of the configuration that is unique to each node. It is done by entering the following commands (all on the primary node):

    On device A:
    {primary:node0}
    # set groups node0 system host-name <name-node0>      -Device A's host name
    # set groups node0 interfaces fxp0 unit 0 family inet address <ip address/mask>  -Device A's management IP address on fxp0 interface

    # set groups node1 system host-name <name-node1>      -Device B's host name
    # set groups node1 interfaces fxp0 unit 0 family inet address <ip address/mask>  -Device B's management IP address on fxp0 interface


    The 'set apply-groups' command is also required so that the node-specific configuration set by the above group commands is applied only to the matching node, as shown below.
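
    The command, entered once on the primary node, is:

    On device A:
    {primary:node0}
    # set apply-groups "${node}"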


Step 4.  Configure the FAB links (the data plane links used for RTO and session synchronization).
For this example, we will use physical port fe-0/0/5 on each node.

    On device A:
    {primary:node0}
    -fab0 is node0 (Device A) interface for the data link
    # set interfaces fab0 fabric-options member-interfaces fe-0/0/5

    -fab1 is node1 (Device B) interface for the data link    

    # set interfaces fab1 fabric-options member-interfaces fe-1/0/5    

    Note: There are no configuration commands for the Control link connection. Only the SRX5600 and SRX5800 platforms require configuration commands for the Control link (SPC port).



Step 5.  Configure Redundancy Group 0 for the Routing Engine failover properties. Also configure Redundancy Group 1 (all of the interfaces will be in one Redundancy Group in this example) to define the failover properties for the reth interfaces. The node with the higher configured priority in a redundancy group is preferred as the primary node for that group.

Note:  If you want to use multiple Redundancy Groups for the interfaces, refer to the Security Configuration Guide.

    {primary:node0}
    # set chassis cluster redundancy-group 0 node 0 priority 100
    # set chassis cluster redundancy-group 0 node 1 priority 1
    # set chassis cluster redundancy-group 1 node 0 priority 100
    # set chassis cluster redundancy-group 1 node 1 priority 1


Step 6.  Configure interface monitoring. Monitoring the health of the interfaces is one way to trigger Redundancy Group failover: when the total weight of failed monitored interfaces in a redundancy group reaches 255, the group fails over to the other node. Note: Interface monitoring is not recommended for redundancy-group 0.

    On device A:
    {primary:node0}
    # set chassis cluster redundancy-group 1 interface-monitor fe-0/0/0 weight 255
    # set chassis cluster redundancy-group 1 interface-monitor fe-0/0/1 weight 255
    # set chassis cluster redundancy-group 1 interface-monitor fe-1/0/0 weight 255
    # set chassis cluster redundancy-group 1 interface-monitor fe-1/0/1 weight 255

Step 7.  Configure the redundant Ethernet interfaces (reth interfaces) and assign each reth interface to a security zone. Make sure that you set the maximum number of redundant interfaces (reth-count) as follows:

    On device A:
    {primary:node0}
    # set chassis cluster reth-count <max-number>

    -for first interface in the group (on Device A)
    # set interfaces <node0-interface-name> fastether-options redundant-parent reth0  

    -for second interface in the group (on Device B) 
    # set interfaces <node1-interface-name> fastether-options redundant-parent reth0  

    -set up redundancy group for interfaces 

    # set interfaces reth0 redundant-ether-options redundancy-group <group-number>       

    # set interfaces reth0.0 family inet address <ip address/mask>
    # set security zones security-zone <zone> interfaces reth0.0

For example:

    On device A:
    {primary:node0} 
    # set chassis cluster reth-count 2

    -for first interface in the group (on Device A)
    # set interfaces fe-0/0/1 fastether-options redundant-parent reth1    

    -for second interface in the group (on Device B)
    # set interfaces fe-1/0/1 fastether-options redundant-parent reth1    

    -set up redundancy group for interfaces
    # set interfaces reth1 redundant-ether-options redundancy-group 1      
    # set interfaces reth1 unit 0 family inet address 192.168.1.1/24

    -for first interface in the group (on Device A)
    # set interfaces fe-0/0/0 fastether-options redundant-parent reth0    

    -for second interface in the group (on Device B)
    # set interfaces fe-1/0/0 fastether-options redundant-parent reth0    

    -set up redundancy group for interfaces
    # set interfaces reth0 redundant-ether-options redundancy-group 1
        
    # set interfaces reth0 unit 0 family inet address 10.10.10.200/24
    # set security zones security-zone untrust interfaces reth0.0
    # set security zones security-zone trust interfaces reth1.0



Step 8.  Commit the configuration. The changes will be copied over to the secondary node, Device B.
    On device A:
    {primary:node0}
    # commit

This completes the basic chassis cluster configuration on both devices.

TIP: If you want to manage this cluster via NSM, refer to KB20795.





Technical Documentation

Chassis Cluster for Security Devices (Junos 12.1X45-D10 and Junos 11.4)

  • PDF - See Chapter 48, Chassis Cluster (page 1319)
  • HTML




Verification

You can check the cluster status with the following commands.
show chassis cluster status
show chassis cluster interfaces
show chassis cluster statistics
show chassis cluster control-plane statistics
show chassis cluster data-plane statistics
show chassis cluster status redundancy-group 1
Refer to the Junos Security Configuration Guide for an explanation of these commands:
  • HTML - Verifying the Chassis Cluster Configuration
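
For reference, with the example configuration in this article (cluster-id 1, node 0 priority 100, node 1 priority 1), the status output should look broadly like the following (illustrative; exact formatting varies by Junos release):

{primary:node0}
root> show chassis cluster status
Cluster ID: 1
Node                  Priority          Status    Preempt  Manual failover

Redundancy group: 0 , Failover count: 1
    node0                   100         primary        no       no
    node1                   1           secondary      no       no

Redundancy group: 1 , Failover count: 1
    node0                   100         primary        no       no
    node1                   1           secondary      no       no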




Troubleshooting


