
[M/MX] Advertising multiple prefixes in LDP egress policy causes label churn with link protection enabled


Article ID: KB28196    Last Updated: 29 Oct 2013    Version: 1.0
Summary:

This article describes an issue in which label churn is observed when the Label Distribution Protocol (LDP) advertises multiple prefixes through an egress policy and link protection is enabled.

Symptoms:

Following is the topology used in this article, based on logical systems:

R-1-------R-2
 \        /
  \      /
   \    /
    \  /
     R-3

In the above topology:

  • OSPF is the IGP.

  • OSPF link protection is enabled on the link between R-1 and R-3.

  • R-3 is advertising two prefixes, 3.3.3.3/32 and 4.4.4.4/32, via an LDP egress policy.

lab@R-1# run show interfaces terse
Interface      Admin Link Proto Local          Remote
ge-1/1/1
ge-1/1/1.1     up    up   inet  10.1.1.1/24
                          mpls
ge-1/1/2
ge-1/1/2.3     up    up   inet  30.1.1.2/24
                          mpls
lo0
lo0.1          up    up   inet  1.1.1.1        --> 0/0

lab@R-2# run show interfaces terse
Interface      Admin Link Proto Local          Remote
ge-1/1/2
ge-1/1/2.1     up    up   inet  10.1.1.2/24
                          mpls
ge-1/1/2.2     up    up   inet  20.1.1.2/24
                          mpls
lo0
lo0.2          up    up   inet  2.2.2.2        --> 0/0

lab@R-3# run show interfaces terse
Interface      Admin Link Proto Local          Remote
ge-1/1/1
ge-1/1/1.2     up    up   inet  20.1.1.1/24
                          mpls
ge-1/1/1.3     up    up   inet  30.1.1.1/24
                          mpls
lo0
lo0.3          up    up   inet  3.3.3.3        --> 0/0
                                4.4.4.4        --> 0/0


lab@R-3# show interfaces lo0
unit 3 {
    family inet {
        address 3.3.3.3/32;
        address 4.4.4.4/32;
    }
}

[edit]
lab@R-3# show protocols ldp
egress-policy ldp-egress;

[edit]
lab@R-3# show policy-options
policy-statement ldp-egress {
    term 1 {
        from {
            protocol direct;
            route-filter 3.3.3.3/32 exact;
            route-filter 4.4.4.4/32 exact;
        }
        then accept;
    }
}

[edit]
lab@R-3#
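
For reference, the same egress policy and its attachment to LDP could be built with set commands. This is a sketch matching the configuration shown above; in this logical-systems topology the statements would be entered under R-3's [edit] hierarchy:

set policy-options policy-statement ldp-egress term 1 from protocol direct
set policy-options policy-statement ldp-egress term 1 from route-filter 3.3.3.3/32 exact
set policy-options policy-statement ldp-egress term 1 from route-filter 4.4.4.4/32 exact
set policy-options policy-statement ldp-egress term 1 then accept
set protocols ldp egress-policy ldp-egress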

lab@R-1# show protocols ospf
area 0.0.0.0 {
    interface all;
    interface ge-1/1/2.3 {
        link-protection;
    }
}

lab@R-2# show protocols ospf
area 0.0.0.0 {
    interface all;
}

lab@R-3# show protocols ospf
area 0.0.0.0 {
    interface all;
}

The inet.3 routing tables on R-1 and R-2 look as follows:

lab@R-1# run show route table inet.3

inet.3: 3 destinations, 3 routes (3 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

2.2.2.2/32         *[LDP/9] 00:11:02, metric 1
                    > to 10.1.1.2 via ge-1/1/1.1
3.3.3.3/32         *[LDP/9] 00:01:41, metric 1
                    > to 30.1.1.1 via ge-1/1/2.3
                      to 10.1.1.2 via ge-1/1/1.1, Push 494161
4.4.4.4/32         *[LDP/9] 00:01:41, metric 1
                    > to 30.1.1.1 via ge-1/1/2.3
                      to 10.1.1.2 via ge-1/1/1.1, Push 494161

Since R-1's link towards R-3 is protected, R-1 has two next hops in the routing table to reach 3.3.3.3/32 and 4.4.4.4/32, whereas R-2, which does not yet have link protection enabled, has a single next hop for each prefix:

lab@R-2# run show route table inet.3

inet.3: 3 destinations, 3 routes (3 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

1.1.1.1/32         *[LDP/9] 00:09:31, metric 1
                    > to 10.1.1.1 via ge-1/1/2.1
3.3.3.3/32         *[LDP/9] 00:00:10, metric 1
                    > to 20.1.1.1 via ge-1/1/2.2
4.4.4.4/32         *[LDP/9] 00:00:10, metric 1
                    > to 20.1.1.1 via ge-1/1/2.2

When link protection is then enabled on R-2's link towards R-3, label churn is seen on both R-1 and R-2:

lab@R-2# show | compare rollback 1
[edit logical-systems R2 protocols ospf area 0.0.0.0]
      interface all { ... }
+     interface ge-1/1/2.2 {
+         link-protection;
+     }

lab@R-2# show protocols ospf
area 0.0.0.0 {
    interface all;
    interface ge-1/1/2.2 {
        link-protection;
    }
}

[edit]
lab@R-2#


lab@R-2# run show route table inet.3

inet.3: 3 destinations, 3 routes (3 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

1.1.1.1/32         *[LDP/9] 00:25:32, metric 1
                    > to 10.1.1.1 via ge-1/1/2.1
3.3.3.3/32         *[LDP/9] 00:00:00, metric 1
                    > to 20.1.1.1 via ge-1/1/2.2
                      to 10.1.1.1 via ge-1/1/2.1, Push 754962
4.4.4.4/32         *[LDP/9] 00:00:00, metric 1
                    > to 20.1.1.1 via ge-1/1/2.2
                      to 10.1.1.1 via ge-1/1/2.1, Push 754962

[edit]
lab@R-2# run show route table inet.3

inet.3: 3 destinations, 3 routes (3 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

1.1.1.1/32         *[LDP/9] 00:25:34, metric 1
                    > to 10.1.1.1 via ge-1/1/2.1
3.3.3.3/32         *[LDP/9] 00:00:00, metric 1
                    > to 20.1.1.1 via ge-1/1/2.2
                      to 10.1.1.1 via ge-1/1/2.1, Push 760146
4.4.4.4/32         *[LDP/9] 00:00:00, metric 1
                    > to 20.1.1.1 via ge-1/1/2.2
                      to 10.1.1.1 via ge-1/1/2.1, Push 760146

[edit]
lab@R-2# run show route table inet.3

inet.3: 3 destinations, 5 routes (3 active, 2 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

1.1.1.1/32         *[LDP/9] 00:25:36, metric 1
                    > to 10.1.1.1 via ge-1/1/2.1
3.3.3.3/32         +[LDP/9] 00:00:00, metric 1
                    > to 20.1.1.1 via ge-1/1/2.2
                      to 10.1.1.1 via ge-1/1/2.1, Push 764850
                   -[LDP/9] 00:00:00, metric 1
                    > to 20.1.1.1 via ge-1/1/2.2
                      to 10.1.1.1 via ge-1/1/2.1, Push 764818
4.4.4.4/32         +[LDP/9] 00:00:00, metric 1
                    > to 20.1.1.1 via ge-1/1/2.2
                      to 10.1.1.1 via ge-1/1/2.1, Push 764850
                   -[LDP/9] 00:00:00, metric 1
                    > to 20.1.1.1 via ge-1/1/2.2
                      to 10.1.1.1 via ge-1/1/2.1, Push 764818

[edit]
lab@R-2#

lab@R-1# run show route table inet.3

inet.3: 3 destinations, 3 routes (3 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

2.2.2.2/32         *[LDP/9] 00:25:46, metric 1
                    > to 10.1.1.2 via ge-1/1/1.1
3.3.3.3/32         *[LDP/9] 00:00:00, metric 1
                    > to 30.1.1.1 via ge-1/1/2.3
                      to 10.1.1.2 via ge-1/1/1.1, Push 632321
4.4.4.4/32         *[LDP/9] 00:00:00, metric 1
                    > to 30.1.1.1 via ge-1/1/2.3
                      to 10.1.1.2 via ge-1/1/1.1, Push 632321

[edit]
lab@R-1# run show route table inet.3

inet.3: 3 destinations, 3 routes (3 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

2.2.2.2/32         *[LDP/9] 00:25:48, metric 1
                    > to 10.1.1.2 via ge-1/1/1.1
3.3.3.3/32         *[LDP/9] 00:00:00, metric 1
                    > to 30.1.1.1 via ge-1/1/2.3
                      to 10.1.1.2 via ge-1/1/1.1, Push 636737
4.4.4.4/32         *[LDP/9] 00:00:00, metric 1
                    > to 30.1.1.1 via ge-1/1/2.3
                      to 10.1.1.2 via ge-1/1/1.1, Push 636737

[edit]
lab@R-1# run show route table inet.3

inet.3: 3 destinations, 5 routes (3 active, 2 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

2.2.2.2/32         *[LDP/9] 00:25:50, metric 1
                    > to 10.1.1.2 via ge-1/1/1.1
3.3.3.3/32         +[LDP/9] 00:00:00, metric 1
                    > to 30.1.1.1 via ge-1/1/2.3
                      to 10.1.1.2 via ge-1/1/1.1, Push 640929
                   -[LDP/9] 00:00:00, metric 1
                    > to 30.1.1.1 via ge-1/1/2.3
                      to 10.1.1.2 via ge-1/1/1.1, Push 640897
4.4.4.4/32         +[LDP/9] 00:00:00, metric 1
                    > to 30.1.1.1 via ge-1/1/2.3
                      to 10.1.1.2 via ge-1/1/1.1, Push 640929
                   -[LDP/9] 00:00:00, metric 1
                    > to 30.1.1.1 via ge-1/1/2.3
                      to 10.1.1.2 via ge-1/1/1.1, Push 640897

[edit]
lab@R-1#

Note that the label mappings change very frequently, driving up the rpd CPU utilization:

lab@sivas-DUT-M10i-6# run show system processes extensive | match rpd
1972 root 1 110 0 36552K 12120K RUN 3:00 26.81% rpd
1971 root 1 109 0 36552K 12060K RUN 2:50 25.93% rpd
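
To watch the churn as it happens, LDP traceoptions can be enabled temporarily on one of the routers. The following is a minimal sketch (the trace file name ldp-churn.log and its size limits are arbitrary choices); the binding flag traces label-binding operations, so each re-allocation shows up in the trace file:

[edit]
lab@R-2# set protocols ldp traceoptions file ldp-churn.log size 5m files 2
lab@R-2# set protocols ldp traceoptions flag binding
lab@R-2# commit

The trace can then be read with run show log ldp-churn.log; remove the traceoptions once the investigation is complete.
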
Cause:

The label churn occurs because the enabling of link protection on R-2 is treated as a label forwarding state change. On R-2, a new label is allocated for the prefixes 3.3.3.3/32 and 4.4.4.4/32 one at a time; that is, the new label for 3.3.3.3/32 is advertised first, and the same label is then allocated to 4.4.4.4/32 and advertised next. R-2 cannot allocate the same label for 3.3.3.3/32 as before, because that label is already in use for 4.4.4.4/32, so it allocates a new one.

When the peer receives this advertisement, it perceives it as a forwarding change, allocates a new label for these prefixes one by one, and advertises it back.

This process continues indefinitely, causing the label churn.
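
The shared label can also be checked directly in the LDP database. As a verification sketch (the exact output depends on the platform and session), the output label database should list the same label for both prefixes before the fix, with the label value increasing on successive runs:

[edit]
lab@R-2# run show ldp database | match "3.3.3.3|4.4.4.4"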

Solution:

This problem can be resolved with the deaggregate knob at the [edit protocols ldp] hierarchy level, which causes LDP to allocate a separate label for each prefix instead of one shared label. The same label can then be advertised again after a label forwarding state change, so the routers receiving the label mapping message simply update their label database with the same label, and the churn stops.
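
In set-command form the change is a single statement per router; a sketch for R-2 is shown below, and the same command applies on R-1 and R-3 under their respective [edit] hierarchies:

[edit]
lab@R-2# set protocols ldp deaggregate
lab@R-2# commit

The resulting configurations on the three routers: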

lab@R-1# show protocols ldp
deaggregate;
interface all;

lab@R-2# show protocols ldp
deaggregate;
interface all;


lab@R-3# show protocols ldp
egress-policy ldp-egress;
deaggregate;
interface all;

After this configuration change on all three routers, the label mappings are stable and each prefix carries its own label:

lab@R-1# run show route table inet.3

inet.3: 3 destinations, 3 routes (3 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

2.2.2.2/32         *[LDP/9] 00:01:58, metric 1
                    > to 10.1.1.2 via ge-1/1/1.1
3.3.3.3/32         *[LDP/9] 00:01:52, metric 1
                    > to 30.1.1.1 via ge-1/1/2.3
                      to 10.1.1.2 via ge-1/1/1.1, Push 554506
4.4.4.4/32         *[LDP/9] 00:01:52, metric 1
                    > to 30.1.1.1 via ge-1/1/2.3
                      to 10.1.1.2 via ge-1/1/1.1, Push 554490

[edit]
lab@R-1# run show route table inet.3

inet.3: 3 destinations, 3 routes (3 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

2.2.2.2/32         *[LDP/9] 00:02:00, metric 1
                    > to 10.1.1.2 via ge-1/1/1.1
3.3.3.3/32         *[LDP/9] 00:01:54, metric 1
                    > to 30.1.1.1 via ge-1/1/2.3
                      to 10.1.1.2 via ge-1/1/1.1, Push 554506
4.4.4.4/32         *[LDP/9] 00:01:54, metric 1
                    > to 30.1.1.1 via ge-1/1/2.3
                      to 10.1.1.2 via ge-1/1/1.1, Push 554490


lab@R-2# run show route table inet.3

inet.3: 3 destinations, 3 routes (3 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

1.1.1.1/32         *[LDP/9] 00:02:10, metric 1
                    > to 10.1.1.1 via ge-1/1/2.1
3.3.3.3/32         *[LDP/9] 00:02:04, metric 1
                    > to 20.1.1.1 via ge-1/1/2.2
                      to 10.1.1.1 via ge-1/1/2.1, Push 712731
4.4.4.4/32         *[LDP/9] 00:02:04, metric 1
                    > to 20.1.1.1 via ge-1/1/2.2
                      to 10.1.1.1 via ge-1/1/2.1, Push 712715

[edit]
lab@R-2# run show route table inet.3

inet.3: 3 destinations, 3 routes (3 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

1.1.1.1/32         *[LDP/9] 00:02:12, metric 1
                    > to 10.1.1.1 via ge-1/1/2.1
3.3.3.3/32         *[LDP/9] 00:02:06, metric 1
                    > to 20.1.1.1 via ge-1/1/2.2
                      to 10.1.1.1 via ge-1/1/2.1, Push 712731
4.4.4.4/32         *[LDP/9] 00:02:06, metric 1
                    > to 20.1.1.1 via ge-1/1/2.2
                      to 10.1.1.1 via ge-1/1/2.1, Push 712715

[edit]
lab@R-2#

This issue is tracked as PR504104, and a fix is available starting with the 10.4 release. This article describes the workaround to apply when running one of the affected releases.
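
To check whether a router is running an affected release (earlier than 10.4), the Junos version can be confirmed from the CLI before applying the workaround:

[edit]
lab@R-1# run show version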
