The timestamp reported for packet arrival in NetFlow records may be inaccurate due to incorrect synchronization with Network Time Protocol (NTP).
This article describes the cause of the inaccurate timestamp and the steps to resolve the issue.
Users may observe the following symptoms:

- Abnormal flow aggregation data caused by the incorrect timestamp.
- A packet capture (tcpdump) that shows a difference between the packet arrival time and the NetFlow-reported time. You can see the difference between the export Timestamp and the flow StartTime/EndTime values as shown below:

TCP Example
Cisco NetFlow/IPFIX
Version: 10
Length: 1360
Timestamp: Apr 13, 2019 23:24:55.000000000 PDT
ExportTime: 1555223095
FlowSequence: 1038743472
Observation Domain Id: 0
Set 1 [id=256] (8 flows)
FlowSet Id: (Data) (256)
FlowSet Length: 676
[Template Frame: 424 (received after this frame)]
Flow 1
SrcAddr: 190.217.109.66
DstAddr: 157.240.6.19
IP ToS: 0x00
Protocol: TCP (6)
SrcPort: 42640 (42640)
DstPort: 443 (443)
ICMP Type: 0x0000
InputInt: 859
SrcAS: 3549
DstAS: 65000
BGPNextHop: 157.240.40.105
OutputInt: 856
Octets: 52
Packets: 1
[Duration: 0.000000000 seconds (switched)]
StartTime: 2553672.470000000 seconds
EndTime: 2553672.470000000 seconds
NextHop: 157.240.40.105
SrcMask: 22
DstMask: 32
TCP Flags: 0x10, ACK
IPVersion: 4
[Duration: 0.000000000 seconds (milliseconds)]
StartTime: Apr 13, 2019 21:52:07.216000000 PDT
EndTime: Apr 13, 2019 21:52:07.216000000 PDT
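The skew is visible directly in the capture above: the IPFIX header's ExportTime (1555223095, displayed as Apr 13, 2019 23:24:55 PDT) is roughly an hour and a half later than the flow's reported EndTime. A minimal Python sketch of that comparison follows; the timestamp values are copied from the capture, and the fixed UTC-7 offset for PDT is an assumption:

```python
from datetime import datetime, timedelta, timezone

PDT = timezone(timedelta(hours=-7))  # assumption: fixed UTC-7 offset for PDT

# Values copied from the capture above
export_time = datetime.fromtimestamp(1555223095, tz=PDT)          # ExportTime header field
flow_end = datetime(2019, 4, 13, 21, 52, 7, 216000, tzinfo=PDT)   # EndTime (milliseconds)

skew = export_time - flow_end
print(skew)  # about 1 hour 32 minutes for this capture
```

A flow exported immediately after it ends should show a skew near zero; a large, stable offset like this one points at the stale time reference described below rather than at normal flow aging.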
Epoch time may change due to an NTP reset or any other change to the router's time. Epoch time will also change if it is intentionally modified via configuration or a date change is made via the Command Line Interface (CLI). When this occurs, the previously known time is used in the NetFlow record export, resulting in inaccurate packet arrival times in NetFlow records.
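The effect can be modeled as follows: flow start and end times are tracked as router uptime (the "StartTime: 2553672.470000000 seconds (switched)" field in the capture) and converted to absolute time using a cached wall-clock reference; if NTP later steps the clock, that cached reference is stale until refreshed, and exported absolute times lag by the size of the step. This is a simplified illustrative model, not the actual Junos code path; the boot-epoch value below is back-calculated from the capture:

```python
# Simplified model (assumption, not the actual Junos implementation):
# absolute flow time = cached boot epoch + flow uptime.
cached_boot_epoch = 1552663854.746   # wall clock minus uptime, cached before the NTP step
flow_start_uptime = 2553672.470      # StartTime in uptime seconds, from the capture

reported = cached_boot_epoch + flow_start_uptime   # what the export contains
print(reported)  # 1555217527.216 -> Apr 13, 2019 21:52:07.216 PDT, the stale time above

# If NTP then steps the clock forward, the true arrival time becomes
# (cached_boot_epoch + step) + flow_start_uptime; until the cached reference
# is refreshed, exports keep using the stale value and lag by the step size.
```

Restarting the flow process (the workaround below) forces the reference to be rebuilt from the corrected clock.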
As documented in PR1431498, the issue has been resolved in Junos OS releases 18.2R3, 18.3R3, 17.4R3, 18.1R3-S6, and 19.1R2.
In the meantime, the following workaround is available:
Note: Restarting multi-svcs will have no other impact. However, NetFlow record export stops until the process restarts. It is recommended to plan for a maintenance window before executing the commands below.
- Log in to the shell as the root user.
- Connect to the vhclient.
- Find the process ID (PID) of the multi-svcs process.
- Restart the multi-svcs process (kill -9 PID of multi-svcs.elf) to correct the problem.

After the multi-svcs process restarts, the issue is resolved.
Example
labroot@router> start shell user root
Password:
root@router:/var/home/labroot # vhclient -s
Last login: Fri May 24 12:38:38 PDT 2019 from springbank on pts/5
root@router-node:~#
root@router-node:~# ps -aux | grep multi
Warning: bad ps syntax, perhaps a bogus '-'? See http://procps.sf.net/faq.html
root 8034 13.4 0.4 950636 144116 ? Sl Jun04 2535:50 /usr/sbin/spbfpc-multi-svcs.elf -c 0x1
root 8235 0.0 0.0 4412 504 pts/0 S+ 14:21 0:00 grep multi
root@router-node:~# kill -9 8034
root@router-node:~# ps -aux | grep multi
Warning: bad ps syntax, perhaps a bogus '-'? See http://procps.sf.net/faq.html
root 25699 0.0 0.0 4412 500 pts/3 S+ 14:28 0:00 grep multi
root@router-node:~# ps -aux | grep multi
Warning: bad ps syntax, perhaps a bogus '-'? See http://procps.sf.net/faq.html
root 25993 20.5 0.3 606400 97544 ? Sl 14:28 0:00 /usr/sbin/spbfpc-multi-svcs.elf -c 0x1
root 26081 0.0 0.0 4412 500 pts/3 S+ 14:28 0:00 grep multi
root@router-node:~#