[CSO] Admin portal WebUI login not working

Article ID: KB35860 KB Last Updated: 05 Jun 2020Version: 1.0
Summary:

The Contrail Service Orchestration (CSO) admin portal is used for all administration processes and requires login credentials to be submitted on the WebUI for access. In some cases, the admin portal login fails for every user: valid credentials are rejected as incorrect.

This article describes how to troubleshoot and resolve the issue.

 

Symptoms:

Admin portal login does not work: credentials for all users are reported as incorrect even when valid credentials are entered.

 

Cause:

To troubleshoot this problem in CSO, first check the iamsvc microservice on the CSO Kubernetes (K8s) microservices node. This microservice, along with keystone, is responsible for user authentication.

  1. Check the pod status of the iamsvc microservice by logging in to the K8-microservice node.

root@k8-microservices1:~# kubectl get pods -n central -o wide

NAME                                           READY  STATUS   RESTARTS  AGE  IP            NODE               NOMINATED NODE  READINESS GATES
csp.csp-iamsvc-iamsvc-noauth-6955dd6b4f-6p29j  2/2    Running  0         56d  10.244.68.56  k8-microservices1  <none>          <none>

The above output confirms that the iamsvc microservice itself is healthy.

  2. Check whether any pods are crashing by using kubectl get pods -n central | grep crash.

In this case, the output (not shown here) confirmed that pods were crashing continuously.
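The filter used in this step can be sketched as follows. The sample output is hypothetical (the pod names other than iamsvc are invented for illustration); in practice, pipe the live kubectl get pods -n central output into grep:

```shell
# Hypothetical sample of `kubectl get pods -n central` output; the second
# pod name is an invented stand-in for whatever is crashing in your setup.
pods='csp.csp-iamsvc-iamsvc-noauth-6955dd6b4f-6p29j   2/2   Running            0     56d
csp.example-svc-7d9f8c6b5d-x2k4q                      1/2   CrashLoopBackOff   412   56d'

# Equivalent of: kubectl get pods -n central | grep crash
# (case-insensitive so CrashLoopBackOff is matched)
crashing=$(printf '%s\n' "$pods" | grep -i crash)
echo "$crashing"
```

Any line that survives the filter names a pod stuck in a crash loop; those are the pods to inspect in the next step.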

  3. Check the logs of the pod that is crashing continuously by using kubectl logs -f <pod name> -n central -c <container name>.

In this case, the logs indicated that the connection with rabbitmq was failing.
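Scanning the container logs for rabbitmq errors can be sketched as below. The log excerpt is hypothetical and for illustration only; actual CSO microservice log messages will differ:

```shell
# Hypothetical log excerpt; real messages vary by microservice and release.
logs='2020-06-05 10:00:01 INFO  starting service
2020-06-05 10:00:05 ERROR connection to rabbitmq failed: connection refused'

# Equivalent of: kubectl logs <pod name> -n central -c <container name> | grep -i rabbitmq
rmq_errors=$(printf '%s\n' "$logs" | grep -i 'rabbitmq')
echo "$rmq_errors"
```

A "connection refused" (or similar) error against rabbitmq points the investigation at the message bus rather than at the microservice itself.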

  4. Run a component health check to see whether any services are unhealthy:

  • For 5.1.1, run the components_health.sh script from the startup server location /root/Contrail_Service_Orchestration_5.1.1.

  • For 4.x, run the components_health.sh script from the installer server.

<Below is a snip from the script output>

INFO     ************************************************************************
INFO     HEALTH CHECK FOR INFRASTRUCTURE COMPONENTS STARTED IN CENTRAL ENVIRONMENT 
INFO     ************************************************************************

INFO     Health Check for Infrastructure Component Rabbitmq Started
INFO     The Infrastructure Component Rabbitmq is unhealthy 

The above output indicates that Rabbitmq is unhealthy.
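The health-check output above can be parsed mechanically to pull out the name of any unhealthy component, for example:

```shell
# Snippet of components_health.sh output taken from this article.
health='INFO     Health Check for Infrastructure Component Rabbitmq Started
INFO     The Infrastructure Component Rabbitmq is unhealthy'

# The component name sits two fields before "unhealthy" on the failure line.
unhealthy=$(printf '%s\n' "$health" | awk '/is unhealthy/ {print $(NF-2)}')
echo "$unhealthy"
```

This kind of filter is convenient when the full script output is long and several components are reported in sequence.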

 

Solution:

To resolve this issue, perform the following:

  1. Log in to the rabbitmq cluster and check the cluster status:

rabbitmqctl cluster_status
Cluster status of node 'rabbit@ip-192-168-1-100'...
[{nodes,[{disc,['rabbit@ip-192-168-1-100','rabbit@ip-192-168-1-101',
                'rabbit@ip-192-168-1-102']}]},
 {running_nodes,['rabbit@ip-192-168-1-101','rabbit@ip-192-168-1-102',
                 'rabbit@ip-192-168-1-100']},
 {cluster_name,<<"rabbit@ip-192-168-1-101.abc.internal">>},
 {partitions,[]},
 {alarms,[{'rabbit@ip-192-168-1-101',[]},                 <<No alarms
          {'rabbit@ip-192-168-1-102',[{badrpc, nodedown}]},  <<<<< Node is down
          {'rabbit@ip-192-168-1-100',[]}]}]               <<No alarms

The above log shows that one node is down.
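The down node can be extracted from the alarms stanza of the cluster_status output, for example:

```shell
# Alarms stanza from the `rabbitmqctl cluster_status` output above.
alarms="{alarms,[{'rabbit@ip-192-168-1-101',[]},
         {'rabbit@ip-192-168-1-102',[{badrpc, nodedown}]},
         {'rabbit@ip-192-168-1-100',[]}]}]"

# Keep only lines reporting nodedown, then isolate the node name.
down_node=$(printf '%s\n' "$alarms" | grep nodedown | grep -o "rabbit@[^']*")
echo "$down_node"
```

The node name printed here is the one to log in to for the log collection and restart in the following steps.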

  2. Collect the rabbitmq logs (/var/log/rabbitmq.log) for analysis and identify errors. Also check rabbitmqstats.log.

  3. Restart rabbitmq by using service rabbitmq-server stop and then service rabbitmq-server start.
  4. Verify the rabbitmq status by using service rabbitmq-server status.

When rabbitmq comes back up, the CSO login should work. 

<Log Snip> from Central 

rabbitmqctl cluster_status

Cluster status of node 'rabbit@ip-192-168-1-100'...
[{nodes,[{disc,['rabbit@ip-192-168-1-100','rabbit@ip-192-168-1-101',
                'rabbit@ip-192-168-1-102']}]},
 {running_nodes,['rabbit@ip-192-168-1-101','rabbit@ip-192-168-1-102',
                 'rabbit@ip-192-168-1-100']},
 {cluster_name,<<"rabbit@ip-192-168-1-101.abc.internal">>},
 {partitions,[]},
 {alarms,[{'rabbit@ip-192-168-1-101',[]},   <<No alarms
          {'rabbit@ip-192-168-1-102',[]},   <<No alarms
          {'rabbit@ip-192-168-1-100',[]}]}] <<No alarms

<Log Snip> from Regional

rabbitmqctl cluster_status

Cluster status of node 'rabbit@ip-192-168-2-100'...
[{nodes,[{disc,['rabbit@ip-192-168-2-100','rabbit@ip-192-168-2-101',
                'rabbit@ip-192-168-2-102']}]},
 {running_nodes,['rabbit@ip-192-168-2-101','rabbit@ip-192-168-2-102',
                 'rabbit@ip-192-168-2-100']},
 {cluster_name,<<"rabbit@ip-192-168-2-101.abc.internal">>},
 {partitions,[]},
 {alarms,[{'rabbit@ip-192-168-2-101',[]},   <<No alarms
          {'rabbit@ip-192-168-2-102',[]},   <<No alarms
          {'rabbit@ip-192-168-2-100',[]}]}] <<No alarms

Note: For log analysis and any queries on recovery, contact Support.

 
