[Contrail] Understanding namespace and isolation concepts

  [KB35121]


Summary:

This article discusses how Kubernetes namespace (NS) works in Contrail environments and how Contrail extends the feature.

Solution:

One analogy we use when introducing the 'namespace' concept is the OpenStack 'project', or 'tenant'. That is exactly how Contrail looks at it. Whenever a new 'namespace' object is created, 'contrail-kube-manager' (KM) gets a notification of the object-creation event and creates the corresponding 'project' in Contrail.

To differentiate between multiple Kubernetes clusters in Contrail, the Kubernetes cluster name is prepended to the Kubernetes NS name to form the project name. The default Kubernetes cluster name is 'k8s'. So if you create a Kubernetes NS called 'ns-user-1', a project called 'k8s-ns-user-1' will be created in Contrail.
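The naming scheme above can be sketched in shell (the cluster and namespace names here are just example values):

```shell
# Construct the Contrail project name from the cluster name and the NS name.
CLUSTER_NAME="k8s"            # default Kubernetes cluster name
NS_NAME="ns-user-1"           # example namespace
PROJECT_NAME="${CLUSTER_NAME}-${NS_NAME}"
echo "${PROJECT_NAME}"        # prints: k8s-ns-user-1
```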

Non-Isolated NS

The basic Kubernetes networking requirement is a "flat"/"NAT-less" network: any pod can talk to any pod in any namespace, and any CNI provider should ensure that. Consequently, in Kubernetes all namespaces are non-isolated by default.

Note: The terms "isolated" and "non-isolated" are used in the context of (Contrail) networking only.
 

k8s-default-pod-network and k8s-default-service-network

To provide networking for all non-isolated namespaces, there must be a 'common' VRF (virtual routing and forwarding table), or RI (routing instance). In a Contrail Kubernetes environment, two 'default' VNs are pre-configured in the k8s 'default' NS, one for pods and one for services. Correspondingly, there are two VRFs, each with the same name as its corresponding VN. The names of the two VNs/VRFs follow this format:

<k8s-cluster-name>-<namespace name>-[pod|service]-network

So for the 'default' NS with the default cluster name 'k8s', the two VN/VRF names become:

  • 'k8s-default-pod-network': pod VN/VRF, with the default subnet 10.32.0.0/12
  • 'k8s-default-service-network': service VN/VRF, with the default subnet 10.96.0.0/12

Note: The default subnet for pod or service is configurable.
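Expanding the naming format for the 'default' NS can be sketched in shell (the cluster name is assumed to be the default 'k8s'):

```shell
# Expand <k8s-cluster-name>-<namespace name>-[pod|service]-network
CLUSTER="k8s"
NS="default"
for KIND in pod service; do
  echo "${CLUSTER}-${NS}-${KIND}-network"
done
# prints:
#   k8s-default-pod-network
#   k8s-default-service-network
```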

It is important to know that these two default VNs are 'shared' among all 'non-isolated' namespaces: they are implicitly available to any new non-isolated NS that you create. That is why pods from all non-isolated NS, including the default NS, can talk to each other. On the other hand, any VN that you create yourself is isolated from other VNs, regardless of whether they belong to the same or different NS. Communication between pods in two different VNs requires a Contrail network policy. For isolated NS, however, the scenario is different.
 

Isolated NS 

In contrast, an 'isolated' namespace has its own default pod-network and service-network; two new VRFs are also created for each isolated namespace. The same flat subnets '10.32.0.0/12' and '10.96.0.0/12' are shared by the pod and service networks in the isolated namespaces. However, since these networks sit in different VRFs, an isolated NS is by default isolated from other NS. Pods launched in an isolated NS can only talk to services and pods in the same namespace. Additional configuration, e.g. a network policy, is required to enable a pod to reach networks outside of its current namespace.

To illustrate this concept, here is an example:

  • Suppose you have three namespaces: the 'default' NS and two user NS, 'ns-non-isolated' and 'ns-isolated'
  • In each NS, you create one user VN: 'vn-left-1'
  • You will end up with the following VNs/VRFs in Contrail:

NS default

  • default-domain:k8s-default:k8s-default-pod-network
  • default-domain:k8s-default:k8s-default-service-network
  • default-domain:k8s-default:k8s-vn-left-1-pod-network

NS ns-non-isolated

  • default-domain:k8s-ns-non-isolated:k8s-vn-left-1-pod-network

NS ns-isolated

  • default-domain:k8s-ns-isolated:k8s-ns-isolated-pod-network
  • default-domain:k8s-ns-isolated:k8s-ns-isolated-service-network
  • default-domain:k8s-ns-isolated:k8s-vn-left-1-pod-network

Note: The above names are given in FQDN format. In Contrail, the domain is the top-level object, followed by the project/tenant, and then by the virtual networks.
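The FQDN structure can be split into its three components; a small shell sketch using one of the names above:

```shell
# Split a Contrail FQDN of the form <domain>:<project>:<virtual-network>
FQDN="default-domain:k8s-ns-isolated:k8s-vn-left-1-pod-network"
DOMAIN=${FQDN%%:*}           # everything before the first ':'
VN=${FQDN##*:}               # everything after the last ':'
REST=${FQDN#*:}              # strip the domain
PROJECT=${REST%%:*}          # project is the middle component
echo "domain : ${DOMAIN}"    # default-domain
echo "project: ${PROJECT}"   # k8s-ns-isolated
echo "VN     : ${VN}"        # k8s-vn-left-1-pod-network
```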

Here is a YAML file to create an isolated namespace:

----
$ cat ns-isolated.yaml
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    "opencontrail.org/isolation" : "true"
  name: ns-isolated
----
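For comparison, a standard non-isolated namespace simply omits the annotation (a minimal sketch; 'ns-non-isolated' is just an example name):

----
$ cat ns-non-isolated.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ns-non-isolated
----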

To create the NS:
----
$ kubectl create -f ns-isolated.yaml

$ kubectl get ns
NAME          STATUS    AGE
contrail      Active    8d
default       Active    8d
ns-isolated   Active    1d  #<--
kube-public   Active    8d
kube-system   Active    8d
----

The 'annotations' entry under 'metadata' is an additional item compared to a standard (non-isolated) Kubernetes namespace. The value 'true' indicates that this is an isolated NS:

  annotations:
    "opencontrail.org/isolation" : "true"

This part of the definition is Juniper's extension. 'contrail-kube-manager' (KM) reads the namespace 'metadata' from 'kube-apiserver', parses the information defined in the 'annotations' object, and sees that the 'isolation' flag is set to 'true'. It then creates the tenant with its own routing instances (one for pod and one for service) instead of using the default NS routing instances for the isolated namespace. Fundamentally, that is how the 'isolation' is implemented.
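The branching KM performs can be sketched roughly in shell; the variables below are hypothetical stand-ins for what KM derives from the namespace metadata, and the echo lines only represent the resulting network names, not actual KM output:

```shell
# Hypothetical sketch of KM's decision based on the isolation annotation.
CLUSTER="k8s"
NS="ns-isolated"
ISOLATION="true"   # value of the 'opencontrail.org/isolation' annotation

if [ "${ISOLATION}" = "true" ]; then
  # Isolated NS: create per-namespace pod and service networks/VRFs.
  echo "create VN ${CLUSTER}-${NS}-pod-network"
  echo "create VN ${CLUSTER}-${NS}-service-network"
else
  # Non-isolated NS: reuse the shared default networks.
  echo "use VN ${CLUSTER}-default-pod-network"
  echo "use VN ${CLUSTER}-default-service-network"
fi
```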
