Kubernetes Networking in 5 Minutes
Kubernetes Cluster: Kubernetes coordinates a highly available cluster of computers that are connected to work as a single unit. The abstractions in Kubernetes allow you to deploy containerized applications to a cluster without tying them specifically to individual machines.
Kubernetes automates the distribution and scheduling of application containers across a cluster far more efficiently than placing them by hand.
A Kubernetes cluster consists of two types of resources:
· The Master (control plane) coordinates the cluster.
· Nodes are the workers that run applications.
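You can check both once the cluster is up; for example, listing the nodes (exact output columns vary by version):
kubectl get nodes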
Kubernetes Namespaces:
Think of a namespace as a virtual cluster inside your Kubernetes cluster. You can have multiple namespaces inside a single Kubernetes cluster, and they are all logically isolated from each other. A namespace provides an additional qualification to a resource name, which is helpful when multiple teams use the same cluster and there is a potential for name collisions. It acts as a virtual wall between multiple clusters.
Inside the same namespace you can discover other applications by service name. The isolation namespaces provide allows you to reuse the same service name in different namespaces, resolving to the applications running in those namespaces. This lets you create different “environments” in the same cluster if you wish: for development, test, acceptance, and production you would create four separate namespaces.
kubectl get namespace
All objects such as pods, services, volumes, etc. are part of a namespace. If you do not specify a namespace when creating or viewing objects, they are created in the “default” namespace. When you want to interact with objects in a namespace other than “default”, you must pass the -n flag to kubectl:
kubectl get pods -n kube-system
kubectl get pods --all-namespaces
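If you work in one namespace most of the time, you can also change the default namespace for your current kubectl context instead of passing -n on every command:
kubectl config set-context --current --namespace=kube-system
kubectl get pods # now lists pods in kube-system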
Kubernetes starts with three namespaces:
default: The default namespace for objects with no other namespace.
kube-system: The namespace for objects created by the Kubernetes system.
kube-public: This namespace is created automatically and is readable by all users (including those not authenticated). It is mostly reserved for cluster usage, for resources that should be visible and readable publicly throughout the whole cluster.
How to create a custom namespace:
kubectl create namespace test
kubectl delete ns test
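The same thing can be done declaratively with a manifest, which is handy when namespaces are kept in version control; a minimal sketch:
vi namespace.yml
apiVersion: v1
kind: Namespace
metadata:
  name: test
kubectl apply -f namespace.yml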
vi deployment-namespace.yml
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: prod
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.9.1
        ports:
        - containerPort: 80
Create the prod namespace first, since the deployment references it, then apply the manifest:
kubectl create namespace prod
kubectl apply -f deployment-namespace.yml
kubectl get deployment --all-namespaces
Similarly, you can assign namespaces to your deployments, services, pods, replica sets, replication controllers, and so on.
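Service discovery reflects this scoping: inside the cluster a service resolves as <service>.<namespace>.svc.cluster.local (cluster.local being the default cluster domain), so the same service name can exist in several namespaces without clashing. For example, assuming a service named nginx exposes the deployment above in prod (a hypothetical service name):
curl http://nginx.prod.svc.cluster.local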
In Kubernetes networking we have:
· Container-to-container communication: communication between two or more containers inside the same pod (see the sketch after this list).
· Pod-to-pod communication: communication between two different pods, which may run different images and replica counts.
· Pod-to-service communication: how a service enables a pod to communicate with any other pod behind that service.
· External-to-service communication: how traffic from outside the cluster reaches services inside it, typically through the ingress network.
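Containers in the same pod share a single network namespace, and therefore a single IP address, so they reach each other over localhost. A minimal sketch (the pod and container names are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  containers:
  - name: web
    image: nginx:1.9.1
    ports:
    - containerPort: 80
  - name: sidecar
    image: busybox
    # the sidecar reaches nginx over the shared loopback interface
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:80; sleep 5; done"]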
How Does Kubernetes Networking Compare to Docker Networking?
Kubernetes manages networking through CNI plugins that run on top of Docker and simply attach network devices to the containers Docker creates. Docker (and Docker Swarm) has its own networking capabilities, such as overlay and bridge networks; CNI plugins provide similar types of functions.
There are two types of pod communication: inter-node communication and intra-node communication.
Intra-node Pod Network
Intra-node pod network is basically the communication between two different pods on the same node.
Assume a packet is going from pod1 to pod2.
The packet leaves pod 1’s network namespace at eth0 and enters the root network namespace at veth0.
Then the packet passes onto the Linux bridge (cbr0), which discovers the destination using an ARP request.
If veth1 answers with that IP, the bridge knows where to forward the packet.
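You can inspect this layout on a node; a quick sketch (the bridge name cbr0 is plugin-specific, and interface names vary by CNI):
kubectl get pods -o wide # pod IPs and the node each pod is scheduled on
ip addr show # run on the node: lists the veth devices in the root namespace
ip route # shows the routes pointing pod IPs at the bridge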
Inter-node pod network
Consider two nodes, each with its own network namespaces, network interfaces, and a Linux bridge.
Now assume a packet travels from pod1 to pod4, which is on a different node.
- The packet leaves pod 1’s network and enters the root network namespace at veth0.
- Then the packet passes on to the Linux bridge (cbr0), whose responsibility is to make an ARP request to find the destination.
- The ARP request fails because no device on this node has the destination address, so the packet is sent out through the node’s main network interface eth0.
- The packet now leaves node 1 to find its destination on the other node; the route table routes it to the node whose CIDR block contains pod4’s IP.
- The packet reaches node 2, where the bridge picks it up and makes an ARP request, finding that the destination IP belongs to veth0.
- Finally, the packet crosses the veth pair and reaches pod4.
So, that’s how pods communicate with each other.
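The per-node CIDR block that the route table relies on is recorded on each Node object; you can print the mapping like this (note that some CNI plugins assign pod CIDRs out of band, in which case the field may be empty):
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'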
Services
· A service in Kubernetes is the entry point for traffic into your application. It can be used to access an application only internally in the Kubernetes cluster or to expose the application externally.
· Basically, a service is a resource that configures a proxy to forward requests to a set of pods; which pods receive the traffic is determined by the selector. Once the service is created, it is assigned an IP address that accepts requests on the service’s port.
· There are various service types that give you options for exposing a service outside of your cluster IP.
Types of Services
There are mainly 4 types of services.
ClusterIP: This is the default service type which exposes the service on a cluster-internal IP by making the service only reachable within the cluster.
NodePort: This exposes the service on each node’s IP at a static port. A ClusterIP service, to which the NodePort service routes, is automatically created, and you can reach the NodePort service from outside the cluster at <NodeIP>:<NodePort>.
LoadBalancer: This is the service type which exposes the service externally using a cloud provider’s load balancer. So, the NodePort and ClusterIP services, to which the external load balancer will route, are automatically created.
ExternalName: This service type maps the service to the contents of the externalName field by returning a CNAME record with its value.
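As an illustration of the last type, a minimal ExternalName sketch (the service name and target host are hypothetical):
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: db.example.com
Looking up external-db inside the cluster simply returns a CNAME for db.example.com; no proxying or port mapping takes place.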
How do external clients connect to these services, then?
That’s by none other than the ingress network.
Ingress Network
The ingress network is the most powerful way of exposing services: it is a collection of rules that allow inbound connections and can be configured to make services externally reachable through URLs. It basically acts as an entry point to the Kubernetes cluster and manages external access to the services in the cluster.
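A minimal Ingress resource could look like the sketch below; it assumes an ingress controller (such as ingress-nginx) is already installed in the cluster, and the hostname and backend service are hypothetical:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp-ingress
spec:
  rules:
  - host: webapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: webapp-sql
            port:
              number: 80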
To see the node-to-node path this relies on, take two nodes, each having the pod and root network namespaces with a Linux bridge. In addition, a new virtual Ethernet device called flannel0 (created by the Flannel network plugin) is added to the root network namespace.
Now, we want the packet to flow from pod1 to pod4.
· The packet leaves pod1’s network at eth0 and enters the root network namespace at veth0.
· Then it is passed on to cbr0, which makes an ARP request to find the destination and discovers that nobody on this node has the destination IP address.
· So, the bridge sends the packet to flannel0, since the node’s route table is configured to hand unknown pod traffic to flannel0.
· The flannel daemon talks to the Kubernetes API server to learn all the pod IPs and their respective nodes, building a mapping from pod IPs to node IPs.
· The network plugin wraps this packet in a UDP packet with extra headers, changing the source and destination IPs to the respective node IPs, and sends it out via eth0.
· Since the route table already knows how to route traffic between nodes, the packet is delivered to the destination node2.
· The packet arrives at eth0 of node2 and is handed to flannel0, which decapsulates it and emits it back into the root network namespace.
· Again, the packet is forwarded to the Linux bridge, which makes an ARP request to find the IP that belongs to veth1.
· The packet finally crosses the root network namespace and reaches the destination pod4.
That’s how packets cross nodes on the overlay; external traffic entering through the ingress network reaches your services over this same path.
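If Flannel is the plugin in use, you can see the overlay subnet it leased to a node (the file path and variable names assume Flannel’s defaults):
cat /run/flannel/subnet.env # e.g. FLANNEL_NETWORK=10.244.0.0/16, FLANNEL_SUBNET=10.244.1.1/24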
Network Plugins:
A container orchestration system is responsible for managing the network through which containers and services communicate. Kubernetes uses the Container Network Interface (CNI), a specification and set of libraries, as the interface between the cluster and various network providers. There are a number of network providers that can be used with Kubernetes.
· Weave Net: Weave Net is a powerful cloud native networking toolkit. It creates a virtual network that connects Docker containers across multiple hosts and enables their automatic discovery. A Weave Net pod should be running on each Node and any further pods you create will be automatically attached to the Weave network.
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
· Calico: Calico provides simple, scalable and secure virtual networking, which allows it to seamlessly integrate your Kubernetes cluster with existing data center infrastructure without the need for overlays.
· Flannel: Flannel is a very simple overlay network that satisfies the Kubernetes requirements.
· AWS VPC CNI for Kubernetes: Using this CNI plugin allows Kubernetes pods to have the same IP address inside the pod as they do on the VPC network. The CNI allocates AWS Elastic Network Interfaces (ENIs) to each Kubernetes node and uses the secondary IP range from each ENI for pods on the node. This CNI plugin offers high throughput and availability, low latency, and minimal network jitter. Additionally, users can apply existing AWS VPC networking and security best practices for building Kubernetes clusters, including the ability to use VPC flow logs, VPC routing policies, and security groups for network traffic isolation.
kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/release-1.5/config/v1.5/aws-k8s-cni.yaml
Lab 1:
Scenario: How can we assign a service to running deployment?
Step 1: Create a folder and change the working directory to it.
mkdir service-assignment
cd service-assignment
Step 2: Now create the deployment YAML file for the web application.
vi webapp.yml
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: webapp1
  labels:
    app: webapp-sql
    tier: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp-sql
      tier: frontend
  template:
    metadata:
      labels:
        app: webapp-sql
        tier: frontend
    spec:
      containers:
      - name: webapp1
        image: nginx:1.7.9
        ports:
        - containerPort: 80
Step 3: Once you have created the deployment file, deploy the application.
kubectl apply -f webapp.yml
kubectl get deployment
Step 4: Now create a service (NodePort) for the application.
vi webservice.yml
apiVersion: v1
kind: Service
metadata:
  name: webapp-sql
spec:
  type: NodePort
  selector:
    app: webapp-sql
    tier: frontend
  ports:
  - port: 80
kubectl apply -f webservice.yml
kubectl get service
Step 5: Now, check the running pods.
kubectl get pods
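To confirm the service really routes to the pod, look up the node port that was assigned (NodePort services pick one from the 30000–32767 range by default) and hit it from outside the cluster; the IP and port below are placeholders:
kubectl get service webapp-sql # e.g. PORT(S) shows 80:31234/TCP
curl http://<node-ip>:31234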
Lab 2: Load Balancer
vi deployment-for-load-balancer.yml
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx-deployment
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx-deployment
    spec:
      containers:
      - name: nginx-deployment
        image: nginx:1.9.1
        ports:
        - containerPort: 80
vi service-loadbalancer.yml
apiVersion: v1
kind: Service
metadata:
  name: nginx-deployment
spec:
  type: LoadBalancer
  selector:
    app: nginx-deployment
  ports:
  - port: 80
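Apply both manifests, then wait for the cloud provider to provision the load balancer; the EXTERNAL-IP column shows <pending> until it is ready:
kubectl apply -f deployment-for-load-balancer.yml
kubectl apply -f service-loadbalancer.yml
kubectl get service nginx-deployment --watch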
Inside the network you can reach the application at <VM IP>:80. Externally, the service port and IP are mapped by an AWS ELB, so you access the application through the load balancer’s DNS name.