Labels, Selectors, Replication, and ReplicaSets in Kubernetes
All about Labels, Selectors, Replication Controllers, and ReplicaSets in Kubernetes
Labels

Labels are key/value pairs attached to Kubernetes objects such as pods; they are used to organize objects and to select subsets of them.
# pod5.yml
kind: Pod
apiVersion: v1
metadata:
  name: delhipod
  labels:
    env: development
    class: pods
spec:
  containers:
    - name: c00
      image: ubuntu
      command: ["/bin/bash", "-c", "while true; do echo Hello DevOps; sleep 5; done"]
kubectl apply -f pod5.yml
kubectl get pods --show-labels
kubectl label pods delhipod myname=tushar   # add a label to a running pod
kubectl get pods --show-labels
kubectl get pods -l env=development         # pods whose env label equals development
kubectl get pods -l env!=development        # pods whose env label is anything else
kubectl delete pod -l env!=development      # delete every pod matching the selector
kubectl get pods
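A label added imperatively can also be removed imperatively by appending a minus sign to the key. A quick sketch using the myname label added above:

kubectl label pods delhipod myname-   # remove the myname label from the pod
kubectl get pods --show-labels        # confirm the label is gone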
Label-selector

Equality-based selectors use the = and != operators and match labels such as:

name = tushar
class = nodes
project = development

Set-based selectors use the in, notin, and exists operators, for example:

env in (production, dev)
env notin (team1, team2)

kubectl get pods -l 'env in (development, testing)'
kubectl get pods -l 'env notin (development, testing)'
kubectl get pods -l class=pods,myname=tushar   # equality selectors can be combined with commas
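The exists form does not appear in the commands above; as a sketch, it selects on the presence or absence of a label key regardless of its value:

kubectl get pods -l env      # pods that have the env label, whatever its value
kubectl get pods -l '!env'   # pods that do not have the env label at all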
Node-Selector

A nodeSelector constrains a pod so that it is scheduled only on nodes that carry a matching label.
kind: Pod
apiVersion: v1
metadata:
  name: nodelabels
  labels:
    env: development
spec:
  containers:
    - name: c00
      image: ubuntu
      command: ["/bin/bash", "-c", "while true; do echo Hello-Bhupinder; sleep 5; done"]
  nodeSelector:
    hardware: t2-medium
kubectl get nodes
# pick one of your nodes and add the label to it
kubectl label nodes <node-name> hardware=t2-medium
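A quick way to verify that the label landed and that the pod was scheduled onto that node (a sketch; run it after applying the pod manifest above):

kubectl get nodes --show-labels      # confirm the hardware=t2-medium label on the node
kubectl get pod nodelabels -o wide   # the NODE column shows where the pod was scheduled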
Scaling and Replication

- Replication:
  - If we set replicas: 2 in the config file, the controller creates two pods of the same kind.
  - If one pod fails, the second pod keeps serving while the failed one is replaced.
- Reliability:
  - By running multiple instances of an application, you prevent outages if one or more of them fail.
- Load Balancing:
  - Running multiple instances of a container lets you spread traffic across them, preventing overload of a single instance or node.
- Scaling:
  - When the load becomes too much for the existing instances, Kubernetes lets you scale the application up, adding instances as needed (see the commands after this list).
- Rolling updates:
  - Updates to a service are rolled out by replacing pods one by one.
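As a quick illustration of the scaling point, once the ReplicaSet defined later in this article is running, the replica count can be changed imperatively (a sketch; rs/nginx assumes that manifest has already been applied):

kubectl scale rs/nginx --replicas=5   # scale up to 5 pods
kubectl scale rs/nginx --replicas=2   # scale back down to 2
kubectl get pods                      # watch pods being created and terminated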
Replication Controller

A ReplicationController makes sure the specified number of pod replicas is running at all times, replacing pods that fail or are deleted.
apiVersion: v1
kind: ReplicationController   # create an object of kind ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3                 # desired number of pods
  selector:                   # tells the controller which pods to watch / belong to this RC
    app: nginx                # must match the labels in the pod template
  template:                   # template used to launch new pods
    metadata:
      name: nginx
      labels:                 # these labels must match the selector above
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
        - name: c00           # container names must be lowercase
          image: ubuntu
          command: ["/bin/bash", "-c", "while true; do echo Hi; sleep 5; done"]   # loop keeps the container running
Replica Set

A ReplicaSet is the next-generation ReplicationController; the main difference is that it also supports set-based selectors (matchExpressions) in addition to equality-based ones.
# myrs.yml
apiVersion: apps/v1
kind: ReplicaSet              # create an object of kind ReplicaSet
metadata:
  name: nginx
spec:
  replicas: 3                 # desired number of pods
  selector:                   # tells the controller which pods belong to this ReplicaSet
    matchLabels:
      app: nginx              # must match the labels in the pod template
    matchExpressions:         # set-based selectors, only supported by ReplicaSets
      - {key: myname, operator: In, values: [Tushar, Tush, Tusha]}
      - {key: env, operator: NotIn, values: [production]}
  template:                   # template used to launch new pods
    metadata:
      name: nginx
      labels:                 # these labels must satisfy the selector above
        app: nginx
        myname: Tush
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
        - name: c00           # container names must be lowercase
          image: ubuntu
          command: ["/bin/bash", "-c", "while true; do echo Hi; sleep 5; done"]   # loop keeps the container running
kubectl apply -f myrs.yml
kubectl get rs
kubectl scale --replicas=1 rs/nginx   # scale down; the ReplicaSet is named nginx in the manifest
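To see the ReplicaSet maintain its desired state, delete one of its pods and watch a replacement appear (a sketch; the pod name is whatever kubectl get pods shows):

kubectl get pods                # note one of the pod names
kubectl delete pod <pod-name>   # delete it manually
kubectl get pods                # the ReplicaSet immediately creates a replacement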