I recently got into the Cloud Native Foundations Scholarship Program by Udacity, and with that, my day-to-day interaction with Kubernetes increased. The exposure is good: I am learning Kubernetes, patching problems as they come up, and sharing some of the tricks I use to solve them here.
One of those problems was deleting all pods in a ReplicaSet. I went through many Stack Overflow questions with one-line finalizer patches that promised to do the job instantly, but it was all hit and trial, with twenty commands to test before finding the right one. Besides, pasting random one-liners you don’t understand into your terminal is not recommended.
Also, even though I am a massive fan of googling things, I get tired of searching the same queries ten times.
Delete all pods in a ReplicaSet (My Approach)
The purpose of a ReplicaSet is to ensure that a specified number of Pods is always running. As a result, you can scale the number of pods (declaratively or imperatively) to anything your cluster can handle.
That number can also be zero: you can scale a ReplicaSet down to zero pods, and that’s exactly what I did with the scale subcommand.
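Before scaling anything, it helps to look at the ReplicaSet’s current state. As a quick sketch, assuming a ReplicaSet named new-replica-set (the same placeholder used in the commands below), you can check its desired and current pod counts with:
kubectl get rs new-replica-set
The DESIRED and CURRENT columns in the output show how many pods the ReplicaSet wants and how many it is actually running.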
The following imperative command helped me remove all the pods in a ReplicaSet without deleting the ReplicaSet itself.
kubectl scale rs/new-replica-set --replicas=0
Replace new-replica-set with the name of the ReplicaSet whose pods you want to delete. You can bring pods back up through the same ReplicaSet either imperatively, by rerunning the command above with --replicas set to your desired number of pods, or declaratively, by editing the replica count with
kubectl edit replicaset new-replica-set
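For example, here is a sketch of scaling the same ReplicaSet back up imperatively; a count of three is just an illustrative value, not anything prescribed above:
kubectl scale rs/new-replica-set --replicas=3
You can then watch the ReplicaSet recreate its pods with kubectl get pods -w.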
Delete ReplicaSet + Pods
If you want to delete the ReplicaSet as well as its pods together, the standard and straightforward approach is the delete subcommand, again with new-replica-set replaced by the name of the ReplicaSet you want to delete.
kubectl delete rs new-replica-set
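To confirm that everything is gone, you can list the remaining ReplicaSets and pods; assuming the default namespace, the deleted ReplicaSet and its pods should no longer appear (pods may briefly show a Terminating status):
kubectl get rs
kubectl get pods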
Thanks for reading this post! Feel free to share it with your friends and colleagues if it was of any help! Consider subscribing to our newsletter if you want tutorials and articles delivered to your inbox weekly.