Kubernetes Command Guide
Posted on Sat 24 May 2025 in Technology
Kubernetes Resource Kit
What is Kubernetes?
Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications.
Learn Kubernetes Basics
Creating and Managing a Service
Creating the Deployment Manifest for Nginx
Here is an example of a deployment manifest for Nginx:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
Apply the deployment:
kubectl apply -f nginx-deployment.yaml
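To confirm the rollout succeeded, you can check the Deployment and the Pods it created; the label selector app=nginx comes from the manifest above:

```shell
# Check the Deployment's readiness (should report 2/2 replicas available)
kubectl get deployment nginx-deployment

# List only the Pods managed by this Deployment, via its label selector
kubectl get pods -l app=nginx
```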
Creating a Service
A Kubernetes Service exposes a set of Pods as a network service. Here's an example of creating a Service for an Nginx deployment:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer
Apply the Service:
kubectl apply -f nginx-service.yaml
Exposing a Service
To expose a deployment as a service, you can use the kubectl expose command:
kubectl expose deployment nginx-deployment --type=LoadBalancer --name=nginx-service
This command creates a Service of type LoadBalancer that routes traffic to the Pods managed by the nginx-deployment.
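After exposing the Deployment, the cloud provider provisions an external IP for the LoadBalancer; it can show as pending for a minute or two:

```shell
# EXTERNAL-IP will be <pending> until the load balancer is provisioned
kubectl get service nginx-service
```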
Scaling Nginx Pods
To scale the number of Pods from 2 to 10, use the following command:
kubectl scale deployment nginx-deployment --replicas=10
Verify the scaling:
kubectl get pods
Rolling Updates
Rolling updates allow you to update the Pods in a deployment incrementally without downtime. Here's how to perform a rolling update:
- Update the deployment manifest with a new container image version.
- Apply the updated manifest:
kubectl apply -f nginx-deployment.yaml
- Monitor the rollout status:
kubectl rollout status deployment/nginx-deployment
If needed, you can roll back to the previous version:
kubectl rollout undo deployment/nginx-deployment
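Instead of editing the manifest, a rolling update can also be triggered imperatively; nginx:1.25 here is just an example tag:

```shell
# Update the container image in place ("nginx" is the container name from the manifest)
kubectl set image deployment/nginx-deployment nginx=nginx:1.25

# Inspect past revisions, useful before deciding to roll back
kubectl rollout history deployment/nginx-deployment
```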
Deploying with Helm
Helm is a package manager for Kubernetes that simplifies application deployment. Here's an example of deploying Nginx using Helm:
- Add the Helm repository:
helm repo add bitnami https://charts.bitnami.com/bitnami
- Install the Nginx chart:
helm install my-nginx bitnami/nginx
- List the Helm releases:
helm list
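A release installed this way can later be upgraded with overridden values or removed entirely; replicaCount is a value exposed by the Bitnami chart (check helm show values bitnami/nginx for the authoritative list):

```shell
# Override a chart value during an upgrade
helm upgrade my-nginx bitnami/nginx --set replicaCount=3

# Remove the release and the resources it created
helm uninstall my-nginx
```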
Deploying with Kustomize
Kustomize is a Kubernetes-native configuration management tool that allows you to customize application configurations without modifying the original YAML manifests. Here's an example of deploying Nginx using Kustomize:
Folder Structure for Kustomize
To better understand how Kustomize organizes files, here's an example folder structure:
kustomize/
├── base/
│   ├── kustomization.yaml
│   ├── nginx-deployment.yaml
│   └── nginx-service.yaml
├── staging/
│   ├── kustomization.yaml
│   └── replicas-patch.yaml
└── production/
    ├── kustomization.yaml
    └── replicas-patch.yaml
- base/: Contains the base manifests and the kustomization.yaml file for the default configuration.
- staging/: Contains the overlay for the staging environment, including patches for specific configurations.
- production/: Contains the overlay for the production environment, including patches for specific configurations.
Step 1: Create the Base Manifests
Create the following files for the base deployment:
nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21.6
        ports:
        - containerPort: 80
nginx-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer
kustomization.yaml
resources:
- nginx-deployment.yaml
- nginx-service.yaml
namePrefix: base-
commonLabels:
  environment: base
Step 2: Create Overlays for Different Environments
staging/kustomization.yaml
resources:
- ../base
namePrefix: staging-
commonLabels:
  environment: staging
patchesStrategicMerge:
- replicas-patch.yaml
staging/replicas-patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: base-nginx-deployment  # must match the name after the base's namePrefix (base-) is applied
spec:
  replicas: 1
production/kustomization.yaml
resources:
- ../base
namePrefix: prod-
commonLabels:
  environment: production
patchesStrategicMerge:
- replicas-patch.yaml
production/replicas-patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: base-nginx-deployment  # must match the name after the base's namePrefix (base-) is applied
spec:
  replicas: 5
Step 3: Apply the Configuration
To deploy the resources with Kustomize, navigate to the desired overlay directory and run:
kubectl apply -k staging/
or
kubectl apply -k production/
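Before applying, you can render an overlay to stdout to inspect the final manifests, with name prefixes, common labels, and patches already applied:

```shell
# Built into kubectl
kubectl kustomize staging/

# Or with the standalone binary
kustomize build staging/
```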
Benefits of Kustomize
- No need to modify the original YAML files.
- Supports overlays for environment-specific configurations.
- Simplifies resource management with reusable components.
- Provides a clear folder structure for managing configurations.
Helm vs. Kustomize
| Feature | Helm | Kustomize |
|---|---|---|
| Purpose | Package manager for Kubernetes | Configuration customization |
| Template Engine | Yes (uses Go templates) | No |
| Dependency Management | Yes | No |
| Learning Curve | Moderate | Low |
Helm is ideal for managing complex applications with dependencies, while Kustomize is better suited for customizing existing Kubernetes manifests.
What is a DaemonSet?
A DaemonSet ensures that a copy of a Pod runs on all (or some) nodes in the cluster. Use cases include running logging agents, monitoring agents, or storage daemons.
Example:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
spec:
  selector:
    matchLabels:
      name: fluentd
  template:
    metadata:
      labels:
        name: fluentd
    spec:
      containers:
      - name: fluentd
        image: fluentd:latest
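By default, DaemonSet Pods are not scheduled on control-plane nodes because of their taints. If an agent must run there as well, a toleration can be added to the Pod template; this fragment is a common pattern, shown here as a sketch:

```yaml
# Added under spec.template.spec of the DaemonSet
tolerations:
- key: node-role.kubernetes.io/control-plane
  operator: Exists
  effect: NoSchedule
```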
What is HPA (Horizontal Pod Autoscaler)?
HPA automatically scales the number of Pods in a deployment based on CPU/memory usage or custom metrics.
Example:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
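A roughly equivalent autoscaler can also be created imperatively. Note that HPA needs the metrics server installed in the cluster, and CPU utilization is computed against the containers' CPU requests, so those must be set:

```shell
kubectl autoscale deployment nginx --cpu-percent=50 --min=2 --max=10
```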
What are Sidecar Containers?
Sidecar containers are helper containers that run alongside the main container in a Pod. They are used for tasks like logging, monitoring, or proxying.
Example use case: Adding a logging sidecar to an Nginx container.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-sidecar
spec:
  containers:
  - name: nginx
    image: nginx:latest
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx  # nginx writes its logs here
  - name: log-collector
    image: fluentd:latest
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx  # the sidecar reads the same files
  volumes:
  - name: logs
    emptyDir: {}
What is a Custom Resource Definition (CRD)?
CRDs extend Kubernetes by allowing users to define their own resource types, for example a Database resource.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: databases.example.com
spec:
  group: example.com
  names:
    kind: Database
    listKind: DatabaseList
    plural: databases
    singular: database
  scope: Namespaced
  versions:
  - name: v1
    served: true
    storage: true
    schema:  # apiextensions.k8s.io/v1 requires a structural schema
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
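Once the CRD is registered, instances of the new kind can be created like any other resource. The fields under spec here (engine, version) are purely illustrative:

```yaml
apiVersion: example.com/v1
kind: Database
metadata:
  name: my-database
spec:
  engine: postgres
  version: "15"
```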
ConfigMap vs RBAC
- ConfigMap: Used to store configuration data as key-value pairs.
- RBAC (Role-Based Access Control): Used to define permissions for users and applications.
ConfigMap Example
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  nginx.conf: |
    server {
      listen 80;
      server_name localhost;
    }
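A ConfigMap like this is typically consumed by mounting it as a volume, so each key appears as a file inside the container. Mounting over /etc/nginx/conf.d is one common choice, shown here as a sketch:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-configured
spec:
  containers:
  - name: nginx
    image: nginx:latest
    volumeMounts:
    - name: config
      mountPath: /etc/nginx/conf.d  # each ConfigMap key becomes a file here
  volumes:
  - name: config
    configMap:
      name: nginx-config
```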
RBAC Example
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
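A Role grants nothing by itself; it must be attached to a subject with a RoleBinding. Binding to the default ServiceAccount is used here only for illustration:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```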
Differences Between AKS, EKS, and GKE
| Feature | AKS (Azure) | EKS (AWS) | GKE (Google Cloud) |
|---|---|---|---|
| Managed Control Plane | Yes | Yes | Yes |
| Auto-Scaling | Yes | Yes | Yes |
| Integration | Azure Services | AWS Services | Google Cloud Services |
| Pricing | Worker nodes (free control-plane tier available) | Worker nodes + per-cluster control-plane fee | Worker nodes + cluster management fee |
| Ease of Use | High | Medium | High |
Cheers...