Best CKA Practice Tests and Labs to Use Before Your Exam
Set up CKA practice labs with kind, minikube, and kubeadm. Includes exercises for every exam domain and a practice test plan.
The best way to prepare for the CKA is to practice in a real Kubernetes cluster. Not flashcards. Not video courses. Actual hands-on time running kubectl commands, breaking clusters, and fixing them. You can set up a free practice environment in under 10 minutes with kind or minikube, and that single setup will carry you through 90% of your CKA preparation.
This guide shows you how to build CKA practice labs, what to practice in each exam domain, and how to structure your practice sessions to simulate exam conditions. If you have not read the exam overview yet, start with our CKA study guide and CKA exam format breakdown first.
Setting Up Your CKA Practice Environment
You need a Kubernetes cluster to practice on. You have three main options, and each is good for different things.
Option 1: kind (Kubernetes in Docker)
kind runs Kubernetes clusters inside Docker containers. It is fast, lightweight, and can simulate multi-node clusters on a single machine.
Best for: Quick practice sessions, testing workloads, networking exercises.
# Install kind
# On macOS with Homebrew
brew install kind
# On Linux
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.24.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind
# Create a simple cluster
kind create cluster --name cka-practice
# Create a multi-node cluster
cat <<EOF > kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
EOF
kind create cluster --name cka-multi --config kind-config.yaml
# Delete a cluster when done
kind delete cluster --name cka-practice
A multi-node kind cluster gives you something close to the exam environment. You have a control plane node and worker nodes, which means you can practice draining nodes, cordon/uncordon, and scheduling exercises.
Limitation: kind does not give you direct SSH access to nodes the way the exam does. For node-level troubleshooting, you need docker exec to get into the kind container:
docker exec -it cka-multi-control-plane bash
This is fine for practice, but the experience is slightly different from the exam where you SSH to actual nodes.
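The node-maintenance drill mentioned above can be run immediately on the multi-node cluster. A minimal sketch, assuming the `cka-multi` cluster from the config above (kind names nodes `<cluster>-worker`, `<cluster>-worker2`; confirm with `kubectl get nodes`):

```shell
# Cordon a worker so no new Pods schedule there
kubectl cordon cka-multi-worker

# Drain it, evicting Pods (DaemonSet Pods cannot be evicted, so skip them)
kubectl drain cka-multi-worker --ignore-daemonsets --delete-emptydir-data

# The node should now show Ready,SchedulingDisabled
kubectl get nodes

# Bring it back into service
kubectl uncordon cka-multi-worker
```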
Option 2: minikube
minikube runs a single-node Kubernetes cluster. It is the simplest option and works on any OS.
Best for: Beginners, quick experiments, testing deployments and services.
# Install minikube
# On macOS
brew install minikube
# On Linux
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
# Start a cluster
minikube start
# Start with a specific Kubernetes version
minikube start --kubernetes-version=v1.31.0
# Stop the cluster
minikube stop
# Delete the cluster
minikube delete
minikube is great for practicing workloads, services, ConfigMaps, Secrets, and storage. It is less ideal for multi-node scenarios like node draining or kubeadm upgrades.
Multi-node minikube is possible:
minikube start --nodes 3 --kubernetes-version=v1.31.0
This gives you a 3-node cluster with one control plane and two workers.
Option 3: kubeadm on VMs
kubeadm is what the CKA exam uses to build clusters. Practicing with kubeadm on VMs gives you the most realistic exam experience.
Best for: Cluster installation and upgrade exercises, etcd backup/restore, node troubleshooting.
You need VMs. Vagrant with VirtualBox is the easiest local setup:
# Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/jammy64"

  config.vm.define "controlplane" do |cp|
    cp.vm.hostname = "controlplane"
    cp.vm.network "private_network", ip: "192.168.56.10"
    cp.vm.provider "virtualbox" do |v|
      v.memory = 2048
      v.cpus = 2
    end
  end

  config.vm.define "node01" do |n|
    n.vm.hostname = "node01"
    n.vm.network "private_network", ip: "192.168.56.11"
    n.vm.provider "virtualbox" do |v|
      v.memory = 2048
      v.cpus = 2
    end
  end
end
Then bootstrap the cluster with kubeadm:
# On the control plane node
sudo kubeadm init --apiserver-advertise-address=192.168.56.10 --pod-network-cidr=10.244.0.0/16
# Set up kubeconfig
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Install a CNI (Flannel in this example)
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
# On the worker node, run the join command from kubeadm init output
sudo kubeadm join 192.168.56.10:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
This is the most time-consuming setup, but it is the closest to what you will see on the CKA. If cluster installation and upgrades are your weak points, this is the practice environment you need.
Cloud alternatives: You can also spin up VMs on any cloud provider. A few small instances running for a few hours cost very little. This avoids the RAM requirements of running VMs locally.
Practice with the included sessions
Your CKA exam purchase includes two practice sessions that simulate the real exam environment. They are harder than the actual exam.
Register for the CKA Exam
CKA Practice Labs by Domain
Here are specific exercises for each exam domain. Do these in your practice cluster. Time yourself. The goal is not just to complete each task, but to complete it within the time limit.
Lab 1: Cluster Architecture, Installation and Configuration (25%)
These exercises cover the heaviest admin-focused topics on the exam.
Exercise 1.1: RBAC Setup (target time: 8 minutes)
Create the following RBAC configuration:
- A ServiceAccount called deploy-bot in the production namespace
- A Role called deployer that can create, get, list, update, and delete Deployments and Services
- A RoleBinding that binds the deployer Role to the deploy-bot ServiceAccount
# Create the namespace first
kubectl create namespace production
# Try it yourself before looking at the solution below
Solution:
kubectl create serviceaccount deploy-bot -n production
kubectl create role deployer \
--verb=create,get,list,update,delete \
--resource=deployments,services \
-n production
kubectl create rolebinding deploy-bot-deployer \
--role=deployer \
--serviceaccount=production:deploy-bot \
-n production
# Verify
kubectl auth can-i create deployments --as=system:serviceaccount:production:deploy-bot -n production
# Should output: yes
kubectl auth can-i delete pods --as=system:serviceaccount:production:deploy-bot -n production
# Should output: no
Exercise 1.2: etcd Backup and Restore (target time: 10 minutes)
Back up etcd to /tmp/etcd-backup.db. Then restore it to a new data directory /var/lib/etcd-restored. Update the etcd pod manifest to use the new directory.
This exercise requires a kubeadm cluster. On kind, you can practice the backup command but the restore process differs slightly.
# Find the etcd cert paths
kubectl describe pod etcd-controlplane -n kube-system | grep -A 5 "Command"
# Backup
ETCDCTL_API=3 etcdctl snapshot save /tmp/etcd-backup.db \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key
# Verify the backup
ETCDCTL_API=3 etcdctl snapshot status /tmp/etcd-backup.db --write-out=table
# Restore to new directory
ETCDCTL_API=3 etcdctl snapshot restore /tmp/etcd-backup.db \
--data-dir=/var/lib/etcd-restored
# Update the etcd manifest
sudo vim /etc/kubernetes/manifests/etcd.yaml
# Change the volume hostPath from /var/lib/etcd to /var/lib/etcd-restored
The manifest edit is what trips people up. After the restore, you must update the hostPath in the etcd static pod manifest to point to the new data directory. The kubelet will automatically restart the etcd Pod when it detects the manifest change.
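For reference, the edited volume entry in the static pod manifest should look roughly like this (paths follow the kubeadm defaults; only the hostPath changes):

```yaml
# In /etc/kubernetes/manifests/etcd.yaml
volumes:
- hostPath:
    path: /var/lib/etcd-restored   # was /var/lib/etcd
    type: DirectoryOrCreate
  name: etcd-data
```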
Exercise 1.3: Cluster Upgrade (target time: 15 minutes)
Upgrade the control plane from one minor version to the next. This is best practiced on a kubeadm cluster.
# Check current version
kubectl get nodes
kubeadm version
# Upgrade kubeadm (unhold first if the package is pinned, as kubeadm installs usually are)
sudo apt-get update
sudo apt-mark unhold kubeadm
sudo apt-get install -y kubeadm=1.31.0-1.1
sudo apt-mark hold kubeadm
# Plan the upgrade
sudo kubeadm upgrade plan
# Apply the upgrade
sudo kubeadm upgrade apply v1.31.0
# Drain the control plane
kubectl drain controlplane --ignore-daemonsets
# Upgrade kubelet and kubectl (unhold if pinned)
sudo apt-mark unhold kubelet kubectl
sudo apt-get install -y kubelet=1.31.0-1.1 kubectl=1.31.0-1.1
sudo apt-mark hold kubelet kubectl
sudo systemctl daemon-reload
sudo systemctl restart kubelet
# Uncordon
kubectl uncordon controlplane
# Verify
kubectl get nodes
Repeat on worker nodes using kubeadm upgrade node instead of kubeadm upgrade apply. Practice this process at least 3 times until you can do it without looking up commands.
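The worker-node pass follows the same shape. A sketch, assuming the node01 host from the Vagrant setup and the same 1.31.0 target version:

```shell
# From the control plane: move workloads off the worker
kubectl drain node01 --ignore-daemonsets

# On the worker: upgrade kubeadm, then apply the node upgrade
sudo apt-get update
sudo apt-get install -y kubeadm=1.31.0-1.1
sudo kubeadm upgrade node

# On the worker: upgrade the kubelet
sudo apt-get install -y kubelet=1.31.0-1.1
sudo systemctl daemon-reload
sudo systemctl restart kubelet

# From the control plane: return the worker to service
kubectl uncordon node01
```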
Lab 2: Troubleshooting (30%)
Troubleshooting is 30% of the exam. These labs build the debugging instincts you need.
Exercise 2.1: Fix a Broken Pod (target time: 5 minutes)
Create a Pod that will break, then fix it.
# Create a broken Pod (wrong image name)
kubectl run broken-pod --image=ngnix:latest
# Observe the error
kubectl get pod broken-pod
# STATUS: ImagePullBackOff or ErrImagePull
# Debug it
kubectl describe pod broken-pod
# Look at Events section: "Failed to pull image ngnix:latest"
# Fix the image name
kubectl set image pod/broken-pod broken-pod=nginx:latest
# Or delete and recreate
kubectl delete pod broken-pod
kubectl run broken-pod --image=nginx:latest
Exercise 2.2: Fix a Broken Service (target time: 7 minutes)
# Create a deployment
kubectl create deployment web --image=nginx --replicas=2
# Create a service with a wrong selector (intentional mistake)
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web-wrong
  ports:
  - port: 80
    targetPort: 80
EOF
# The service has no endpoints
kubectl get endpoints web-svc
# ENDPOINTS: <none>
# Debug: check the selector
kubectl get svc web-svc -o yaml | grep -A 2 selector
kubectl get pods --show-labels
# Fix: The deployment Pods have label app=web, not app=web-wrong
kubectl patch svc web-svc --type='json' -p='[{"op":"replace","path":"/spec/selector/app","value":"web"}]'
# Verify
kubectl get endpoints web-svc
# Should now show Pod IPs
Exercise 2.3: Fix a Broken Node (target time: 10 minutes)
On a kubeadm cluster, simulate a broken kubelet:
# SSH to the worker node
ssh node01
# Stop the kubelet (simulating a failure)
sudo systemctl stop kubelet
# From the control plane, the node will show NotReady
kubectl get nodes
# node01 should be NotReady
# SSH to the node and debug
ssh node01
sudo systemctl status kubelet
# Active: inactive (dead)
# Check logs
sudo journalctl -u kubelet -n 20
# Fix: start the kubelet
sudo systemctl start kubelet
# Verify from control plane
kubectl get nodes
For more advanced practice, try modifying the kubelet configuration file to introduce an error (wrong API server address, wrong certificate path), then debug and fix it using journalctl -u kubelet.
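One way to set up that drill, assuming a kubeadm worker where the kubelet's kubeconfig lives at the default /etc/kubernetes/kubelet.conf (the swapped port is arbitrary):

```shell
# On the worker: point the kubelet at a wrong API server port (this breaks it)
sudo sed -i 's/6443/6553/' /etc/kubernetes/kubelet.conf
sudo systemctl restart kubelet

# Debug: connection-refused errors in the journal reveal the bad address
sudo journalctl -u kubelet -n 30 --no-pager

# Fix the port and restart
sudo sed -i 's/6553/6443/' /etc/kubernetes/kubelet.conf
sudo systemctl restart kubelet
```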
Exercise 2.4: Fix a Broken Control Plane Component (target time: 10 minutes)
# On a kubeadm cluster, introduce an error in the API server manifest
# CAUTION: This will take down your API server temporarily
sudo cp /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/kube-apiserver-backup.yaml
# Edit the manifest and change the image to something wrong
sudo sed -i 's|kube-apiserver:v1|kube-apiserver:v999|' /etc/kubernetes/manifests/kube-apiserver.yaml
# The API server will go down
# kubectl commands will fail
# Debug by checking the container runtime
sudo crictl ps -a | grep api
sudo crictl logs <container-id>
# Fix by restoring the correct manifest
sudo cp /tmp/kube-apiserver-backup.yaml /etc/kubernetes/manifests/kube-apiserver.yaml
# Wait for the API server to restart
kubectl get nodes
This type of exercise builds the muscle memory you need for exam-day troubleshooting. The key skill is knowing where to look: static pod manifests in /etc/kubernetes/manifests/, kubelet logs via journalctl, and container logs via crictl.
Lab 3: Services and Networking (20%)
Exercise 3.1: Create a NetworkPolicy (target time: 8 minutes)
Scenario: In the web-app namespace, create a NetworkPolicy that allows Pods with label tier: frontend to receive traffic only from Pods with label tier: api on port 80. All other ingress traffic should be denied.
kubectl create namespace web-app
# Create test pods
kubectl run frontend --image=nginx -n web-app --labels=tier=frontend
kubectl run api --image=busybox -n web-app --labels=tier=api -- sleep 3600
kubectl run attacker --image=busybox -n web-app --labels=tier=attacker -- sleep 3600
# Create the NetworkPolicy
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend-policy
  namespace: web-app
spec:
  podSelector:
    matchLabels:
      tier: frontend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tier: api
    ports:
    - protocol: TCP
      port: 80
EOF
# Test: Get the frontend Pod IP
FRONTEND_IP=$(kubectl get pod frontend -n web-app -o jsonpath='{.status.podIP}')
# From api Pod (should work)
kubectl exec api -n web-app -- wget -qO- --timeout=2 http://$FRONTEND_IP
# From attacker Pod (should be blocked)
kubectl exec attacker -n web-app -- wget -qO- --timeout=2 http://$FRONTEND_IP
# Should timeout
Note: NetworkPolicy enforcement requires a CNI that supports it (Calico, Cilium). kind uses kindnet by default, which does not enforce NetworkPolicies. For NetworkPolicy testing, use kind with Calico or use a kubeadm cluster with a compatible CNI.
Exercise 3.2: Ingress Configuration (target time: 7 minutes)
Create an Ingress that routes /app to a service called app-svc on port 80 and /api to api-svc on port 8080.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /app
        pathType: Prefix
        backend:
          service:
            name: app-svc
            port:
              number: 80
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-svc
            port:
              number: 8080
Practice creating Ingress resources from memory. The YAML structure has several nested levels, and getting the indentation wrong will cause errors.
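If the nesting will not stick, kubectl can generate it for you: `kubectl create ingress` accepts path rules directly, and a trailing `*` on the path makes the pathType Prefix. A sketch producing roughly the manifest above:

```shell
# Generate the Ingress imperatively; --dry-run=client prints YAML without creating it
kubectl create ingress web-ingress \
  --annotation nginx.ingress.kubernetes.io/rewrite-target=/ \
  --rule="/app*=app-svc:80" \
  --rule="/api*=api-svc:8080" \
  --dry-run=client -o yaml
```

Inspecting the generated YAML is also a fast way to re-learn the structure before writing it from memory.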
Lab 4: Workloads and Scheduling (15%)
Exercise 4.1: Deployments and Rollbacks (target time: 5 minutes)
# Create deployment
kubectl create deployment web --image=nginx:1.24 --replicas=3
# Update to a new version
kubectl set image deployment/web nginx=nginx:1.25
# Check the rollout
kubectl rollout status deployment/web
# Oops, update to a bad version
kubectl set image deployment/web nginx=nginx:9.9.9
# Pods will be in ImagePullBackOff
kubectl get pods
# Rollback
kubectl rollout undo deployment/web
# Verify
kubectl rollout status deployment/web
kubectl get pods
Exercise 4.2: Scheduling with Node Affinity (target time: 8 minutes)
# Label a node
kubectl label nodes <node-name> disk=ssd
# Create a Pod that uses node affinity to schedule on the labeled node
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: ssd-pod
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disk
            operator: In
            values:
            - ssd
  containers:
  - name: nginx
    image: nginx
EOF
# Verify it scheduled on the correct node
kubectl get pod ssd-pod -o wide
Exercise 4.3: Taints and Tolerations (target time: 5 minutes)
# Taint a node
kubectl taint nodes <node-name> env=production:NoSchedule
# Try creating a Pod (it will stay Pending if no other nodes are available)
kubectl run test --image=nginx
# Create a Pod with a toleration
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: tolerant-pod
spec:
  tolerations:
  - key: env
    operator: Equal
    value: production
    effect: NoSchedule
  containers:
  - name: nginx
    image: nginx
EOF
# Clean up: remove the taint
kubectl taint nodes <node-name> env=production:NoSchedule-
Lab 5: Storage (10%)
Exercise 5.1: PV and PVC Workflow (target time: 7 minutes)
Create a PersistentVolume, a PersistentVolumeClaim, and a Pod that mounts it.
# Create PV
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /mnt/data
EOF
# Create PVC
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
# Verify binding
kubectl get pv task-pv
kubectl get pvc task-pvc
# STATUS should be Bound
# Create a Pod using the PVC
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: storage-pod
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: storage
  volumes:
  - name: storage
    persistentVolumeClaim:
      claimName: task-pvc
EOF
# Verify
kubectl exec storage-pod -- ls /usr/share/nginx/html
Get the official practice sessions
Two practice sessions come free with your CKA registration. They run in the same environment as the real exam.
Register for the CKA Exam
Structuring Your CKA Practice Sessions
Random practice is less effective than structured practice. Here is how to organize your lab time for maximum exam readiness.
Week-by-week practice plan
Weeks 1 to 2: Foundation practice
- 30 minutes daily in a kind or minikube cluster
- Focus on imperative commands: kubectl run, kubectl create, kubectl expose
- Practice the --dry-run=client -o yaml workflow for every resource type
- Goal: Create any basic resource in under 2 minutes
Weeks 3 to 4: Domain-specific practice
- 45 minutes daily
- Run through the lab exercises above for each domain
- Time yourself on every exercise
- If you exceed the target time, do the exercise again the next day
Weeks 5 to 6: Troubleshooting focus
- 60 minutes daily
- Intentionally break things and fix them
- Practice the debugging workflow: get, describe, logs, exec
- Have someone else break your cluster and debug it blind (if possible)
Weeks 7 to 8: Exam simulation
- Take the first included practice session 7 to 10 days before your exam
- Review weak areas for 3 to 4 days
- Take the second practice session 2 to 3 days before your exam
- Light review only in the final 2 days (do not cram)
Practice session structure (daily)
A good daily practice session follows this pattern:
- Warm-up (5 minutes): Create a Deployment, expose it as a Service, scale it, do a rollback. This builds speed on the basics.
- Focused practice (30 to 45 minutes): Work on the domain you are currently studying. Use the lab exercises above.
- Speed drill (10 minutes): Pick 3 to 5 random tasks and try to complete each one as fast as possible. This simulates exam pressure.
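As one illustration, the five-minute warm-up might run like this (resource names are arbitrary):

```shell
# Create, expose, scale, update, roll back, clean up
kubectl create deployment warmup --image=nginx:1.25 --replicas=2
kubectl expose deployment warmup --port=80 --type=ClusterIP
kubectl scale deployment warmup --replicas=4
kubectl set image deployment/warmup nginx=nginx:1.26
kubectl rollout undo deployment/warmup
kubectl delete deployment warmup && kubectl delete svc warmup
```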
Track your times
Keep a simple log of how long each task takes. The first time you do an etcd backup, it might take 15 minutes. After 5 practice sessions, it should take 5 minutes. If a task is not getting faster, you need to change your approach, not just repeat the same thing.
| Task | First Attempt | After 5 Practices | Target |
|---|---|---|---|
| RBAC setup | 12 min | 5 min | 5 min |
| etcd backup/restore | 15 min | 7 min | 8 min |
| NetworkPolicy | 10 min | 4 min | 5 min |
| PV/PVC/Pod | 8 min | 3 min | 4 min |
| Debug broken Pod | 5 min | 2 min | 3 min |
| Cluster upgrade | 20 min | 12 min | 12 min |
Building a Practice Test
You can build your own practice test by combining exercises from different domains. Here is a sample 2-hour practice test with realistic weights.
Setup: Create a fresh kind multi-node cluster. Set a 2-hour timer.
| # | Task | Weight | Domain |
|---|---|---|---|
| 1 | Create a Deployment with 3 replicas and expose it as a NodePort Service | 4% | Workloads |
| 2 | Create RBAC: ServiceAccount, Role (get/list pods), RoleBinding | 7% | Cluster |
| 3 | Fix a Pod in CrashLoopBackOff (create one with a bad command) | 5% | Troubleshooting |
| 4 | Create a PV (2Gi, ReadWriteOnce, hostPath), PVC, and mount in a Pod | 7% | Storage |
| 5 | Create a NetworkPolicy allowing ingress only from specific label | 7% | Networking |
| 6 | Perform an etcd backup to /tmp/etcd-backup.db | 8% | Cluster |
| 7 | Debug a Service with no endpoints (fix the selector) | 5% | Troubleshooting |
| 8 | Create a CronJob that runs every 5 minutes | 4% | Workloads |
| 9 | Drain a node, create a Pod with a toleration, uncordon the node | 7% | Workloads |
| 10 | Create a ConfigMap and Secret, mount both in a Pod | 7% | Workloads |
| 11 | Fix a broken kubelet on a worker node | 8% | Troubleshooting |
| 12 | Create an Ingress with path-based routing to two Services | 7% | Networking |
| 13 | Perform a Deployment rolling update and then rollback | 4% | Workloads |
| 14 | Scale a Deployment to 5 replicas and set resource limits | 4% | Workloads |
| 15 | Debug a Pod stuck in Pending (add a node label to fix scheduling) | 6% | Troubleshooting |
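Many of these tasks can be started imperatively. Task 8, for instance, is a one-liner (name and command are arbitrary):

```shell
# CronJob running every 5 minutes (schedule uses standard cron syntax)
kubectl create cronjob ping --image=busybox --schedule="*/5 * * * *" -- date
```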
Score yourself honestly. Did the Pod actually end up Running? Did the Service have endpoints? Did the NetworkPolicy actually block unwanted traffic? The CKA grading system checks results, not effort.
Use the Included Practice Sessions
Your CKA exam purchase ($445) includes two practice sessions, 36 hours each. These are the closest thing to a real CKA practice test that exists.
First practice session: Take this 7 to 10 days before your scheduled exam. Treat it like the real exam. Set up your terminal, manage your time, do not check external resources beyond the allowed docs. After completing it, review your results carefully. Identify the domains where you lost points.
Between sessions: Spend 3 to 5 days focused on your weak areas. If you struggled with etcd backup, do it 5 more times. If NetworkPolicies were shaky, practice writing them from memory.
Second practice session: Take this 2 to 3 days before your exam. This is your confidence check. If you pass this one, you are ready. If you fail, consider rescheduling your exam and spending another week on practice.
The practice sessions are harder than the real exam. That is intentional. If you score 70% on the practice, you will likely score higher on the actual CKA. Do not panic if the practice test feels hard.
Common Practice Mistakes
Practicing only creation, not troubleshooting. Creating resources is the easy part. The CKA devotes 30% of the exam to troubleshooting. If your practice is 90% creating things and 10% fixing things, flip that ratio as your exam date approaches.
Not timing yourself. Every practice task should have a timer. Without time pressure, you are not simulating the exam. Knowing how to create a NetworkPolicy in 20 minutes is useless when the exam gives you 7 minutes.
Using the docs for everything. During practice, try to complete tasks without the docs first. Only check the docs when you are genuinely stuck. The goal is to need the docs for edge cases, not for basic syntax. If you find yourself checking the docs for kubectl create deployment syntax, you need more practice.
Skipping multi-cluster practice. The CKA uses multiple clusters. Practice switching contexts. Create two kind clusters and practice running commands against each one:
kind create cluster --name cluster1
kind create cluster --name cluster2
# List contexts
kubectl config get-contexts
# Switch contexts
kubectl config use-context kind-cluster1
kubectl config use-context kind-cluster2
Ignoring the setup phase. Every practice session should start with the same setup you will use on exam day:
alias k=kubectl
source <(kubectl completion bash)
complete -F __start_kubectl k
export EDITOR=vim
If this is not automatic by exam day, you are leaving speed on the table.
For more on what the exam covers and how to study each domain, see our CKA study guide. For salary expectations after passing, see Kubernetes Certification Salary. If you plan to continue beyond the CKA, read our Kubestronaut path guide for the optimal order to take all five Kubernetes certifications.
Start practicing for the CKA
$445 gets you the exam, a free retake, and two practice sessions. Build your own labs now and use the official practice sessions as a final check.
Register for the CKA Exam
FAQ
What is the best free CKA practice test?
There is no single free practice test that perfectly replicates the CKA exam. The best free approach is building your own practice labs with kind or minikube and working through exercises for each exam domain. The two practice sessions included with your CKA purchase ($445) are the closest simulation of the real exam and are worth using.
How many practice hours do I need for the CKA?
Most people who pass report 40 to 80 hours of hands-on practice, spread over 4 to 8 weeks. That is roughly 1 to 2 hours daily. The exact number depends on your starting experience. If you already use Kubernetes at work, you might need 30 hours. If you are learning from scratch, plan for 80 or more.
Should I practice on kind or minikube?
Both work. kind is better for multi-node practice (node draining, scheduling) because it supports multi-node clusters natively. minikube is simpler to set up. For the most realistic experience, use kubeadm on VMs, since that is how clusters are built on the exam. Most people use kind for daily practice and kubeadm for cluster administration exercises.
Can I practice CKA labs on a cloud provider?
Yes. Spinning up 2 to 3 small VMs on any cloud provider and bootstrapping a kubeadm cluster is excellent practice. The cost is minimal if you only run them during practice sessions. This also gives you real SSH access to nodes, which is closer to the exam experience than kind.
How do I practice etcd backup and restore?
You need a kubeadm cluster for realistic etcd practice. kind clusters use etcd but the certificate paths are different. Set up a kubeadm cluster on VMs, practice the backup command, verify the snapshot, restore to a new directory, update the etcd manifest, and confirm the cluster comes back up. Do this at least 3 to 5 times.
What should I practice the most for the CKA?
Troubleshooting. It is 30% of the exam and the hardest to practice because it requires you to break things first. After troubleshooting, focus on RBAC, etcd backup/restore, and kubeadm upgrades from the Cluster Architecture domain (25%). Those two domains together are 55% of your score.
How do I know when I am ready for the CKA?
Take the included practice session. If you score above 70%, you are ready. If you can complete most lab exercises in this guide within the target times, you are ready. If you can create basic resources imperatively without checking the docs, you are ready. If any of those are not true, keep practicing.