Orka does not change how you upgrade Kubernetes. Follow the standard upgrade practices for your Kubernetes provider. This page explains how Orka services are affected during an upgrade and what you can do to minimize disruption.
Orka 3.6 is validated against Kubernetes 1.35 on both on-prem and AWS EKS. Running VMs are not affected by a Kubernetes upgrade. Mac nodes are not part of the upgrade process and do not restart.
Service behavior during an upgrade
The standard Kubernetes node upgrade process cordons and drains each node, evicting pods and rescheduling them to available nodes. Services with a single replica may experience brief downtime during this window.

| Service | Replicas (default) | Impact if down |
|---|---|---|
| API Server | 1 | API calls fail. Most integrations retry automatically, so downtime is often not noticeable. The orka3 CLI continues working for all commands except login and vm push. |
| Operator | 1 | Changes to Orka resources are not processed: VMs do not get scheduled or deleted. When the Operator comes back up, all queued changes are processed. No data is lost. |
| Webhooks | 3 | Requests that require webhook admission fail while all replicas are down. With three replicas, a standard rolling drain is unlikely to take them all down at once. |
| Virtual Kubelet | Runs on each Mac host | The node cannot manage VMs. If the Virtual Kubelet stays down, Kubernetes eventually marks the node Not Ready. |
Virtual Kubelet
The Virtual Kubelet does not need to be upgraded as part of a Kubernetes upgrade. It is forward-compatible with new Kubernetes versions, and MacStadium will notify you if an upgrade is ever required.

Operator leader election
The Operator uses a leader election model: only one pod is active at a time. If the active Operator pod is evicted during an upgrade, leader election takes up to 15 seconds. During this window, VM deployments may be slower but will not error.

Minimizing disruption
If you need the Orka API Server and Operator to remain available throughout the upgrade:

- Increase the replica count for the API Server and Operator deployments.
- Add a PodDisruptionBudget for the API Server, Operator, and Webhooks to ensure at least one healthy pod remains during node drain.
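As a sketch, a PodDisruptionBudget for the API Server might look like the following. The namespace and the `app: orka-apiserver` label are assumptions; check the labels on your actual deployments before applying.

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: orka-apiserver-pdb
  namespace: orka            # adjust to the namespace where Orka is installed
spec:
  minAvailable: 1            # keep at least one API Server pod during a node drain
  selector:
    matchLabels:
      app: orka-apiserver    # assumed label; verify with: kubectl get deploy -n orka --show-labels
```

A PodDisruptionBudget only protects against voluntary disruptions such as drains, and only helps if the replica count is greater than one; create similar budgets for the Operator and Webhooks deployments.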
Post-upgrade verification
After completing the upgrade, verify that the Orka environment is fully operational:

- Nodes are ready.
- The orka-apiserver, orka-operator, orka-webhook, and cert-manager pods are running.
- VM deployment works: deploying a VM returns a 200 response.
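The checks above can be run from the command line. This is a sketch, assuming Orka is installed in the `orka` namespace and cert-manager in `cert-manager`; the exact deploy command arguments depend on your images and CLI version.

```shell
kubectl get nodes                  # every node should report Ready
kubectl get pods -n orka           # orka-apiserver, orka-operator, orka-webhook pods Running
kubectl get pods -n cert-manager   # cert-manager pods Running

# Deploy a test VM; a successful deployment corresponds to a 200 API response.
orka3 vm deploy                    # add your VM config/image arguments as usual
```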

