Requires Orka 3.2 or later on an Apple Silicon cluster. If you’re on an earlier version, this feature is not available.
About
Orka 3.2 introduces Image Caching. This feature lets users with admin privileges download Orka images ahead of time to any node in the Orka cluster. Pre-caching images makes VM deployments faster and more reliable by avoiding delays caused by limited network bandwidth. Without Image Caching, automated CI jobs pull new images from the local cluster datastore (NFS mount) or a remote registry at CI runtime, which can delay builds.
System Requirements
Scheduled Caching is available for:
- Orka 3.2 or later
- macOS computers based on Apple Silicon
Overview
The first time a VM runs on a node, its image must be cached locally on that node. Previously, you could not perform this step ahead of running automated CI pipeline jobs. Because initial image pull speed depends on image size, network bandwidth, and node resource utilization, a build using a new VM image can take several minutes and produce inconsistent deployment times. Scheduled Caching avoids these delays by pre-caching new images on Orka cluster nodes before CI automation starts; the caching itself runs as an asynchronous operation. The cluster tracks which nodes have the necessary images cached, greatly reducing image loading time and scheduled downtime. If a node's cache lacks a needed image, you can pick nodes for cache operations and require that VMs deploy on those nodes using those images.
Key Concepts
- Image Caching lets you preemptively copy an Orka VM image to any cluster node, avoiding the network-bandwidth delays incurred when images are pulled from the cluster NFS mount or from a public cloud registry.
- An image is the bits on disk representing a VM that can be used for saving state and sharing modifications.
- MacStadium's base Orka VM images are OCI-compliant macOS VM images stored in our public GitHub registry, ghcr.io/macstadium/orka-images/. They ship with the default credentials admin/admin (username/password), the Homebrew package manager, orka-vm-tools, and both Screen Sharing and SSH access enabled.
- Cluster local storage is an NFS-mounted filesystem for storing images locally (the local registry service).
- A VM is a virtual runtime on top of the macOS host. The VM runs a guest OS image; macOS supports up to 2 running VMs per cluster node.
- Sequoia refers to macOS 15 (the latest Sequoia release available from Apple’s servers). macOS 26 Tahoe is also available as a guest OS starting in Orka 3.5.
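The default credentials noted above are enough for a first connection to a deployed base-image VM. A minimal sketch; the helper name and the example IP address are hypothetical, and the real IP and SSH port are reported when the VM is deployed:

```shell
# Hypothetical helper; base Orka images ship with user "admin",
# password "admin", and SSH access enabled.
connect_vm() {
  # $1: the VM's IP address, as reported at deploy time
  ssh "admin@$1"
}
# Against a live VM (the IP here is a placeholder): connect_vm 10.221.188.11
```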
Orka Cluster 3.0 and later can deploy a VM from multiple image sources: a cloud image datastore (an OCI registry service), Orka's local cluster registry (the cluster NFS mount), and cached images on a cluster node. Orka 3.2 adds three distinct CLI commands, one per storage type, to display the images available on each datastore.
Getting Started
After the cluster nodes are upgraded to version 3.2, you can access the new Image Caching feature and support for Sequoia guest OS VMs. To get familiar with Scheduled Caching via the Orka3 CLI, take the following steps:
- Run orka3 imagecache -h to see the CLI tree structure: commands, subcommands, and options/flags.
- Run orka3 remote-image list to view Orka VM images available in MacStadium's public registry (ghcr.io/macstadium/orka-images/).
- Run orka3 image list to view images already downloaded to the cluster local registry (the cluster NFS mount).
- Run orka3 imagecache list to view images currently stored on Orka cluster nodes.
- Run orka3 nodes list to view the Orka cluster node names.
- Run orka3 imagecache add to add a new image to a cluster node.
- Run orka3 imagecache info to check the status of an image caching operation.
- Run orka3 vm deploy --image <image_name> --node <node_name> to rapidly deploy a new VM using a recently cached Orka image on a specific node.
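The steps above can be strung together into one workflow. This is a sketch, not a verbatim transcript: the image and node arguments are hypothetical placeholders, and the exact arguments to imagecache add and imagecache info should be confirmed with their -h output on your Orka version.

```shell
# Sketch of the Scheduled Caching workflow (Orka 3.2+).
# The two arguments are hypothetical placeholders; pick real values from
# the `orka3 remote-image list` / `orka3 image list` and `orka3 nodes list` output.
cache_and_deploy() {
  image="$1"
  node="$2"

  orka3 remote-image list     # images in MacStadium's public registry
  orka3 image list            # images already in the cluster NFS registry
  orka3 imagecache list       # images already cached on cluster nodes
  orka3 nodes list            # node names available as caching targets

  # Cache the image, then poll the asynchronous operation's status.
  # See `orka3 imagecache add -h` and `orka3 imagecache info -h` for the
  # exact arguments each subcommand takes.
  orka3 imagecache add
  orka3 imagecache info

  # Deploy on the node that now holds the cached image.
  orka3 vm deploy --image "$image" --node "$node"
}
# Against a live cluster: cache_and_deploy <image_name> <node_name>
```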
Scheduled Caching FAQs
General information and guidance on Scheduled Caching:
- Does caching have an impact on running VMs?
- Caching small images (around 16 GB) may be unnoticeable for VMs with long-running builds. (If a build takes 10 minutes, a 1-minute cache operation has little impact on the overall duration of the build job.)
- The greater the number of images being cached, the more significant the impact on VMs running on that node, due to resource contention: the image caching operation consumes node resources, particularly disk I/O.
- How many cache operations can run at once?
- Currently, a cache job requires 0.5 CPU to run. On a node with 8 CPUs available, a maximum of 16 cache jobs can run at once.
- Caching several images simultaneously is faster than caching them one after another.
- Does caching have an impact on build performance?
- Performance is impacted depending on the number of cache operations running.
- MacStadium recommends that build jobs and caching operations not run concurrently unless:
- The image being cached is small.
- Long-running builds are active, and a small performance hit is acceptable.
- Does caching have an impact on the performance of other cache operations?
- Yes. A VM running on a node with an active caching job will see a noticeable increase in task execution time, proportional to the size of the image being written.
- Does caching have an impact on network, memory, or CPU performance?
- Nodes with an active caching download can impact a VM’s performance, including network I/O operations and CPU/memory availability.
- Does caching have an impact on cluster performance?
- The Orka scheduler may experience increased latency on any node running active caching operations.
- Can I delete images from a cluster node cache?
- Orka’s Kubernetes control plane automatically manages node caches. There is no manual “remove” command. To update an image, cache a new copy under the same name or with an updated OCI tag.
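The concurrency limit mentioned above (0.5 CPU per cache job) is simple arithmetic; a quick sanity check, assuming a node with 8 CPUs available:

```shell
# Each cache job reserves 0.5 CPU, so a node can run
# (available CPUs / 0.5) = (available CPUs * 2) cache jobs at once.
cpus=8                      # CPUs available on the node (assumption)
max_jobs=$((cpus * 2))      # dividing by 0.5 equals multiplying by 2
echo "$max_jobs"            # prints 16
```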
Orka 3.5.1 fix:
orka3 ic add previously failed silently when caching images to nodes in custom namespaces. This is resolved in Orka 3.5.1. If you cache images to non-default namespaces, upgrade to 3.5.1 or later and re-cache any previously failed images.
