About
Orka 3.2 introduces Image Caching, a feature that lets users with admin privileges download Orka images ahead of time to any node in the Orka cluster. Pre-caching gives users a faster and more reliable VM deployment experience by reducing delays caused by limited network bandwidth. Without Image Caching, automated CI jobs pull new images from the local cluster datastore (NFS mount) or a remote registry (a cloud OCI registry service) at CI runtime, which often delays VM deployment.

System Requirements
Scheduled Caching is available for:
- Orka 3.2+
- macOS computers based on Apple Silicon
Overview
The first time a VM runs on a node, its image must be cached locally on that node. Previously, there was no way for users to perform this step before running automated CI pipeline jobs. Because initial image pull speed depends on many variables (image size, network bandwidth, node resource utilization), a build that deploys a new VM image can take several minutes to complete, and deployment times can be inconsistent. The Orka Scheduled Caching feature lets admin users avoid these typical causes of delay by pre-caching new images on Orka cluster nodes before CI automation starts; the caching runs in the background, so Scheduled Caching is an asynchronous operation. The cluster knows which nodes have the necessary images cached, greatly reducing image loading and scheduled downtime. When a node's cache lacks a needed image, users can pick nodes for cache operations and stipulate that VMs using those images deploy on those nodes.

Key Concepts
- Image Caching allows preemptive copying of an Orka VM image to any cluster node, avoiding delays caused by network bandwidth when images are pulled from the cluster NFS mount or from a public cloud registry service.
- An image is the bits on disk representing a VM that can be used for saving state and sharing modifications.
- MacStadium base Orka VM images are OCI-compliant macOS VM images stored in MacStadium's public GitHub registry (ghcr.io/macstadium/orka-images/). They ship with the user/password credentials admin/admin, have the Homebrew package manager and orka-vm-tools installed, and have screen sharing and SSH access enabled.
- Cluster local storage is an NFS mounted filesystem for storing images locally (local registry service).
- A VM is a virtual runtime on top of the macOS host. The VM runs a guest OS image; macOS supports up to 2 running VMs per cluster node.
- Sequoia refers to macOS 15.0, the latest public GA release available from Apple's servers.
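As a quick check of the base images described above, a VM can be deployed straight from the public registry and then reached over SSH. This is a sketch only: the image tag and the VM address are placeholder assumptions, while the admin/admin credentials and enabled SSH access come from the base-image description.

```shell
# Sketch: deploy a VM from a MacStadium base image, then connect over SSH.
# The image tag below is an assumption; check `orka3 remote-image list`
# for the tags that actually exist in ghcr.io/macstadium/orka-images/.
IMAGE="ghcr.io/macstadium/orka-images/sequoia:latest"   # assumed tag

# Only talk to a real cluster when the CLI is installed.
if command -v orka3 >/dev/null 2>&1; then
  orka3 vm deploy --image "$IMAGE"
  # The deploy output reports the VM's address; connect with the
  # documented base-image credentials (user: admin, password: admin):
  #   ssh admin@<vm-address>
fi
```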
Getting Started
After the cluster nodes are upgraded to version 3.2, users can access the new Image Caching feature and support for Sequoia guest OS VMs. To gain familiarity with Scheduled Caching via the Orka3 CLI, take the following steps:
- Run `orka3 imagecache -h` to see the CLI tree structure: commands, subcommands, and options/flags
- Run `orka3 remote-image list` to view Orka VM images available on MacStadium's public registry (ghcr.io/macstadium/orka-images/)
- Run `orka3 image list` to view images already downloaded to the cluster local registry (cluster NFS mount)
- Run `orka3 imagecache list` to view images currently stored on Orka cluster nodes
- Run `orka3 nodes list` to view the Orka cluster node names
- Run `orka3 imagecache add` to add a new image to a cluster node
- Run `orka3 imagecache info` to check the status of an image caching operation
- Run `orka3 vm deploy --image <image_name> --node <node_name>` to rapidly deploy a new VM using a recently cached Orka image on a specific node
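The steps above can be combined into a single shell session. The image name, node name, and the argument forms passed to `imagecache add` are assumptions for illustration; run `orka3 imagecache -h` for the exact flags your CLI version accepts.

```shell
# Discovery: what exists remotely, locally, and in each node's cache.
IMAGE="sequoia-90gb"        # placeholder image name
NODE="macmini-node-1"       # placeholder node name

if command -v orka3 >/dev/null 2>&1; then
  orka3 remote-image list   # images on ghcr.io/macstadium/orka-images/
  orka3 image list          # images in the cluster local registry (NFS mount)
  orka3 imagecache list     # images already cached on cluster nodes
  orka3 nodes list          # node names to target for caching

  # Kick off an asynchronous cache, then poll its status.
  # NOTE: the exact arguments to `imagecache add` are an assumption here.
  orka3 imagecache add "$IMAGE" --node "$NODE"
  orka3 imagecache info

  # Deploy onto the node that now holds the cached image.
  orka3 vm deploy --image "$IMAGE" --node "$NODE"
fi
```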
Scheduled Caching FAQs
General information and guidance on Scheduled Caching
- Does caching have an impact on running VMs?
- Caching small images (16 GB) may be unnoticeable in VMs with long-running build times. (If a build takes 10 minutes, then a cache operation of 1 minute has a lower impact on the overall duration of the build job.)
- The greater the number of images cached, the more significant the impact on a running VM on that node, due to resource contention. The image caching operation consumes resources on the node, particularly disk I/O.
- How many cache operations can run at once?
- Currently, each cache job requires 0.5 CPU to run. A machine with 8 CPUs available can therefore run a maximum of 16 concurrent cache jobs.
- Caching multiple images simultaneously is faster than caching them sequentially.
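The concurrency limit above is simple arithmetic: available CPUs divided by the 0.5 CPU each cache job reserves. A back-of-envelope check (the 8-CPU figure is the example from the FAQ, not a fixed property of every node):

```shell
# Each cache job reserves 0.5 CPU; work in tenths to stay in integer math.
CPUS_AVAILABLE=8            # example free CPU count on a node
CPU_PER_JOB_TENTHS=5        # 0.5 CPU expressed in tenths
MAX_JOBS=$(( CPUS_AVAILABLE * 10 / CPU_PER_JOB_TENTHS ))
echo "$MAX_JOBS"            # prints 16
```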
- Does caching have an impact on build performance?
- Build performance is impacted in proportion to the number of cache operations running.
- MacStadium recommends that build jobs and caching operations not run concurrently unless:
- The actual image size is small.
- Long-running builds are active, and a small performance hit is acceptable.
- Does caching have an impact on the performance of other cache operations?
- Yes. A VM running on a node with an active caching job will see a noticeable increase in task execution time, proportional to the size of the image being written.
- Does caching have an impact on network, memory, or CPU performance?
- Nodes with an active caching download can impact a VM’s performance, including network I/O operations and CPU/memory availability.
- Does caching have an impact on cluster performance?
- The Orka scheduler may experience increased latency on any node running active caching operations.
- Can I delete images from a cluster node cache?
- Orka’s Kubernetes control plane automatically manages node caches. There is no manual “remove” command. To update an image, cache a new copy under the same name or with an updated OCI tag.
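Since there is no manual remove command, refreshing a node's copy of an image means caching again under the same name or an updated OCI tag, as the answer above describes. A minimal sketch, assuming a hypothetical updated tag (the image name and tag are placeholders):

```shell
# Refresh a cached image by re-caching under an updated OCI tag.
# The tag below is a placeholder assumption, not a published image.
NEW_IMAGE="ghcr.io/macstadium/orka-images/sequoia:updated"

if command -v orka3 >/dev/null 2>&1; then
  orka3 imagecache add "$NEW_IMAGE"   # cache the newer copy
  orka3 imagecache info               # poll the asynchronous operation
fi
```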