IMPORTANT Always ensure that your cluster, Orka tools and integrations, and Orka VM Tools run matching versions. For example, the respective available 3.x versions.

Orka 3.2.2

Improvements and fixes

Fix: incomplete VM deletion

Addressed an issue where Orka prevented the Kubernetes garbage collector from fully deleting a VM after the underlying pod was deleted first, particularly when a Kubernetes foregroundDeletion finalizer was present on the VM.
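If a VM appears stuck in deletion, one way to check whether a foregroundDeletion finalizer is involved is to inspect the resource's metadata. The helper below is a minimal sketch, not Orka's implementation; the VM name and `kubectl` invocation in the comment are assumptions:

```shell
# Hypothetical helper: given a resource's JSON, report whether the
# Kubernetes foregroundDeletion finalizer is still present.
has_foreground_finalizer() {
  # $1 is the JSON document for the resource
  printf '%s' "$1" | grep -q '"foregroundDeletion"'
}

# Example metadata, as could be fetched with e.g.:
#   kubectl get vm my-vm -o json   (VM name and kubectl access assumed)
vm_json='{"metadata":{"finalizers":["foregroundDeletion"]}}'

if has_foreground_finalizer "$vm_json"; then
  echo "finalizer present: deletion still pending"
fi
```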

Fix: Custom users created during authentication

Custom users created during authentication now receive Orka cluster admin privileges. This proactive change avoids the need for a future patch release to elevate such users to cluster admin privileges.

New feature: Apple ID Authentication

Apple ID login is now available to Orka customers running Sequoia host machines on Apple Silicon hardware.

Requirements and Limitations

Supported configurations:

  • Host machine: macOS Sequoia on Apple Silicon
  • Guest OS (VM): macOS Sequoia created via IPSW

Limitations:

  • The host OS must be running macOS Sequoia or later.
  • The guest OS (VM) must be created from a Sequoia IPSW. Apple ID login does not work with VMs that were upgraded from earlier versions of macOS, such as Sonoma.

Setup instructions:

To set up an Apple ID compatible VM environment with Orka:
  1. Download Orka Desktop to begin the VM creation process. Make sure your host OS is Sequoia or later.
  2. Build a new guest image using a Sequoia IPSW file with the requirements specified above.
  3. Test and use your new VM locally.
  4. If you want to deploy the image to your Orka cluster, use Orka Desktop to push your image to an OCI repository, and then use the Orka CLI to deploy to your Orka cluster.
Ensure that your host system meets the requirements above before creating your Sequoia guest machine. For more detailed instructions, refer to the following Apple documentation: https://developer.apple.com/documentation/Virtualization/using-icloud-with-macos-virtual-machines
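Step 4 above can be sketched as follows. The registry URL, image tag, and VM name are placeholders, and the exact `orka3` deploy syntax may differ in your CLI version, so the snippet composes the command for review rather than running it against a live cluster:

```shell
# Assumptions: an OCI registry you control, an image already pushed
# from Orka Desktop, and the Orka 3 CLI ("orka3") on your PATH.
REGISTRY="ghcr.io/example/orka-images"   # placeholder OCI repository
IMAGE="sequoia-appleid:latest"           # placeholder image pushed from Orka Desktop

# Compose the deploy command so the flow can be reviewed before
# touching a live cluster; run the printed command manually when ready.
deploy_cmd() {
  echo "orka3 vm deploy appleid-vm --image ${REGISTRY}/${IMAGE}"
}
deploy_cmd
```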

Orka 3.2.1

Improvements and fixes

Image caching deletion fix

The Orka Operator could, in rare cases, fail to update an Orka node's status if a cache deletion operation did not fully complete. This could cause numerous unexpected errors, including subsequent image cache failures. The Operator now recovers the node status properly even after a cache deletion operation fails.

Fix: Issue where the image import directory was removed by the cache clearing process

Previously, when importing an image, the destination directory was created before the image was extracted to a temporary location. If a caching process started before the extraction completed and determined that space was needed, it would prune empty directories in the cache, removing the destination directory; the import would then fail when the process tried to move files into a directory that no longer existed. In Orka 3.2.1, the destination directory is instead created at the time the extracted files are moved into place. Additionally, temporary files are now cleaned up during caching, resolving an issue where users were unable to cache additional images because temporary files were not deleted as expected.
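The corrected ordering can be illustrated with a small sketch (the paths, archive format, and function name are placeholders, not Orka's actual implementation): extract into a temporary location first, and create the destination only when files are moved into place, so a concurrent cache prune cannot remove an empty destination mid-import.

```shell
# Illustrative import flow matching the fix described above.
import_image() {
  archive="$1" dest="$2"
  tmp="$(mktemp -d)"
  tar -xf "$archive" -C "$tmp"   # 1. extract to a temporary location
  mkdir -p "$dest"               # 2. create destination only at move time
  mv "$tmp"/* "$dest"/           # 3. move extracted files into place
  rm -rf "$tmp"                  # 4. clean up temporary files
}
```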

Orka node status logic update

In corner cases during cache operations, an incorrect or outdated node status was returned, causing the operation to fail. The logic has been improved to prevent this condition.

Improved Orka node architecture detection

To ensure that a node type is properly identified in mixed-node environments (Intel vs. Apple Silicon M-series systems), the system information is now checked against the node label to identify the architecture type.
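A minimal sketch of the kind of check involved (not Orka's actual implementation): map the machine hardware name to an architecture label that can be compared against the node's Kubernetes label.

```shell
# Map `uname -m` output to a normalized architecture label.
arch_label() {
  case "$1" in
    arm64|aarch64) echo "arm64"   ;;  # Apple Silicon (M-series)
    x86_64)        echo "amd64"   ;;  # Intel
    *)             echo "unknown" ;;
  esac
}

# Label for the current machine:
arch_label "$(uname -m)"
```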

GitHub Actions Plugin null VM name issue

A recent upstream change to the ARC project caused the variables that the plugin previously used to provide unique names for Orka runners to return null VM names. Version 1.1.5 of the GitHub Actions Orka plugin now uses VM names natively created by Kubernetes. The plugin also now uses a jobId to identify jobs, improving support for concurrent and multiple runners.

Orka 3.2.0

What’s New in the Release

Orka Cluster version 3.2 introduces a new feature, Scheduled Caching, and also provides support for Sequoia 15.0 VM images.
🗒️NOTE Sequoia support is dependent on requesting that some or all nodes are upgraded to Sequoia 15.0 in the upgrade request service ticket.

Scheduled Caching

Scheduled caching enables users to configure their Orka clusters to proactively pull VM images, avoiding long startup times when a VM is first run. The scheduled caching feature requires planning for target nodes and optimal maintenance window periods. Image cache downloads have a noticeable impact on running VMs (particularly on I/O operations), so it is best to avoid caching while builds are scheduled to run unless the image is smaller than 16 GB.
🗒️NOTE Cache downloads directly from OCI repositories are currently approximately 2-3x slower than normal image download times. For the beta release, it is better to cache from NFS stores; if necessary, pull repository images down to NFS stores first and then begin cache downloads to cluster nodes. Caching an image to more than five nodes simultaneously also delays cache download completion.
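When deciding whether an image can be cached while builds are running, the 16 GB guidance above can be turned into a simple pre-check. This is a sketch; the image path is a placeholder and the threshold comes from the note above:

```shell
# Return success (0) if the image is small enough to cache while
# builds are running, per the 16 GB guidance.
safe_to_cache() {
  image="$1"
  limit=$((16 * 1024 * 1024 * 1024))   # 16 GB in bytes
  size=$(wc -c < "$image")
  [ "$size" -lt "$limit" ]
}

# Usage (placeholder path):
#   safe_to_cache /path/to/image.img && echo "ok to cache now"
```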

Sequoia Support

The Orka Cluster 3.2 release also introduces official support for macOS Sequoia 15.0. To deploy Sequoia-based VMs on your cluster, your cluster nodes must also be upgraded to the Sequoia 15.0 release, because the host OS is a dependency for running a Sequoia guest OS. When requesting your Orka Cluster upgrade to release 3.2, also specify any nodes on which you wish to run Sequoia VMs so their OS can be upgraded.

Kubernetes update

While upgrading Orka Cluster to release 3.2, MacStadium will also upgrade the cluster to Kubernetes stable version 1.30.0. This process must be done serially, upgrading through each successive minor version from the current version (typically 1.25.3). An Orka Cluster upgrade therefore requires a longer maintenance window, and a possible node OS update adds further steps that can extend the total upgrade duration. For larger installations (over 25 nodes), it is important to consider these duration factors when requesting your software upgrade.
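The serial-upgrade requirement is why the window grows with the version gap: Kubernetes must be stepped through each intermediate minor version in order. The sketch below illustrates this with placeholder version numbers (not a statement of your cluster's actual versions):

```shell
# List the sequential Kubernetes minor-version upgrades needed to go
# from 1.<from_minor> to 1.<to_minor>, one step per minor version.
upgrade_path() {
  from_minor="$1" to_minor="$2"
  m=$((from_minor + 1))
  while [ "$m" -le "$to_minor" ]; do
    echo "upgrade to 1.${m}"
    m=$((m + 1))
  done
}

# Example with placeholder versions: stepping 1.26 -> 1.30
# requires four sequential upgrades.
upgrade_path 26 30
```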

Upgrading

WARNING
  • A scheduled maintenance window is required.
  • This release requires a maintenance window of up to 3 hours depending on the size of the cluster.
🗒️NOTES
  • Orka 3.2.0 is a new Orka release upgrade. For more information, see Orka Upgrades.
  • For customers who have not yet upgraded to Orka Cluster 3.0.0, please read the migration guide before submitting a ticket.
  1. Submit a ticket through the MacStadium portal.
  2. Schedule a time for the maintenance window that works using the link provided in the ticket.
    The suggested time must be Monday through Thursday, at 6 am or 10 am PST (9 am or 1 pm EST), depending on MacStadium Global Operations calendar availability.
  3. Follow this migration guide to configure your cluster and tools after the migration. If you upgraded from Orka 2.4.x, review the 2.4.x to 3.0.0: CLI Mapping and 2.4.x to 3.0.0: API Mapping to decide how to migrate.