VM Scheduling
Introduced with Orka 2.0, VM scheduling lets you control the algorithm used when scheduling VMs across nodes. By default, VMs are scheduled in a way that keeps a balance between free and used resources on each node. Setting the algorithm to most-allocated instead tries to exhaust the resources on one node before scheduling VMs on another. You can also set the scheduling algorithm when creating a VM configuration and when deploying a VM. Read more about VM scheduling in the MacStadium blog.
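The difference between the two strategies can be sketched with a toy placement function. This is illustrative only, not Orka's actual scheduler, which weighs more than a single resource; the node data and function names here are hypothetical.

```python
# Toy node model: name -> (used_cpu, total_cpu). Illustrative only;
# Orka's real scheduler considers more than one resource dimension.
nodes = {"node1": (4, 12), "node2": (10, 12), "node3": (0, 12)}

def pick_node(nodes, vm_cpu, strategy="default"):
    """Pick a node for a VM that needs vm_cpu cores."""
    # Only nodes with enough free capacity are candidates.
    candidates = {n: t - u for n, (u, t) in nodes.items() if t - u >= vm_cpu}
    if strategy == "default":
        # Balanced: place the VM on the node with the most free resources,
        # spreading load across the cluster.
        return max(candidates, key=candidates.get)
    if strategy == "most-allocated":
        # Bin-packing: prefer the fullest node that still fits the VM,
        # exhausting one node before touching the next.
        return min(candidates, key=candidates.get)
    raise ValueError(f"unknown strategy: {strategy}")

pick_node(nodes, vm_cpu=2)                             # balanced -> "node3"
pick_node(nodes, vm_cpu=2, strategy="most-allocated")  # packs onto "node2"
```

The most-allocated strategy keeps more nodes fully idle, which is useful when you want to drain or repurpose unused hardware.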
GPU Passthrough
Introduced with Orka 1.5.0 for Mac Pro hosts and with Orka 1.7.0 for Mac mini hosts, GPU passthrough lets you use the GPU available on a node from within a VM deployed on that node. It is disabled by default and can be enabled either at cluster provisioning or through an API update.
Intel only: GPU passthrough is available for Intel-based Mac nodes only. This configuration does not apply to Apple silicon nodes.
VM Internet Isolation
Introduced with Orka 1.5.3, VM internet isolation lets you control internet access from within VMs. It is disabled by default. When enabled, VMs cannot access the internet. The feature can be enabled during cluster provisioning or Orka upgrade. See how to request an Orka upgrade.
VM Network Isolation
Introduced with Orka 1.5.1, VM network isolation lets you control access from one VM to another and from a VM to the Orka API. It is disabled by default. When enabled, VMs cannot communicate with each other and cannot access the Orka API. The feature can be enabled during cluster provisioning or Orka upgrade. See how to request an Orka upgrade.
Burst Capacity
Introduced in Orka 3.3, Orka Burst gives you dedicated, on-demand access to elastic cluster capacity. Burst nodes are provisioned temporarily and returned when the workload completes, letting you handle spikes without permanently expanding your cluster footprint. To add burst nodes to your account, contact MacStadium through the Account Portal.
UDID Generation Control
Introduced in Orka 3.3.2, clusters can be configured for either consistent or dynamic UDID generation:
- Consistent UDIDs (default): each VM deployed from the same image gets the same machine identifier. Useful for code signing workflows that expect a stable machine ID.
- Dynamic UDIDs: each deployment gets a unique identifier. Useful for VDI, MDM enrollment, and remote desktop use cases where each VM represents a distinct machine.
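The two modes can be illustrated with a short sketch. This is a conceptual analogy only, not Orka's actual UDID scheme; the function names and the use of UUIDs are assumptions for illustration.

```python
import uuid

def consistent_udid(image_name: str) -> str:
    """Consistent mode: the same source image always yields the same
    identifier, so signing workflows see a stable machine ID.
    (uuid5 is deterministic for a given namespace and name.)"""
    return str(uuid.uuid5(uuid.NAMESPACE_DNS, image_name)).upper()

def dynamic_udid() -> str:
    """Dynamic mode: every deployment yields a fresh identifier,
    so each VM looks like a distinct machine to MDM or VDI tooling."""
    return str(uuid.uuid4()).upper()

# Two deployments from the same image:
assert consistent_udid("sonoma-base") == consistent_udid("sonoma-base")
assert dynamic_udid() != dynamic_udid()
```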
Component Version Visibility
Introduced in Orka 3.3, you can check which version of each Orka component is running.
Display Resolution per VM
Introduced in Orka 3.4, you can set a custom display resolution when deploying or configuring a VM with the --display-width, --display-height, and --display-dpi flags. These flags work with both orka3 vm deploy and orka3 vm create. VMs created from IPSW files default to 1920×1080 at 96 DPI if no display flags are specified.
Constraints: width 320–3840 px, height 480–2160 px, DPI 60–240 (320 optional max).
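The documented ranges can be checked before deploying, for example with a small helper like this (the function name is ours, not part of the Orka CLI; the optional 320 DPI maximum is not included):

```python
# Documented display constraints from this section.
LIMITS = {"width": (320, 3840), "height": (480, 2160), "dpi": (60, 240)}

def validate_display(width: int, height: int, dpi: int) -> list:
    """Return a list of constraint violations; an empty list means valid."""
    errors = []
    for name, value in (("width", width), ("height", height), ("dpi", dpi)):
        lo, hi = LIMITS[name]
        if not lo <= value <= hi:
            errors.append(f"{name}={value} outside {lo}-{hi}")
    return errors

# The IPSW default of 1920x1080 at 96 DPI is within range:
assert validate_display(1920, 1080, 96) == []
# Anything past the documented bounds is reported:
assert validate_display(4096, 2160, 96) == ["width=4096 outside 320-3840"]
```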
Harbor OCI Storage
Introduced in Orka 3.3.2 and made the default in Orka 3.5, Harbor OCI storage is MacStadium’s managed image registry. New Orka deployments use Harbor by default; existing NFS-based deployments retain their current configuration. Capabilities include OCI-compliant image storage, role-based access control, activity auditing, and Prometheus metrics support. See Using Harbor OCI Storage with the Orka CLI for setup details.
Bridged Networking
Introduced in Orka 3.5 for on-prem deployments, bridged networking lets Orka VMs connect directly to a physical network as native devices, receiving IP addresses from your existing DHCP server. This enables direct communication with other network devices without NAT. Customers must configure their own DHCP server; static IP assignment through Orka is not currently supported. Bridged networking is available for on-prem deployments only.

