Overview
This guide walks through the full setup of Orka Engine Orchestration for VDI deployments. It covers network preparation, controller and host configuration, Orka Engine installation, VM deployment, and the optional Semaphore web interface. There are three roles in this setup:
- Controller: the machine running Ansible and the orchestration playbooks. This is typically a dedicated VM.
- Hosts: physical Mac hardware running Orka Engine and the macOS VMs.
- VMs: macOS virtual machines provisioned and managed through the orchestration library.
Prerequisites
- An Ansible controller (macOS or Linux, minimum: 2 vCPU, 4 GB RAM, 20 GB storage; recommended: 4 vCPU, 8 GB RAM, 50 GB storage)
- One or more physical Apple Silicon Mac hosts running macOS Sonoma, Sequoia, or Tahoe
- Minimum specifications per Orka host:
  - Apple Silicon processor (any M-series chip)
  - 8 GB RAM
  - 512 GB storage
  - 1 Gb Ethernet
  - macOS 13 (Ventura) or later
- Recommended specifications per Orka host:
  - M2 Pro, M4 Pro, or higher
  - 16 GB+ RAM (32 GB+ recommended for high-density deployments)
  - 1 TB+ storage
  - 10 Gb Ethernet
  - macOS 14 (Sonoma), 15 (Sequoia), or 26 (Tahoe)
Network Configuration
Static IP Assignment
Assign a static IP address to each Mac host before installing Orka Engine. Either configure the IP manually or use a DHCP reservation.

To configure manually:
- Open System Settings, then Network, and select your interface (Ethernet or Wi-Fi).
- Set IP Address, Subnet Mask, Router, and DNS Servers from your management VLAN.
- Apply the settings and verify connectivity.

To use a DHCP reservation:
- Note the MAC address of each host from System Settings, then Network, then Details, then Hardware.
- Configure your DHCP server to assign a fixed IP to each MAC address.
- Verify the host receives the reserved IP.

Record the following details for each host:
| Field | Example |
|---|---|
| Hostname | mac-node-1 |
| IP address | 10.0.100.10 |
| MAC address | a1:b2:c3:d4:e5:f6 |
| Hardware model | Mac mini M4 |
| Network interface | en0 |
Firewall Requirements (Citrix DaaS)
If deploying Orka for VDI with Citrix DaaS, ensure the following traffic is permitted.

Outbound from VMs (TCP 443):
- [customer_ID].xendesktop.net (Citrix DaaS controller)
- *.*.nssvc.net (Citrix Gateway Service)
- *.citrixworkspacesapi.net (Gateway connectivity checks)
- On-premises delivery controller FQDNs (for CVAD deployments)

Additional rules:
- If Citrix Rendezvous is enabled, also allow outbound TCP/UDP 443 to *.*.nssvc.net.
- TCP/UDP 1494 and 2598 for HDX sessions
1. Controller Setup
Run the steps in this section on the designated Ansible controller machine.

1.1 Set the Hostname
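A sketch of this step using macOS's scutil; replace example-controller with your chosen name:

```shell
# Set all three macOS name records to the chosen controller name
sudo scutil --set ComputerName "example-controller"
sudo scutil --set LocalHostName "example-controller"
sudo scutil --set HostName "example-controller"
```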
Run on: Controller

Set a consistent hostname before configuring anything else. This ensures that your VDI tools are able to label the host devices accordingly. Replace example-controller with your chosen name.

1.2 Install Homebrew
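A sketch using Homebrew's official install script from brew.sh:

```shell
# Install Homebrew using its official install script
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
```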
Run on: Controller

1.3 Install Ansible
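One way to install Ansible isolated from the system Python, assuming Homebrew is already present (a sketch):

```shell
# pipx installs Ansible into its own virtual environment
brew install pipx
pipx ensurepath

# --include-deps exposes ansible-playbook and related tools on PATH
pipx install --include-deps ansible
```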
Run on: Controller

Modern versions of macOS protect against installing Python packages system-wide. Install Ansible through pipx in a virtual environment so it is isolated from the system Python packages.

2. Host Setup
Run the steps in this section on each physical Mac host. Repeat for every host in your fleet.

2.1 Set the Hostname
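A sketch using macOS's scutil; substitute the hostname you chose for each machine:

```shell
# Set all three macOS name records; keep HostName short (no dots)
sudo scutil --set ComputerName "example-host0"
sudo scutil --set LocalHostName "example-host0"
sudo scutil --set HostName "example-host0"
```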
Run on: Host(s)

Replace example-host0 with the appropriate hostname for each machine. Use a short name (no dots) for HostName.

2.2 Install Homebrew
Run on: Host(s)

2.3 Check Python
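A minimal sketch of the check, confirming that a Python 3 interpreter is on the PATH:

```shell
# Verify Python 3 is present; prints the interpreter version and its path
python3 --version
command -v python3
```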
Run on: Host(s)

Python 3 is required for Ansible to manage the host. macOS includes it by default, but running the command below ensures it is present and configured.

3. Configure SSH Access
Run the steps in this section on the controller. Ansible uses SSH key authentication to connect to each host.

3.1 Generate an SSH Key
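A sketch generating an ed25519 key pair; press Enter at the prompts to accept the default file path:

```shell
# Generate a new SSH key pair for the controller
ssh-keygen -t ed25519 -C "orka-controller"
```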
Run on: Controller

If you do not already have an SSH key, generate one. Accept the default file path or specify your own.

3.2 Copy the Key to Each Host
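A sketch using the example address from the static IP table above; the admin username and the second IP are illustrative placeholders. Substitute your own host IPs and SSH user:

```shell
# Copy the public key to each Mac host (repeat once per host)
ssh-copy-id admin@10.0.100.10
ssh-copy-id admin@10.0.100.11
```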
Run on: Controller

Run ssh-copy-id once per host, substituting the correct IP address each time.

4. Configure the Orchestration Library
Run all steps in this section on the controller.

4.1 Clone the Repository
Run on: Controller

The orchestration library lives at ~/orka-automation/orka-engine-orchestration on the controller.
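A sketch of the clone step; the repository URL is inferred from the project name and may differ for your fork:

```shell
# Clone the orchestration library into the expected location
mkdir -p ~/orka-automation
git clone https://github.com/macstadium/orka-engine-orchestration.git \
  ~/orka-automation/orka-engine-orchestration
cd ~/orka-automation/orka-engine-orchestration
```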
4.2 Create the Inventory File
Run on: Controller

The inventory file tells Ansible which hosts to manage. Create it at ./inventory inside the repository.
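A minimal sketch of an INI-style Ansible inventory, run from inside the repository. The group name macs, the admin user, and the host entry (taken from the example table above) are illustrative:

```shell
# Write a minimal inventory; adjust hostnames, IPs, and user to your fleet
cat > ./inventory <<'EOF'
[macs]
mac-node-1 ansible_host=10.0.100.10

[macs:vars]
ansible_user=admin
EOF
```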
4.3 Configure Group Variables
Run on: Controller

Group variables apply settings to all hosts in your inventory. Open the file created in the previous step and set the following variables:

| Variable | Description |
|---|---|
| max_vms_per_host | Maximum VMs allowed per host |
| engine_binary | Path to the Orka Engine binary |
| ansible_user | SSH username on each host |
| vm_image | Default base image for VM deployments |
| network_interface | (Optional) Network interface for bridged networking, e.g. en0 |
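A sketch of what the group variables file might contain, using the variable names from the table above. The file path group_vars/all.yml, the engine binary path, and the image tag are assumptions; adjust them to your environment:

```shell
# Create group variables shared by every host in the inventory
mkdir -p group_vars
cat > group_vars/all.yml <<'EOF'
ansible_user: admin                          # SSH username on each host
max_vms_per_host: 2                          # Maximum VMs allowed per host
engine_binary: /usr/local/bin/orka-engine    # Path to the Orka Engine binary (assumed path)
vm_image: ghcr.io/macstadium/orka-images/sequoia:latest  # Default base image (example tag)
network_interface: en0                       # Optional: bridged networking interface
EOF
```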
5. Install Orka Engine
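A sketch of the run command, based on the playbook and input names listed in section 7.5; the engine_url value is an illustrative placeholder:

```shell
# Install Orka Engine on every host in the inventory
ansible-playbook -i inventory install_engine.yml \
  -e orka_license_key=YOUR_LICENSE_KEY \
  -e engine_url=https://example.com/orka-engine.pkg
```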
This playbook downloads and installs the Orka Engine package on every host in your inventory. It applies the license key and starts the Orka Engine service automatically.

Run on: Controller

5.1 Verify the Installation
Run on: Controller

5.2 Force Reinstall or Upgrade
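A sketch, assuming install_engine_force is passed as an extra variable to the same playbook:

```shell
# Force a reinstall or upgrade of Orka Engine on all hosts
ansible-playbook -i inventory install_engine.yml \
  -e orka_license_key=YOUR_LICENSE_KEY \
  -e install_engine_force=true
```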
Run on: Controller

Add the install_engine_force flag to reinstall or upgrade to a newer version.

6. Deploy and Manage VMs
All playbook commands are run from the controller inside ~/orka-automation/orka-engine-orchestration.

6.1 Deploy VMs
Run on: Controller

Deploy a group of VMs. The vm_group name is used to track and manage the VMs as a set.

To use bridged networking, set network_interface to the host interface (e.g., en0 for Ethernet). To preview the changes without applying them, add --tags plan.
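A sketch using the deploy.yml inputs listed in section 7.5; the group name and VM count are examples:

```shell
# Deploy three VMs tracked as the group "finance"
ansible-playbook -i inventory deploy.yml \
  -e vm_group=finance \
  -e desired_vms=3

# Preview the same run without making changes
ansible-playbook -i inventory deploy.yml \
  -e vm_group=finance -e desired_vms=3 --tags plan
```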
6.2 List VMs
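A sketch using list.yml from section 7.5; vm_group is optional:

```shell
# List all VMs, or only those in a specific group
ansible-playbook -i inventory list.yml
ansible-playbook -i inventory list.yml -e vm_group=finance
```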
Run on: Controller

6.3 Manage Individual VMs
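A sketch using the vm.yml inputs from section 7.5; the VM name is an example:

```shell
# Stop a single VM by name; desired_state may be running, stopped, or absent
ansible-playbook -i inventory vm.yml \
  -e vm_name=finance-01 \
  -e desired_state=stopped
```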
Run on: Controller

Start, stop, or delete a specific VM by name. Valid values for desired_state are running, stopped, and absent.

6.4 Delete VMs
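A sketch using the delete.yml inputs from section 7.5:

```shell
# Delete two VMs from the "finance" group; add --tags plan to preview first
ansible-playbook -i inventory delete.yml \
  -e vm_group=finance \
  -e delete_count=2
```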
Run on: Controller

Delete a specific number of VMs from a group. Add --tags plan to preview before deleting.

6.5 Pull Images
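A sketch using pull_image.yml from section 7.5; the image reference follows the public image path shown in section 8.1:

```shell
# Pre-pull a base image onto every host in the inventory
ansible-playbook -i inventory pull_image.yml \
  -e remote_image_name=ghcr.io/macstadium/orka-images/sequoia:latest
```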
Run on: Controller

Pre-pull an image to all hosts so it is ready for fast deployment.

7. Semaphore Web UI (Optional)
Semaphore provides a browser-based interface for running orchestration playbooks without CLI access. All task templates, inventory, and repository configuration are set up automatically on first launch.

7.1 Install Docker
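A sketch; Docker Desktop via the Homebrew cask is one option (it requires a GUI session to finish setup):

```shell
# Install Docker Desktop through Homebrew
brew install --cask docker
```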
Run on: Controller

Docker is required to run Semaphore. Install it through Homebrew.

7.2 Configure the Environment
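A sketch, assuming the repository ships a .env.example file and that the encryption key is a random base64 string (Semaphore's usual access-key format):

```shell
# Create the environment file and generate an encryption key for it
cp .env.example .env
openssl rand -base64 32   # paste the output into .env as the encryption key
```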
Run on: Controller

Copy the example environment file and generate an encryption key.

7.3 Start Semaphore
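A sketch, assuming the repository includes a Docker Compose file for Semaphore:

```shell
# Start Semaphore in the background
docker compose up -d
```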
Run on: Controller

Semaphore is then available at http://localhost:3000. Log in with the admin credentials you set in the .env file.
7.4 Configure SSH Credentials
After logging in, navigate to Key Store and edit the Mac Hosts SSH key. Replace the placeholder credentials with the actual SSH username and private key (or password) for your Mac hosts.

7.5 Available Task Templates
The following templates are pre-configured and ready to use in the Orka Engine Orchestration project. Each template prompts for required inputs when you click Run.

| Template | Playbook | Required Inputs |
|---|---|---|
| Deploy VMs | deploy.yml | vm_group, desired_vms |
| Delete VMs | delete.yml | vm_group, delete_count |
| Manage VM | vm.yml | vm_name, desired_state |
| List VMs | list.yml | vm_group (optional) |
| Pull Image | pull_image.yml | remote_image_name |
| Install Engine | install_engine.yml | orka_license_key, engine_url |
| Create Image | create_image.yml | remote_image_name, vm_image |
7.6 Stop Semaphore
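A sketch, assuming the same Compose file used to start Semaphore:

```shell
# Stop containers but keep their data (volumes are preserved)
docker compose stop

# Remove the containers as well; named volumes are still kept
docker compose down
```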
Stop containers but keep data.

8. Image Management
8.1 Using MacStadium Public Images
MacStadium maintains public base images through GitHub Container Registry. No authentication is required. Reference them by OCI path:

ghcr.io/macstadium/orka-images/[os-version]:latest
Available images include Tahoe, Sequoia, Ventura, and Sonoma. Use the image name matching your target macOS version.
8.2 Building a Custom Golden Image
For VDI deployments, build a golden image with Citrix VDA and your organization's software pre-installed. The create_image.yml playbook deploys a temporary VM from a base image, runs your customization scripts, pushes the resulting image to your registry, then cleans up the VM.
Add your customization scripts to the /scripts folder in the repository, then run:
Run on: Controller
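A sketch using the create_image.yml inputs from section 7.5; the base image tag and destination name are illustrative (the latter follows the naming convention in section 8.4):

```shell
# Build a golden image from a base image and push it to your registry
ansible-playbook -i inventory create_image.yml \
  -e vm_image=ghcr.io/macstadium/orka-images/sequoia:latest \
  -e remote_image_name=registry.example.com/orka/citrix-vda/sequoia-finance:v1.0
```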
8.3 Distributing Images to Hosts
Pre-pull images to all hosts before deploying VMs to avoid pull delays at deployment time.

Run on: Controller

8.4 Private Registry Naming
If using a self-hosted registry (Harbor, Docker Registry, JFrog Artifactory), use a consistent naming convention:

registry.example.com/orka/citrix-vda/sequoia-finance:v1.0
registry.example.com/orka/citrix-vda/sequoia-engineering:v2.1
registry.example.com/orka/citrix-vda/tahoe-marketing:latest
Note: Store registry credentials in Ansible Vault, never in plain text. Exclude vault files from Git using .gitignore.
9. Repository and Version Control
Fork the orka-engine-orchestration repository on GitHub so you can customize it and track changes over time.
• Add a .gitignore to exclude SSH keys, vault files, and other secrets.
• Never commit passwords, tokens, or private keys.
• Tag releases used in production.
• Use separate branches for staging and production changes.
• Document your inventory structure and playbook customizations in the repository README.
Support Resources
Note: The following remote host CLI commands are for advanced troubleshooting and diagnostics. Prefer the Ansible playbooks or Semaphore UI when available.

| Resource | Reference |
|---|---|
| MacStadium Support | support@macstadium.com |
| orka-engine-orchestration repo | orka-engine-orchestration on GitHub |
| Orka Engine CLI help | orka-engine --help |
| VM commands | orka-engine vm --help |
| Image commands | orka-engine image --help |
| Ansible docs | Ansible documentation |
| Citrix VDA for macOS | Citrix documentation |

