Overview
This guide walks through the full setup of Orka Engine Orchestration for VDI deployments. It covers network preparation, controller and host configuration, Orka Engine installation, VM deployment, and the Semaphore web interface.
There are three roles in this setup:
- Controller: the machine running Ansible and the orchestration playbooks. This is typically a dedicated VM.
- Hosts: physical Mac hardware running Orka Engine and the macOS VMs.
- VMs: macOS virtual machines provisioned and managed through the orchestration library.
Note: Total deployment time depends on node count; budget 30-60 minutes to set up your initial controller and first host.
Prerequisites
- An Ansible controller (macOS or Linux, minimum: 2 vCPU, 4 GB RAM, 20 GB storage; recommended: 4 vCPU, 8 GB RAM, 50 GB storage)
- One or more physical Apple silicon-based Mac hosts running macOS Sonoma, Sequoia, or Tahoe
Minimum specifications per Orka host:
- Apple silicon processor (any M-series chip)
- 8 GB RAM
- 512 GB storage
- 1 Gb Ethernet
- macOS 13 (Ventura) or later
- Recommended specifications per Orka host:
- M2 Pro, M4 Pro, or higher
- 16 GB+ RAM (32 GB+ recommended for high-density deployments)
- 1 TB+ storage
- 10 Gb Ethernet
- macOS 14 (Sonoma), 15 (Sequoia), or 26 (Tahoe)
- An Orka Engine license key and installer URL (contact your MacStadium account representative)
- An administrator account on each machine
- Network connectivity between the controller and all Mac hosts
Note: Orka 3.5.0 or later is required for bridged networking support. This guide assumes that the controller is being deployed on a dedicated macOS device.
Network Configuration
Static IP Assignment
Assign a static IP address to each Mac host before installing Orka Engine. Either configure the IP manually or use a DHCP reservation.
To configure manually:
- Open System Settings then Network, then select your interface (Ethernet or Wi-Fi).
- Set IP Address, Subnet Mask, Router, and DNS Servers from your management VLAN.
- Apply settings and verify connectivity.
To use a DHCP reservation:
- Note the MAC address of each host from System Settings then Network then Details then Hardware.
- Configure your DHCP server to assign a fixed IP to each MAC address.
- Verify the host receives the reserved IP.
Document the following for each host before proceeding. This information populates the Ansible inventory file.
| Field | Example |
|---|---|
| Hostname | mac-node-1 |
| IP address | 10.0.100.10 |
| MAC address | a1:b2:c3:d4:e5:f6 |
| Hardware model | Mac mini M4 |
| Network interface | en0 |
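As an illustration of how these fields feed the inventory, a sketch of one entry is shown below. The comment is for reference only; the inventory file created later in this guide lists plain IP addresses.

```ini
[hosts]
# mac-node-1 (Mac mini M4, en0, MAC a1:b2:c3:d4:e5:f6)
10.0.100.10
```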
Firewall Requirements (Citrix DaaS)
If deploying MacStadium VDI with Citrix DaaS, ensure the following traffic is permitted:
Outbound from VMs (TCP 443):
- [customer_ID].xendesktop.net - Citrix DaaS controller
- *.*.nssvc.net - Citrix Gateway Service
- *.citrixworkspacesapi.net - Gateway connectivity checks
- On-premises delivery controller FQDNs (for CVAD deployments)
- If Citrix Rendezvous is enabled, also allow outbound TCP/UDP 443 to *.*.nssvc.net.
Inbound to VMs:
- TCP/UDP 1494 and 2598 for HDX sessions
1. Controller Setup
Run the steps in this section on the designated Ansible controller machine.
1.1 Set the Hostname
Run on: Controller
Set a consistent hostname before configuring anything else so that your VDI tools can identify each machine reliably. Replace example-controller with your chosen name.
sudo scutil --set ComputerName "example-controller"
sudo scutil --set LocalHostName "example-controller"
sudo scutil --set HostName "example-controller"
dscacheutil -flushcache
1.2 Install Homebrew
Run on: Controller
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
brew doctor
1.3 Install Ansible
Run on: Controller
Modern versions of macOS prevent installing Python packages system-wide. Install Ansible through pipx so it runs in an isolated virtual environment, separate from system Python packages.
brew install pipx
pipx install ansible-core==2.18.4
pipx install ansible==11.4.0
Note: Optionally install sshpass if you prefer password-based authentication instead of key exchange: brew tap esolitos/ipa && brew install esolitos/ipa/sshpass
2. Host Setup
Run the steps in this section on each physical Mac host. Repeat for every host in your fleet.
2.1 Set the Hostname
Run on: Host(s)
Replace example-host0 with the appropriate hostname for each machine. Use a short name (no dots) for HostName.
sudo scutil --set ComputerName "example-host0"
sudo scutil --set LocalHostName "example-host0"
sudo scutil --set HostName "host0"
dscacheutil -flushcache
2.2 Install Homebrew
Run on: Host(s)
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
brew doctor
2.3 Check Python
Run on: Host(s)
Python 3 is required for Ansible to manage the host. Recent versions of macOS provide it through the Xcode Command Line Tools; run the command below to confirm it is present.
python3 --version
3. SSH Key Setup
Run the steps in this section on the controller. Ansible uses SSH key authentication to connect to each host.
3.1 Generate an SSH Key
Run on: Controller
If you do not already have an SSH key, generate one. Accept the default file path or specify your own.
ssh-keygen -t ed25519
3.2 Copy the Key to Each Host
Run on: Controller
Run ssh-copy-id once per host, substituting the correct IP address each time.
ssh-copy-id administrator@10.254.235.xx
Verify the connection works before moving on:
ssh administrator@10.254.235.xx
4. Orchestration Library Setup
Run all steps in this section on the controller.
4.1 Clone the Repository
Run on: Controller
The orchestration library lives at ~/orka-automation/orka-engine-orchestration on the controller.
mkdir -p ~/orka-automation
cd ~/orka-automation
git clone https://github.com/macstadium/orka-engine-orchestration.git
cd orka-engine-orchestration
4.2 Create the Inventory File
Run on: Controller
The inventory file tells Ansible which hosts to manage. Create it at ./inventory inside the repository, along with the group variables file used in the next step:
mkdir -p dev/group_vars/all
touch dev/group_vars/all/main.yml
touch inventory
Inventory file contents - add the IP address of each host:
[hosts]
10.0.100.10
10.0.100.11
10.0.100.12
4.3 Configure Group Variables
Run on: Controller
Group variables apply settings to all hosts in your inventory. Open the file created in the previous step:
vim dev/group_vars/all/main.yml
File contents:
max_vms_per_host: 2
engine_binary: /usr/local/bin/orka-engine
ansible_user: administrator
vm_image: ghcr.io/macstadium/orka-images/sequoia:latest
Variable reference:
| Variable | Description |
|---|---|
| max_vms_per_host | Maximum VMs allowed per host |
| engine_binary | Path to the Orka Engine binary |
| ansible_user | SSH username on each host |
| vm_image | Default base image for VM deployments |
| network_interface | (Optional) Network interface for bridged networking, e.g. en0 |
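If you plan to use bridged networking for VDI deployments, the optional network_interface variable from the table can be added to the same main.yml. A sketch, assuming Ethernet on en0:

```yaml
# Optional: bridge VM networking to this host interface (en0 is an assumption; check your hosts)
network_interface: en0
```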
5. Install Orka Engine
This playbook downloads and installs the Orka Engine package on every host in your inventory. It applies the license key and starts the Orka Engine service automatically.
Run on: Controller
cd ~/orka-automation/orka-engine-orchestration
ansible-playbook install_engine.yml -i inventory -e "orka_license_key=YOUR-LICENSE-KEY" -e "engine_url=https://distribution.macstadium.com/orka-engine/official/3.5.2/orka-engine.pkg"
Note: Obtain your license key and installer URL from your MacStadium account representative.
5.1 Verify the Installation
Run on: Controller
ansible hosts -i inventory -m shell -a "orka-engine --version"
ansible hosts -i inventory -m shell -a "/usr/local/bin/orka-engine info"
5.2 Force Reinstall or Upgrade
Run on: Controller
Add the install_engine_force flag to reinstall or upgrade to a newer version:
ansible-playbook install_engine.yml -i inventory -e "orka_license_key=YOUR-LICENSE-KEY" -e "engine_url=https://distribution.macstadium.com/orka-engine/official/3.5.2/orka-engine.pkg" -e "install_engine_force=true"
6. Deploy and Manage VMs
All playbook commands are run from the controller inside ~/orka-automation/orka-engine-orchestration.
6.1 Deploy VMs
Run on: Controller
Deploy a VM by name. Run once per VM to deploy multiple.
ansible-playbook deploy.yml -i inventory -e "vm_name=vdi-group-01" -e "vm_image=<your-image>"
For VDI workloads, use bridged networking so VMs receive IP addresses directly on your LAN. Specify the network interface (typically en0 for Ethernet):
ansible-playbook deploy.yml -i inventory -e "vm_name=vdi-group-01" -e "vm_image=<your-image>" -e "network_interface=en0"
Run this command once for each additional VM, using a unique vm_name each time.
Note: When network_interface is not specified, VMs deploy in NAT mode. Bridged mode is recommended for VDI because VMs are directly reachable by Citrix Cloud and end users without port forwarding.
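The per-VM deployment can also be scripted with a simple loop. This is a sketch assuming a hypothetical vdi-group-NN naming scheme, the public Sequoia base image, and bridged networking on en0:

```shell
#!/bin/sh
# Deploy three bridged-network VMs with unique names (names, image, and count are illustrative)
for i in 01 02 03; do
  ansible-playbook deploy.yml -i inventory \
    -e "vm_name=vdi-group-$i" \
    -e "vm_image=ghcr.io/macstadium/orka-images/sequoia:latest" \
    -e "network_interface=en0"
done
```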
To preview what will be deployed without making any changes, add --tags plan:
ansible-playbook deploy.yml -i inventory -e "vm_name=vdi-group-01" -e "vm_image=<your-image>" --tags plan
6.2 List VMs
Run on: Controller
ansible-playbook list.yml -i inventory -e "vm_name=vdi-group"
6.3 Manage Individual VMs
Run on: Controller
Start, stop, or delete a specific VM by name. Valid values for desired_state are running, stopped, and absent.
ansible-playbook vm.yml -i inventory -e "vm_name=my-vm-name" -e "desired_state=running"
6.4 Delete VMs
Run on: Controller
Delete a VM by name.
ansible-playbook delete.yml -i inventory -e "vm_name=vdi-group-01"
Run this command once for each VM to remove.
6.5 Pull Images
Run on: Controller
Pre-pull an image to all hosts so it is ready for fast deployment:
ansible-playbook pull_image.yml -i inventory -e "remote_image_name=ghcr.io/macstadium/orka-images/sequoia:latest"
7. Semaphore Web UI
Semaphore provides a browser-based interface for running orchestration playbooks. It is the primary way IT administrators interact with MacStadium VDI. The CLI is available for troubleshooting and advanced or custom workflows. All task templates, inventory, and repository configuration are set up automatically on first launch.
7.1 Install Prerequisites
Run on: Controller
Docker and uv are required to run Semaphore.
Install Docker through Homebrew:
brew install docker docker-compose
Note: The Homebrew formulae install the Docker CLI and Compose only; a container runtime is still required. Docker Desktop is an alternative that bundles everything, if you prefer a GUI installer.
Install uv (used to run the Semaphore configuration script):
brew install uv
7.2 Configure the Environment File
Run on: Controller
Copy the example environment file and generate an encryption key:
cd ~/orka-automation/orka-engine-orchestration
cp semaphore/.env.example semaphore/.env
Generate a 32-byte base64 key
head -c32 /dev/urandom | base64
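If you prefer not to read /dev/urandom directly, the same 32-byte key can be generated with Python's standard library:

```shell
# Generate 32 random bytes and base64-encode them (44-character output)
python3 -c "import secrets, base64; print(base64.b64encode(secrets.token_bytes(32)).decode())"
```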
Open semaphore/.env and paste the generated key as the value for SEMAPHORE_ACCESS_KEY_ENCRYPTION:
Set this value to the key generated above:
SEMAPHORE_ACCESS_KEY_ENCRYPTION=your-generated-key-here
Also set your admin username and password in the same file before starting Semaphore.
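The resulting file might look like the sketch below. Follow your .env.example for the exact variable names; the admin variable names shown here (SEMAPHORE_ADMIN, SEMAPHORE_ADMIN_PASSWORD) are an assumption based on the setup script in section 7.4.

```
SEMAPHORE_ACCESS_KEY_ENCRYPTION=your-generated-key-here
SEMAPHORE_ADMIN=admin
SEMAPHORE_ADMIN_PASSWORD=change-me
```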
7.3 Start Semaphore
Run on: Controller
cd ~/orka-automation/orka-engine-orchestration
docker compose up -d
Semaphore is available at http://localhost:3000. Log in with the admin credentials you set in the .env file.
7.4 Configure SSH Credentials
You can configure SSH credentials either through the setup script or manually in the Semaphore UI.
Option A: Setup script (recommended)
This script requires uv, a lightweight Python package manager that runs the configuration script without a separate virtual environment. The script handles SSH key setup and can optionally set default VM credentials and OCI registry credentials in one step.
SEMAPHORE_ADMIN=$YOUR_ADMIN SEMAPHORE_ADMIN_PASSWORD=$YOUR_ADMIN_PASSWORD uv run ./semaphore/configure_semaphore.py --ssh-key-file $YOUR_KEY
Run uv run ./semaphore/configure_semaphore.py --help to see additional parameters, including default VM username/password and OCI registry credentials.
Option B: Manual (UI)
After logging in, navigate to Key Store and edit the Mac Hosts SSH key. Replace the placeholder credentials with the actual SSH username and private key (or password) for your Mac hosts.
7.5 Available Task Templates
The following templates are pre-configured and ready to use in the Orka Engine Orchestration project. Templates are organized by category. Each template prompts for required inputs when you click Run.
| Template | Playbook | Survey Variables |
|---|---|---|
| VM: Deploy VM | deploy.yml | vm_name, vm_image |
| VM: Delete VM | delete.yml | vm_name |
| VM: Manage VM | vm.yml | vm_name, desired_state (running, stopped, absent) |
| VM: List VMs | list.yml | vm_name (optional) |
| VM: Provision User to VM | provision_user.yml | vm_name, new_username, new_user_password |
| Images: Pull Image | pull_image.yml | remote_image_name |
| Images: Create Image | create_image.yml | remote_image_name, vm_image |
| Images: Push Image | push_image.yml | vm_name, oci_url |
| Images: List Images | list_images.yml | vm_name (optional) |
| Images: Delete Image | delete_image.yml | vm_name |
| Engine: Install Engine | install_engine.yml | orka_license_key, engine_url |
| Android: Install Android SDK | install_android_sdk.yml | install_android_sdk_force (optional) |
| Android: Install SDK Components | sdkmanager_install.yml | platform, image_types (optional) |
| Android: Uninstall SDK Components | sdkmanager_uninstall.yml | platform |
| Android: Deploy AVD | deploy_avd.yml | vm_name, platform (optional), image_type (optional) |
| Android: List AVDs | list_avds.yml | vm_name (optional) |
| Android: Delete AVD | delete_avd.yml | vm_name, avd_index |
| Android: Manage AVD | avd.yml | vm_name, desired_state (running, stopped, absent), avd_index (optional), cpu (optional), memory (optional) |
| Citrix: Install Citrix VDA | install_citrix_vda.yml | vm_name |
| Citrix: Register Citrix VDA | register_citrix_vda.yml | vm_name, enrollment_token |
7.6 Stop Semaphore
Stop the containers but keep data:
docker compose down
Stop the containers and remove all data (database, task history):
docker compose down -v
8. Image Management
8.1 Using MacStadium Public Images
MacStadium maintains public base images through GitHub Container Registry. No authentication is required. Reference them by OCI path: ghcr.io/macstadium/orka-images/[os-version]:latest
Available images include Tahoe, Sequoia, Ventura, and Sonoma. Use the image name matching your target macOS version.
8.2 Building a Custom Golden Image
For VDI deployments, build a golden image with Citrix VDA and your organization’s software pre-installed. The create_image.yml playbook deploys a temporary VM from a base image, runs your customization scripts, pushes the resulting image to your registry, then cleans up the VM.
Add your customization scripts to the /scripts folder in the repository, then run:
Run on: Controller
ansible-playbook create_image.yml -i inventory -e "vm_image=ghcr.io/macstadium/orka-images/sequoia:latest" -e "remote_image_name=registry.example.com/citrix-vda/sequoia-golden:v1.0"
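A customization script dropped into the /scripts folder might look like the sketch below. The script name, the VDA installer path, and the marker file are all illustrative, not part of the library:

```shell
#!/bin/sh
# scripts/install_tools.sh - example customization run inside the temporary VM
set -e

# Install organization software here; the pkg path below is an assumption.
# sudo installer -pkg /tmp/CitrixVDAServiceInstaller.pkg -target /

# Leave a marker so the golden image records when it was built
echo "golden image built $(date -u +%Y-%m-%dT%H:%M:%SZ)" > /tmp/golden-image-build.txt
```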
8.3 Distributing Images to Hosts
Pre-pull images to all hosts before deploying VMs to avoid pull delays at deployment time:
Run on: Controller
ansible-playbook pull_image.yml -i inventory -e "remote_image_name=registry.example.com/citrix-vda/sequoia-golden:v1.0"
8.4 Private Registry Naming
If using a self-hosted registry (Harbor, Docker Registry, JFrog Artifactory), use a consistent naming convention:
registry.example.com/orka/citrix-vda/sequoia-finance:v1.0
registry.example.com/orka/citrix-vda/sequoia-engineering:v2.1
registry.example.com/orka/citrix-vda/tahoe-marketing:latest
Note: Store registry credentials in Ansible Vault, never in plain text. Exclude vault files from Git using .gitignore.
9. Repository and Version Control
Fork the orka-engine-orchestration repository on GitHub so you can customize it and track changes over time.
- Add a .gitignore to exclude SSH keys, vault files, and other secrets.
- Never commit passwords, tokens, or private keys.
- Tag releases used in production.
- Use separate branches for staging and production changes.
- Document your inventory structure and playbook customizations in the repository README.
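A minimal .gitignore covering those exclusions might look like this (patterns are illustrative; adjust to your layout):

```
# SSH keys and secrets - example patterns
*.pem
id_ed25519*
*vault*.yml
.env
semaphore/.env
```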
Support Resources
Note: The following remote host CLI commands are for advanced troubleshooting and diagnostics. Prefer the Ansible playbooks or Semaphore UI when they are available.
| Resource | Details |
|---|---|
| MacStadium Support | support@macstadium.com |
| orka-engine-orchestration repo | orka-engine-orchestration on GitHub |
| Orka Engine CLI help | orka-engine --help |
| VM commands | orka-engine vm --help |
| Image commands | orka-engine image --help |
| Ansible docs | Ansible documentation |
| Citrix VDA for macOS | Citrix documentation |