Overview

This guide walks through the full setup of Orka Engine Orchestration for VDI deployments. It covers network preparation, controller and host configuration, Orka Engine installation, VM deployment, and the optional Semaphore web interface. There are three roles in this setup:
  • Controller: the machine running Ansible and the orchestration playbooks. This is typically a dedicated VM.
  • Hosts: physical Mac hardware running Orka Engine and the macOS VMs.
  • VMs: macOS virtual machines provisioned and managed through the orchestration library.
Note: Total deployment time depends on node count; budget 30-60 minutes to set up your initial controller and first host.

Prerequisites

  • An Ansible controller (macOS or Linux, minimum: 2 vCPU, 4 GB RAM, 20 GB storage; recommended: 4 vCPU, 8 GB RAM, 50 GB storage)
  • One or more physical Apple Silicon Mac hosts running macOS Sonoma, Sequoia, or Tahoe
    Minimum specifications per Orka host:
      • Apple Silicon processor (any M-series chip)
      • 8 GB RAM
      • 512 GB storage
      • 1 Gb Ethernet
      • macOS 13 (Ventura) or later
    Recommended specifications per Orka host:
      • M2 Pro, M4 Pro, or higher
      • 16 GB+ RAM (32 GB+ recommended for high-density deployments)
      • 1 TB+ storage
      • 10 Gb Ethernet
      • macOS 14 (Sonoma), 15 (Sequoia), or 26 (Tahoe)
  • An Orka Engine license key and installer URL (contact your MacStadium account representative)
  • An administrator account on each machine
  • Network connectivity between the controller and all Mac hosts
Note: Orka 3.5.0 or later is required for bridged networking support. This guide assumes the controller is deployed on a dedicated macOS device.

Network Configuration

Static IP Assignment

Assign a static IP address to each Mac host before installing Orka Engine. Either configure the IP manually or use a DHCP reservation. To configure manually:
  1. Open System Settings then Network, then select your interface (Ethernet or Wi-Fi).
  2. Set IP Address, Subnet Mask, Router, and DNS Servers from your management VLAN.
  3. Apply settings and verify connectivity.
To use a DHCP reservation:
  1. Note the MAC address of each host from System Settings then Network then Details then Hardware.
  2. Configure your DHCP server to assign a fixed IP to each MAC address.
  3. Verify the host receives the reserved IP.
Document the following for each host before proceeding. This information populates the Ansible inventory file.
Field                Example
Hostname             mac-node-1
IP address           10.0.100.10
MAC address          a1:b2:c3:d4:e5:f6
Hardware model       Mac mini M4
Network interface    en0

Firewall Requirements (Citrix DaaS)

If deploying Orka for VDI with Citrix DaaS, ensure the following traffic is permitted:
Outbound from VMs (TCP 443):
  • [customer_ID].xendesktop.net - Citrix DaaS controller
  • *.*.nssvc.net - Citrix Gateway Service
  • *.citrixworkspacesapi.net - Gateway connectivity checks
  • On-premises delivery controller FQDNs (for CVAD deployments)
If Citrix Rendezvous is enabled, also allow outbound TCP/UDP 443 to *.*.nssvc.net.
Inbound to VMs:
  • TCP/UDP 1494 and 2598 for HDX sessions

1.  Controller Setup

Run the steps in this section on the designated Ansible controller machine.

1.1  Set the Hostname

Run on: Controller Set a consistent hostname before configuring anything else so that your VDI tooling can identify each host device by name. Replace example-controller with your chosen name.
sudo scutil --set ComputerName "example-controller"
sudo scutil --set LocalHostName "example-controller"
sudo scutil --set HostName "example-controller"
dscacheutil -flushcache

1.2  Install Homebrew

Run on: Controller

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

brew doctor

1.3  Install Ansible

Run on: Controller Modern versions of macOS prevent installing Python packages system-wide. Install Ansible through pipx, which places it in an isolated virtual environment separate from the system Python packages.
brew install pipx
pipx install ansible-core==2.18.4
pipx install ansible==11.4.0
Note: Optionally install sshpass if you prefer password-based authentication instead of key exchange: brew tap esolitos/ipa && brew install esolitos/ipa/sshpass

2.  Host Setup

Run the steps in this section on each physical Mac host. Repeat for every host in your fleet.

2.1  Set the Hostname

Run on: Host(s) Replace example-host0 with the appropriate hostname for each machine. Use a short name (no dots) for HostName.
sudo scutil --set ComputerName "example-host0"
sudo scutil --set LocalHostName "example-host0"
sudo scutil --set HostName "host0"
dscacheutil -flushcache

2.2  Install Homebrew

Run on: Host(s)
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

brew doctor

2.3  Check Python

Run on: Host(s) Python 3 is required for Ansible to manage the host. macOS provides it through the Xcode Command Line Tools; run the command below to confirm it is present and working.
python3 --version
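To fail fast rather than discover a missing or outdated interpreter mid-playbook, the check can be made explicit. The version floor below is an assumption; consult the support matrix for your ansible-core version:

```shell
# Abort with a clear error if Python 3 is absent or older than 3.8
# (the 3.8 floor is an assumption; check your ansible-core support matrix)
python3 -c 'import sys; assert sys.version_info >= (3, 8), sys.version'
echo "Python OK: $(python3 --version)"
```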

3.  Configure SSH Access

Run the steps in this section on the controller. Ansible uses SSH key authentication to connect to each host.

3.1  Generate an SSH Key

Run on: Controller If you do not already have an SSH key, generate one. Accept the default file path or specify your own.
ssh-keygen -t ed25519
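For unattended provisioning, the keypair can also be generated non-interactively. The file name and comment below are illustrative, and the empty passphrase trades security for automation convenience:

```shell
# Generate a dedicated Ed25519 keypair for Orka automation, skipping prompts.
# Key path and comment are illustrative; drop -N "" to be prompted for a passphrase.
KEY="$HOME/.ssh/orka_ed25519"
if [ ! -f "$KEY" ]; then
  mkdir -p "$HOME/.ssh" && chmod 700 "$HOME/.ssh"
  ssh-keygen -q -t ed25519 -N "" -C "orka-controller" -f "$KEY"
fi
ls -l "$KEY" "$KEY.pub"
```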

3.2  Copy the Key to Each Host

Run on: Controller Run ssh-copy-id once per host, substituting the correct IP address each time.
ssh-copy-id administrator@10.254.235.xx
Verify the connection works before moving on:
ssh administrator@10.254.235.xx
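To avoid typing IP addresses repeatedly, you can optionally add host aliases to ~/.ssh/config on the controller. The names, addresses, and key path below are examples:

```
Host mac-node-1
    HostName 10.0.100.10
    User administrator
    IdentityFile ~/.ssh/id_ed25519

Host mac-node-2
    HostName 10.0.100.11
    User administrator
    IdentityFile ~/.ssh/id_ed25519
```

With this in place, `ssh mac-node-1` reaches the host without spelling out the IP or username.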

4.  Configure the Orchestration Library

Run all steps in this section on the controller.

4.1  Clone the Repository

Run on: Controller The orchestration library lives at ~/orka-automation/orka-engine-orchestration on the controller.
mkdir -p ~/orka-automation
cd ~/orka-automation
git clone https://github.com/macstadium/orka-engine-orchestration.git
cd orka-engine-orchestration

4.2  Create the Inventory File

Run on: Controller The inventory file tells Ansible which hosts to manage. Create it at ./inventory inside the repository. First, create the group variables file that the next section fills in:
mkdir -p dev/group_vars/all
touch dev/group_vars/all/main.yml
Then create the inventory file:
vim inventory
Inventory file contents - add the IP address of each host:
[hosts]
10.0.100.10
10.0.100.11
10.0.100.12
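Hosts can optionally be given readable aliases that map to their IPs via ansible_host; the names below are illustrative, echoing the example hostnames documented earlier:

```ini
[hosts]
mac-node-1 ansible_host=10.0.100.10
mac-node-2 ansible_host=10.0.100.11
mac-node-3 ansible_host=10.0.100.12
```

Aliases make playbook output easier to read when managing a larger fleet.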

4.3  Configure Group Variables

Run on: Controller Group variables apply settings to all hosts in your inventory. Open the file created in the previous step:
vim dev/group_vars/all/main.yml
File contents:
max_vms_per_host: 2
engine_binary: /usr/local/bin/orka-engine
ansible_user: administrator
vm_image: ghcr.io/macstadium/orka-images/sequoia:latest
Variable reference:
max_vms_per_host     Maximum VMs allowed per host
engine_binary        Path to the Orka Engine binary
ansible_user         SSH username on each host
vm_image             Default base image for VM deployments
network_interface    (Optional) Network interface for bridged networking, e.g. en0
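Putting the variables together, a complete main.yml for a bridged-networking VDI fleet might look like this (values are examples):

```yaml
max_vms_per_host: 2
engine_binary: /usr/local/bin/orka-engine
ansible_user: administrator
vm_image: ghcr.io/macstadium/orka-images/sequoia:latest
# Optional: attach VMs to the LAN through the host's primary Ethernet interface
network_interface: en0
```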

5.  Install Orka Engine

This playbook downloads and installs the Orka Engine package on every host in your inventory. It applies the license key and starts the Orka Engine service automatically. Run on: Controller
cd ~/orka-automation/orka-engine-orchestration

ansible-playbook install_engine.yml -i inventory -e "orka_license_key=YOUR-LICENSE-KEY" -e "engine_url=https://distribution.macstadium.com/orka-engine/official/3.5.2/orka-engine.pkg"
Note: Obtain your license key and installer URL from your MacStadium account representative.

5.1  Verify the Installation

Run on: Controller
ansible hosts -i inventory -m shell -a "orka-engine --version"
ansible hosts -i inventory -m shell -a "/usr/local/bin/orka-engine info"

5.2  Force Reinstall or Upgrade

Run on: Controller Add the install_engine_force flag to reinstall or upgrade to a newer version:
ansible-playbook install_engine.yml -i inventory -e "orka_license_key=YOUR-LICENSE-KEY" -e "engine_url=https://distribution.macstadium.com/orka-engine/official/3.5.2/orka-engine.pkg" -e "install_engine_force=true"

6.  Deploy and Manage VMs

All playbook commands are run from the controller inside ~/orka-automation/orka-engine-orchestration.

6.1  Deploy VMs

Run on: Controller Deploy a group of VMs. The vm_group name is used to track and manage the VMs as a set.
ansible-playbook deploy.yml -i inventory -e "vm_group=vdi-group" -e "desired_vms=4"
For VDI workloads, use bridged networking so VMs receive IP addresses directly on your LAN. Specify the network interface (typically en0 for Ethernet):
ansible-playbook deploy.yml -i inventory -e "vm_group=vdi-group" -e "desired_vms=4" -e "network_interface=en0"
Note: When network_interface is not specified, VMs deploy in NAT mode. Bridged mode is recommended for VDI because VMs are directly reachable by Citrix Cloud and end users without port forwarding. To preview what will be deployed without making any changes, add --tags plan:
ansible-playbook deploy.yml -i inventory -e "vm_group=vdi-group" -e "desired_vms=4" --tags plan
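If you deploy the same groups repeatedly, a small wrapper can assemble the command line. The function name and argument order here are hypothetical, and the sketch only prints the command so you can review it before running it:

```shell
# Hypothetical helper: prints the deploy command for a VM group.
# Pass an interface name as the third argument to request bridged mode.
deploy_cmd() {
  group="$1"; count="$2"; iface="${3:-}"
  cmd="ansible-playbook deploy.yml -i inventory -e vm_group=$group -e desired_vms=$count"
  if [ -n "$iface" ]; then
    cmd="$cmd -e network_interface=$iface"
  fi
  printf '%s\n' "$cmd"
}

deploy_cmd vdi-group 4 en0
```

Piping the output through `sh` (or removing the printf indirection) would execute the command directly.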

6.2  List VMs

Run on: Controller
ansible-playbook list.yml -i inventory -e "vm_group=vdi-group"

6.3  Manage Individual VMs

Run on: Controller Start, stop, or delete a specific VM by name. Valid values for desired_state are running, stopped, and absent.
ansible-playbook vm.yml -i inventory -e "vm_name=my-vm-name" -e "desired_state=running"

6.4  Delete VMs

Run on: Controller Delete a specific number of VMs from a group. Add --tags plan to preview before deleting.
ansible-playbook delete.yml -i inventory -e "vm_group=vdi-group" -e "delete_count=2"

6.5  Pull Images

Run on: Controller Pre-pull an image to all hosts so it is ready for fast deployment:
ansible-playbook pull_image.yml -i inventory -e "remote_image_name=ghcr.io/macstadium/orka-images/sequoia:latest"

7.  Semaphore Web UI (Optional)

Semaphore provides a browser-based interface for running orchestration playbooks without CLI access. All task templates, inventory, and repository configuration are set up automatically on first launch.

7.1  Install Docker

Run on: Controller Docker is required to run Semaphore. Install it through Homebrew:
brew install docker docker-compose
Note: Docker Desktop is an alternative if you prefer a GUI installer. Either option works.

7.2  Configure the Environment

Run on: Controller Copy the example environment file and generate an encryption key:
cd ~/orka-automation/orka-engine-orchestration

cp semaphore/.env.example semaphore/.env
Generate a 32-byte base64 key
head -c32 /dev/urandom | base64
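Before pasting the key, you can optionally confirm it really encodes 32 bytes: 32 random bytes always base64-encode to exactly 44 characters.

```shell
# Generate the key and verify its length (32 bytes -> 44 base64 characters)
KEY=$(head -c32 /dev/urandom | base64)
printf '%s\n' "$KEY"
test "${#KEY}" -eq 44 && echo "key length OK"
```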
Open semaphore/.env and paste the generated key as the value for SEMAPHORE_ACCESS_KEY_ENCRYPTION:
vim semaphore/.env
Set this value to the key generated above:
SEMAPHORE_ACCESS_KEY_ENCRYPTION=your-generated-key-here
Also set your admin username and password in the same file before starting Semaphore.
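A finished .env might resemble the sketch below. The variable names follow common Semaphore container conventions and are assumptions here; treat the repository's .env.example as the authority:

```
SEMAPHORE_ACCESS_KEY_ENCRYPTION=PASTE-GENERATED-KEY-HERE
SEMAPHORE_ADMIN=admin
SEMAPHORE_ADMIN_PASSWORD=change-me
SEMAPHORE_ADMIN_NAME=Admin
SEMAPHORE_ADMIN_EMAIL=admin@example.com
```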

7.3  Start Semaphore

Run on: Controller
cd ~/orka-automation/orka-engine-orchestration
docker compose up -d
Semaphore is available at http://localhost:3000. Log in with the admin credentials you set in the .env file.

7.4  Configure SSH Credentials

After logging in, navigate to Key Store and edit the Mac Hosts SSH key. Replace the placeholder credentials with the actual SSH username and private key (or password) for your Mac hosts.

7.5  Available Task Templates

The following templates are pre-configured and ready to use in the Orka Engine Orchestration project. Each template prompts for required inputs when you click Run.
Template          Playbook             Required Inputs
Deploy VMs        deploy.yml           vm_group, desired_vms
Delete VMs        delete.yml           vm_group, delete_count
Manage VM         vm.yml               vm_name, desired_state
List VMs          list.yml             vm_group (optional)
Pull Image        pull_image.yml       remote_image_name
Install Engine    install_engine.yml   orka_license_key, engine_url
Create Image      create_image.yml     remote_image_name, vm_image

7.6  Stop Semaphore

Stop containers but keep data
docker compose down
Stop and remove all data (database, task history)
docker compose down -v

8.  Image Management

8.1  Using MacStadium Public Images

MacStadium maintains public base images through GitHub Container Registry. No authentication is required. Reference them by OCI path:
ghcr.io/macstadium/orka-images/[os-version]:latest
Available images include Tahoe, Sequoia, Ventura, and Sonoma. Use the image name matching your target macOS version.

8.2  Building a Custom Golden Image

For VDI deployments, build a golden image with Citrix VDA and your organization's software pre-installed. The create_image.yml playbook deploys a temporary VM from a base image, runs your customization scripts, pushes the resulting image to your registry, then cleans up the VM. Run on: Controller Add your customization scripts to the /scripts folder in the repository, then run:
ansible-playbook create_image.yml -i inventory -e "vm_image=ghcr.io/macstadium/orka-images/sequoia:latest" -e "remote_image_name=registry.example.com/citrix-vda/sequoia-golden:v1.0"

8.3  Distributing Images to Hosts

Pre-pull images to all hosts before deploying VMs to avoid pull delays at deployment time: Run on: Controller
ansible-playbook pull_image.yml -i inventory -e "remote_image_name=registry.example.com/citrix-vda/sequoia-golden:v1.0"

8.4  Private Registry Naming

If using a self-hosted registry (Harbor, Docker Registry, JFrog Artifactory), use a consistent naming convention: registry.example.com/orka/citrix-vda/sequoia-finance:v1.0 registry.example.com/orka/citrix-vda/sequoia-engineering:v2.1 registry.example.com/orka/citrix-vda/tahoe-marketing:latest Note: Store registry credentials in Ansible Vault, never in plain text. Exclude vault files from Git using .gitignore.
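A .gitignore sketch covering the note above; the patterns are examples, so adjust them to your repository layout:

```
# Secrets and credentials: never commit these
*vault*.yml
.env
semaphore/.env
*.pem
*.key
```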

9.  Repository and Version Control

Fork the orka-engine-orchestration repository on GitHub so you can customize it and track changes over time.
  • Add a .gitignore to exclude SSH keys, vault files, and other secrets.
  • Never commit passwords, tokens, or private keys.
  • Tag releases used in production.
  • Use separate branches for staging and production changes.
  • Document your inventory structure and playbook customizations in the repository README.

Support Resources

Note: The following remote host CLI commands are for advanced troubleshooting and diagnostics. It is recommended to use the Ansible playbooks or Semaphore UI when available.
MacStadium Support               support@macstadium.com
orka-engine-orchestration repo   orka-engine-orchestration on GitHub
Orka Engine CLI help             orka-engine --help
VM commands                      orka-engine vm --help
Image commands                   orka-engine image --help
Ansible docs                     Ansible documentation
Citrix VDA for macOS             Citrix documentation