
Environment Preparation

Orka-Engine-Orchestration is the control plane for managing virtual desktops. It runs in a dedicated VM and connects to physical Mac hosts with Orka Engine deployed. Orka Engine provides the virtualization layer that enables macOS VMs to run on physical Macs.

Network Configuration (VLANs, Bridged Networking)

Pre-installation network configuration:

Before installing Orka Engine, configure the network settings on each Mac host as follows:

Static IP assignment (recommended):
  1. Open System Settings → Network → Select interface (Ethernet or Wi-Fi)
  2. Configure manually:
 * IP Address: Assign from your management VLAN (e.g., 10.0.100.10)
 * Subnet Mask: Match your network (e.g., 255.255.255.0)
 * Router: Gateway address (e.g., 10.0.100.1)
 * DNS Servers: Your organization's DNS servers
  3. Apply settings and verify connectivity
DHCP reservation (alternative):
  1. Note the Mac’s MAC address (System Settings → Network → Details → Hardware)
  2. Configure DHCP server to always assign the same IP to this MAC address
  3. Verify the Mac receives the reserved IP address
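If you prefer to script the static IP assignment rather than click through System Settings, macOS's networksetup utility can apply the same values. This is a minimal sketch; the service name "Ethernet" and all addresses are examples carried over from the steps above, so substitute your own:

```shell
# List network service names first; "Ethernet" below is an assumption
networksetup -listallnetworkservices

# Set a static IP, subnet mask, and router (gateway) on the Ethernet service
sudo networksetup -setmanual "Ethernet" 10.0.100.10 255.255.255.0 10.0.100.1

# Point the service at your organization's DNS servers (example addresses)
sudo networksetup -setdnsservers "Ethernet" 10.0.100.53 10.0.100.54
```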

Document information for each host:

  • Hostname (e.g., mac-node-1, mac-node-2)
  • IP address (e.g., 10.0.100.10, 10.0.100.11)
  • MAC address
  • Hardware model (e.g., Mac mini M4, Mac Studio M2 Ultra)
  • Network interface used (en0, en1, etc.)
This documentation serves as your source for creating the Ansible inventory file that drives Orka Engine Orchestration operations.
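Most of the values above can be collected from the command line on each Mac. A hedged sketch, assuming the active interface is en0:

```shell
# Hostname, IPv4 address, MAC address, and hardware model (en0 assumed)
hostname
ipconfig getifaddr en0
ifconfig en0 | awk '/ether/ {print $2}'
system_profiler SPHardwareDataType | grep "Model Name"
```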

Network Configuration for Orka VMs

Bridged vs. NAT Networking

Orka Engine supports two networking modes for VMs:

NAT Mode (Default)
  • VMs receive internal IP addresses not directly accessible from the network
  • Requires port forwarding to access VMs from outside the host
  • Suitable for CI/CD workloads where direct network access isn’t required
Bridged Mode (Recommended for VDI)
  • VMs receive IP addresses on the same subnet as the physical Mac hosts
  • Direct Layer 2 connectivity simplifies network configuration
  • No NAT complications for inbound Citrix HDX connections
  • VMs are directly reachable by Citrix Cloud and end users
Enabling Bridged Networking

To deploy VMs with bridged networking, specify the network_interface variable (typically en0 for Ethernet) when running playbooks:
ansible-playbook deploy.yml -i inventory -e "vm_group=citrix-vda" -e "desired_vms=2" -e "network_interface=en0"
When network_interface is not specified, VMs deploy in NAT mode.

Network Requirements

Ensure your network meets these requirements for bridged VDI deployments:
  • DHCP: Sufficient IP address scope to assign addresses to VMs, or configure static reservations
  • DNS: VMs must resolve Citrix Cloud domains (see Firewall Configuration below)
  • Routing: VMs on the bridged network must have outbound internet access for Citrix registration
Note: Detailed network segmentation (VLANs, firewall zones) should be planned based on your organization's security requirements. Consult your network team for enterprise deployments.

Firewall Configuration (Citrix-specific)

If you are deploying Orka for VDI with Citrix DaaS, ensure the following connections are permitted:

Outbound from VMs (TCP 443):
  • [customer_ID].xendesktop.net - Citrix DaaS controller
  • *.*.nssvc.net - Citrix Gateway Service
  • *.citrixworkspacesapi.net - Gateway connectivity checks
  • On-premises delivery controller FQDNs (for CVAD deployments)
Inbound to VMs:
  • TCP ports 1494 and 2598 - HDX sessions
  • UDP ports 1494 and 2598 - HDX sessions
Outbound HDX via Rendezvous (if enabled):
  • TCP/UDP port 443 to *.*.nssvc.net
Proxy support: Citrix VDA supports proxy configuration for both enrollment/registration and HDX sessions. Be sure to document your proxy settings including any authentication requirements, bypass lists, and PAC file URLs.
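From inside a VM, the outbound rules above can be spot-checked with standard tools. This is a sketch, not a full validation; the FQDN below is a placeholder for your tenant's [customer_ID].xendesktop.net address:

```shell
# Placeholder FQDN; substitute your tenant's actual Citrix DaaS address
CITRIX_FQDN="your-customer-id.xendesktop.net"

# DNS resolution
dig +short "$CITRIX_FQDN"

# Outbound TCP 443 reachability (5-second timeout)
nc -z -w 5 "$CITRIX_FQDN" 443 && echo "TCP 443 reachable"
```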

Install Orka-Engine-Orchestration in a VM and Orka Engine on Mac Hardware

Getting Started

  1. Obtain Orka Engine installer and License:
 * Work with your MacStadium Solution Engineer and Account Exec to obtain Orka License Keys for your environment.

 * Obtain the Orka Engine download URL from your MacStadium Solution Engineer. This URL is passed to the installation playbook as the engine_url variable (e.g., https://distribution.macstadium.com/orka-engine/official/3.5.2/orka-engine.pkg).

   * Note: Orka Engine version 3.5.0 or later is required for bridged networking support

 * Orka Engine requires a valid license key to operate. To request a license key:
1. Contact your MacStadium account representative
2. Provide your organization's name, deployment details, and use case
3. Your sales representative will submit your information to our team for provisioning
4. Receive the license key via email
5. Activate it: orka-engine license set --key YOUR_KEY
 * Orka Engine is installed on macOS hosts using the Ansible playbook from the [orka-engine-orchestration](https://github.com/macstadium/orka-engine-orchestration "https://github.com/macstadium/orka-engine-orchestration") repository. Orka Engine is supported on macOS Sonoma, Sequoia, and Tahoe.
  2. Set up the orka-engine-orchestration repository on your control machine:
 1. Fork or clone the repository: `git clone https://github.com/macstadium/orka-engine-orchestration.git`
 2. Install required Ansible dependencies.
 3. Create or update the inventory file with the IPs of your physical macOS nodes (see the Ansible Inventory Management section below).
Control Machine Requirements:
  • Minimum: 2 vCPU cores, 4GB RAM, 20GB available storage
  • Recommended: 4 vCPU cores, 8GB RAM, 50GB available storage

Set up SSH key-based authentication to Mac nodes:

cat >> ~/.ssh/config << 'EOF'
Host mac-node-*
    User admin
    IdentityFile ~/.ssh/ansible_orka_key
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
EOF

chmod 600 ~/.ssh/config
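The config above assumes a dedicated key pair at ~/.ssh/ansible_orka_key. If you have not generated one yet, a minimal sketch:

```shell
# Generate a dedicated Ed25519 key pair for Ansible automation
# (-N "" means no passphrase; add one plus ssh-agent if your policy requires it)
ssh-keygen -t ed25519 -f ~/.ssh/ansible_orka_key -N "" -C "ansible-orka"
```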

Deploy SSH keys

Copy public key to each Orka host
ssh-copy-id -i ~/.ssh/ansible_orka_key.pub admin@10.0.100.10
ssh-copy-id -i ~/.ssh/ansible_orka_key.pub admin@10.0.100.11
ssh-copy-id -i ~/.ssh/ansible_orka_key.pub admin@10.0.100.12
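Once the keys are in place, a quick loop confirms non-interactive login works on every host (IPs from the examples above):

```shell
# BatchMode=yes fails immediately instead of prompting for a password,
# so any host still requiring interactive auth is flagged right away
for ip in 10.0.100.10 10.0.100.11 10.0.100.12; do
  ssh -i ~/.ssh/ansible_orka_key -o BatchMode=yes admin@"$ip" hostname
done
```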

Register and Configure Mac Hosts with Ansible Inventory Management

Ansible inventory management enables centralized orchestration of Orka hosts and VMs. Proper inventory structure simplifies deployment, updates, and lifecycle management.

Organize your Ansible inventory file (inventory.ini) to reflect your infrastructure:

Host groups:
  • [hosts] - Physical Mac machines running Orka Engine
Essential host variables:
  • ansible_user - SSH username (typically admin)
  • vm_image - Default VM image for deployments
  • max_vms_per_host - Maximum VMs allowed per host (optional)
  • engine_binary - Path to the Orka engine binary (default: defined in your inventory or group vars)
Example dev/inventory.ini configuration:
[hosts] 

10.0.100.10 
10.0.100.11 
10.0.100.12
Example group_vars/all/main.yml configuration:
ansible_user: admin
vm_image: ghcr.io/macstadium/orka-images/sequoia:latest
max_vms_per_host: 2
engine_binary: /usr/local/bin/orka-engine
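To confirm Ansible parses the inventory and group variables as intended, the ansible-inventory command can render the merged result (the inventory path is an example):

```shell
# Dump the fully resolved inventory, including group_vars, as JSON
ansible-inventory -i dev/inventory.ini --list

# Or show the group/host hierarchy at a glance
ansible-inventory -i dev/inventory.ini --graph
```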

Network Accessibility

Confirm each host is reachable at its assigned IP address and that Ansible can gather facts from it:
ansible hosts -i inventory -m setup

Install Orka Engine

Perform the following installation after updating the inventory files with information for each physical Mac:
cd ~/orka-automation/orka-engine-orchestration

ansible-playbook install_engine.yml -i environments/production/inventory -e "orka_license_key=YOUR-LICENSE-KEY-HERE" -e "engine_url=ORKA-ENGINE-URL-HERE"
What the playbook does:
  1. Downloads the Orka Engine package from the URL provided via the engine_url variable
  2. Installs the Orka Engine package on each Mac host with administrative privileges
  3. Applies the license key configuration
  4. Starts the Orka Engine service
  • Note: This installs Orka Engine on EACH physical Mac host defined in your inventory file. The installation is automated and runs in parallel across your Mac fleet.

Force Redeployment or Upgrade

To force reinstallation or upgrade to a newer version:
ansible-playbook install_engine.yml -i environments/production/inventory -e "orka_license_key=YOUR-LICENSE-KEY-HERE" -e "engine_url=ORKA-ENGINE-URL-HERE" -e "install_engine_force=true"

Post-Installation Configuration and Verification

After installation completes:

Orka Engine Service:
  • Service starts automatically via launchd
  • Service status: Verify with orka-engine info

Verification

Verify Orka Engine is properly installed and running using the following commands:
ansible hosts -i environments/production/inventory -m shell -a "orka-engine --version" 

ansible hosts -i environments/production/inventory -m shell -a "ps aux | grep -v grep | grep orka-engine" 

ansible hosts -i environments/production/inventory -m shell -a "/usr/local/bin/orka-engine info"

Network Validation

Before deploying Orka for VDI at scale, you may wish to validate your network configuration by reviewing the following:
  1. Connectivity tests: Verify VMs can reach all required Citrix endpoints
  2. Port tests: Confirm TCP 443 outbound and HDX ports inbound
  3. DNS resolution: Test name resolution for Citrix domains
  4. Latency measurement: Establish baseline network performance
  5. Bandwidth testing: Verify sufficient throughput for HDX sessions
Create an Ansible playbook that performs these checks automatically across all Orka hosts. Run this validation after any network changes and before production deployments. The deploy.yml and delete.yml playbooks support --tags plan to preview what will be deployed or deleted without making any changes. Document your network architecture in your Git repository, including VLAN diagrams, IP allocation tables, and firewall rules.
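Before writing a full validation playbook, the same checks can be run ad hoc across the fleet with Ansible's shell module. A sketch under stated assumptions: the FQDN is a placeholder for your [customer_ID].xendesktop.net address, and the gateway IP is the earlier example:

```shell
# DNS resolution and outbound TCP 443 from every Orka host
ansible hosts -i inventory -m shell -a "dig +short your-customer-id.xendesktop.net"
ansible hosts -i inventory -m shell -a "nc -z -w 5 your-customer-id.xendesktop.net 443 && echo open"

# Baseline latency to the gateway (example address)
ansible hosts -i inventory -m shell -a "ping -c 5 10.0.100.1"
```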

Create a Git repository

Start by forking the orka-engine-orchestration repository on GitHub. This gives you a versioned starting point with all required playbooks, roles, and inventory structure already in place. You can customize it for your environment and track your changes over time. When setting up a Git repository, it is important to consider the following best practices:
  • Create a .gitignore file to exclude sensitive data from Git (SSH keys, vault files, or other secure credentials)
  • Commit your Ansible playbook structure, golden templates, and role definitions
  • Never commit passwords, tokens, or private keys
  • Tag releases for production deployments
  • Maintain separate branches for development, staging, and production
  • Document your playbook’s purpose and its directory structure in your project’s README
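As a starting point, a .gitignore along these lines keeps keys and vault material out of the repository; the file names below are conventions used in this guide, not requirements, so match them to your actual layout:

```shell
# Create a .gitignore that excludes credentials and local Ansible artifacts
cat > .gitignore << 'EOF'
# SSH private keys
*.pem
*_key

# Ansible Vault password file
.vault_pass

# Local Ansible artifacts
*.retry
EOF
```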

Create an OCI registry

You will need an OCI-compliant container registry to store and distribute macOS VM images. MacStadium provides public base images through GitHub Container Registry at ghcr.io/macstadium/orka-images. This includes:
  • macOS Tahoe, Sequoia, Ventura, and Sonoma base images
  • Regular updates aligned with macOS releases
This option requires no additional setup or authentication. Simply reference the public image paths in your playbooks using the standard naming pattern:
ghcr.io/macstadium/orka-images/[os-version]:latest
If you are using Packer in addition to Ansible, MacStadium offers Packer templates leveraging our Sequoia and Tahoe base images extended with common macOS development tools. For production environments, you may also opt to deploy images to an internal, self-hosted registry such as Harbor, Docker Registry, or JFrog Artifactory. This option provides:
  • Full control over image versions and distribution
  • Faster image pulls within your network
  • The ability to store customized golden images with Citrix VDA pre-installed
  • Compliance with corporate security policies
  • Image scanning and vulnerability assessment
Registry naming conventions

Structure your private registry paths logically:
registry.example.com/orka/citrix-vda/sonoma-finance:v1.0


registry.example.com/orka/citrix-vda/sequoia-engineering:v2.1


registry.example.com:5000/orka/citrix-vda/tahoe-marketing:latest
This format includes the registry URL, project/organization, image purpose, and a semantic version tag.

Credential Management

Store your registry credentials in an Ansible Vault, never in plain text. Your vault file should include:
  • Container registry URL and port
  • Username and password or API tokens
  • Citrix enrollment token and customer ID
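The ansible-vault CLI handles creating and editing these files. A hedged sketch, where the file path and variable name are examples:

```shell
# Create a new encrypted vault file for registry and Citrix credentials
ansible-vault create group_vars/all/vault.yml

# Or encrypt a single value for pasting into an existing vars file
ansible-vault encrypt_string 'REGISTRY-TOKEN-VALUE' --name 'registry_password'
```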
Reference these vault variables in your playbooks, and configure Ansible to automatically decrypt them using a vault password file (kept secure and excluded from Git).

Image workflow:
  1. Start with a MacStadium base image. Reference it by its OCI path (e.g. ghcr.io/macstadium/orka-images/sonoma:latest).
  2. Customize it with Citrix VDA and your organization’s software
  3. Build and push the image by running create_image.yml. This deploys a temporary VM from the base image, runs your scripts, pushes the resulting image to your registry, then cleans up the VM, for example:
ansible-playbook create_image.yml -i inventory -e "vm_image=ghcr.io/macstadium/orka-images/sonoma:latest" -e "remote_image_name=registry.example.com/citrix-vda/sonoma-golden:v1.0"
  4. Cache the image on all hosts so it’s available for fast deployment, for example:
ansible-playbook pull_image.yml -i inventory -e "remote_image_name=registry.example.com/citrix-vda/sonoma-golden:v1.0" 
  5. Deploy VMs from this golden image, for example:
ansible-playbook deploy.yml -i inventory -e "vm_group=citrix-vda" -e "desired_vms=2" -e "vm_image=registry.example.com/citrix-vda/sonoma-golden:v1.0"

Support Resources

For additional assistance with Ansible playbooks and Orka operations:
  • MacStadium Support Portal
  • Orka CLI Reference:
orka-engine --help
  • VM Commands:
orka-engine vm --help
  • Image Commands:
orka-engine image --help