Mantle 2.0 Guide: VMware and Linux Builds
Overview
This guide describes the Datacenter Build workflows in Mantle 2.0 for both VMware and Enterprise Linux platforms. These workflows are designed to standardize infrastructure deployment from bare metal through initial configuration, optional virtualization components, and post-build automation.
Use this guide when you need to:
- Provision multiple VMware ESXi hosts through PXE
- Deploy and configure vCenter as part of a managed build workflow
- Optionally enable vSAN and distributed switching
- Build Enterprise Linux hosts through PXE and optionally configure KVM virtualization
- Run post-install VM, OVA, and playbook tasks from within Mantle
1. Datacenter Builds in Mantle 2.0
Datacenter Builds provide an end-to-end automation workflow for standing up infrastructure from bare metal. Mantle captures the intended end state in the build form and executes the build in a predictable sequence.
For VMware-based deployments, Mantle can:
- PXE provision ESXi across multiple hosts
- Deploy vCenter to a selected host
- Create the vSphere datacenter and cluster
- Add hosts into the cluster
- Configure vSAN, if selected
- Create a Distributed Virtual Switch (DVS), if selected
- Optionally deploy OVAs and run playbooks after infrastructure creation
For Enterprise Linux deployments, Mantle can:
- PXE provision one or more Linux hosts
- Apply management network and install settings
- Optionally configure virtualization settings for KVM
- Run post-deployment playbooks
2. Prerequisites
Before you start
Required connectivity
- Mantle must have reliable network connectivity to the PXE-facing interfaces of each target server.
- PXE boot traffic, imaging services, and build-time orchestration paths must be reachable and stable.
Required inputs
- Host inventory: a list of target systems, or another reliable method of identifying the systems to be provisioned
- Credentials: build-time credentials for ESXi, vCenter, or Linux host access
- Network design: management addressing, subnetting, VLAN usage, MTU requirements, and any vSAN or vMotion network intent
Optional assets
- OVAs intended for post-build deployment
- Playbooks intended for post-build configuration, hardening, or validation
3. Core Build Concepts
Build form as the desired state
Mantle treats the build form as a declaration of the target environment. The platform then performs the required provisioning and configuration steps to reach that state.
Cluster-first VMware workflow
For VMware builds, hosts are provisioned first, then enrolled into vCenter and the target cluster. Optional capabilities such as vSAN and distributed switching are applied after the environment is ready.
Optional day-2 actions
After the core platform build is complete, Mantle can optionally:
- Deploy virtual machines from ISO-based definitions
- Deploy one or more OVAs
- Execute playbooks in a defined order
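Conceptually, the build form is a desired-state document that Mantle walks in a fixed order. The sketch below is a hypothetical illustration of that model; the field names and step names are assumptions, not Mantle's actual schema.

```python
# Hypothetical sketch of a build form as a desired-state document.
# Field and step names are illustrative assumptions, not Mantle's real schema.
build_form = {
    "build_type": "datacenter",
    "platform": "vmware",
    "hosts": ["esxi-01", "esxi-02", "esxi-03"],
    "options": {"vsan": True, "dvs": False},
    "day2": {"ovas": [], "playbooks": ["harden.yml", "validate.yml"]},
}

def planned_steps(form):
    """Derive the ordered step list from the declared end state."""
    steps = ["pxe_provision_esxi", "deploy_vcenter",
             "create_datacenter_and_cluster", "add_hosts_to_cluster"]
    if form["options"].get("vsan"):
        steps.append("configure_vsan")
    if form["options"].get("dvs"):
        steps.append("create_dvs")
    if form["day2"]["playbooks"]:
        steps.append("run_playbooks")
    return steps
```

The point of the sketch is the cluster-first ordering: optional capabilities appear only after hosts are enrolled, mirroring the workflow described above.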
4. VMware Datacenter Build Workflow
This section documents the VMware Datacenter Build process in Mantle 2.0.
4.1 Workflow summary
A typical VMware build proceeds in this sequence:
- PXE provision ESXi across the selected hosts
- Deploy vCenter to the designated target host
- Create the vSphere datacenter and cluster
- Add all provisioned ESXi hosts into the cluster
- Configure vSAN, if selected
- Create a Distributed Virtual Switch, if selected
- Deploy optional post-install virtual machines or OVAs
- Execute optional playbooks
4.2 Create a new VMware build
- In the upper-right corner, select + New Build.
- Under Build Type, select Datacenter Build.
- Under Platform, select VMware.
- Under Select Source, choose New.
4.3 BIOS section
In the BIOS section:
- Select the target device type
- Select the serial port used for BIOS automation
- If BIOS automation is not required, enable Disable Bios
4.4 ESXi section
Complete the required values in the ESXi section.
ESXi host fields
- Host IP: Management IP address assigned to the ESXi host
- Hostname: Hostname assigned to the ESXi host
- Username: Optional override for the default ESXi administrative user
- Password: Password Mantle uses for authentication
Use + to add additional ESXi hosts to the build.
Build settings
- Select ESXi Version: ESXi version to deploy
- Select License: ESXi license or license profile
- Subnet: Management subnet for the ESXi hosts
- Domain: Optional DNS domain suffix
- PXE Server: PXE server used for boot and provisioning
- Datastore Name: Datastore name to create or use
Network settings
- Subnet Mask: IPv4 mask applied to every ESXi host that Mantle provisions.
- Default Gateway: Routed next hop used during and after installation.
- DNS Server 0: Primary resolver handed to the hosts through the kickstart file.
- DNS Server 1 (optional): Secondary resolver for redundancy.
- NTP Server 0: Primary time source so certificates and vCenter joins succeed.
- NTP Server 1 (optional): Backup NTP endpoint if the first server is unreachable.
- VLAN (optional): Tagged VLAN ID for the management vmk if the port group is trunked.
- vmk0 MTU: Layer-2 MTU for the management vmkernel adapter (set higher if you require jumbo frames).
- Enable SSH: Toggles SSH access on the ESXi hosts after provisioning.
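Note that this form takes a dotted-quad Subnet Mask while the VCSA form later takes a CIDR Network Prefix. When translating between the two representations, a small helper avoids mistakes; a minimal sketch using the Python standard library:

```python
import ipaddress

def mask_to_prefix(mask: str) -> int:
    """Convert a dotted-quad subnet mask (e.g. 255.255.255.0) to a prefix length."""
    return ipaddress.IPv4Network(f"0.0.0.0/{mask}").prefixlen

def prefix_to_mask(prefix: int) -> str:
    """Convert a prefix length (e.g. 24) back to a dotted-quad mask."""
    return str(ipaddress.IPv4Network(f"0.0.0.0/{prefix}").netmask)
```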
vSwitch0 settings
- Standard Uplink: Primary physical NIC assigned to vSwitch0
- Additional Uplinks: Additional uplinks for resiliency or throughput
- VM Network: Default port group name for VM traffic
- VLAN: VLAN ID for the VM Network port group, if required
- MTU: MTU value for vSwitch0 and VM traffic
- VM Network Security:
- Promiscuous Mode: Allow or block guest interfaces from receiving traffic not addressed to them.
- MAC Address Changes: Permit or deny MAC updates coming from the guest OS.
- Forged Transmits: Control whether ESXi drops packets with spoofed source MAC addresses.
- vSwitch0 Additional Port Groups: Define extra VLAN-backed networks that must exist on vSwitch0.
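On a running ESXi host, these three policies correspond to the standard-switch security settings exposed through esxcli. The sketch below renders the equivalent command from a set of form values; the esxcli syntax follows VMware's public command reference, but treat this as an illustration of the resulting host state, not Mantle's actual mechanism.

```python
def security_policy_command(vswitch: str, promiscuous: bool,
                            mac_changes: bool, forged_transmits: bool) -> str:
    """Render the esxcli command that applies the three vSwitch security policies."""
    def flag(value: bool) -> str:
        return "true" if value else "false"
    return (
        f"esxcli network vswitch standard policy security set "
        f"--vswitch-name={vswitch} "
        f"--allow-promiscuous={flag(promiscuous)} "
        f"--allow-mac-change={flag(mac_changes)} "
        f"--allow-forged-transmits={flag(forged_transmits)}"
    )
```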
Additional vSwitches
- Switch Name: Friendly label for each extra standard switch Mantle should create.
- vSwitch Uplinks: Physical NICs that back the switch for resilience and throughput.
- Switch MTU: Layer-2 MTU applied to the switch and its port groups.
- vSwitch Port Groups: VLAN-backed networks hosted on the additional switch.
Additional VMkernel adapters
- Name: VMkernel interface label (e.g., vmk1 for vMotion).
- Port Group: Network that backs the VMkernel interface.
- Mask: Subnet mask or prefix length for the VMkernel IP.
- Gateway: Route used for this VMkernel network (only when required).
- Network: IP address assigned to the adapter.
- vMotion: Flag indicating whether the interface carries vMotion traffic.
- MTU: MTU for the VMkernel NIC, matching the backing network when jumbo frames are needed.
Extra kickstart commands
- Extra Kickstart Commands: Optional site- or program-specific commands appended to the ESXi kickstart configuration
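Appending site-specific commands amounts to merging extra lines into the rendered kickstart before any %-delimited section. A hypothetical sketch of that merge step (the function and the placement rule are assumptions about how such a merge could work, not Mantle's renderer):

```python
def append_extra_commands(kickstart: str, extra: list[str]) -> str:
    """Insert site-specific commands into the command section of a kickstart.

    Assumes extra commands belong before the first %-delimited section
    (%pre, %post, %firstboot); a real renderer would be schema-aware.
    """
    lines = kickstart.splitlines()
    for i, line in enumerate(lines):
        if line.startswith("%"):
            return "\n".join(lines[:i] + extra + lines[i:])
    return "\n".join(lines + extra)
```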
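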
4.5 VCSA and vSAN section
Complete the required values in the VCSA and vSAN section.
VCSA and vSAN settings
- Disable vSAN: Disables automatic vSAN configuration during the build
VCSA settings
- System Name: FQDN that vCenter advertises to hosts and administrators.
- Appliance Name: Inventory label for the VCSA virtual machine.
- Cluster Name: Target vSphere cluster Mantle will create or join.
- Datacenter Name: vSphere datacenter container for the build.
- IP Address: Static management IP assigned to VCSA.
- Default Gateway: Routed next hop for the appliance subnet.
- Network Prefix: CIDR prefix (for example, /24) paired with the IP.
- SSO User: Primary administrator@domain account used for SSO login.
- SSO Domain: vCenter Single Sign-On domain (e.g., vsphere.local).
- SSO Password: Password for the SSO user.
- OS Password: Root/console credential for the VCSA appliance OS.
- DNS 0 / DNS 1: Primary and secondary DNS resolvers reachable from VCSA.
- NTP 0 / NTP 1: Time sources used to keep vCenter in sync.
- Select License: vCenter license asset applied after deployment.
- Deployment Options: Sizing profile (Tiny/Small/etc.) for CPU, RAM, and storage.
- VCSA Version: Image or ISO build number Mantle should deploy.
- VCSA SSH Enable: Toggles SSH access to the appliance for break-glass support.
- Thin Disk Mode: Enables thin-provisioned storage for faster deploys on lab gear.
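These fields mirror the values a vCSA CLI-installer JSON template would hold. The grouping below is a hedged sketch loosely following VMware's template layout; Mantle's internal representation may differ, and the placeholder values are illustrative only.

```python
# Hedged sketch: VCSA form fields grouped the way VMware's CLI-installer
# JSON template organizes them. Mantle's internal model may differ.
vcsa_spec = {
    "appliance": {
        "name": "vcsa-01",
        "deployment_option": "small",
        "thin_disk_mode": True,
    },
    "network": {
        "system_name": "vcsa-01.lab.example",
        "ip": "10.0.0.10",
        "prefix": "24",
        "gateway": "10.0.0.1",
        "dns_servers": ["10.0.0.2", "10.0.0.3"],
    },
    "os": {"password": "<os-password>", "ssh_enable": True},
    "sso": {"domain_name": "vsphere.local", "password": "<sso-password>"},
}

def missing_fields(spec):
    """Return dotted paths for any empty values so the form can be validated."""
    gaps = []
    for section, fields in spec.items():
        for key, value in fields.items():
            if value in ("", None, []):
                gaps.append(f"{section}.{key}")
    return gaps
```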
vSAN settings
- vmk: VMkernel adapter used for vSAN traffic
- Datastore Name: Name of the vSAN datastore
- Select License: vSAN license applied to the cluster
4.6 Post-Install section
In the Post-Install tab, enable or disable post-install actions such as creating virtual machines or deploying OVAs.
Post-install controls
- Disable Post Install: Disables all automated post-install tasks
VM section
- VM ISO: ISO asset Mantle should mount when creating the post-install VM.
- Name: Display name assigned to the VM once it is created.
- IP: Static address Mantle configures inside the guest OS if the workflow supports it.
- RAM (GB): Memory allocated to the VM during deployment.
- CPU Cores: vCPU count reserved for the VM.
- Port Groups: vSphere networks the VM NICs attach to.
- HDD Sizes (GB): Disk sizes for each virtual disk that Mantle should create.
Use + to add additional post-install virtual machines.
OVA section
- Select OVA: Choose the uploaded appliance Mantle should deploy after the core build.
Port mapping
- NIC: Logical interface on the imported OVA (NIC0, NIC1, etc.).
- Port Group: Destination network that interface should join when Mantle powers on the VM.
Use + to add additional interface mappings for multi-NIC appliances.
4.7 Playbooks section
In the Playbooks section, enable or disable playbook execution after VM and OVA deployment.
- Disable Playbooks: Skips playbook execution
- Select Playbook: Adds a playbook to the execution list
Additional notes:
- Use + to add multiple playbooks
- Playbooks execute after infrastructure deployment and post-install tasks complete
- Playbooks run in the order listed
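Because ordering matters, the simplest mental model is a strictly sequential runner. A hypothetical sketch (the function names are illustrative; in practice `execute` would shell out to an automation engine):

```python
def run_playbooks(playbooks, execute):
    """Run playbooks strictly in listed order, stopping at the first failure.

    `execute` is a callable that runs one playbook and returns True on success.
    """
    completed = []
    for playbook in playbooks:
        if not execute(playbook):
            break
        completed.append(playbook)
    return completed
```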
4.8 Review and deploy
- Step through Review Build Steps to validate the workflow.
- Select Deploy to start the VMware build.
- Mantle transitions to the Track Build page.
- Expand the logs to monitor progress.
- Power on the target servers to begin the automated deployment.
4.9 What to expect during deployment
During the deployment:
- The target system console will display the NTS-customized ESXi installation flow
- Some installation warnings may appear and can be benign depending on the environment
- The server will reboot after ESXi installation, but the full VMware build is not complete until Mantle reports completion
- During later stages, the ESXi DCUI may display the DoD security banner
- In the Mantle logs, an important milestone is when each ESXi host shows Ready
4.10 Completion status
The build is complete when:
- The build status turns green and shows Success
- Build artifacts are available for download
- You can download the build configuration and build object
- You can run additional playbooks against the completed build
5. Enterprise Linux Datacenter Build Workflow
This section documents the Enterprise Linux Datacenter Build process in Mantle 2.0.
Kickstart template requirements
In the Kickstart Template Requirements section, provide the data points below so the Enterprise Linux PXE workflow can render a valid kickstart file for every host.
Asset references
- platform: Set to enterprise_linux so Mantle runs the Linux build path.
- platform_ver: Enterprise Linux ISO asset ID used for PXE boot and repository paths.
- kickstart_ver: Kickstart asset ID; Mantle validates it against the ISO's major version.
PXE and management networking
- pxe_server: HTTP/TFTP endpoint used by %pre, PXELINUX, and GRUB.
- netmask: Subnet mask applied in the kickstart network --bootproto=static stanza.
- gateway: Default route for the management interface during installation.
- dns_servers[0]: First DNS server passed via --nameserver; add additional entries as needed.
Per-host variables
Mantle renders one kickstart per host and injects each entry's fields as top-level template variables (for example, {{ ip }} and {{ install_device }}).
- ip: Static IP each host should assume while installing.
- hostname: Hostname injected into the kickstart network stanza.
- username: Username for the non-root admin account created post-install.
- pw: Password for that account; Mantle hashes it before injecting it as pwd.
- mgmt_interface: Interface name used in the network command and in post-install nmcli.
- install_device: Limits ignoredisk/bootloader targets; required when multiple disks exist.
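The per-host rendering can be pictured as template substitution over {{ var }} placeholders. A minimal sketch (the template fragment and substitution code are illustrative; Mantle's actual template engine is not shown, and password hashing is omitted):

```python
import re

# Illustrative kickstart fragment with {{ var }} placeholders.
TEMPLATE = """network --bootproto=static --ip={{ ip }} --hostname={{ hostname }}
ignoredisk --only-use={{ install_device }}
"""

def render_host(template: str, host: dict) -> str:
    """Substitute {{ var }} placeholders with this host's values."""
    return re.sub(r"\{\{\s*(\w+)\s*\}\}",
                  lambda m: str(host[m.group(1)]), template)
```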
Optional but recommended
- serial_console: Device and baud pair for GRUB/UEFI console redirection.
- extra_kernel_args: Site-specific kernel parameters appended to the installer command line.
Build-level prerequisites
- build_interface: Interface Mantle uses during provisioning; required to pass pre-flight checks.
- bios_devices: Either one EL8000 entry or a list matching the host count; enforces BIOS automation alignment.
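The bios_devices alignment rule can be checked up front. A hedged sketch of that pre-flight check, implementing the rule exactly as stated (one shared entry, or one entry per host):

```python
def bios_devices_valid(bios_devices: list, host_count: int) -> bool:
    """True if there is one shared BIOS entry or exactly one entry per host."""
    return len(bios_devices) == 1 or len(bios_devices) == host_count
```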
5.1 Create a new Linux build
- Select + New Build.
- Under Select Build Type, choose Datacenter Build.
- Under Platform, choose Enterprise Linux, then select Next.
- Under Select Source, choose New, then select Submit.
5.2 Device and BIOS settings
- Select the target device in the Device drop-down
- Select the serial port in the Port drop-down
- If BIOS automation is not needed, enable Disable Bios
5.3 Enterprise Linux build settings form
Build settings
- Platform Version: Enterprise Linux version to install
- Kickstart Template: Kickstart template used for the automated install
Management network settings
- Netmask: Subnet mask applied to each Enterprise Linux host.
- Gateway: Default route used during installation and steady state.
- Domain: DNS suffix appended to hostnames.
- PXE Server: HTTP/TFTP endpoint Mantle references in the rendered kickstart.
- DNS Servers: Resolver list injected into the kickstart network stanza.
- NTP Servers: Time sources configured in the post-install tasks.
Use + to add additional DNS or NTP servers.
Hosts
- Host IP: Static IP assigned to the server during installation.
- Hostname: FQDN or short name injected into each kickstart file.
- Username: Local admin account created on the OS.
- Password: Credential associated with that account (Mantle hashes it before use).
- Mgmt Interface: NIC name (for example, eno1) that carries management traffic.
- Install Device: Disk or device node (such as /dev/sda or nvme0n1) targeted by the installer.
- Extra Kernel Args: Optional comma-separated kernel parameters for special hardware requirements.
Use + to add additional hosts to the build.
Virtual bridges
- Bridge Name: Label Mantle should assign to the Linux bridge.
- IPv4 Method: How the bridge receives its IP settings (Manual, DHCP, Disabled).
- VLAN Filtering: Enables trunk awareness for tagged interfaces.
- Autoconnect: Determines whether the bridge comes up automatically at boot.
- Autoconnect Ports: Specific interfaces that should join the bridge when it activates.
- Interfaces: Physical or virtual NICs participating in the bridge.
- Enable Spanning Tree Protocol: Turns STP on to prevent layer-2 loops.
Use + to add additional interfaces to a bridge.
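On the host, bridges of this shape are typically realized through NetworkManager. The sketch below renders nmcli commands from the form fields; the nmcli syntax is standard, but whether Mantle drives nmcli this way is an assumption.

```python
def bridge_commands(name: str, stp: bool, autoconnect: bool,
                    interfaces: list[str]) -> list[str]:
    """Render nmcli commands that create a bridge and enslave interfaces to it."""
    cmds = [
        f"nmcli connection add type bridge con-name {name} ifname {name}",
        f"nmcli connection modify {name} bridge.stp {'yes' if stp else 'no'}",
        f"nmcli connection modify {name} connection.autoconnect "
        f"{'yes' if autoconnect else 'no'}",
    ]
    for iface in interfaces:
        cmds.append(
            f"nmcli connection add type ethernet ifname {iface} master {name}"
        )
    return cmds
```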
Serial console
- Enable Serial Console
5.4 Virtualization settings
The Virtualization section allows the Linux host to be configured as a KVM hypervisor.
Virtualization settings
- Enable Virtualization Settings
KVM hosts
- Host: Select the Linux system that will act as the KVM hypervisor
Use + to add additional KVM hosts.
KVM storage pools
- Pool Name: Friendly identifier for the storage pool.
- Pool Type: Backing technology (dir, logical, nfs, iscsi, etc.).
- Target Path: Filesystem path or device Mantle should mount for the pool.
- Owner: Linux user that owns the pool directory.
- Group: Group ownership for shared access.
- Permissions: Octal file mode applied to the pool path.
- Availability: Whether the pool is active by default.
- Autostart: Ensures libvirt activates the pool at boot time.
Use + to add additional storage pools.
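For a directory-backed pool, the equivalent libvirt workflow is pool-define-as followed by optional autostart and start. The sketch below renders those virsh commands from the form fields as an illustration of the end state, not of Mantle's internal calls; ownership and permission handling are omitted.

```python
def pool_commands(name: str, pool_type: str, target: str,
                  autostart: bool) -> list[str]:
    """Render virsh commands that define, optionally autostart, and start a pool."""
    cmds = [f"virsh pool-define-as {name} {pool_type} --target {target}"]
    if autostart:
        cmds.append(f"virsh pool-autostart {name}")
    cmds.append(f"virsh pool-start {name}")
    return cmds
```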
Virtual machines
- VM Type: Template or image definition Mantle should deploy (KVM guest, container, etc.).
- VM Config: Asset or JSON definition describing CPU, memory, and disk layout.
- Availability: Whether the VM is created immediately or only when certain conditions are met.
- Autostart: Controls whether libvirt boots the VM automatically.
Use + to add additional virtual machines.
5.5 Playbooks
In the Linux build workflow, playbooks are used for post-deployment configuration.
- Select Playbook: Adds an automation playbook to the workflow
- Use + to add multiple playbooks
- Playbooks run after infrastructure deployment and post-install tasks complete
- Playbooks execute in listed order
5.6 Review and deploy
- Step through Review Build Steps to validate the build configuration.
- Select Deploy to start the Enterprise Linux build.
- Mantle transitions to the Track Build page.
- Expand the logs to monitor progress.
5.7 What to expect during deployment
During the Linux deployment:
- Once the server is powered on and PXE boot is selected, the PXE installation process begins
- The system will reboot after installation completes
- Mantle continues the workflow after reboot by connecting over SSH to complete the remaining installation steps
5.8 Completion status
The build is complete when:
- The build status turns green and shows Success
- Build artifacts are available for download
- You can download the build configuration and build object
- You can run additional playbooks against the completed build
6. Operational Notes
- Review network design inputs before deployment, especially VLANs, management addressing, and MTU values
- Confirm that PXE reachability exists for every target interface before launching a build
- For VMware environments using vSAN or vMotion, ensure network intent is clearly defined before deployment
- Use post-install tasks and playbooks to standardize day-2 configuration after the base infrastructure is complete
7. Result
At completion, Mantle has collected the build artifacts and the platform is ready for subsequent deployments or post-build automation actions.