Deploying Nutanix Community Edition (CE) on nested ESXi can be tricky, but in this blog post I will guide you through the process in a few easy steps with the help of a script I have written.
Steps required:
- Download the Nutanix CE image from the Nutanix NEXT website. The website usually offers both an ISO and a Disk Image-based Full Install. Please download the Disk Image-based Full Install file; its name will be similar to ce-2019.02.11-stable.img.gz
- Unpack the ce-2019.02.11-stable.img.gz file (the expected result is the file ce-2019.02.11-stable.img)
- Rename the ce-2019.02.11-stable.img file to ce-2019.02.11-flat.vmdk
- Create missing disk descriptor file
- Create a virtual machine configuration (VMX) file with correct settings
- Minimum 4 virtual CPUs
- Enable Expose hardware-assisted virtualization to the guest OS on the CPU
- Minimum 16 GB memory (you really must have at least 16 GB of physical memory available; if you do not, this can be worked around):
- After deployment, do not start the installation. Instead, log in using root / nutanix/4u
- Edit the /home/install/phx_iso/phoenix/sysUtil.py file
- Search for the line custom_ram = 12 and change it to custom_ram = 8
- This will allow the installation to complete even though you only have 12 GB of physical memory available. However, some features such as compression/deduplication will have insufficient memory, so beware!
- Add the existing ce-2019.02.11.vmdk file as SCSI0:0
- Change the SCSI controller to type VMware Paravirtual SCSI Adapter (PVSCSI)
- Additional 200 GB VMDK as SCSI0:1
- Additional 500 GB VMDK as SCSI0:2
- Modify network adapter to VMXNET3 and connect it to the appropriate port group.
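If you need the low-memory workaround described above, the sysUtil.py edit can be scripted with a single sed substitution. The sketch below demonstrates it on a demo copy of the relevant line so it can be tried anywhere; on the actual CE VM you would point sed at /home/install/phx_iso/phoenix/sysUtil.py instead:

```shell
# Demo: lower the installer's RAM requirement from 12 GB to 8 GB.
# On the CE VM, run the sed command against /home/install/phx_iso/phoenix/sysUtil.py instead.
echo 'custom_ram = 12' > /tmp/sysUtil_demo.py
sed -i 's/custom_ram = 12/custom_ram = 8/' /tmp/sysUtil_demo.py
cat /tmp/sysUtil_demo.py
# prints: custom_ram = 8
```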
Now we have the VMDK disk data file (-flat.vmdk); however, we are still missing the disk descriptor file (.vmdk). If you want to create it manually, VMware has a knowledge base article that describes the process here: https://kb.vmware.com/s/article/1002511.
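For reference, a descriptor file is just a small text file pointing at the -flat.vmdk extent. A minimal sketch of the format from the VMware KB article looks roughly like this (the sector count and geometry values here are purely illustrative, which is why letting vmkfstools generate the descriptor is the safer route):

```
# Disk DescriptorFile
version=1
CID=fffffffe
parentCID=ffffffff
createType="vmfs"

# Extent description: RW <size in 512-byte sectors> VMFS "<flat file name>"
RW 14540800 VMFS "ce-2019.02.11-flat.vmdk"

# The Disk Data Base
ddb.virtualHWVersion = "13"
ddb.adapterType = "lsilogic"
```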
If you have difficulties following the KB article or just prefer to have this automated, then you can use this script:
```shell
# Set variables to locate the Nutanix CE image you have uploaded and renamed to -flat.vmdk
cd /
vmdkpath=$(find -name "*ce-*.*.*-flat.vmdk")
if [ -z "$vmdkpath" ]
then
echo "Unable to locate Nutanix CE image... aborting"
else
echo % Located Nutanix CE image and setting up variables
vmdkflatname=$(find -name "*ce-*.*.*-flat.vmdk" -exec basename {} \;)
vmdkname=$(find -name "*ce-*.*.*-flat.vmdk" -exec basename {} \; | sed 's/.\{10\}$//')
vmdksize=$(ls -nl $vmdkpath | awk '{print $5}')
vmdktempfolder=$(find -name "*ce-*.*.*-flat.vmdk" -exec dirname {} \; | sed "s|^\.||")/temp
vmdkfolder=$(echo $vmdktempfolder | sed 's/.\{4\}$//')
vmname=$(basename $vmdkfolder)
vmdkdisk1=$(echo $vmname\_1.vmdk)
vmdkdisk2=$(echo $vmname\_2.vmdk)
echo % Renaming Nutanix CE image
mv $vmdkpath ${vmdkfolder}${vmname}-flat.vmdk
echo % Creating temporary working folder
mkdir $vmdktempfolder
echo % Creating new disk descriptor file and temporary -flat.vmdk file
vmkfstools -c $vmdksize -d thin $vmdktempfolder/$vmname.vmdk
echo % Deleting temporary -flat.vmdk file
rm $vmdktempfolder/${vmname}-flat.vmdk
echo % Moving new disk descriptor file to same folder as the uploaded Nutanix CE image
mv $vmdktempfolder/$vmname.vmdk $vmdkfolder
echo % Removing temporary working folder
rmdir $vmdktempfolder
echo % Creating 200 GB virtual SSD hot tier VMDK - thin provisioned
vmkfstools -c 214748364800 -d thin $vmdkfolder$vmname\_1.vmdk
echo % Creating 500 GB HDD cold tier VMDK - thin provisioned
vmkfstools -c 536870912000 -d thin $vmdkfolder$vmname\_2.vmdk
echo % Creating VMX file with customized settings
vmxnvram=$(echo $vmname.nvram)
cat <<EOT >> $vmdkfolder/$vmname.vmx
.encoding = "UTF-8"
config.version = "8"
virtualHW.version = "13"
nvram = "$vmxnvram"
pciBridge0.present = "TRUE"
svga.present = "TRUE"
pciBridge4.present = "TRUE"
pciBridge4.virtualDev = "pcieRootPort"
pciBridge4.functions = "8"
pciBridge5.present = "TRUE"
pciBridge5.virtualDev = "pcieRootPort"
pciBridge5.functions = "8"
pciBridge6.present = "TRUE"
pciBridge6.virtualDev = "pcieRootPort"
pciBridge6.functions = "8"
pciBridge7.present = "TRUE"
pciBridge7.virtualDev = "pcieRootPort"
pciBridge7.functions = "8"
vmci0.present = "TRUE"
hpet0.present = "TRUE"
floppy0.present = "FALSE"
numvcpus = "4"
memSize = "16384"
bios.bootRetry.delay = "10"
powerType.powerOff = "default"
powerType.suspend = "soft"
powerType.reset = "default"
tools.upgrade.policy = "manual"
sched.cpu.units = "mhz"
sched.cpu.affinity = "all"
sched.cpu.latencySensitivity = "normal"
vm.createDate = "1552034628071900"
scsi0.virtualDev = "pvscsi"
scsi0.present = "TRUE"
sata0.present = "TRUE"
svga.autodetect = "TRUE"
scsi0:0.deviceType = "scsi-hardDisk"
scsi0:0.fileName = "$vmname.vmdk"
sched.scsi0:0.shares = "normal"
sched.scsi0:0.throughputCap = "off"
scsi0:0.present = "TRUE"
scsi0:1.deviceType = "scsi-hardDisk"
scsi0:1.fileName = "$vmdkdisk1"
scsi0:1.virtualSSD = 1
sched.scsi0:1.shares = "normal"
sched.scsi0:1.throughputCap = "off"
scsi0:1.present = "TRUE"
scsi0:2.deviceType = "scsi-hardDisk"
scsi0:2.fileName = "$vmdkdisk2"
sched.scsi0:2.shares = "normal"
sched.scsi0:2.throughputCap = "off"
scsi0:2.present = "TRUE"
ethernet0.virtualDev = "vmxnet3"
ethernet0.networkName = "VM Network"
ethernet0.addressType = "generated"
ethernet0.wakeOnPcktRcv = "FALSE"
ethernet0.present = "TRUE"
sata0:0.deviceType = "atapi-cdrom"
sata0:0.fileName = "/vmfs/devices/cdrom/mpx.vmhba64:C0:T0:L0"
sata0:0.present = "TRUE"
displayName = "$vmname"
guestOS = "other-64"
vhv.enable = "TRUE"
bios.bootDelay = "5000"
toolScripts.afterPowerOn = "TRUE"
toolScripts.afterResume = "TRUE"
toolScripts.beforeSuspend = "TRUE"
toolScripts.beforePowerOff = "TRUE"
tools.syncTime = "FALSE"
uuid.bios = "56 4d fe 2a 63 f7 10 22-a1 84 e7 d8 2f 20 6e 23"
uuid.location = "56 4d fe 2a 63 f7 10 22-a1 84 e7 d8 2f 20 6e 23"
uuid.action = "create"
vc.uuid = "52 c0 0c 4c 3b a9 bb 9d-05 aa 9b ba 72 31 f0 80"
sched.cpu.min = "0"
sched.cpu.shares = "normal"
sched.mem.min = "0"
sched.mem.minSize = "0"
sched.mem.shares = "normal"
EOT
vim-cmd solo/registervm $vmdkfolder/$vmname.vmx
fi
```
How does the script work?
- Create a folder on your target datastore with the same name as you would like the virtual machine to be named.
- Upload the ce-2019.02.11-flat.vmdk file to the newly created folder
- Connect to your ESXi host using SSH and execute the script.
- The script will:
- Locate the ce-2019.02.11-flat.vmdk file
- Rename it from ce-2019.02.11-flat.vmdk to <virtual machine name>-flat.vmdk
- Create a temporary working directory
- Create the missing disk descriptor file and a temporary -flat.vmdk file
- Delete the temporary -flat.vmdk file
- Move the new disk descriptor file to the virtual machine folder
- Delete the temporary working directory
- Create a thin-provisioned 200 GB VMDK to be used for the Nutanix hot tier (this virtual disk will be marked as a virtual SSD)
- Create a thin-provisioned 500 GB VMDK to be used for the Nutanix cold tier
- Create a virtual machine configuration (VMX) file with the correct settings
- Register the new virtual machine with your ESXi host
Once the VM boots up, you will need to:
- Select the second option in the boot menu (rescue)
- Login using
- Username: root
- Password: nutanix/4u
- Change directory to /var/cache/libvirt/qemu/capabilities/
- Edit the only XML file inside the /var/cache/libvirt/qemu/capabilities/ folder using, for example, vi
- Search for rhel7.3.0 (in vi: /rhel7.3.0), replace it with rhel7.2.0, and save the file.
- Edit the /home/install/phx_iso/phoenix/svm_template/kvm/default.xml file using, for example, vi
- Look for the domain type = kvm section and find the <features> sub-section that looks like:
- <features>
- <acpi/>
- <apic eoi='on'/>
- <pae/>
- </features>
- Edit this section so it looks like this:
- <features>
- <acpi/>
- <apic eoi='on'/>
- <pae/>
- <pmu state='off'/>
- </features>
- Save the file
- Reboot
- Once again, select the rescue option
- Once you get to the login prompt, simply enter install as the username and press Enter.
- From here, the process is straightforward.
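The two file edits above can also be scripted with sed instead of vi. The following is a sketch demonstrated on demo stand-in files so it can be tried anywhere; on the CE VM you would point the commands at the XML file under /var/cache/libvirt/qemu/capabilities/ and at /home/install/phx_iso/phoenix/svm_template/kvm/default.xml instead:

```shell
# Demo stand-ins for the two files edited in the rescue shell
mkdir -p /tmp/ce-demo
printf '<machine name="pc-i440fx-rhel7.3.0"/>\n' > /tmp/ce-demo/caps.xml
printf "<features>\n  <acpi/>\n  <apic eoi='on'/>\n  <pae/>\n</features>\n" > /tmp/ce-demo/default.xml

# 1) Replace rhel7.3.0 with rhel7.2.0 in the libvirt capabilities cache
sed -i 's/rhel7\.3\.0/rhel7.2.0/g' /tmp/ce-demo/caps.xml

# 2) Add <pmu state='off'/> after <pae/> inside the <features> section
sed -i "s|<pae/>|<pae/>\n  <pmu state='off'/>|" /tmp/ce-demo/default.xml

cat /tmp/ce-demo/caps.xml /tmp/ce-demo/default.xml
```

Note that the `\n` in the second sed replacement relies on GNU sed; on the CE rescue shell this works, but other sed implementations may need a literal newline instead.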