In this post I cover how I tackled automating the creation of customized VM templates in my Proxmox VE 7.2 environment. These concepts can be used with any KVM installation, not just Proxmox. Additionally, you are not required to have an enterprise license or the enterprise repositories.
I would like to give credit where credit is due: this post and its contents were inspired by Austin @ AustinsNerdyThings.com and another article at WhatTheServer.com.
Like many sysadmins in today's IT landscape, I too started down the rabbit hole that is DevOps. One of the many concepts DevOps encompasses is the automation of not just repetitive tasks but also extremely complex ones. This post covers the core scripts necessary to automate deploying custom cloud-init VM templates to your Proxmox host. While I only cover a single node in my environment, you will see that it is a simple task to expand this to multiple nodes; I will cover that in a deeper dive post at a later date.
The only prerequisite is that you have libguestfs-tools installed:
sudo apt update -y && sudo apt install libguestfs-tools -y
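To confirm the tools are available before proceeding, a quick version check works (a minimal sanity check; any recent libguestfs release should respond):

virt-customize --version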
The first step in this project is to either clone the GitHub repository or copy/paste the full code below into the appropriate files.
In total there are 4 files:
- build-vars – contains all runtime variable definitions
- build-image.sh – script to make the magic happen
- build-info – empty file that gets updated and added to the image
- keyfile – paste your SSH keys into this file, one per line
You can download the resource files from my GitHub repo for this topic.
The next step is to modify the build-vars file to match your needs. For more information on everything you can do with your cloud-init image, check the cloud-init docs here.
Many how-to's online will use a cloud-init user of ubuntu, but you can use whatever user you like; just be consistent throughout your environments and scripts. This is the user in the VM that you are going to use for everything else. This user is automagically a sudoer and has no password, so it relies on the SSH keys exclusively.
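As a quick illustration (the username, key path, and IP here are hypothetical; substitute your own), once a VM built from the template is up, you log in with your key and can confirm the passwordless sudo access:

ssh -i ~/.ssh/id_ed25519 youruser@192.168.0.10
sudo -l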
You are also not restricted solely to Ubuntu. You can replace the cloud_img_url address with any valid cloud-init image URL for any distro. I have tested this with Ubuntu and CentOS Stream.
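For example, pointing the variable at Debian's generic cloud image works the same way (this URL was valid at the time of writing; verify the current path on the distro's download page before relying on it):

cloud_img_url='https://cloud.debian.org/images/cloud/bullseye/latest/debian-11-generic-amd64.qcow2'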
build-vars:
# Change this line to reflect the VMID you would like to use for the template.
# Select an ID such as 9999 that will be unique to the node.
build_vm_id='ENTER-VMID-FOR-TEMPLATE'

# What directory do you have all of the files in? Use a trailing /
install_dir='/WHAT-DIRECTORY-ARE-WE-IN/'

# Who are you?
creator='YOURNAMEHERE'

# Create this file and add your SSH keys 1 per line
keyfile=${install_dir}keyfile

# Enter the URL for the cloud-init image you would like to use. Below are Ubuntu Focal
# and Ubuntu Kinetic. For Focal I like to refresh weekly and Kinetic daily. Uncomment
# the distro you would like to use.
cloud_img_url='https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img'
#cloud_img_url='https://cloud-images.ubuntu.com/kinetic/current/kinetic-server-cloudimg-amd64.img'

# Leave this variable alone
image_name=${cloud_img_url##*/}

# Enter the additional packages you would like in your template.
package_list='cloud-init,qemu-guest-agent,curl,wget'

# What storage location on your PVE node do you want to use for the template? (zfs-mirror, local-lvm, local, etc.)
storage_location='zfs-mirror'

# VM options
# Your preferred DNS
nameserver='ENTER-NS-IP-HERE'

# Your domain (ie, domain.com, domain.local, domain)
searchdomain='ENTER-SEARCH-DOMAIN-HERE'

# Username for accessing the image
cloud_init_user='CLOUD-INIT-USERNAME-HERE'

# Default setting is the most common
scsihw='virtio-scsi-pci'

# What to name your template. This is free form with no spaces and will be used for automation/deployments.
template_name='YOUR-TEMPLATE-NAME'

# Memory and CPU cores. These are overridden with image deployments or through the PVE interface.
vm_mem='2048'
vm_cores='2'

# Where to store the build-info file in the template for easy identification.
build_info_file_location='/etc/DIRNAME-HERE'
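One line worth calling out is image_name=${cloud_img_url##*/}, which uses shell parameter expansion to strip everything up to the last / and leave just the image filename. You can sanity-check your edited variables by sourcing the file (a minimal check, assuming you run it from your install directory):

. ./build-vars
echo "Will download ${image_name} and build template ${template_name} as VMID ${build_vm_id}"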
Now that you have modified the variables, it is time to run the script. The script requires root privileges to work its magic, so ensure that you are logged in as root or as a user that can su/sudo to root.
Either:
chmod +x build-image.sh
./build-image.sh
Or:
sh ./build-image.sh
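Whichever way you run it, it can be handy to capture the build output for later review (optional; a tee pipeline is just one way to do it):

./build-image.sh 2>&1 | tee build-$(date +%F).log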
The script below is what you are executing. If you are using any kind of advanced or non-default networking on your node, then you may need to modify the --net0 parameter to match your configuration.
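For instance, if your VMs sit on a tagged VLAN or a different bridge, the qm create line could be adjusted along these lines (the bridge name and VLAN tag are examples; match them to your node):

qm create ${build_vm_id} --memory ${vm_mem} --cores ${vm_cores} --net0 virtio,bridge=vmbr1,tag=20 --name ${template_name}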
build-image.sh:
#!/bin/sh
. ./build-vars

# Clean up any previous build (-f avoids errors on the first run)
rm -f ${install_dir}${image_name}
rm -f ${install_dir}build-info

# Grab latest cloud-init image for your selected image
wget ${cloud_img_url}

# Populate the currently empty build-info file
touch ${install_dir}build-info
echo "Base Image: "${image_name} > ${install_dir}build-info
echo "Packages added at build time: "${package_list} >> ${install_dir}build-info
echo "Build date: "$(date) >> ${install_dir}build-info
echo "Build creator: "${creator} >> ${install_dir}build-info

# Customize the image
virt-customize --update -a ${image_name}
virt-customize --install ${package_list} -a ${image_name}
virt-customize --mkdir ${build_info_file_location} --copy-in ${install_dir}build-info:${build_info_file_location} -a ${image_name}

# Deploy Template
qm destroy ${build_vm_id}
qm create ${build_vm_id} --memory ${vm_mem} --cores ${vm_cores} --net0 virtio,bridge=vmbr0 --name ${template_name}
qm importdisk ${build_vm_id} ${image_name} ${storage_location}
qm set ${build_vm_id} --scsihw ${scsihw} --scsi0 ${storage_location}:vm-${build_vm_id}-disk-0
qm set ${build_vm_id} --ide0 ${storage_location}:cloudinit
qm set ${build_vm_id} --nameserver ${nameserver} --ostype l26 --searchdomain ${searchdomain} --sshkeys ${keyfile} --ciuser ${cloud_init_user}
qm set ${build_vm_id} --boot c --bootdisk scsi0
#qm set ${build_vm_id} --serial0 socket --vga serial0
qm set ${build_vm_id} --agent enabled=1
qm template ${build_vm_id}
You will see a lot of output as the script downloads the image, modifies it, and ultimately converts it into a template with the name and ID of your choosing.
Your output should be similar to this:
--2022-07-23 20:22:30--  https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img
Resolving cloud-images.ubuntu.com (cloud-images.ubuntu.com)... 185.125.190.37, 185.125.190.40, 2620:2d:4000:1::17, ...
Connecting to cloud-images.ubuntu.com (cloud-images.ubuntu.com)|185.125.190.37|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 600309760 (572M) [application/octet-stream]
Saving to: ‘focal-server-cloudimg-amd64.img’

focal-server-cloudimg-amd64.img 100%[===================================================================================================================>] 572.50M 2.11MB/s in 35m 24s

2022-07-23 20:57:57 (276 KB/s) - ‘focal-server-cloudimg-amd64.img’ saved [600309760/600309760]

[   0.0] Examining the guest ...
[   2.2] Setting a random seed
virt-customize: warning: random seed could not be set for this type of guest
[   2.2] Setting the machine ID in /etc/machine-id
[   2.2] Updating packages
[  15.9] Finishing off
[   0.0] Examining the guest ...
[   2.1] Setting a random seed
virt-customize: warning: random seed could not be set for this type of guest
[   2.1] Installing packages: cloud-init qemu-guest-agent curl wget
[   6.3] Finishing off
[   0.0] Examining the guest ...
[   2.1] Setting a random seed
virt-customize: warning: random seed could not be set for this type of guest
[   2.1] Making directory: /etc/geektgb.dev
[   2.1] Copying: /root/build-info to /etc/geektgb.dev
[   2.1] Finishing off
Configuration file 'nodes/yourpvenode/qemu-server/9995.conf' does not exist
importing disk 'focal-server-cloudimg-amd64.img' to VM 9995 ...
transferred 0.0 B of 2.2 GiB (0.00%)
transferred 22.5 MiB of 2.2 GiB (1.00%)
transferred 45.3 MiB of 2.2 GiB (2.01%)
transferred 67.8 MiB of 2.2 GiB (3.01%)
transferred 90.3 MiB of 2.2 GiB (4.01%)
###Truncated###
transferred 2.2 GiB of 2.2 GiB (99.43%)
transferred 2.2 GiB of 2.2 GiB (100.00%)
transferred 2.2 GiB of 2.2 GiB (100.00%)
Successfully imported disk as 'unused0:zfs-mirror:vm-9995-disk-0'
update VM 9995: -scsi0 zfs-mirror:vm-9995-disk-0 -scsihw virtio-scsi-pci
update VM 9995: -ide0 zfs-mirror:cloudinit
ide0: successfully created disk 'zfs-mirror:vm-9995-cloudinit,media=cdrom'
update VM 9995: -ciuser myciuser -nameserver x.x.x.x -ostype l26 -searchdomain mydomain -sshkeys ###SSHKEYSHERE###
update VM 9995: -boot c -bootdisk scsi0
update VM 9995: -agent enabled=1
root@yourpvenode:~#
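Once the script completes, you can also confirm the template's settings from the PVE shell (an optional check; substitute your own VMID):

qm config 9995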
If everything worked properly, you should now be able to deploy the template either manually or through further automation with a tool such as Terraform. Watch for a dive into Terraform in a coming post. To manually deploy a template the steps are (a CLI equivalent is sketched after the list):
- Log into Proxmox
- Right-click on the template and select Clone
- Give it a name, select Full Clone for the mode, and hit Next
- The VM is ready in about 5 seconds
- Select the new VM and go to the Cloud-Init tab
- Double-click on the IP Config entry
- Enter the IP as x.x.x.x/xx (e.g., 192.168.0.10/24)
- Enter the gateway as x.x.x.x (e.g., 192.168.0.1)
- Enter IPv6 info if you know it, otherwise leave it empty
- Hit OK
- Click on Regenerate Image
- Click Start
- SSH into the new VM
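If you prefer the command line over the GUI, the same deployment can be sketched with qm (the VMIDs, name, and addresses here are placeholders; adjust them to your environment):

qm clone 9995 100 --name my-first-vm --full
qm set 100 --ipconfig0 ip=192.168.0.10/24,gw=192.168.0.1
qm start 100

Since the IP is set before the first start, cloud-init picks it up when the VM boots.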
You can change the hostname, IP, user, etc. on the cloud-init screen, but you must regenerate the image for the change to take effect. If you have any startup scripts that rely on these settings, you will have to re-execute them after the changes, or reboot.
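To see exactly what cloud-init will be handed after a change, you can dump the generated configuration from the PVE shell (VMID 100 here is the hypothetical clone from above):

qm cloudinit dump 100 user
qm cloudinit dump 100 network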
When using an LTS version of a distro I recommend rebuilding your image at least once a week, and for non-LTS releases I recommend daily rebuilds. The reason is that a stale image forces critical updates on first boot, and those updates will interfere with other DevOps tutorials that use local VMs, since they require a reboot before any other packages can be installed.
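Because the script destroys and recreates the template VMID on each run, scheduling the rebuild is straightforward. A root crontab entry along these lines would rebuild every Sunday at 3 AM (the directory is a placeholder for wherever you keep the files):

0 3 * * 0 cd /root/proxmox-template-build && ./build-image.sh >> build.log 2>&1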
I hope you have enjoyed this breeze-through of a how-to and that it kicks off an amazing journey for you as well.
This is effin fantastic… wow!
Thank you, I am glad you find it useful. -GTGB
This is great – thanks.
It'd be nice if you could add an option to allow root login (with a key) instead of having to default down to a sub-user. It can be done in the cloud-init config, but it is something we get a few requests to allow, and we certainly prefer it when testing images for dummy/throwaway use.
Enabling login as root is a trivial step, but I don't do it, as I maintain best/safe practices as much as possible. Scripting with a "sub-user" is just as easy as with the root user. Sure, there are a few small hurdles to overcome, but they aren't showstoppers, and they are definitely not a good reason to expose root to potential exploitation. Another reason is that many of the write-ups/posts I have planned for this series do not use root, and doing so would cause bigger security concerns in those environments.
In my opinion and experience, the benefits of using a regular user account instead of root far outweigh the simplicity of using root.
– GTGB