
Packer configuration files and scripts for cloud courses

This repository contains Packer configuration files, auto-configuration scripts and Ansible playbooks for various virtual machines used in cloud-related labs. A makefile is also provided for convenience; it supplies some parameters for the build process and defines dependencies between the cloud and lab images.

The Packer configuration files define how qcow2 virtual machine disk images should be created. The disk creation process depends on the purpose of the virtual machine, so we define a few categories, which are detailed in the sections below.

Dependencies

To build the virtual machines, the following packages are required (tested on Fedora 34; names may differ on other systems):

  • packer (follow the installation instructions on the Packer site)
  • pykickstart (required to syntax check kickstart scripts)
  • qemu / qemu-img
  • qemu-kvm
  • ansible
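
On Fedora, the packages above (other than Packer itself, which is installed per its site's instructions) can be installed with dnf. This is a sketch; the package names are taken from the list above and may differ between releases:

```shell
# Install the build dependencies on Fedora (names may vary by release).
# Packer is not included here; install it following the Packer site.
sudo dnf install -y pykickstart qemu-img qemu-kvm ansible
```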

Base images

For base images (CentOS Stream 8, Ubuntu 20.04, Debian 11), an installation medium (ISO) is used to start the operating system installation process.

Packer automates the installation process by performing the following:

  1. starts an HTTP server that serves templated auto-configuration files;
  2. starts a virtual machine using qemu, with the installation media and at least one virtual machine disk attached, and with a pre-configured network that allows the virtual machine to access the internet and the local HTTP server;
  3. enters a pre-defined input sequence some time after the virtual machine is created, in order to make the operating system installer use the auto-configuration files;
  4. waits for the installation to finish and checks that the guest is able to finish rebooting by connecting to it over SSH;
  5. configures the guest using Ansible playbooks;
  6. performs some post-install steps, such as creating a qcow2 disk snapshot, computing various types of disk image checksums and compressing the disk image if requested.
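
As a rough sketch, the relevant parts of such a base-image configuration look like the following. All values here are hypothetical placeholders (not the repository's actual settings), shown for a kickstart-based installer:

```hcl
# Hypothetical base-image definition; values are placeholders.
source "qemu" "base" {
  iso_url          = "https://example.com/distro.iso"
  iso_checksum     = "none"            # placeholder; use a real checksum
  format           = "qcow2"
  disk_size        = "10G"
  http_directory   = "http"            # serves the auto-configuration files
  boot_wait        = "5s"
  # Typed into the installer's boot prompt to point it at the served file.
  boot_command     = ["<tab> inst.ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/ks.cfg<enter>"]
  ssh_username     = "root"
  ssh_password     = "packer"          # placeholder credentials
  ssh_timeout      = "30m"             # waits for install + reboot
  shutdown_command = "shutdown -P now"
}

build {
  sources = ["source.qemu.base"]

  provisioner "ansible" {
    playbook_file = "scripts/ansible/base.yml"   # hypothetical path
  }
}
```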

To add a new base image, you must create a new Packer configuration file, similar to the existing ones, and add its name to the baseimgs variable in the makefile.
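
For example, assuming the new configuration file is named fedora-34.pkr.hcl (a hypothetical name, as are the existing entries shown here), the makefile change would look like:

```makefile
# Hypothetical: register the new base image alongside the existing ones.
baseimgs := centos-stream-8 ubuntu-20.04 debian-11 fedora-34
```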

Cloud image

The cloud image is derived from one of the base images. The base is selected by the base_img.cloudimg variable in the makefile; if you wish to change the image's base, edit this variable.

Packer automates the installation process by performing the following:

  1. creates a copy-on-write copy of the base image;
  2. starts the virtual machine using the new disk image;
  3. waits for the virtual machine to fully start and checks the SSH connection;
  4. configures the guest using Ansible playbooks. Ansible is also configured to copy the base images into the cloud image, so VMs for labs can run inside VMs spawned from the cloud image. The list of base disk images is passed from the makefile to Packer, and then to Ansible, through the include_disks variable;
  5. performs some post-install steps, such as creating a qcow2 disk snapshot, computing various types of disk image checksums and compressing the disk image if requested.
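
Step 1 above corresponds to creating a qcow2 overlay whose backing file is the base image, along the lines of the following (file names are placeholders):

```shell
# Create a copy-on-write overlay on top of the base image; only blocks
# that differ from base.qcow2 are stored in cloud.qcow2.
qemu-img create -f qcow2 -b base.qcow2 -F qcow2 cloud.qcow2
```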

The cloud disk image is meant to run on top of OpenStack, so the cloud-init service is installed and configured to pull various parameters from the cloud platform. Consequently, some configuration files, such as /etc/hosts, are emptied, since they are expected to be automatically populated at startup. Additionally, the cloud image is stand-alone, so the base file dependency is dropped.

Lab images

Similar to the cloud image, lab disk images are derived from one of the base images. The base of each image is defined through a variable that follows the base_img.lab-name pattern (e.g., base_img.lab-dns for the DNS lab) in the makefile. If you wish to change the base image for a lab, edit its respective variable. It is possible that some of the Ansible playbooks may need editing if they use modules that only work in a specific environment or operating system.

Packer automates the installation process by performing the following:

  1. creates a copy-on-write copy of the base image;
  2. starts the virtual machine using the new disk image;
  3. waits for the virtual machine to fully start and checks the SSH connection;
  4. configures the guest using Ansible playbooks;
  5. performs some post-install steps, such as creating a qcow2 disk snapshot, computing various types of disk image checksums and compressing the disk image if requested.
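
The checksum part of the post-install step (step 5 above) boils down to running the usual hashing tools over the resulting disk image, for example (the file name is a placeholder; Packer produces the real image):

```shell
# Compute and verify checksums for a built disk image.
# "lab-dns.qcow2" stands in for the image Packer would produce.
echo "dummy image content" > lab-dns.qcow2
sha256sum lab-dns.qcow2 > lab-dns.qcow2.sha256
md5sum lab-dns.qcow2 > lab-dns.qcow2.md5
sha256sum -c lab-dns.qcow2.sha256
```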

Since the lab images are meant to be as compact as possible, and the base images are already copied into the cloud image, the lab images keep their base image dependency. The conversion (with optional compression) is performed on top of the base image. This helps keep the disk images small - e.g., the DNS lab image, with the DNS server package installed, is under 20MB.
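
The conversion can be sketched with qemu-img: the lab image keeps the (already present) base as its backing file, so only the compressed difference needs to be shipped. File names are placeholders:

```shell
# Keep base.qcow2 as the backing file and compress the written clusters;
# the output contains only the (compressed) delta from the base image.
qemu-img convert -O qcow2 -c -B base.qcow2 lab-dns.qcow2 lab-dns-small.qcow2
```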

If you wish to add a new lab image, you should do the following:

  • add the name of the lab to the labs variable (not rule) in the makefile. Follow the lab-name pattern, as for the other labs;
  • define a new base_img.lab-name variable in the makefile, that defines the base image for the lab;
  • optionally define a hostname.lab-name variable to set a custom hostname in the lab virtual machine. By default the hostname will be the same as the name of the image (i.e., lab-name);
  • optionally define an extra_disks.lab-name variable to add additional disks to the virtual machine. The variable consists of a list of dictionaries in JSON format, excluding the enclosing array brackets. Each entry must contain the following fields:
    • name, which identifies the disk in the virtual machine;
    • size, which defines the total size of the disk image (must be a size that is recognised by qemu-img);
    • parts, a list of partitions - every partition must have a start and an end position, defined either as XiB multiples (e.g., 1GiB) or as percentages (e.g., 100%) relative to the disk size.
  • create a new Ansible playbook, under scripts/ansible, named after the lab name in the makefile (i.e., following the lab-name.yml pattern). You can use the existing playbooks as examples.
  • create a new file in the configs directory with the same name as the entry in the makefile (i.e., following the lab-name pattern). You can use the existing configurations as examples. Please make sure that MAC addresses and host IPs (the last part of the IP address) do not overlap. A part of both addresses is meant to differ between labs (i.e., the first digit(s) of the host IP and the second-to-last group of the MAC address), so that no two labs use the same values. Multiple VMs of a single lab, however, share that component - e.g., the first VM may have an IP ending in 11, the second VM an IP ending in 12, and so on.
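
Putting the makefile side of these steps together, a hypothetical lab-example entry might look like the following (all names and values are illustrative, not taken from the repository):

```makefile
# Hypothetical new lab entry; names and values are illustrative.
labs += lab-example
base_img.lab-example := debian-11
hostname.lab-example := example-vm
# One extra 1G disk with a single partition spanning almost the whole
# disk; note the absence of the enclosing [ ] around the JSON list.
extra_disks.lab-example := {"name": "disk1", "size": "1G", "parts": [{"start": "1MiB", "end": "100%"}]}
```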

Notes

The installation / configuration time varies greatly with the storage type of the machine used to create the virtual machines. From local testing, the installation time in a ramdisk is around 12-13 minutes for CentOS 8, 9 minutes for Debian 11, and 20 minutes for Ubuntu 20.04. By contrast, when building directly on a hard disk, the process takes around 22 minutes for CentOS 8, and surpasses the 30-minute timeout for Debian 11 (Ubuntu 20.04 has not been tested). An SSD is a good middle ground speed-wise, but repeated virtual machine builds will likely wear it down.

For convenience, the makefile contains a rule that creates a ramdisk which can be used as the output directory for the disk images. A virtual machine without a GUI (i.e., with normal memory usage around 400MB when no VMs are started) and with 8GB of RAM should be able to build the images in a ramdisk (or 10GB with uncompressed base images).
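
If you prefer to set up the ramdisk by hand instead of using the makefile rule, a tmpfs mount of the appropriate size is enough. This is a sketch; the mount point and size are illustrative, not the makefile's actual values:

```shell
# Mount a 10G tmpfs over the output directory; its contents are lost on
# unmount or reboot, which is fine for intermediate build artifacts.
mkdir -p output
sudo mount -t tmpfs -o size=10G tmpfs output
```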

NOTE: Regardless of whether a ramdisk is used, the makefile copies the virtual machine disks to the vms directory where they are properly named, so the disk images will be duplicated.