Usecase: CS Course with individual student instances (Linux)

Last changed: 2024-10-18

This document describes how to spin up an arbitrary number of Linux server instances. The idea is that each student is given their own pre-configured instance with a set of credentials.

Overview

In this usecase study, we will demonstrate the entire process with examples and code:

  1. Install a master Linux instance, which we will later use as a template for new instances

  2. Make any configuration changes, install software, etc., as required. All changes that should be identical across the students’ instances should be done in this step

  3. Make a snapshot of the master instance. This is the template that will be used in the next step

  4. Use Terraform to spin up a number of instances for students based on the template created in the previous step

  5. Use Ansible to apply individual configuration to each of the student instances. In our case, we add an individual user with an autogenerated password to each instance

Prerequisites

This guide assumes that you have installed and know how to use Terraform and Ansible. For more information, see the Terraform and Ansible documentation.

You also need to have the OpenStack CLI tools installed.
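
The OpenStack CLI is distributed as a Python package; assuming pip is available, it can be installed with:

$ pip install python-openstackclient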

Preparing the master Linux instance

For this step, consult the documentation on creating a Linux virtual machine.

Step by step example

  1. If you haven’t already, create an SSH key

    $ ssh-keygen -t ed25519 -a 100 -f ~/.ssh/id_ed25519_in8888
    
  2. Import the public key ~/.ssh/id_ed25519_in8888.pub into OpenStack
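
    If you prefer the CLI over the web dashboard, the key can be imported with:

    $ openstack keypair create --public-key ~/.ssh/id_ed25519_in8888.pub id_ed25519_in8888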

  3. Create a Linux instance. In this demo, we have chosen:

    • Name: in8888-master

    • Image: GOLD Alma Linux 9

    • Flavor: m1.small

    • Network: IPv6

    • Security Groups: default and others

    • Key Pair: id_ed25519_in8888 (created above)

    You should add security groups that allow SSH from your current IP address.
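
    Alternatively, the same instance can be created from the CLI. A sketch with the choices above (the extra security group name is an example):

    $ openstack server create --image "GOLD Alma Linux 9" --flavor m1.small \
        --network IPv6 --key-name id_ed25519_in8888 \
        --security-group default --security-group ssh-from-my-net \
        in8888-master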

  4. Wait for the instance to be ready. With Linux it only takes a few seconds:

(Screenshot: Master instance)

    When the instance responds to SSH logins, you can proceed:

    $ ssh 2001:700:2:8201::10ac -l almalinux -i ~/.ssh/id_ed25519_in8888
    Last login: Mon Sep 30 01:33:18 2024 from 158.39.75.247
    [almalinux@in8888-master ~]$
    
  5. Install software and make any changes as required. For the purposes of this demonstration, we install Visual Studio Code

    [almalinux@in8888-master ~]$ sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc
    [almalinux@in8888-master ~]$ echo -e "[code]\nname=Visual Studio Code\nbaseurl=https://packages.microsoft.com/yumrepos/vscode\nenabled=1\ngpgcheck=1\ngpgkey=https://packages.microsoft.com/keys/microsoft.asc" | sudo tee /etc/yum.repos.d/vscode.repo > /dev/null
    [almalinux@in8888-master ~]$ sudo dnf install code
    AlmaLinux 9 - AppStream                      17 MB/s |  14 MB     00:00
    AlmaLinux 9 - BaseOS                        6.6 MB/s |  15 MB     00:02
    AlmaLinux 9 - Extras                         37 kB/s |  20 kB     00:00
    Visual Studio Code                           16 MB/s | 5.2 MB     00:00
    Dependencies resolved.
    (...output omitted...)
    

Take a snapshot

Note

When cloning an instance like we do here, each clone will have the same machine ID as the parent. Some applications rely on the machine ID to uniquely identify hosts. If this applies to any applications that you plan to run, remove the machine ID before taking a snapshot:

sudo rm -f /etc/machine-id
sudo rm -f /var/lib/dbus/machine-id

A new machine ID will be generated automatically during first boot.
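
Once a clone has booted, you can verify that it received a fresh, unique machine ID:

cat /etc/machine-id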

Shut down the master instance:

[almalinux@in8888-master ~]$ sudo poweroff

Proceed when the instance is properly shut down:

$ openstack server show in8888-master -c status -f value
SHUTOFF

Make a snapshot of the instance while it is shut off:

(Screenshot: Master instance snapshot (1))

We name the snapshot «master-snap-01»:

(Screenshot: Master instance snapshot (2))
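
If you prefer the CLI, the same snapshot can be created with:

$ openstack server image create --name master-snap-01 in8888-master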

We are now ready to proceed with creating student instances.

Create student instances

This next step uses Terraform to create a number of instances for students. First, create an empty directory and cd into it, e.g.:

$ mkdir ~/in8888-h2024
$ cd ~/in8888-h2024

Copy the following files into this directory: main.tf, secgroup.tf, variables.tf and terraform.tfvars (complete listings are provided in the File listing section at the end of this document).

These files are from Terraform and NREC: Part IV - Pairing with Ansible, but with adjustments for this usecase. Edit these files to suit your needs; you will most likely want to make changes in variables.tf and terraform.tfvars.
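
For example, to allow SSH only from your own networks, adjust the address lists in terraform.tfvars (the prefixes below are placeholders; substitute your own):

allow_ssh_from_v4 = [
  "192.0.2.0/28",
]
allow_ssh_from_v6 = [
  "2001:db8::/64",
]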

Run terraform init:

$ terraform init
(...output omitted...)
Terraform has been successfully initialized!

Run terraform plan:

$ terraform plan
(...output omitted...)
Plan: 29 to add, 0 to change, 0 to destroy.

Fix any errors from the plan command, then run terraform apply:

$ terraform apply
(...output omitted...)
Apply complete! Resources: 29 added, 0 changed, 0 destroyed.

The instances are now created, and we are ready to do the final configuration with Ansible. The end result is:

$ openstack server list --name in8888 --sort-column Name -c Name -c Status -c Image
+--------------------+---------+-------------------+
| Name               | Status  | Image             |
+--------------------+---------+-------------------+
| in8888-h2024-lab-0 | ACTIVE  | master-snap-01    |
| in8888-h2024-lab-1 | ACTIVE  | master-snap-01    |
| in8888-h2024-lab-2 | ACTIVE  | master-snap-01    |
| in8888-h2024-lab-3 | ACTIVE  | master-snap-01    |
| in8888-h2024-lab-4 | ACTIVE  | master-snap-01    |
| in8888-master      | SHUTOFF | GOLD Alma Linux 9 |
+--------------------+---------+-------------------+

Configure student instances

Download the following files into the same directory as the Terraform files: terraform.yaml and add-labuser.yaml (see the File listing section at the end of this document).

Edit these files as necessary. At a minimum, you need to edit the terraform.yaml file so that project_path points to your project directory.
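
For example, if the project directory created above is ~/in8888-h2024, terraform.yaml could look like this (the path is an example):

plugin: cloud.terraform.terraform_provider
project_path: /home/user/in8888-h2024
binary_path: terraform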

Test that Ansible works:

$ ansible -i terraform.yaml all -m ping
[WARNING]: Collection cloud.terraform does not support Ansible version 2.14.14
[WARNING]: Invalid characters were found in group names but not replaced, use -vvvv to see details
in8888-h2024-lab-3 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
in8888-h2024-lab-2 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
in8888-h2024-lab-0 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
in8888-h2024-lab-1 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
in8888-h2024-lab-4 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}

Run the add-labuser.yaml playbook:

$ ansible-playbook -i terraform.yaml add-labuser.yaml
(...output omitted...)

The SSH pass phrases are saved in a file called labusers.csv, which is located in the same directory as the playbook. Example contents:

HOST,IPADDR,USERNAME,SSH_PASSPHRASE
in8888-h2024-lab-0,2001:700:2:8201::100e,labuser,Msho!nLKCCo)yIAvB$UC
in8888-h2024-lab-1,2001:700:2:8201::1270,labuser,cXhm_q%xvwvBmLM6rPF6
in8888-h2024-lab-2,2001:700:2:8201::1485,labuser,sGecTMBp0u11x0.OpEGn

The SSH keys are collected from the instances and placed in this directory:

labuser_ssh_keys

The keys are named:

  • Private key: id_ed25519_<instance_name>

  • Public key: id_ed25519_<instance_name>.pub

Example:

$ ls -l labuser_ssh_keys/
-rw-------. 1 user group 509 Oct 10 15:06 id_ed25519_in8888-h2024-lab-0
-rw-r--r--. 1 user group 131 Oct 10 15:06 id_ed25519_in8888-h2024-lab-0.pub
-rw-------. 1 user group 509 Oct 10 15:06 id_ed25519_in8888-h2024-lab-1
-rw-r--r--. 1 user group 131 Oct 10 15:06 id_ed25519_in8888-h2024-lab-1.pub
-rw-------. 1 user group 509 Oct 10 15:06 id_ed25519_in8888-h2024-lab-2
-rw-r--r--. 1 user group 131 Oct 10 15:06 id_ed25519_in8888-h2024-lab-2.pub
-rw-------. 1 user group 509 Oct 10 15:06 id_ed25519_in8888-h2024-lab-3
-rw-r--r--. 1 user group 131 Oct 10 15:06 id_ed25519_in8888-h2024-lab-3.pub

The SSH private/public key pair can be distributed to the individual students along with the pass phrase, which is stored in labusers.csv.
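
A single student's credentials can be extracted from the CSV with, for example, grep:

$ grep in8888-h2024-lab-0 labusers.csv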

The students will access their lab clone using the SSH key (example):

$ ssh 2001:700:2:8201::138b -l labuser -i path/to/id_ed25519_in8888-h2024-lab-0
Enter passphrase for key 'path/to/id_ed25519_in8888-h2024-lab-0':
[labuser@in8888-h2024-lab-0 ~]$
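
Students who want to replace the autogenerated pass phrase with one of their own can do so with ssh-keygen:

$ ssh-keygen -p -f path/to/id_ed25519_in8888-h2024-lab-0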

Adding or removing instances

In order to increase or decrease the number of instances, change the number in variables.tf:

variables.tf

# Mapping between role and number of instances (count)
variable "role_count" {
  type = map(string)
  default = {
    "students" = 5
  }
}

Then run:

terraform plan
terraform apply

Create users and SSH keys as before with:

ansible-playbook -i terraform.yaml add-labuser.yaml
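
Since the playbook's hosts line honours an optional myhosts variable, the run can also be limited to selected instances, e.g. a newly added one:

ansible-playbook -i terraform.yaml add-labuser.yaml -e myhosts=in8888-h2024-lab-5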

The credentials file labusers.csv will be updated to reflect the changes, and the directory labuser_ssh_keys will contain any new SSH key pairs. Note that the pass phrases are randomly generated but idempotent, so changing the number of instances will not change the pass phrases of existing instances.


File listing

A complete listing of the example files used in this document is provided below.

terraform.yaml

plugin: cloud.terraform.terraform_provider
project_path: /path/to/project
# Terraform binary (available in the $PATH) or full path to the binary.
binary_path: terraform
main.tf

# Define required providers
terraform {
  required_version = ">= 1.0"
  required_providers {
    openstack = {
      source = "terraform-provider-openstack/openstack"
    }
    ansible = {
      version = "~> 1.3.0"
      source  = "ansible/ansible"
    }
  }
}

provider "openstack" {}

# SSH key
resource "openstack_compute_keypair_v2" "keypair" {
  region     = var.region
  name       = "${var.name}-key"
  public_key = file(var.ssh_public_key)
}

# Student servers
resource "openstack_compute_instance_v2" "student_instance" {
  region      = var.region
  count       = lookup(var.role_count, "students", 0)
  name        = "${var.name}-lab-${count.index}"
  image_name  = lookup(var.role_image, "snapshot", "unknown")
  flavor_name = lookup(var.role_flavor, "flavor", "unknown")

  key_pair = "${var.name}-key"
  security_groups = [
    "${var.name}-icmp",
    "${var.name}-ssh",
  ]

  network {
    name = var.network
  }

  lifecycle {
    ignore_changes = [image_name, image_id]
  }

  depends_on = [
    openstack_networking_secgroup_v2.instance_icmp_access,
    openstack_networking_secgroup_v2.instance_ssh_access,
  ]
}

# Ansible student hosts
resource "ansible_host" "student_instance" {
  count  = lookup(var.role_count, "students", 0)
  name   = "${var.name}-lab-${count.index}"
  groups = [var.name] # Groups this host is part of

  variables = {
    ansible_host = trim(openstack_compute_instance_v2.student_instance[count.index].access_ip_v6, "[]")
  }
}

# Ansible student group
resource "ansible_group" "student_instances_group" {
  name     = "student_instances"
  children = [var.name]
  variables = {
    ansible_user       = "almalinux"
    ansible_connection = "ssh"
  }
}
secgroup.tf

# Security group ICMP
resource "openstack_networking_secgroup_v2" "instance_icmp_access" {
  region      = var.region
  name        = "${var.name}-icmp"
  description = "Security group for allowing ICMP access"
}

# Security group SSH
resource "openstack_networking_secgroup_v2" "instance_ssh_access" {
  region      = var.region
  name        = "${var.name}-ssh"
  description = "Security group for allowing SSH access"
}

# Allow ssh from IPv4 net
resource "openstack_networking_secgroup_rule_v2" "rule_ssh_access_ipv4" {
  region            = var.region
  count             = length(var.allow_ssh_from_v4)
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = var.allow_ssh_from_v4[count.index]
  security_group_id = openstack_networking_secgroup_v2.instance_ssh_access.id
}

# Allow ssh from IPv6 net
resource "openstack_networking_secgroup_rule_v2" "rule_ssh_access_ipv6" {
  region            = var.region
  count             = length(var.allow_ssh_from_v6)
  direction         = "ingress"
  ethertype         = "IPv6"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = var.allow_ssh_from_v6[count.index]
  security_group_id = openstack_networking_secgroup_v2.instance_ssh_access.id
}

# Allow icmp from IPv4 net
resource "openstack_networking_secgroup_rule_v2" "rule_icmp_access_ipv4" {
  region            = var.region
  count             = length(var.allow_icmp_from_v4)
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "icmp"
  remote_ip_prefix  = var.allow_icmp_from_v4[count.index]
  security_group_id = openstack_networking_secgroup_v2.instance_icmp_access.id
}

# Allow icmp from IPv6 net
resource "openstack_networking_secgroup_rule_v2" "rule_icmp_access_ipv6" {
  region            = var.region
  count             = length(var.allow_icmp_from_v6)
  direction         = "ingress"
  ethertype         = "IPv6"
  protocol          = "ipv6-icmp"
  remote_ip_prefix  = var.allow_icmp_from_v6[count.index]
  security_group_id = openstack_networking_secgroup_v2.instance_icmp_access.id
}
variables.tf

# Variables
variable "region" {
}

variable "name" {
  default = "in8888-h2024"
}

variable "ssh_public_key" {
  default = "~/.ssh/id_ed25519_in8888.pub"
}

variable "network" {
  default = "IPv6"
}

# Security group defaults
variable "allow_icmp_from_v6" {
  type    = list(string)
  default = []
}

variable "allow_icmp_from_v4" {
  type    = list(string)
  default = []
}

variable "allow_ssh_from_v6" {
  type    = list(string)
  default = []
}

variable "allow_ssh_from_v4" {
  type    = list(string)
  default = []
}

# Mapping between role and image
variable "role_image" {
  type = map(string)
  default = {
    "snapshot" = "master-snap-01"
  }
}

# Mapping between role and flavor
variable "role_flavor" {
  type = map(string)
  default = {
    "flavor" = "m1.small"
  }
}

# Mapping between role and number of instances (count)
variable "role_count" {
  type = map(string)
  default = {
    "students" = 5
  }
}
terraform.tfvars

# Region
region = "osl"

# This is needed for ICMP access
allow_icmp_from_v6 = [
  "2001:700:100:8070::/64",
  "2001:700:100:8071::/64",
]
allow_icmp_from_v4 = [
  "129.240.114.32/28",
  "129.240.114.48/28",
]

# This is needed to access the instance over ssh
allow_ssh_from_v6 = [
  "2001:700:100:8070::/64",
  "2001:700:100:8071::/64",
]
allow_ssh_from_v4 = [
  "129.240.114.32/28",
  "129.240.114.48/28",
]
add-labuser.yaml

- hosts: "{{ myhosts | default('all') }}"
  gather_facts: no

  vars:
    csvfile: "{{ playbook_dir }}/labusers.csv"
    username: "labuser"
    ssh_dir: "{{ playbook_dir }}/labuser_ssh_keys"

  tasks:
    - name: Create random but idempotent password
      ansible.builtin.set_fact:
        password: "{{ lookup('ansible.builtin.password', '/dev/null',
                              seed=inventory_hostname+ansible_host,
                              chars=['ascii_letters', 'digits', '().@%!-_']) }}"

    - name: Ensure lab user is present
      become: true
      ansible.builtin.user:
        name: "{{ username }}"
        comment: "Labuser for {{ inventory_hostname }}"
        create_home: true
        generate_ssh_key: true
        ssh_key_type: ed25519
        ssh_key_passphrase: "{{ password }}"
        ssh_key_file: ".ssh/id_ed25519"
      register: labuser

    - name: Give labuser general sudo access
      become: true
      community.general.sudoers:
        name: 10-labuser
        state: present
        user: labuser
        runas: root
        commands: ALL
        nopassword: true

    - name: Create directory to store ssh keys
      ansible.builtin.file:
        path: "{{ ssh_dir }}"
        state: directory
        mode: '0700'
      delegate_to: localhost
      run_once: true

    - name: Copy private ssh keys to localhost
      become: true
      ansible.builtin.fetch:
        src: /home/labuser/.ssh/id_ed25519
        dest: "{{ ssh_dir }}/id_ed25519_{{ inventory_hostname }}"
        flat: yes

    - name: Protect private keys
      ansible.builtin.file:
        path: "{{ ssh_dir }}/id_ed25519_{{ inventory_hostname }}"
        mode: '0600'
      delegate_to: localhost

    - name: Copy public ssh keys to localhost
      become: true
      ansible.builtin.fetch:
        src: /home/labuser/.ssh/id_ed25519.pub
        dest: "{{ ssh_dir }}/id_ed25519_{{ inventory_hostname }}.pub"
        flat: yes

    - name: Create authorized_keys from file
      become: true
      ansible.posix.authorized_key:
        user: labuser
        state: present
        key: "{{ lookup('file', ssh_dir + '/id_ed25519_' + inventory_hostname + '.pub') }}"

    - name: Print credentials if new/changed
      ansible.builtin.debug:
        msg: "NEW credential: {{ username }}:{{ password }}"
      when: labuser.changed

    - name: Create temporary directory
      ansible.builtin.tempfile:
        state: directory
      register: tmpdir
      delegate_to: localhost
      run_once: true

    - name: Create CSV header
      ansible.builtin.lineinfile:
        line: "HOST,IPADDR,USERNAME,SSH_PASSPHRASE"
        path: "{{ tmpdir.path }}/000.csv"
        create: yes
      delegate_to: localhost
      run_once: true

    - name: Save credentials in individual files
      ansible.builtin.lineinfile:
        line: "{{ inventory_hostname }},{{ ansible_host }},{{ username }},{{ password }}"
        path: "{{ tmpdir.path }}/{{ inventory_hostname }}.csv"
        create: yes
      delegate_to: localhost

    - name: Assemble CSV file from fragments
      ansible.builtin.assemble:
        src: "{{ tmpdir.path }}"
        dest: "{{ csvfile }}"
      delegate_to: localhost
      run_once: true

    - name: Remove temporary dir
      ansible.builtin.file:
        path: "{{ tmpdir.path }}"
        state: absent
      when: tmpdir.path is defined
      delegate_to: localhost
      run_once: true