Tail wags through the air,
butt follows closely.
Pure joy.
I’ve been having a rough few days. My partner thinks it’s because of the Daylight Saving Time switch, and maybe she’s right. It’s true that I’ve had similar rough patches around the start of spring, and historically I’ve attributed them to the sudden change in the pace of life around me. Being an introvert partnered with an extrovert, March usually feels like a whirlwind as my partner seemingly emerges from her winter hibernation and starts going out every day, either maintaining old relationships or building new ones. I, meanwhile, usually stick with what I’d been doing during the winter - games or hobbies. But this year has been different - I’ve been just as outgoing (in my own way) this winter, skiing at least one day every week, meeting with my close friends, and deepening some of my shallower friendships. And yet, here I am in the second week of March, feeling like life’s been beating me up and stealing my lunch money.
Maybe part of this is the general state of the world. We’re going through times that feel historical, and in a sense they are. If this book is to be believed, we’re simply in the crisis part of a cycle that’s been repeating for several centuries, and while it’s not the “end of times”, there are tough times looming for all of us. Every conversation I’ve had recently has visited this topic at some point, whether through the lens of the families of my Canadian friends who are boycotting American goods, or friends whose investments continue to lose value, or the dismantling of various government apparatuses and the bringing of independent government organizations under the president’s control, or the efforts the Washington state government is undertaking to undermine ICE and border control. On a more personal note, I’m on a paradoxically temporary permanent resident status, and cannot apply to make that actually permanent for at least another year. For a while I had labored under the impression that I could always fall back on my Indian citizenship in the worst case. However, I recently visited my family in India and spent six months there, talking with friends and family. Over that period, I witnessed all the ways in which a country that used to be fully secular on paper and (mostly) in practice has turned into a Hindu nation. I had a cousin I used to like comment on how dark my skin has gotten (being dark in India is read as being born into one of the “lower” castes) and then go on a long rant about why he hates his Muslim coworkers. I had a yoga teacher tell me proudly that he does “Savarkar’s work” every day. I later learnt that Savarkar - a figure who was only briefly mentioned in my history textbooks in school - was a prominent Hindu nationalist. I had my parents say things to me that I won’t repeat here. All things considered, moving back to India is no longer an option I hold in high regard.
And so I’m doubly concerned about the length of time till I can make my temporary status more permanent.
Today felt especially rough for several reasons. At the end of the work day, I set out to go night skiing, only to be forced to turn back because the pass to the resort was closed due to inclement weather. After returning home, I decided to take our dog on a walk, hoping to tire him out a bit more. However, as soon as we came back from the walk, he bounded up the stairs, picked up one of his squeaky toys and dropped it in my general direction, his tail wagging furiously in an unmistakable invitation to “Play!”. Annoyed by his distinct lack of tiredness, I snapped “I’m not playing with you” as I walked past him upstairs, and out of the corner of my eye I saw his tail stop wagging. I immediately felt sad that I’d made the dog sad, but also resentful that he didn’t want to leave me alone. Couldn’t he see that I was feeling bummed? And then I thought - maybe he does see that I’m feeling down, and that’s exactly why he’s inviting me to play with him. Maybe he recognizes that there is no better balm for a bruised soul than a vigorous game of tug and keep-away. So that’s exactly what I did; I came back downstairs, play-bowed to him and played with him for several minutes. And you know what? I think he was right. I do feel better. I’ll make an effort to listen to his advice more often.
One of my favorite quotes of all time is this definition of insanity:
Insanity is doing the same thing again and again and expecting different results. It’s thinking - “this time it’s going to be different”.
I don’t know where the original quote comes from (Benjamin Franklin? Gandhi?), but I heard it for the first time from a villain in the game “Far Cry 3”. Here’s the clip if you want to watch it; I still get goosebumps from the voice acting and motion capture: https://www.youtube.com/watch?v=rKMMCPeiQoc.
I used to be a prolific blogger. And then I somehow fell out of the habit of writing regularly, and instead got in the habit of tinkering with my blog.
- The blog started over on https://blogger.com
- I moved it over to Wordpress
- I moved it from Wordpress to Poet
- I moved it from Poet to Jekyll hosted on Github Pages
- I moved it from Jekyll to Hugo, hosted on S3
- I changed the deployment mechanism from Codeship to AWS CodePipeline
- I changed the deployment mechanism to Cloudflare Pages
There was a long break before each of these changes, and before the posts that announced them: I would go a long time without writing anything, realize that I hadn’t written anything in a long time, write a long blog post about how I’d made the blog better and easier to write, and then not write anything else for another long stretch.
With that out of the way, let me tell you about how I’ve now made the blog much better and a lot less trouble to write…
I’m only half kidding. The blog’s backend is mostly unchanged - it still uses Hugo, and it’s still hosted on Cloudflare Pages. I did try to use ox-hugo to export subtrees out of a single large org-mode file into individual posts, but inter-post links via that method have apparently been broken for at least a year. I then opted to use the method outlined in https://yejun.dev/posts/blogging-using-denote-and-hugo/ - with a few modifications - to roll my own solution that exports specially-marked Denote notes to Hugo markdown posts. The elisp code I hacked together with some help from https://claude.ai isn’t particularly well-written or generalizable, but it does what I need it to do (mark a note as a blog post, export it, export all marked notes from a directory) and stays out of my way otherwise. Here’s all of the code to make that happen:
```elisp
;; Adapted from https://yejun.dev/posts/blogging-using-denote-and-hugo/
;; Converts denote links to hugo's relref shortcodes to the generated files.
(advice-add 'denote-link-ol-export :around
            (lambda (orig-fun link description format)
              (if (and (eq format 'md)
                       (eq org-export-current-backend 'hugo))
                  (let* ((path (denote-get-path-by-id link))
                         (export-file-name (ameyp/denote-generate-hugo-export-file-name path)))
                    (format "[%s]({{< relref \"%s\" >}})"
                            description
                            export-file-name))
                (funcall orig-fun link description format))))

;; Add advice around org-export-output-file-name so that I can generate the
;; filename from denote frontmatter rather than needing to add an explicit
;; export_file_name property.
(advice-add 'org-export-output-file-name :around
            (lambda (orig-fun extension &optional subtreep pub-dir)
              (if (and (string-equal extension ".md")
                       (ameyp/denote-should-export-to-hugo))
                  (let ((base-name (concat
                                    (ameyp/denote-generate-hugo-export-file-name (buffer-file-name))
                                    extension)))
                    (cond
                     (pub-dir (concat (file-name-as-directory pub-dir)
                                      (file-name-nondirectory base-name)))
                     (t base-name)))
                (funcall orig-fun extension subtreep pub-dir))))

(defvar ameyp/denote--hugo-export-regexp "hugo_export[[:blank:]]*:[[:blank:]]*"
  "The frontmatter property for indicating that the note should be exported to a hugo post.")

(defun ameyp/denote-generate-hugo-export-file-name (filename)
  "Generate a hugo slug from the supplied FILENAME."
  (let* ((title (denote-retrieve-filename-title filename))
         (date (denote--id-to-date (denote-retrieve-filename-identifier filename))))
    (concat date "-" title)))

(defun ameyp/denote-should-export-to-hugo ()
  "Check whether the current buffer should be exported to a published hugo post."
  (save-excursion
    (save-restriction
      (widen)
      (goto-char (point-min))
      (when (re-search-forward ameyp/denote--hugo-export-regexp nil t 1)
        (let ((value (buffer-substring (point) (line-end-position))))
          (or (string-equal value "t")
              (string-equal value "true")))))))

(defun ameyp/goto-last-consecutive-denote-property-line ()
  "Move point to the last consecutive line at the beginning of the buffer that starts with \"#+\"."
  (interactive)
  (goto-char (point-min))
  (let ((last-prop-line (point-min)))
    ;; Note: the plus must be escaped; the regexp "^#+" would also match
    ;; plain "#" comment lines.
    (while (looking-at "^#\\+")
      (setq last-prop-line (point))
      (forward-line 1))
    (goto-char last-prop-line)
    (if (looking-at "^#\\+")
        (beginning-of-line)
      (message "No property line found"))))

(defun ameyp/org-hugo-mark-for-export ()
  "Insert a frontmatter property to mark the denote file for export to a hugo post."
  (interactive)
  (save-excursion
    (save-restriction
      (widen)
      (goto-char (point-min))
      (if (re-search-forward ameyp/denote--hugo-export-regexp nil t 1)
          ;; Found an existing property, set it to t.
          (progn
            (delete-region (point) (line-end-position))
            (insert "t"))
        ;; No existing property found, go to the end of the frontmatter and
        ;; insert the property.
        (ameyp/goto-last-consecutive-denote-property-line)
        (goto-char (line-end-position))
        (insert "\n#+hugo_export: t")))))

(defun ameyp/org-hugo-export ()
  "Export current buffer to a hugo post."
  (interactive)
  (if (ameyp/denote-should-export-to-hugo)
      (let ((org-hugo-section "post")
            (org-hugo-base-dir "~/Developer/wirywolf.com")
            (org-hugo-front-matter-format "yaml"))
        (org-hugo-export-wim-to-md))
    (message "Not exporting %s" (buffer-file-name))))

(defun ameyp/org-hugo-export-marked-files ()
  "Export all marked files in the current dired buffer to hugo posts."
  (interactive)
  (let ((org-hugo-section "post")
        (org-hugo-base-dir "~/Developer/wirywolf.com")
        (org-hugo-front-matter-format "yaml"))
    (save-window-excursion
      (mapc (lambda (filename)
              (find-file filename)
              (ameyp/org-hugo-export))
            (dired-get-marked-files)))))
```
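As a sanity check on the naming scheme, here is roughly what `ameyp/denote-generate-hugo-export-file-name` does, sketched in Python. The filename and the simplified regexp are purely illustrative; Denote's own filename parsing is more robust than this.

```python
import re

def hugo_export_file_name(filename):
    """Turn a Denote-style filename into a Hugo-style date-title slug."""
    # A Denote filename starts with an identifier like 20240310T101530,
    # followed by "--" and the title; tags come after "__".
    m = re.match(r"(\d{4})(\d{2})(\d{2})T\d{6}--([^_.]+)", filename)
    year, month, day, title = m.groups()
    return f"{year}-{month}-{day}-{title}"

print(hugo_export_file_name("20240310T101530--my-post-title__blog.org"))
# 2024-03-10-my-post-title
```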
It mostly works, except when I need to embed hugo shortcodes inside a source code block, like in the block above. I haven’t figured out how to escape them automatically during the conversion process (I rely on ox-hugo’s markdown exporter), so for now any hugo shortcodes must be manually escaped in the generated markdown, as described in this post: https://liatas.com/posts/escaping-hugo-shortcodes/
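For reference, Hugo's documented escape hatch is to wrap the shortcode delimiters in comment markers, which makes Hugo render the shortcode literally instead of executing it. In the generated markdown, an escaped shortcode looks like this (the relref target here is just an illustrative slug):

```markdown
{{</* relref "2024-03-10-my-post-title" */>}}
```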
I ran into several issues when trying to create a Fedora CoreOS template on Proxmox using Packer, so here’s how I got it working.
Overview
- Using Packer, we will boot a VM with a CoreOS live CD ISO attached. We will then run `coreos-installer` to install CoreOS to a disk attached to that VM, and finally Packer will convert that VM to a template.
- The Packer VM needs to boot up with `qemu-guest-agent` installed for Packer to read its IP and run commands on it for further setup.
- The Packer VM needs SSH public keys for Packer to be able to SSH to it.
Ignition configs
CoreOS uses a tool called Ignition to perform first-boot configuration. While you can write Ignition configurations by hand (it’s JSON after all), Fedora recommends writing your configs in Butane and using the eponymous CLI to convert them to the Ignition format. We need our Ignition/Butane config to do two things:
- Run `qemu-guest-agent` on startup
- Create a user with `sudo` privileges and our SSH public key
Fedora CoreOS does not have a traditional package manager, so `qemu-guest-agent` is not easily obtainable. The only way I could think of was to run it inside a docker container with `--network host` so that it would report the host VM’s IP address and not the container’s internal one. There is no official docker image for `qemu-guest-agent`; I opted to use this one. You can either use it (the Dockerfile looks trustworthy) or build your own.

Additionally, I ran into a third thing that’s needed specifically for the Packer VM: docker needs to be given a different `data-root`. By default, the docker daemon writes data to a subdirectory of `/var`. However, the `qemu-guest-agent` docker container proved too large for whatever storage device the ISO mounted at `/var`. After logging in, I noticed that `/tmp` had nearly a gigabyte of free space, so I configured docker to use `/tmp/docker` as its data directory specifically for the Packer VM.
All that said, here’s the butane config file for the Packer VM:
<a id="code-snippet--installer.bu"></a>
```yaml
variant: fcos
version: 1.4.0
storage:
  files:
    - path: /etc/docker/daemon.json
      mode: 0600
      contents:
        inline: |
          {
            "data-root": "/tmp/docker"
          }
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-rsa AAAAB3...
      groups:
        - wheel
        - sudo
systemd:
  units:
    - name: qemu-guest-agent.service
      enabled: true
      contents: |
        [Unit]
        Description=Runs qemu guest agent inside docker

        [Service]
        Type=oneshot
        RemainAfterExit=yes
        ExecStart=docker run -d --privileged --network host -v /dev/virtio-ports/org.qemu.guest_agent.0:/dev/virtio-ports/org.qemu.guest_agent.0 eleh/qemu-guest-agent

        [Install]
        WantedBy=multi-user.target
```
Next up, the final VM (the one Packer will convert into a template) needs its own Ignition config file. This will be used by Ignition on the first boot of any VM you create by cloning the template. It is largely identical to the config file used for installation.
<a id="code-snippet--template.bu"></a>
```yaml
variant: fcos
version: 1.4.0
storage:
  files:
    - path: /etc/docker/daemon.json
      mode: 0600
      contents:
        inline: |
          {
            "data-root": "/tmp/docker"
          }
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-rsa AAAAB3...
      groups:
        - wheel
        - sudo
systemd:
  units:
    - name: qemu-guest-agent.service
      enabled: true
      contents: |
        [Unit]
        Description=Runs qemu guest agent inside docker

        [Service]
        Type=oneshot
        RemainAfterExit=yes
        ExecStart=docker run -d --privileged --network host -v /dev/virtio-ports/org.qemu.guest_agent.0:/dev/virtio-ports/org.qemu.guest_agent.0 eleh/qemu-guest-agent

        [Install]
        WantedBy=multi-user.target
```
You can convert a butane file to the ignition format by running:

```shell
butane --pretty --strict config/input.bu > output.ign
```
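For a feel of what butane emits, here is an abbreviated, hypothetical sketch of the Ignition JSON produced for the `passwd` section above (assuming the fcos 1.4.0 variant targets Ignition spec 3.3.0; note the camelCase field names):

```json
{
  "ignition": { "version": "3.3.0" },
  "passwd": {
    "users": [
      {
        "name": "core",
        "sshAuthorizedKeys": ["ssh-rsa AAAAB3..."],
        "groups": ["wheel", "sudo"]
      }
    ]
  }
}
```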
Packer build
We’re going to use a few features of Packer to achieve our goal:
- The Proxmox ISO builder provided by Packer
- The HTTP server Packer provides, to serve our Packer VM’s Ignition config
- The `additional_iso_files` option provided by the Proxmox builder, to serve our template’s Ignition config via an additional ISO attached to the Packer VM
- The shell provisioner provided by Packer to install CoreOS
Create a directory (call it `packer-root`) with the following files:

```
.
|- config
|  |- installer.bu
|  |- template.bu
|- proxmox-coreos.pkr.hcl
```
In the config below, the Proxmox credentials are read from variables; store your secrets (such as your Proxmox token/password) in variables and keep the files that define them private (i.e. off GitHub). Change the values of fields like the `vm_id` and `iso_file` to taste, and note that you will have to manually download that ISO to your Proxmox node first. If your storage pools on Proxmox have different names, you’ll have to change `local` to your pool’s name in several places too.
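Since the config references `var.proxmox.*`, you will also need a variable declaration along these lines, plus a private `*.auto.pkrvars.hcl` file that assigns it. This is a sketch; the object shape simply mirrors the fields the config uses:

```hcl
variable "proxmox" {
  type = object({
    node     = string
    username = string
    token    = string
    api_url  = string
  })
  sensitive = true
}
```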
<a id="code-snippet--proxmox-coreos.pkr.hcl"></a>
```hcl
packer {
  required_plugins {
    proxmox = {
      version = ">= 1.1.0"
      source  = "github.com/hashicorp/proxmox"
    }
  }
}

source "proxmox" "coreos" {
  // proxmox configuration
  insecure_skip_tls_verify = true
  node                     = var.proxmox.node
  username                 = var.proxmox.username
  token                    = var.proxmox.token
  proxmox_url              = var.proxmox.api_url

  # Commands packer enters to boot and start the auto install
  boot_wait = "2s"
  boot_command = [
    "<spacebar><wait><spacebar><wait><spacebar><wait><spacebar><wait><spacebar><wait>",
    "<tab><wait>",
    "<down><down><end>",
    " ignition.config.url=http://{{ .HTTPIP }}:{{ .HTTPPort }}/installer.ign",
    "<enter>"
  ]

  # This supplies our installer ignition file
  http_directory = "config"

  # This supplies our template ignition file
  additional_iso_files {
    cd_files         = ["./config/template.ign"]
    iso_storage_pool = "local"
    unmount          = true
  }

  # CoreOS does not support CloudInit
  cloud_init = false
  qemu_agent = true

  scsi_controller = "virtio-scsi-pci"
  cpu_type        = "host"
  cores           = "2"
  memory          = "2048"
  os              = "l26"

  vga {
    type   = "qxl"
    memory = "16"
  }

  network_adapters {
    model  = "virtio"
    bridge = "vmbr0"
  }

  disks {
    disk_size         = "45G"
    storage_pool      = "local-lvm"
    storage_pool_type = "lvm"
    type              = "virtio"
  }

  iso_file    = "local:iso/fedora-coreos-37.20221106.3.0-live.x86_64.iso"
  unmount_iso = true

  template_name        = "coreos-37.20221106.3.0"
  template_description = "Fedora CoreOS"

  ssh_username         = "core"
  ssh_private_key_file = "~/.ssh/id_rsa"
  ssh_timeout          = "20m"
}

build {
  sources = ["source.proxmox.coreos"]

  provisioner "shell" {
    inline = [
      "sudo mkdir /tmp/iso",
      "sudo mount /dev/sr1 /tmp/iso -o ro",
      "sudo coreos-installer install /dev/vda --ignition-file /tmp/iso/template.ign",
      # Packer's shutdown command doesn't seem to work, likely because we run
      # qemu-guest-agent inside a docker container. This will shut down the VM
      # after 1 minute, which is less than the duration that Packer waits for
      # its shutdown command to complete, so it works out.
      "sudo shutdown -h +1"
    ]
  }
}
```
Bringing it all together
Go to `packer-root` and generate the ignition configs:

```shell
butane --pretty --strict config/installer.bu > config/installer.ign
butane --pretty --strict config/template.bu > config/template.ign
```
Build the template using packer:

```shell
packer build --on-error=ask .
```
Assuming the build succeeds, you should see your new VM template on your Proxmox node. If it fails, check the console of the VM created by Packer for errors. You can also get detailed Packer logs by exporting the following environment variables before running `packer build`:

```shell
export PACKER_LOG_PATH="/tmp/packer.log"
export PACKER_LOG=10
```

Running packer after exporting these variables will create a detailed log file at `/tmp/packer.log`.
Alternate approach: static IPs and rpm-ostree
If you don’t want to use an unofficial docker image for `qemu-guest-agent`, you can assign static IPs to the Packer VM and to the template. Additionally, you can install `qemu-guest-agent` in the template (but not in the Packer VM, because its root filesystem is mounted read-only) using rpm-ostree.
Static IP for Packer VM
To boot the Packer VM with a static IP, change the `boot_command` line that starts with `ignition.config.url` to:

```hcl
    " ignition.config.url=http://{{ .HTTPIP }}:{{ .HTTPPort }}/installer.ign net.ifnames=0",
```

and change your `installer.bu` file to:
```yaml
variant: fcos
version: 1.4.0
storage:
  files:
    - path: /etc/NetworkManager/system-connections/eth0.nmconnection
      mode: 0600
      contents:
        inline: |
          [connection]
          id=eth0
          type=ethernet
          interface-name=eth0

          [ipv4]
          address1=192.168.1.200/24,192.168.1.1
          dhcp-hostname=k3s-test-controller
          dns=192.168.1.1;
          dns-search=
          may-fail=false
          method=manual
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCkayHzoWIWE4P1z3+qOoyfdnapU8ATcYUriXDsdGkyncEZpnz4jHqZsp0EVZhtSg668H8+aEDd4RSYHvmprXWZJQe+CUIQRIfazch8mCmlVYpRVqtjms3ya7S6WWl96+jwecEwQf0eDYojFry+S5A8+cZmIZfsQb6PkRr350OxzufH2dii96zS9aIOFz7NiVn/qB+mhyMuicrPqzx0HJjK4t8p2WFMAQsPrFqWwWlX/nDr0xFDmPUZlh4SEhznSB+ai99B0FFsjaHyhlSGBL56Sy0TL3CGXWcaW5kwQhzf9P1n/WK+83j8CLkD/xwxhB5MdhNUWIY7c02QWIeU9RPOU6Y8Qf4sgKpd6/CKROJC/SkBDFpE6MMX24/UejR1PPFP+qwg6XnX2g08gIonfI9tKBTsMAPib2D13ZSUK/QgxmOV33hfbiDPXmyXFeLuzW/GIuP9PWbe6qNYoDL2ZUk/BK3kgLWd4gXtVS3Gtu/DEiw+3kCwjP85VBW0NUx7GbM= amey@ubuntu
      groups:
        - wheel
        - sudo
```
Static IP for template VM
Change your `template.bu` file to:
```yaml
variant: fcos
version: 1.4.0
kernel_arguments:
  should_exist:
    - net.ifnames=0
storage:
  files:
    - path: /etc/NetworkManager/system-connections/eth0.nmconnection
      mode: 0600
      contents:
        inline: |
          [connection]
          id=eth0
          type=ethernet
          interface-name=eth0

          [ipv4]
          address1=192.168.1.201/24,192.168.1.1
          dhcp-hostname=k3s-test-controller
          dns=192.168.1.1;
          dns-search=
          may-fail=false
          method=manual
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCkayHzoWIWE4P1z3+qOoyfdnapU8ATcYUriXDsdGkyncEZpnz4jHqZsp0EVZhtSg668H8+aEDd4RSYHvmprXWZJQe+CUIQRIfazch8mCmlVYpRVqtjms3ya7S6WWl96+jwecEwQf0eDYojFry+S5A8+cZmIZfsQb6PkRr350OxzufH2dii96zS9aIOFz7NiVn/qB+mhyMuicrPqzx0HJjK4t8p2WFMAQsPrFqWwWlX/nDr0xFDmPUZlh4SEhznSB+ai99B0FFsjaHyhlSGBL56Sy0TL3CGXWcaW5kwQhzf9P1n/WK+83j8CLkD/xwxhB5MdhNUWIY7c02QWIeU9RPOU6Y8Qf4sgKpd6/CKROJC/SkBDFpE6MMX24/UejR1PPFP+qwg6XnX2g08gIonfI9tKBTsMAPib2D13ZSUK/QgxmOV33hfbiDPXmyXFeLuzW/GIuP9PWbe6qNYoDL2ZUk/BK3kgLWd4gXtVS3Gtu/DEiw+3kCwjP85VBW0NUx7GbM= amey@ubuntu
      groups:
        - wheel
        - sudo
```
Install qemu-guest-agent using rpm-ostree
Full credit for this approach goes to the author of this blog post. Add the following to the relevant sections of your `template.bu` file:
```yaml
storage:
  files:
    - path: /usr/local/bin/install-qemu-guest-agent
      mode: 0755
      contents:
        inline: |
          #!/usr/bin/env bash
          set -euo pipefail
          rpm-ostree install qemu-guest-agent
systemd:
  units:
    - name: install-qemu-guest-agent.service
      enabled: true
      contents: |
        [Unit]
        After=network-online.target
        Wants=network-online.target
        Before=systemd-user-sessions.service
        OnFailure=emergency.target
        OnFailureJobMode=replace-irreversibly
        ConditionPathExists=!/var/lib/qemu-guest-agent-installed

        [Service]
        RemainAfterExit=yes
        Type=oneshot
        ExecStart=/usr/local/bin/install-qemu-guest-agent
        ExecStartPost=/usr/bin/touch /var/lib/qemu-guest-agent-installed
        ExecStartPost=/usr/bin/systemctl --no-block reboot
        StandardOutput=kmsg+console
        StandardError=kmsg+console

        [Install]
        WantedBy=multi-user.target
```
The main Terraform provider for Proxmox is not very polished. It works, but it doesn’t appear to perform any validation of your inputs, and it does a terrible job of surfacing errors thrown by Proxmox. Consequently, any time you make a mistake in a resource, you’re likely to see an extremely unhelpful message:

```
400 Parameter Validation failed
```

When this happens, add `pm_debug = true` to your provider configuration:
```hcl
provider "proxmox" {
  # ...
  pm_debug = true
}
```
and run `TF_LOG=TRACE terraform apply` to get detailed logs from terraform. The actual problem with your resource will be somewhere in the output.
While attempting to build an Ubuntu 22.04 image using Packer, with the build running on a Proxmox VM, I got the following error:
```
==> proxmox.ubuntu_k3s: Post "https://<ip>/api2/json/nodes/proxmox/storage/local/upload": write tcp <local ip>-><ip>: write: broken pipe
==> proxmox.ubuntu_k3s: delete volume failed: 501 Method 'DELETE /nodes/proxmox/storage/local/content/' not implemented
Build 'proxmox.ubuntu_k3s' errored after 20 milliseconds 690 microseconds: 501 Method 'DELETE /nodes/proxmox/storage/local/content/' not implemented
```
After a few searches, I found an open issue on GitHub pointing to missing permissions for the user configured for Packer. The fix was to add the correct `Datastore` permissions. I probably added too many because I wasn’t sure which one to add, but here’s the set that worked for me:
```shell
pveum role modify <role-name-here> -privs "VM.Allocate VM.Clone VM.Config.CDROM VM.Config.CPU VM.Config.Cloudinit VM.Config.Disk VM.Config.HWType VM.Config.Memory VM.Config.Network VM.Config.Options VM.Monitor VM.Audit VM.PowerMgmt Datastore.AllocateSpace Datastore.Allocate Datastore.AllocateSpace Datastore.AllocateTemplate Datastore.Audit Sys.Audit VM.Console"
```