<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"><channel><title>RSS Feed</title><link>https://blog.jjgaming.net/blog/feed</link><description/><pubDate>Tue, 29 Apr 2025 00:00:00 +1000</pubDate><item><title>ZFS Root on Ubuntu</title><link>https://blog.jjgaming.net/blog/zfs-root-ubuntu</link><guid>https://blog.jjgaming.net/blog/zfs-root-ubuntu</guid><pubDate>Tue, 29 Apr 2025 00:00:00 +1000</pubDate><description><![CDATA[<h1>ZFS, Native Encryption, root's and you</h1>
<h2>About</h2>
<p>Alike my existing article for a ZFS-rootfs on Archlinux I would also like to provide instructions for achieving the same thing on Ubuntu Server 24.04.2</p>
<p>Please see my main article <a href="https://blog.jjgaming.net/blog/zfs-root">https://blog.jjgaming.net/blog/zfs-root</a> regarding the use of a ZFS rootfs, reasons to use one and more.</p>
<p>This example makes some assumptions:</p>
<ol>
<li>It will create an EFI partition and a boot partition before the zfs partition on each disk</li>
<li>It also stores the decryption passphrase key file in the initramfs for automatic unlocking of the zpool at boot time. By omitting this step the host will instead prompt for the passphrase.</li>
<li>Only the first disk will be used for its EFI and boot partitions (The others are also provisioned this way for future proofing but are not used in this example)</li>
</ol>
<h3>Instructions</h3>
<pre><code>
#!/bin/bash

set -e # Stop on any error

# Set up some variables
export disks=(/dev/disk/by-path/virtio-pci-0000:00:02.0) # Some disk path, works with /dev/ paths and /dev/disk paths

export zpoolType='' # Option examples: raidz2, raidz3 # mirror #'' &lt; blank for stripe/single disk
export rootfsPassphrase=UbuntU1324
export rootPassword=${rootfsPassphrase}
export hostName=myUbuntuServer
export zpoolName=${hostName}
export timezoneFile=/usr/share/zoneinfo/UTC

# Check if we're using -partX or numbered partition suffixes suffixes
if [[ ${disks[0]} =~ /dev/disk/by-path ]]
then
  partPrefix='-part' # /dev/disk/by- partition suffixes
else
  partPrefix='' # No partition suffixes
fi

# Partition the disks with an EFI, boot and ZFS partition
for disk in ${disks[*]} ; do sgdisk "${disk}" -o \
-n 1:2M:+1G -t 1:EF00 \
-n 2:0:+2G -t 2:8300  \
-n 3 -t 3:BF01 \
; done

sleep 1 # Wait a moment for partition tables to appear

# mkfs.vfat on all first partitions
for part in ${disks[*]/%/${partPrefix}1} ; do ls "${part}" ; mkfs.vfat -F 32 -n EFI "${part}" ; done

# Install ZFS in the live environment
apt install -y zfsutils-linux

# Create our passphrase key and protect it
echo "${rootfsPassphrase}" &gt; /etc/zfs/${zpoolName}.key
chmod 000 /etc/zfs/${zpoolName}.key

# Create our zpool with however many disks
zpool create -f -o ashift=12 \
 -O compression=lz4 \
 -O normalization=formD \
 -O acltype=posixacl \
 -O xattr=sa \
 -O relatime=on \
 -O encryption=aes-256-gcm \
 -O keylocation=file:///etc/zfs/${zpoolName}.key \
 -O keyformat=passphrase \
 -O canmount=off \
 -O mountpoint=none \
 -o autotrim=on \
 -o compatibility=openzfs-2.1-linux \
 -R /${zpoolName} \
 ${zpoolName} ${zpoolType} "${disks[*]/%/${partPrefix}3}"

# Create some additional mountpoints and mount them (Optional but often preferred)
zfs create -o mountpoint=/ -o canmount=noauto ${zpoolName}/root

mkdir -p /${zpoolName}
mount -t zfs -o zfsutil ${zpoolName}/root    /${zpoolName}
mkdir -p /${zpoolName}/home /${zpoolName}/var    /${zpoolName}/var/log

# Top level
#zfs create -o mountpoint=/home ${zpoolName}/home
#zfs create -o mountpoint=/var ${zpoolName}/var
#zfs create -o mountpoint=/var/log ${zpoolName}/var/log

# Nested under rootfs
zfs create ${zpoolName}/root/home
zfs create ${zpoolName}/root/var
zfs create ${zpoolName}/root/var/log

# zfs mount -a

# Set our bootfs as a hint to the startup environment
zpool set bootfs=${zpoolName}/root ${zpoolName}

# Top level
#mkdir -p /${zpoolName}/home /${zpoolName}/var    /${zpoolName}/var/log
#mount -t zfs -o zfsutil ${zpoolName}/home    /${zpoolName}/home
#mount -t zfs -o zfsutil ${zpoolName}/var     /${zpoolName}/var
#mount -t zfs -o zfsutil ${zpoolName}/var/log /${zpoolName}/var/log

# mkfs.ext4 on the second partition of the first disk and begin mounting
mkfs.ext4 ${disks[0]}${partPrefix}2
mkdir /${zpoolName}/boot
mount ${disks[0]}${partPrefix}2 /${zpoolName}/boot
mkdir /${zpoolName}/boot/efi
###mkfs.msdos -F 32 -n EFI ${disks[0]}${partPrefix}1
mount ${disks[0]}${partPrefix}1 /${zpoolName}/boot/efi

# Install debootstrap for our installation and begin the installation
apt install -y debootstrap
debootstrap $(basename `ls -d /cdrom/dists/*/ | grep -v stable | head -1`) /${zpoolName}

# Set up the new environmnet
echo ${hostName} &gt; /${zpoolName}/etc/hostname
sed "s/ubuntu/${hostName}/g" /etc/hosts &gt; /${zpoolName}/etc/hosts
echo 'deb https://ubuntu.mirror.serversaustralia.com.au/ubuntu/ noble main' &gt; /etc/apt/sources.list # Overwrites
sed '/cdrom/d' /etc/apt/sources.list &gt; /${zpoolName}/etc/apt/sources.list
cp /etc/netplan/*.yaml /${zpoolName}/etc/netplan/
mkdir -p /${zpoolName}/install/etc/NetworkManager/system-connections/

if [ -d /etc/NetworkManager/system-connections ]
then
  cp /etc/NetworkManager/system-connections/* /${zpoolName}/install/etc/NetworkManager/system-connections/
fi

mkdir -p /${zpoolName}/etc/zfs
cp -nv /etc/zfs/${zpoolName}.key /${zpoolName}/etc/zfs

# Copy timezoneFile information

#if [ -n "${timezoneFile}" ]
#then
#  rm /etc/localtime
#  ln -s "${timezoneFile}" /etc/localtime
#  mkdir -p /{zpoolName}/etc
#  ln -s ln -s "/${zpoolName}/${timezoneFile}" /${zpoolName}/etc/localtime
#fi

# Prepare to and chroot into the new environment
mount --make-private --rbind /dev  /${zpoolName}/dev
mount --make-private --rbind /proc /${zpoolName}/proc
mount --make-private --rbind /sys  /${zpoolName}/sys

# Next we run some setup commands in the chroot

yes "${rootPassword}" | chroot /${zpoolName} passwd root

chroot /${zpoolName} locale-gen --purge "en_US.UTF-8"
chroot /${zpoolName} update-locale LANG=en_US.UTF-8 LANGUAGE=en_US
chroot /${zpoolName} dpkg-reconfigure --frontend noninteractive locales

chroot /${zpoolName} dpkg-reconfigure --frontend noninteractive tzdata
chroot /${zpoolName} apt update
chroot /${zpoolName} apt install --yes --no-install-recommends linux-image-generic linux-headers-generic
chroot /${zpoolName} apt install --yes zfs-initramfs grub-efi-amd64-signed shim-signed

echo "PARTUUID=$(blkid -s PARTUUID -o value ${disks[0]}${partPrefix}2) \
    /boot ext4 noatime,nofail,x-systemd.device-timeout=5s 0 1" &gt;&gt; /${zpoolName}/etc/fstab
echo "PARTUUID=$(blkid -s PARTUUID -o value ${disks[0]}${partPrefix}1) \
    /boot/efi vfat noatime,nofail,x-systemd.device-timeout=5s 0 1" &gt;&gt; /${zpoolName}/etc/fstab
#cat /etc/fstab

KERNEL=`ls /${zpoolName}/usr/lib/modules/ | cut -d/ -f1 | sed 's/linux-image-//'`
chroot /${zpoolName} update-initramfs -u -k ${KERNEL}

chroot /${zpoolName} update-grub
chroot /${zpoolName} grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=${hostName}     --recheck --no-floppy

# If we made it here we've finished

mount | grep -v zfs | tac | awk "/\/${zpoolName}/ {print \$3}" | xargs -i{} umount -lf {}
zpool export -a
echo Done. Ready to be rebooted into.
</code></pre>]]></description></item><item><title>ZFS Root on Archlinux</title><link>https://blog.jjgaming.net/blog/zfs-root-archlinux</link><guid>https://blog.jjgaming.net/blog/zfs-root-archlinux</guid><pubDate>Mon, 05 Jun 2023 00:00:00 +1000</pubDate><description><![CDATA[<h1>ZFS, Native Encryption, root's and you</h1>
<h2>About</h2>
<p>I see repeated questions and general confusion surrounding zpool's and datasets for a zfs-on-root configuration so I figured that would make a good first post.</p>
<p>ZFS is a fully featured solution providing disk array management, its own filesystem and support for virtual block devices backed by the pool. It has a ton of transparent features for optimizing performance and efficiency for tuning in professional deployments.</p>
<p>Data integrity is verified through checksumming of each written record. With this zpool on a single disk is capable of detecting bitrot caused by a failing drive or a transient problem on the host allowing for an early reaction to failure. Redundant arrays can repair the corrupted data.</p>
<p>It also uses a Copy-on-Write methodology avoiding the write-hole problem in traditional hardware RAID5 arrays among other potential write truncation events.</p>
<h2>Native Encryption, key notes and comparisons</h2>
<p>ZFS's Native Encryption works 'at rest' meaning it encrypts data as it is written to the disks. This happens transparently alongside other features such as compression with advantages and other considerations.</p>
<p>ZSF Native Encryption uses modern encryption standards such as AES128/256 with either in Cipher Block Chaining (CBC) mode or Galoris/Counter Mode (GCM). GCM supports multithreading and is the recommended choice (<code>aes-256-gcm</code>). It also happens to be the default when encryption is set to <code>on</code>.</p>
<p>As encryption is enabled per-dataset (or volume) it doesn't encrypt metadata regarding the zpool itself. This means running <code>zfs list</code> will still show an encrypted dataset's name and properties though its contents cannot be read without unlocking it with a passphrase or keyfile. Some consider this unacceptable though this model poses no security risk for the data inside.</p>
<p>This design choice comes with powerful benefits such as being able to verify the integrity of the data in a scrub operation without having to unlock the dataset. This also means you can also send a raw snapshot of an encrypted dataset to another zpool or remote zpool without decrypting it. The destination can safely store a copy and even check it for errors during a scrub without ever having access to the content inside.</p>
<p>OpenZFS have implemented this feature from the ground up. Its capable of achieving great speeds on modern CPUs with relevant acceleration though could prove to be an undercut for IO performance on older systems. I've personally found that older model laptops achieve significantly slower throughput with native encryption compared with today's desktop CPUs of many cores/threads to do work on.</p>
<h3>LUKS (Linux Unified Key Setup)</h3>
<p>LUKS is the tried and tested go-to for Linux disk encryption featuring great performance and compatibility with any filesystem as it presents the underlying storage as a virtual block device ready to be formatted with anything. It can be used directly on a disk or partition before putting a filesytem on top. It is often used in conjunction with other software array management tools such as <code>mdadm</code> or <code>lvm</code> to encrypt a virtual block device such as LVM's Logical Volumes. Some even run LUKS on top of a ZFS zvol.</p>
<p>In difference to Native Encryption on ZFS where the dataset metadata is known and accessible on a zpool, LUKS encrypts all writes to the virtual block device it exposes. Instead of unlocking a ZFS dataset at boot time one could opt to unlock a LUKS-encrypted whole-disk and mount the underlying partitioned filesystem. Or one could make your zpool on top of disks encrypted with LUKS if desired.</p>
<p>A combination of ZFS and LUKS (disk&gt;Luks&gt;zpool&gt;dataset / zfs&gt;zvol&gt;luks&gt;filesystem) can eliminate the metadata concern some have with luks encrypting everything. Though the metadata of zfs datasets and volumes pose no threat following a sound encryption implementation with a securely stored passphrase or keyfile for unlocking them.</p>
<h2>Using ZFS for a root filesystem</h2>
<p>There are good reasons to use ZFS as your rootfs. Not only is a rootfs well compressible but the use of ZFS is transparent to the operating system allowing us to take advantage of many relevant features for workstation setup such as native encryption, transparent compression and snapshotting for easy rollbacks or emergency retrieval of something which has been deleted.</p>
<p>I intend to cover some popular ZFS root EFI configurations in this article using Archlinux and to hope to clear up any confusion regarding commonly included flags and trends from the web.</p>
<p>Some mainstream distributions already bundle support for ZFS-root configurations however I find they clutter the topology and naming conventions quite often when it really doesn't have to be complicated in the slightest. This guide will be focusing on a tidy configuration example where one could easily recursively send all zpool datasets elsewhere as part of a backup strategy.</p>
<h3>Getting started</h3>
<p>If you're unsure about configuring a ZFS root or whether it's for you - you could also follow this guide zero risk using a virtual machine with a throwaway virtual disk.</p>
<p>With Archlinux there are many ways to bootstrap a new system or VM. You can:</p>
<ol>
<li>Start a VM with both the intended installation disk and ISO attached to install before booting on real hardware</li>
<li>Create a <code>/boot</code> and <code>zfs</code> partition+zpool right on the host, <code>pacstrap</code> directly into them, chroot and generate initramfs images for the new installation </li>
<li>Boot the ISO on real hardware and do the installation traditionally.</li>
</ol>
<p>We will be following the most accessible example here using #3.</p>
<p>First off, grab the latest archiso from <a href="https://archlinux.org/download/">Archlinux.org</a> and you can either block-copy it to a USB drive, write a CD, or put it on a PXE server. While this is traditionally done with a block level copying tool such as <code>dd</code> or graphical utilities in Windows such as Rufus - this can also be done by piping from <code>pv</code>, or <code>cat</code> into the usb's block device path. Even calling <code>cp</code> directly from the iso file to the usb stick block device works given these tools all write sequentially without modifying or truncating the output.</p>
<p>To make sure you don't write the ISO to the the wrong disk it's highly recommended to use the <code>/dev/disk/by-id</code> disk paths which contain valuable information for identifying your disks by model, brand and serial numbers. This is also key information for setting up a zfs zpool's as well for the same reasons.</p>
<h3>Preparing the live environment</h3>
<p>To create a zpool you'll need the ZFS driver and userspace utilities in the live environment. You will have to expand your  livecd's cowspace mountpoint in-memory for more room to work with and then source ZFS by either building its source, installing a package built to your kernel version or the easiest solution: Installing a zfs-dkms package.</p>
<p>Luckily this is a common experience and projects already exist to set this up automatically. <a href="https://github.com/eoli3n/archiso-zfs">eoli3n's project archiso-zfs</a> handles exactly this and will be used for this example to skip all the manual work. In the live environment run:</p>
<ol>
<li><code>git clone https://github.com/eoli3n/archiso-zfs</code></li>
<li>bash archiso-zfs/init</li>
</ol>
<p>This script will prepare the ZFS module and its userspace utilities in the live environment automatically. If your kernel version doesn't match the latest archzfs repo builds (Common with the arch ISOs) it will build one for itself using dkms.</p>
<p>Once it says &quot;ZFS is ready&quot; you can proceed to configure your disk.</p>
<h3>Disk configuration</h3>
<h4>Partitioning</h4>
<p>Run &quot;gdisk&quot; against the new disk you wish to install on and configure an EFI partition plus a ZFS partition of the remaining space:</p>
<p><code>sgdisk /dev/disk/by-id/ata-diskPathGoesHere-verifyPathFirst -n 1:2M:+1G -t 1:EF00 -n 2 -t 2:BF01</code></p>
<p>I would recommend partitioning additional disks identically so they're prepared for any disk failures later on.</p>
<h4>Formatting and creating a zpool.</h4>
<p>Create a vfat partition on the first partition:</p>
<p><code>mkfs.vfat /dev/disk/by-id/ata-diskPathGoesHere-verifyPathFirst-part1</code></p>
<p>And create a zpool on the second/final partition using the system hostname as the zpool name for simplicity:</p>
<p><code>zpool create my-pc mkfs.vfat /dev/disk/by-id/ata-diskPathGoesHere-verifyPathFirst-part2 -o ashift=12 -O normalization=formD -O compression=lz4 -O xattr=sa -O acltype=posixacl -O mountpoint=none</code></p>
<p>This example sets some common helpful flags for a new zpool:</p>
<p><code>ashift=12</code> to create the zpool assuming a 4096b sector size of the drive. Modern SSDs often lie about being 512b and setting this value too low can cause performance problems if you switch from a 512b sector-sized disk to a 4096b one in future. Having this value set too high is fine, but too low can amplify IO operations unintentionally when switching to differently sector-sized disks in future. <code>ashift=12</code> (2^12=4096b) is a safe assumption for most drives a person will encounter.</p>
<p><code>xattr=sa</code> to use System-attribute-based xattrs which improves performance by reducing the amount of IO required to read them separately from a file. This is not critical to most situations but you wouldn't need to find that out later. It primarily improves performance in workloads workload which extensively reference extended attributes of files.</p>
<p><code>-O mountpoint=none</code> This just tells the new zpool to not mount itself to <code>/zpoolName</code> after creation nor any new child datasets.</p>
<p><code>normalization=formD</code> avoids potential filename confusion by the system in the event that two different files with their own names may look identical to the system. This prevents that by squashing their filename into a unicode representation to avoid this.</p>
<p><code>compression=lz4</code> uses LZ4 compression which is a common general recommendation given its great performance and resulting compression while being fast.</p>
<p>LZ4 compression on ZFS also skips the operation if ZFS detects something cannot be compressed such as media (which is already encoded to be as small as can be and will not compress). This prevents wasting cycles later trying to read the data and having to decompress it to no IO benefit.</p>
<p>During zpool creation if you want to use multiple disks for your host's zpool you can do so during the creation step with any number of redundant or striped array configurations.</p>
<h4>Root dataset</h4>
<h5>Creation</h5>
<p>We can create a dataset intended to be used as the root dataset with:</p>
<p><code>zfs create my-pc/root -o mountpoint=legacy</code></p>
<p>Keeping in mind this dataset will inherit important properties from the default zpool dataset such as compression, normalization and xattr storage preferences which are all highly recommended for rootfs datasets.</p>
<p>In this example I've set the mountpoint to <code>legacy</code> which leaves the dataset for fstab to mount. In our case an initramfs hook will see the root dataset at boot and use it regardless of this. However, if we were to set it as &quot;<code>/</code>&quot; it may have over-mounted our live environment and upset some things.</p>
<h5>Mounting</h5>
<p>Mount the dataset with: <code>mount -t zfs my-pc/root /mnt</code>.</p>
<p>You can verify it has been mounted with <code>df -h /mnt</code> which should show the zpool and dataset name under the Filesystem section on the left.</p>
<p>At this point we should also make a boot directory and mount our EFI partition there:</p>
<p><code>mkdir /mnt/boot</code></p>
<p><code>mount /dev/disk/by-id/ata-diskPathGoesHere-verifyPathFirst-part1 /mnt/boot</code></p>
<h4>Pacstrapping</h4>
<p>We can now begin installing to this environment. I'm going to recommend using <code>linux-lts</code> here as ZFS often run on the latest kernel versions until it has caught up with supporting build:</p>
<p><code>pacstrap -P /mnt base vim networkmanager linux-lts linux-lts-headers</code></p>
<p>Depending on your network and IO speeds this step can take a few minutes.</p>
<h4>Prepping the rootfs</h4>
<p>We can save editing /etc/fstab by running <code>genfstab /mnt &gt; /mnt/etc/fstab</code> to write an fstab file including its current rootfs and boot partition mounts. Using the <code>mountpoint=legacy</code> approach needs an fstab file so the zfs initramfs hook will mount it at boot. For some reason when it sees <code>mountpoint=legacy</code> it refuses to mount a legacy dataset without seeing it in fstab. There is <em>probably</em> good reason for this.</p>
<p>At this point chroot into the new rootfs with <code>arch-chroot /mnt</code> to configure some additional things</p>
<h5>System general</h5>
<p>First, we can enable the NetworkManager service in this example so it has networking once it boots.</p>
<p><code>systemctl enable NetworkManager</code></p>
<p>We can also give it a hostname:</p>
<p><code>echo my-pc &gt; /etc/hostname</code></p>
<p>Also don't forget to set a root login password with <code>passwd</code> so you can log in after it boots!</p>
<h5>Bootloader</h5>
<p>Install a bootloader (bootctl/gummiboot)</p>
<p><code>bootctl install</code></p>
<p>And create a boot option making sure to replace this example zpool name with your own:</p>
<pre><code>cat &gt;/boot/loader/entries/my-pc.conf &lt;&lt; EOF
title   my-pc ZFS Root
linux   vmlinuz-linux-lts
initrd  initramfs-linux-lts.img
options zfs=my-pc/root rw
EOF </code></pre>
<p>If your machine is being accessed with a serial console it would be a good idea to append <code>console=ttyS0</code> after the <code>rw</code> argument.</p>
<h5>Initramfs and ZFS</h5>
<p>We need to install zfs. The <code>pacstrap</code> command used in the archiso environment copied our ZFS repository over so we can run <code>pacman -Sy zfs-dkms</code> in the chroot to grab it.</p>
<p>We also need to regenerate our initramfs with the packaged <code>zfs</code> hook <a href="https://github.com/ipaqmaster/zfsUnlocker">Or an alternative</a>:</p>
<p>Edit <code>/etc/mkinitcpio.conf</code> and add <code>zfs</code> to your array of <code>HOOKS=</code></p>
<p>Then run: <code>mkinitcpio -P</code> to generate new images. Keep an eye open for the <code>[zfs]</code> hook in here making sure there are no errors trying to add its modules otherwise attempting to boot will fail.</p>
<h4>Booting</h4>
<p>At this point the machine is ready to boot. Restart and boot into the disk and enjoy setting up the remainder of your zfs-on-root Arch experience!</p>]]></description></item><item><title>Converting Virtual Disks to ZVOLs</title><link>https://blog.jjgaming.net/blog/vdiskstozvols</link><guid>https://blog.jjgaming.net/blog/vdiskstozvols</guid><pubDate>Fri, 01 Dec 2023 00:00:00 +1100</pubDate><description><![CDATA[<h2>Converting virtual disks to zvols</h2>
<p>Having recently worked with the latest Windows Eval environment for some testing I downloaded WinDev2309Eval.VMWare.zip from Microsoft's <a href="https://developer.microsoft.com/en-us/windows/downloads/virtual-machines/">Get a Windows 11 development environment</a> page and went with the VMWare option knowing it would be a quick n' easy vmdk conversion.</p>
<h3>Determining the virtual disk's &quot;true&quot; size.</h3>
<p>With qcow2 files you can simply run <code>file my.qcow2</code> to see the true size of the virtual disk, regardless of the actual consumed storage space as a file:</p>
<pre><code> $ file my.qcow2 
my.qcow2: QEMU QCOW Image (v3), 21474836480 bytes (v3), 21474836480 bytes</code></pre>
<p><code>file</code> isn't as familiar with vmdk files so to get the true size of the virtual disk we can instead call <code>qemu-img info</code> to help determine what size the zvol should be:</p>
<pre><code>$ qemu-img info WinDev2309Eval-disk1.vmdk
image: WinDev2309Eval-disk1.vmdk
file format: vmdk
virtual size: 125 GiB (134217728000 bytes)
disk size: 23 GiB
cluster_size: 65536
Format specific information:
    cid: 2132776597
    parent cid: 4294967295
    create type: streamOptimized
    extents:
        [0]:
            compressed: true
            virtual size: 134217728000
            filename: WinDev2309Eval-disk1.vmdk
            cluster size: 65536
            format:
Child node '/file':
    filename: WinDev2309Eval-disk1.vmdk
    protocol type: file
    file length: 23 GiB (24689722880 bytes)
    disk size: 23 GiB</code></pre>
<p>In this output from qemu-img it confirms the file only consumes consuming 23GiB of storage space - but the virtual disk's true size is 125GiB and the rest of the space hasn't been consumed. Given the various use-cases these eval environments may face it makes sense for Microsoft to create this with an additional 100GB space in the guest's disk partitioning for it to support taking in a lot more data than just this (mostly) base OS.</p>
<h3>Making a ZVol</h3>
<p>Knowing the VMDK's virtual disk has a total size of 125 GiB (134217728000 bytes) we can create a zvol for it to land in:  </p>
<p><code>sudo zfs create zpool/WinDev2309Eval-disk1 -V134217728000B -s -o encryption=on -o compression=lz4</code></p>
<p>The above command created a zvol aptly given the same basename as the virtual disk with a perfect volume size in bytes to store the VMDK's contents.</p>
<p>The <code>-s</code> flag created the volume sparse without reserving the space. This makes it possible to create virtual making it possible to create zvol of any size without immediately locking off that storage space. I often find this useful for mapping large virtual drives to take block level images of real disks with 1:1 block allocation as the original disk - knowing they have appropriate free space to a avoid actually filling up my host. image a disk knowing ahead that they won't actually consume much space maeven if the host doesn't have the storage space for it.</p>
<p>I created this zvol in a dataset which was already encrypted so I've specified <code>-o encryption=on</code> to inherit the parent's encryption settings and decryption passphrase. I also specified <code>-o compression=lz4</code> for example's sake despite that already being the default setting for my zpool.</p>
<h3>Converting the guest image to a zvol</h3>
<p>We can use the raw conversion option to convert the image to a format suitable for zvols:</p>
<p><code>sudo chown $USER /dev/zvol/zpool/WinDev2309Eval-disk1 ; qemu-img convert -f vmdk -O raw WinDev2309Eval-disk1.vmdk /dev/zvol/zpool/WinDev2309Eval-disk1</code>.</p>
<p>Depending on the IO capabilities of your host's involved source/dest disks and arrays this can take either a few minutes as it did for my laptop's NVMe or over an hour for slower disk/array configurations. - especially if the source/dest is the same zpool, there's quite a bit of overhead to be considered.</p>
<p>Because there's only about 25GB of consumed space in this image - the remaining 100GB stream of zeros (\x00's) after the VMDK's main used storage space should compress and write out near-instantly.</p>
<h3>Making the virtual disk boot</h3>
<p>It's trivial to set up a new VM on Linux with <code>virt-manager</code>, <code>virt-install</code>, <code>virsh</code> commands or even calling qemu-system-x86_64 directly (<a href="https://github.com/ipaqmaster/vfio">Maybe with some of the wrapper scripts out there</a>) but given this is a pre-packaged Windows install which wants to immediately boot into an Out of the Box user-account setup experience - it's going to have drivers preloaded in for VMWare environments but not for the drivers QEMU can provide.</p>
<p>As such, creating a generic Windows VM template and specifying the disk will likely fail to boot with either 0x00000001 or 0xC0000001 as it fails to comprehend what happened to the environment it was expecting.</p>
<p>We can open the XML <code>WinDev2309Eval.ovf</code> to get an idea of what hardware configuration this VM was made for.</p>
<h4>Reading the omf XML</h4>
<p>Reading this file reveals the guest expects some SCSI and IDE controllers. IDE controllers are commonly used for CDROM drives - especially in VMWare. The SCSI controller is more commonly seen for OS disks and at the top we can see the disk defined as <code>vmdisk1</code>, seen again further down defined as an <Item> with &quot;AddressOnParent&quot; set to <code>0</code> which corresponds to the first-defined SCSI controller. Further down the guest also has <code>firmware</code> set to <code>efi</code> and secureboot enabled. This should be enough information to get started on booting this thing.</p>
<h4>Packing up and doing something else.</h4>
<p>2023-11-03 11:45:05.016947397 +1100</p>
<p>After a few hours of fiddling. No amount of virtual-hardware tweaking was able to get this Win11 Eval VM past 0xC00000001 and 0x00000098. Getting the Win10 Eval vmdk to boot in QEMU is something I'll have to come back to, but this process works fine for converting VM qcow2's to a zvol block device.</p>]]></description></item><item><title>Securing OpenVPN (2024)</title><link>https://blog.jjgaming.net/blog/openvpn2024</link><guid>https://blog.jjgaming.net/blog/openvpn2024</guid><pubDate>Wed, 13 Mar 2024 00:00:00 +1100</pubDate><description><![CDATA[<h1>About</h1>
<p>OpenVPN is great and flexible but maybe a little too flexible. You can create any number of configurations (Sometimes not entirely valid!), even on the less sane side of things and it'll do its best to establish two or more running instances over a network.</p>
<p>For learning purposes (hopefully 'only'...) you can also create a configuration which connects the hosts without encryption which is great for packet capturing OpenVPN's traffic for learning about the protocol and ideally no valid production use case.</p>
<p>The problem I find with it isn't of its own but that of tutorials. There's a sea of openvpn tutorials out there with a bunch of default settings and other tunables without any hardening or annotation of what they're accomplishing with their settings. For users who might want to validate they're they're using the most secure settings Earth has to offer these often fall short.</p>
<p>So I figured I should add my own 'aged like milk' post to throw into the sea.</p>
<h1>Getting started</h1>
<p>OpenVPN will establish with just about any settings but to establish a connection we can consider secure there's some preparation required.</p>
<h2>Public Key Infrastructure (PKI)</h2>
<p>OpenVPN will need certificates for your client and server to validate each other and the planet uses <a href="https://en.wikipedia.org/wiki/Public_key_infrastructure">Public Key Infrastructure</a> and for the general web - the modest <a href="https://en.wikipedia.org/wiki/X.509">X.509</a> standard featuring public keys with digital signatures by an authority, identifying information and more to solve this.</p>
<p>I personally use a Hashicorp Vault cluster with an stupid amount of Access Control Lists managed by Saltstack to generate and maintain my home's Certificate Authority with an obscured root certificate (exported, encrypted offline, practically does not exist) to avoid potential threat actors gaining access to said certificate.</p>
<p>Outside the overkill of using a Vault cluster for infrastructure secret management at home its generated Certificate Authorities aren't special and can be exported for safety, generated elsewhere and imported and are the same as generated anywhere else (Save for the access lists).</p>
<p>So for the purposes of this post I'm going to use <code>openssl</code> to generate a CA for this OpenVPN server and client assuming readers also have access to that command.</p>
<h2>Creating a Certificate Authority with <code>openssl</code></h2>
<p>OpenSSL is an all-in-one cryptography tool with many subcommands each of vast features. In general a certificate signing request must be made followed by the signing of said request with a CA.</p>
<p>In our case we'll create our CA by invoking the <code>req</code> subcommand to create and self-sign its own request in one go with RSA public key cryptography; valid for 10 years:</p>
<pre><code>domain=home.internal # Set your own fancy domain here
openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 -keyout ${domain}.key -out ${domain}.crt</code></pre>
<p>During the second command you will be prompted for a passphrase to encrypt the CA's private key with; followed by prompts for some information regarding who the  organization the certificate belongs to.</p>
<p>Information regarding the certificate can also be appended to the command such as: <code>-subj "/C=AU/ST=Canberra/O=Home/CN=${domain}"</code></p>
<p>Your CA is only as secure as the disks it sits on and the passphrase used to encrypt it for storage on said disks; among other policy-based gatekeeping such as Vault.</p>
<p>The passphrase can be skipped with <code>-nodes</code> but is unsuitable for anything other than testing. With this, you will be able to inspect the .key file too.</p>
<p>For example the certificate presented by reddit.com:443 with TLS has: </p>
<pre><code>C=US,            (Country Name)
ST=California,   (State/Province Name)
L=SAN FRANCISCO, (Locality Name)
O=REDDIT, INC.,  (Organization Name)
CN=*.reddit.com  (Common Name) (in this case a wildcard certificate valid for all subdomains of the domain)</code></pre>
<p>For the sake of this personal-focussed exercise and assumption that this may be used in a publicly accessible way I'd advise against filling in any personally identifying information for this simple throwaway OpenVPN client/server configuration.</p>
<h3>Signing a certificate for your server to use</h3>
<p>The OpenVPN server needs a certificate to perform a TLS handshake with clients.</p>
<p>We can sign one with our new CA; valid for server-use only for 5 years using the below command, including <code>-subj</code> to avoid prompts:</p>
<pre><code>openssl req -nodes -x509 -newkey rsa:4096 -sha256 \
  -days 1825 -keyout server.${domain}.key -out server.${domain}.crt \
  -CA ${domain}.crt  -CAkey ${domain}.key \
  -addext "keyUsage = digitalSignature, keyEncipherment, dataEncipherment, cRLSign, keyCertSign" \
  -addext "extendedKeyUsage = serverAuth" \
  -subj "/C=AU/ST=Canberra/O=Home/CN=server.${domain}"</code></pre>
<p>Without <code>-subj</code> the same questions will be asked again for certificate details. I would recommend setting the Common Name to something meaningful like <code>server.home.internal</code>.</p>
<p>This example also includes -nodes as the OpenVPN server will need access to its key autonomously.</p>
<h3>Signing a certificate for your client to use</h3>
<p>This will be more or less the same command as we used for the server but we'll sign this certificate with the clientAuth property and its own Common Name</p>
<pre><code>openssl req -x509 -newkey rsa:4096 -sha256 \
  -days 1825 -keyout client.${domain}.key -out client.${domain}.crt \
  -CA ${domain}.crt  -CAkey ${domain}.key \
  -addext "keyUsage = digitalSignature, keyEncipherment, dataEncipherment, cRLSign, keyCertSign" \
  -addext "extendedKeyUsage = clientAuth" \
  -subj "/C=AU/ST=Canberra/O=Home/CN=client.${domain}"</code></pre>
<p>The <code>clientAuth</code> and <code>serverAuth</code> extension keys prevent the certificates from being misused. In our case they prevent using the certificates in the reverse order and prevent somebody from trying to connect to the server using its own key for example (Which cryptographically checks out... but is only valid for server use, not clients).</p>
<p>This one doesn't include <code>-nodes</code> as its good safety to encrypt the client's private key for preventing access in the event the client device is physically compromises.</p>
<h1>OpenVPN</h1>
<p>We're at a point where we have a CA and both a server and client certificate for our openvpn server and client to use. We're almost ready to put together some openvpn configurations but what exactly should we secure ourselves with?</p>
<p>At the time of writing here we're up to TLS 1.3 which includes improvements over TLS1.2 for both the speed of the handshake process and privacy.</p>
<p>Regardless of the TLS version we're using - the handshake between the client and server involves an exchange of supported cipher specifications they're willing to secure the connection with. If they can agree on a common cipher suite they'll proceed.</p>
<p>But how do we make sure we're not picking some outdated weak cipher when safer standards are being made every year?</p>
<h2>Ensuring we use the best ciphers and standards</h2>
<p>As mentioned OpenVPN is powerful and flexible. So much so that it will allow you to use the worst dirt out there as long as it still supports them.</p>
<p>There are plenty of websites out there to help keep up to date with what set of cipher suites the world's using. I personally like to visit <a href="https://ssl-config.mozilla.org/">https://ssl-config.mozilla.org/</a> which can be used to generate strong configurations for many TLS-capable applications. <a href="https://ciphersuite.info/cs/">https://ciphersuite.info/cs/</a> is also a reliable resource for what's considered insecure, weak, secure and recommended.</p>
<p>Upon visiting mozilla's ssl-config site it shows an <code>nginx</code> configuration file by default which is suitable for us to peak. Contained in the config further down the page is an <code>ssl_protocols</code> option set to accept TLS versions 1.2 and 1.3.</p>
<p>It also includes a nifty list of ciphers we can reference in our configuration:
<code>ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305;</code></p>
<h2>Creating a server config</h2>
<p>According to <code>openvpn --show-tls</code>  for version <code>2.6.9</code> it supports the three currently recommended Cipher Suites for TLS 1.3, <code>TLS_AES_256_GCM_SHA384</code>, <code>TLS_CHACHA20_POLY1305_SHA256</code>, and <code>TLS_AES_128_GCM_SHA256</code>.</p>
<p>For TLS 1.2's many secure and not-so-secure options we'll stick with <code>TLS-ECDHE-ECDSA-WITH-AES-256-GCM-SHA384</code> and <code>TLS-ECDHE-ECDSA-WITH-CHACHA20-POLY1305-SHA256</code></p>
<p>So we will use those below</p>
<p>For the sake of keeping things simple I'll assume the OpenVPN server will be issuing DHCP for its own clients for a Routed VPN configuration. (Not a L2 tap attached to an existing network's bridge). I've annotated as much as possible in this example server configuration:</p>
<pre><code>mode server
tls-server
proto udp # Use UDP
port 1194 # Listen on 1194/udp
dev tun0  # Create tun0
float     # Allow the client's IP to change without disconnecting them (e.g. Mobile carrier Internet)

tls-version-min  1.2 or-highest # Use TLS 1.2 or greater
tls-version-max  1.3            # Use TLS 1.3 at most
remote-cert-tls  client         # Require the clientAuth extension in the client's certificate
tls-cert-profile preferred      # Require certs of SHA2 or better, RSA 2048 or better with any elliptic curve
tls-exit                        # Exit OpenVPN on TLS negotiation failure (More for clients)

verify-client-cert require      # Reject clients who don't send us a certificate

data-ciphers AES-256-GCM:AES-128-GCM:CHACHA20-POLY1305               # Support these ciphers
tls-ciphersuites TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256 # Use these two ciphersuites
ecdh-curve       secp384r1                           # Use the NIST P-521 elliptic curve for ECDH

  # Use these ciphers for TLS 1.2
tls-cipher TLS-ECDHE-ECDSA-WITH-AES-256-GCM-SHA384:TLS-ECDHE-ECDSA-WITH-CHACHA20-POLY1305-SHA256 

# Define a network for OpenVPN to operate in.
topology subnet            # subnet mode assigns addresses with a mask rather than point-to-point topologies.
push "topology subnet"     # Push this setting to clients

ifconfig 10.99.0.1 255.255.255.224                  # A small subnet of 30 address (+1 broadcast)
ifconfig-pool 10.99.0.2 10.99.0.30 255.255.255.224  # Assign clients anywhere from 2-30.

# Have the server ping clients every 10s. After 120s assume they're gone and end the connection.
keepalive 10 120

ca        /etc/openvpn/server/home.internal.crt
cert      /etc/openvpn/server/server.home.internal.crt
dh        none # Not needed as we're using ECDHE
key       /etc/openvpn/server/server.home.internal.key
tls-crypt /etc/openvpn/server/tls-auth_shared.key       # openvpn --genkey tls-auth tls-auth_shared.key

# Push some client options to use the VPN
push "redirect-gateway def1"   # Sets a default gateway on the client through the VPN for all traffic.
push "route-gateway 10.99.0.1" # Helps to avoid 'UNSPEC' errors on Win11 OpenVPN Connect clients.
push "dhcp-option DNS 1.1.1.1" # If you have a local DNS server you can specify it here

push "dhcp-option DOMAIN-SEARCH home.internal"
push "keepalive 10 60" # Have the client ping the server every 10 seconds timing out after 60s.

# Push an optional route to the client for LAN resources.
push "route 192.168.0.1 255.255.255.0   10.99.0.1 1"

client-to-client # Allow clients to communicate with one another.

#crl-verify /etc/openvpn/server/home.internal.crl.crt # Optionally check certs against a revocation list</code></pre>
<h2>Creating a client config</h2>
<p>The client will be using a configuration which is mostly the same with a few key differences:  </p>
<ol>
<li>The client uses <code>remote-cert-tls server</code> to verify the remote is signed for serverAuth usage</li>
<li>The client connects with <code>remote x.x.x.x port proto</code></li>
<li>The CAcert and client's cert and key will be embedded into the file for easy setup and usage on portable devices such as a mobile phone.</li>
</ol>
<pre><code>tls-client
dev tun
remote x.x.x.x 1194 udp

tls-version-min  1.2 or-highest # Use TLS 1.2 or greater
tls-version-max  1.3            # Use TLS 1.3 at most
remote-cert-tls  server         # Require the serverAuth extension in the server's certificate
tls-cert-profile preferred      # Require certs of SHA2 or better, RSA 2048 or better with any elliptic curve
tls-exit                        # Exit OpenVPN on TLS negotiation failure

verify-client-cert require      # Reject clients who don't send us a certificate

data-ciphers AES-256-GCM:AES-128-GCM:CHACHA20-POLY1305               # Support these ciphers
tls-ciphersuites TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256 # Use these two ciphersuites
ecdh-curve       secp384r1                           # Use the NIST P-521 elliptic curve for ECDH

  # Use these ciphers for TLS 1.2
tls-cipher TLS-ECDHE-ECDSA-WITH-AES-256-GCM-SHA384:TLS-ECDHE-ECDSA-WITH-CHACHA20-POLY1305-SHA256 

persist-key # Keep keys in memory
persist-tun # Keep the tunnel interface if the connection drops
pull        # Pull options from the server

# Verify the server certificate has the fields we configured
verify-x509-name "C=AU, ST=Canberra, O=Home, CN=server.home.internal"

&lt;tls-crypt&gt;
-----BEGIN OpenVPN Static key V1-----
[Paste your tls-auth_shared.key in this area]
-----END OpenVPN Static key V1-----
&lt;/tls-crypt&gt;

&lt;ca&gt;
-----BEGIN CERTIFICATE-----
[Paste the CA certificate here: home.internal.crt]
-----END CERTIFICATE-----
&lt;/ca&gt;

&lt;cert&gt;
-----BEGIN CERTIFICATE-----
[Paste the client's certificate here: client.home.internal.crt]
-----END CERTIFICATE-----
&lt;/cert&gt;

&lt;key&gt;
-----BEGIN ENCRYPTED PRIVATE KEY-----
[Paste the CA certificate here: client.home.internal.key]
-----END ENCRYPTED PRIVATE KEY-----
&lt;/key&gt;</code></pre>
<p>It is critical to change the client's <code>verify-x509-name</code> to match your chosen field values.</p>
<p>The <code>remote</code> line near the top must also be filled in.</p>
<h1>Finishing touches</h1>
<h2>Files</h2>
<p>At this point we're ready to start the server. Copy <code>home.internal.crt</code>, server.crt<code>and</code>server.key` into /etc/openvpn/server.</p>
<p>Also generate a TLS shared static key for the client and server to use: <code>openvpn --genkey tls-auth /etc/openvpn/server/tls-auth_shared.key</code> and add the contents of this new file to the client's configuration in the <code>&lt;tls-crypt&gt;</code> section.</p>
<h2>Firewalls</h2>
<p>The server will need an inbound firewall rule to allow OpenVPN to accept incoming connections. For systems running iptables this can be accomplished with:</p>
<p><code>iptables -I INPUT -p udp --dport 1194 -j ACCEPT</code></p>
<p>This rule can be further refined by defining the expected input interface such as <code>-i Wanint1</code> before the protocol specifier.</p>
<p>If the server is running behind a firewall (Natted network) and the building firewall/router is Linux-based these iptables rule examples can redirect to the intended internal IP:</p>
<p><code>iptables -t nat -A PREROUTING -p udp --dport 1194 -j DNAT --to-destination vpn.server.ip</code>
`iptables -I FORWARD -d vpn.server.ip -p udp --dport 1194 -j ACCEPT</p>
<h1>Launching</h1>
<p>Start your distribution's relevant openvpn service. In my case this is <code>systemctl enable --now openvpn-server@server.service</code> on the server intended to run the VPN.</p>
<p>Send the client configuration to another device either on the same network or expecting to access the server via the Internet and install the profile into the OpenVPN client available for the platform and try connecting!</p>
<h2>Industry standard networking gotchas</h2>
<p>You may find everything connects perfectly and both the server and client can ping each-other, but networking doesn't work.</p>
<p>There are a few potential issues in this problem. The most popular for running the VPN on a non-router would be that IPv4 forwarding isn't enabled by default. It can be enabled with <code>sudo sysctl -w net.ipv4.ip_forward=1</code> on the VPN server.</p>
<p>Past that the most popular solution to making the VPN client network world routable is to NAT the network out the VPN server. This causes traffic generated by VPN clients to take on the VPN server's IP and it translates it back as traffic returns. This can be done with <code>sudo iptables -t nat -A POSTROUTING -o VPNServer_Primary_Network_Interface -j MASQUERADE</code></p>
<p>The alternative option is to make your network aware of this network. Again if OpenVPN is running on the building router who knows of all routes - this should already be fine. Otherwise adding a static route for the network with something like <code>ip route add 10.99.0.0/27 via vpn.server.ip</code></p>
<p>If the VPN server happens to be the router then it is probably already doing all these things and has a routing table entry for the VPN client subnet. It would be worth checking for any conflicts with existing firewall rules which may be getting in the way</p>
<p>IPTables rules can be made persistent under <code>/etc/iptables/iptables.rules</code> in modern distributions. Sometimes <code>/etc/sysconfig/iptables</code> in older RHEL-based ones. There is also an accompanying <code>iptables</code> service which often needs to be enabled.</p>
<h1>Room for improvement</h1>
<p>For an individual this is a sound standalone configuration and additional client certificates (And replacements) can be issued with the same request signing command right from the shell history - complete with an encrypted CA key and client keys. But we can always do more.</p>
<h2>Public Key Infrastructure</h2>
<p>While openssl is evidently capable of pulling this setup off on its own this solution doesn't scale well for enterprise environments.</p>
<p>I can highly recommend trying to follow this guide using a Hashicorp Vault cluster as for a Certificate Authority and configuring all the extra parameters to restrict exactly what types of certificates can be issued with specific naming conventions per role. With Vault its also possible to create a Root CA and sign an Intermediate for issuing your certificates while the Root certificate remains obscured (Or even securely exported and deleted from Vault). Vault is nice in that it lets you run <code>vault server -dev</code> to instantly bring up a test server in memory, destroyed on exit. Combined with their documentation its a great platform for learning.</p>
<h1>Wrapping up</h1>
<p>Its easy to set up an OpenVPN server with a far less protective approach and newer editions even make some relatively safe assumptions when it comes to a cryptography selection. Though the defaults and lower limits on what an OpenVPN server is open to negotiating by default and while done in the name of backwards compatibility they aren't perfect and leave room for improvement.</p>
<p>I find keeping up with modern recommended ciphers makes it easy to bring up a VPN server confidently knowing exactly what its willing to negotiate and that those chosen are sane.</p>]]></description></item><item><title>Re-adding a Steam library using the console tab</title><link>https://blog.jjgaming.net/blog/steamLibraries</link><guid>https://blog.jjgaming.net/blog/steamLibraries</guid><pubDate>Sat, 16 Dec 2023 00:00:00 +1100</pubDate><description><![CDATA[<h1>Adding Steam libraries on Linux</h1>
<h2>About</h2>
<p>Sometimes Steam decides it no longer wants to use a library despite having started up with it present many times prior due to any number of buggy behaviours leaving everything saying &quot;Install&quot; Instead of &quot;Play&quot;.</p>
<p>In my experience with the modern Steam UI this year it also refuses to display any options at all under the Settings &gt; Storage &gt; Add Drive &gt; &quot;Create or select new Steam library folder&quot; dialogue anymore either making it seemingly impossible to re-add the library.</p>
<h2>Solution / Workaround</h2>
<p>With this problem not going away any time soon with many threads of many hit or miss suggestions out there I've had better luck using the modern Steam UI's hidden Console tab to restore dropped libraries in an instant.</p>
<h2>Commands</h2>
<p>This console feature can be accessed by either starting steam with the argument to enable it and going to the tab or more easily just visiting the steam browser protocol address on demand: <a href="steam://open/console">steam://open/console</a> to be bought to the tab immediately.</p>
<p>From the Console tab in Steam's main window you can then issue the <code>library_folder_add</code> command to restore your external library. In my case this was <code>library_folder_add /data/steam</code>. And just like that my Library tab had all my titles ready to use.</p>]]></description></item><item><title>Cheat Engine on Linux (Steam/Proton)</title><link>https://blog.jjgaming.net/blog/ce</link><guid>https://blog.jjgaming.net/blog/ce</guid><pubDate>Tue, 28 Nov 2023 00:00:00 +1100</pubDate><description><![CDATA[<h1>Cheat Engine on Linux (Steam/Proton)</h1>
<h2>About</h2>
<p>I've made my rounds trying to get CE working on Linux and there are countless binaries, scripts and other buildable solutions which just don't work. Including CE's very own CEProtonLauncher binary which seems to do nothing. After giving up and trying to just launch it myself using the same Proton/wine command a game was launched with with CE copied into its Steam appid' /pfx/drive_c folder - it worked like magic. Given how many dead end search results are out there for this I figured it could use a page.</p>
<h2>Alternatives</h2>
<p>Those comfortable with debuggers will find <code>gdb</code> is capable of dumping process memory for searching and comparisons while also capable of changing memory once the right scope is found.</p>
<p>A more user-friendly but lacking option is <code>scanmem</code> which is perfect for finding and changing values while also capable of dumping and writing modified memory in various convertable formats or raw hex byte strings. There is also Game Conqueror which provides a graphical front-end for <code>scanmem</code>.</p>
<p>As it stands there is no true drop in replacement for Cheat Engine with its rapid scanning and feature filled UI for endless memory tweaking, rewriting, disassembling and debugging options all in the one interface. There are also plenty of Cheat Tables (.CT) files out in the wild making good use of Lua scripting and pointers for easy access to powerful modifications of any game's build.</p>
<h2>Getting started</h2>
<h3>Native Linux software</h3>
<p>Using Cheat Engine for native Linux programs is as simple as starting the Linux build of ceserver and starting the Windows build of Cheat Engine with WINE in any prefix and connecting to the ceserver running on the machine. This provides a lot of functionality however there are a few missing features. In general this will be useful for the purposes of most users.</p>
<h3>Software running in WINE and Proton</h3>
<h4>General</h4>
<p>Using Cheat Engine in a Windows game is as simple as starting its executable in the same WINE prefix as the software you wish to attach it to and attaching to the software.</p>
<p>While this is a simple process on paper; CE needs to be copied into the same WINE prefix of which the target software is running in to be able to detect it. If you're running software using the <code>wine</code> which ships with your distro you can easily just install CE with wine making sure to specify WINEPREFIX=/path/to/prefix and then launch it with the same WINEPREFIX variable.</p>
<h4>Proton</h4>
<p>When using Proton one <em>should</em> launch CE using the same wine executable/version which were used to launch the title to avoid problems.</p>
<p>There's a few variables to discover for doing the same thing with Proton and needs to discover a few variables before jumping ahead:</p>
<ol>
<li>The appID of the title.</li>
</ol>
<p>There are a few ways to find this. You can get this by visiting the store page for a title where the app ID number will be visible in the URI.</p>
<p>Or you can simply run the title and note the the AppID from the SteamLaunch command's arguments. When the title is running, <code>ps aux|grep -Po '(?&lt;=SteamLaunch AppId=)[0-9]+</code>, which successfully returns an appId for me.</p>
<ol start="2">
<li>The WINE prefix directory for the title (/pfx)</li>
</ol>
<p>When Steam launches titles with Proton it creates a new WINE prefix to hold the experience. These WINE prefix directories are stored on disk in shorthand as <code>/pfx/</code> and are created in the same <code>steamapps</code> filesystem as your downloaded title.</p>
<p>I store titles in <code>/data/steam/steamapps</code> so my <code>/pfx/</code> directories can be found at <code>/data/steam/steamapps/compatdata/0000000/pfx</code> with the corresponding appId substituted in.</p>
<p>If <code>mlocate</code> is installed and <code>updatedb</code> has been run recently enough to index the title's pfx directory, one can find the directory using <code>locate 0000000/pfx | head -n1</code></p>
<ol start="3">
<li>A copy of Cheat Engine in the prefix</li>
</ol>
<p>WINE doesn't like running full unix paths to a PE32 executable outside a given prefix so CE will either need to be installed into the prefix, or elsewhere before being copied somewhere under drive_c/ of the prefix to be launched.</p>
<ol start="4">
<li>The Proton <code>wine</code> executable used to launch the title.</li>
</ol>
<p>The path to the WINE binary used to run a title is often visible just by grepping the process list while the title is running: <code>ps aux | grep wine</code></p>
<p>The above command revealed GE-Proton8-25 is being used for a currently running title on my machine and its path: <code>/home/user/.local/share/Steam/compatibilitytools.d/GE-Proton8-25/files/bin/wineserver</code></p>
<p>In this case <code>wineserver</code> is already running the title and we can specify <code>/wine</code> from that directory instead.</p>
<hr />
<p>With the above information it's now possible to either copy an extracted CE or install it directly into the wineprefix using the wine binary discovered in #4.</p>
<h3>Getting CE into the Proton title's prefix.</h3>
<h4>Copying an existing CE installation or using an online portable archive.</h4>
<p>There used to be an official link to an older version (v7.2) on the main CE website which does not require installation however the link on the official website no longer works. <a href="https://web.archive.org/web/20210815195836if_/https://cheatengine.org/download/CheatEngine7.2_MissingSetup.rar">But it appears archive.org have a copy</a>.</p>
<p>I'd recommend against this older version in favour of installing it directly into the title's prefix. The above archive can be extracted for use in Proton, otherwise a pre-existing installation of CE anywhere else on the system can be used too.</p>
<p>Copy the CE installation directory to the Steam title's WINE prefix such as the below example where I copy it to the root, <code>C</code>: <code>cp -rv / /data/steam/steamapps/compatdata/0000000/pfx/drive_c/</code></p>
<h4>Installing CE directly to the prefix</h4>
<h5>Woes</h5>
<p>At this time of writing (28th Nov 2023) running any variation of <code>wine</code> on Archlinux fails to install CE using the official installer. I spent some time troubleshooting this to learn that Lutris does not experience this problem. I disabled all Lutris features and it was <em>still</em> able to install CE to a new prefix of its own just fine. It asn;</p>
<p>You can install CE using Lutris into its own prefix and copy it out from <code>drive_c/Program Files/</code> of the prefix when the installation is finished and delete the prefix.</p>
<h5>Troubleshooting for installing CE with WINE directly.</h5>
<p>On my Archlinux desktop here <a href="https://github.com/cheat-engine/cheat-engine/issues/2761">I can't get WINE to install CE to any new prefix no matter what I try</a>. I extended my troubleshooting by trying to install CE 7.5 as its own empty title in Lutris <strong>and that worked.</strong> But I couldn't get WINE to start the installation even using the same runner and WINEPREFIX that Lutris used (<code>lutris-GE-Proton7-35-x86_64</code>).</p>
<p>It wasn't until I enabled &quot;Disable Lutris Runtime&quot; that Lutris started failing the installation like WINE did.</p>
<p>After some omission testing with the <code>~/.local/share/lutris/runtime</code> directory I found the <code>Ubuntu-18.04-i686</code> subdirectory has the libraries that CE's installer cares about. From there I was able to work out the true cause:</p>
<p>CE's installation instantly fails if any of the below Lutris <code>runtime/Ubuntu-18.04-i686/</code> libraries are removed:</p>
<p>Installation instantly fails if any of the below are removed:</p>
<pre><code>libnettle.so.6 causes the installation to instantly fail.
libunistring.so.2 causes the installation to instantly fail.
libgmp.so.10 causes the installation to instantly fail.
libgnutls.so.30 causes the installation to instantly fail.</code></pre>
<p>The installation window never appears (No language selection screen) if the below are removed:</p>
<pre><code>libfreetype.so.6</code></pre>
<p>The subdirectory has many Linux libraries inside (Particularly Ubuntu's...) and it seems the above are required to make CE's installer exe actually install.</p>
<p>As such, I was finally able to install CE directly into a proton title's pfx directory by including the above libraries in my installation command (This workaround requires Lutris to be installed):</p>
<p><code>LD_LIBRARY_PATH=~/.local/share/lutris/runtime/Ubuntu-18.04-i686 WINEPREFIX=/data/steam/steamapps/compatdata/0000000/pfx ~/.local/share/Steam/compatibilitytools.d/GE-Proton8-25/files/bin/wine ~/Downloads/CheatEngine75.exe</code></p>
<p>And after a few hours of troubleshooting.. I can run finally CE alongside a running title without any additional software.</p>
<h3>Conclusion &amp; Launching CE</h3>
<p>After getting a copy of CE's installation root into the prefix of a Proton-run title one way or another it can be launched by specifying the prefix directory, (Ideally) the same wine binary used to launch the title followed by the path to CE inside.</p>
<h5>For the copy method:</h5>
<p><code>WINEFSYNC=1 WINEPREFIX=/data/steam/steamapps/compatdata/0000000/pfx ${HOME}/.local/share/Steam/compatibilitytools.d/GE-Proton8-25/files/bin/wine 'C:\CE\Cheat Engine.exe'</code></p>
<h5>For the Proton installation method:</h5>
<p><code>WINEFSYNC=1 WINEPREFIX=/data/steam/steamapps/compatdata/0000000/pfx ${HOME}/.local/share/Steam/compatibilitytools.d/GE-Proton8-25/files/bin/wine 'C:\Program Files\Cheat Engine 7.5\Cheat Engine.exe'</code></p>
<p><del>I will most likely write a wrapper script for this. I can see myself writing a little <code>./launchGameWithCE %command%</code> wrapper to CE any Steam proton title desired but that will have to wait for another day after how long debugging the library issue for the CE installer took.</del></p>
<p>[18th Dec 2023 Update] Yep <a href="https://github.com/ipaqmaster/ceProton">I ended up writing such a wrapper</a> to handle the installation for me automatically on Dec 7th 2023 and it works well!</p>]]></description></item><item><title>Asterisk 22, pjsip and a trunk</title><link>https://blog.jjgaming.net/blog/asterisk22-2025</link><guid>https://blog.jjgaming.net/blog/asterisk22-2025</guid><pubDate>Sun, 06 Apr 2025 00:00:00 +1100</pubDate><description><![CDATA[<h1>Asterisk, pjsip and a trunk in 2025</h1>
<h2>About</h2>
<p>I've come back to asterisk a few times in my lifespan and every time I have to rediscover the software in its entirety as every unhelpful dead-end hyperlinked forum post has accepted answers to dead documentation links made defunct a decade earlier and examples for older modules (sip instead of pjsip) which nobody should be using anymore.</p>
<h2>Asterisk</h2>
<p>I built the asterisk AUR package for this purpose but your distro may include it already. Otherwise building it from source is fine.</p>
<h3>Enabling PJSIP</h3>
<p>Edit and uncomment <code>require = chan_pjsip.so</code> in <code>/etc/asterisk/modules.conf</code> and restart the service. By default, PJSIP will ignore random calls from internet port-knockers and bots.</p>
<h3>PJSIP</h3>
<p><code>PJSIP</code> is an open source library for implementing the SIP protocol. Most of the world transitioned to this years ago and Asterisk uses this over the now legacy SIP driver ,too.</p>
<h4>Listening</h4>
<p>A basic configuration example would involve listening on 5060 for new clients. My asterisk VM's IP is 10.9.23.2 in this example so I will have it listen on that:</p>
<pre><code>[transport-udp]
type=transport
protocol=udp
bind=10.9.23.2
</code></pre>
<p>This (And a firewall exemption) will allow my desktop sip client to log into it, but will also allow incoming connections from my trunk provider.</p>
<h4>Defining some endpoints</h4>
<p>Before we do anything we need to define some endpoints to register with us. For my purposes I made one for my desktop and one for my phone using Zoiper as the client.</p>
<p><code>pjsip.conf</code> consists of &quot;configuration sections&quot; which look like [this] and they can be any type=thing on the inside. There are a few unwritten rules about the naming conventions when referencing other sections in a given section which these examples will adhere to.</p>
<p>Here's a few example sections for definint two endpoints which can be registered by a desktop and smartphone. These endpoints reference their similarly named auth and aor blocks but this can be done more efficiently with common blocks for repeat settings.</p>
<pre><code># /etc/asterisk/pjsip.conf
[transport-udp]
type=transport
protocol=udp
bind=10.9.23.2

[user1]
type=auth
auth_type=userpass
username=user1
password=7d784791-6234-43e8-b00b-7319479fcef4

[user1]
type=aor
max_contacts=1

[user1]
type=endpoint
context=from-internal
disallow=all
allow=ulaw
auth=user1
aors=user1

[user2]
type=auth
auth_type=userpass
username=user2
password=8140cd56-907d-45f7-8f18-c1353ef5c1ff

[user2]
type=aor
max_contacts=1

[user2]
type=endpoint
context=from-internal
disallow=all
allow=ulaw
auth=user2
aors=user2
</code></pre>
<p>Notice the context <code>from-internal</code> which is important to note in the next file, <code>extensions.conf</code>. For when these clients try to make a phone call after registering with us.</p>
<h4>Defining a trunk</h4>
<p>If you're going to make calls to the outside world a provider likely gave you a login to use their services. While you can open something like Zoiper and register straight to the account and public number they provided you it would make more sense to use local extensions and only evoke use of the trunk when needed.</p>
<p>Here is a basic example of what a trunk might look like:</p>
<pre><code>[trunk_myprovider]
type=registration
outbound_auth=trunk_myprovider
server_uri=sip:sip.myprovider.com
client_uri=sip:54321@sip.myprovider.com
retry_interval=60
expiration=3600

[trunk_myprovider]
type=auth
auth_type=userpass
username=54321
password=myP@5sw0rd!

[trunk_myprovider]
type=aor
contact=sip:sip.myprovider.com:5060

[trunk_myprovider]
type=endpoint
context=from-external
disallow=all
allow=ulaw
outbound_auth=trunk_myprovider
aors=trunk_myprovider

[trunk_myprovider]
type=identify
endpoint=trunk_myprovider
match=sip.myprovider.com
</code></pre>
<p>This defines a registration, authentication, aor, endpoint and identify block for the sip trunk company providing you your trunk using ulaw. We will reference this configuration in our dial plan.</p>
<p>Notice the endpoint block for the trunk has <code>context=from-external</code> which we will create a configuration section for incoming calls.</p>
<p>You can substitute in the details for your provider.</p>
<h3>extensions (Dial plans)</h3>
<h4>extensions.conf</h4>
<p>This is where most of the magic happens. Here we define what happens with phone calls with a few dial plans. Remember our <code>from-internal</code> section from above?  That block is where we'll handle calls from our internal registrations, allowing them to call each other but also dial out if they meet the conditions. We'll make a block for that here and also an outbound and default block for all other cases:</p>
<pre><code>; By default/fallback, have the PBX answer and play the weasels message before hanging up.
[default]
exten =  _X.,1,Answer()
same  =  n,Wait(1)
same  =  n,Playback(tt-weasels)
same  =  n,HangUp()

[outbound]
; If we reach this plan assume we're calling a real world number.

exten =&gt; _X.,1,NoOp() ; catch-all, do nothing
same  =&gt; n,Set(CALLERID(NUM)=61400000551) ; Set the caller-id to the number given to you by your provider.

same  =&gt; n,Dial(PJSIP/${EXTEN}@trunk_myprovider) ; Dial the number on this catch as the extension of the trunk provider.

; What to do with inbound calls
[from-external]
exten =&gt; 61400000551,1,Answer() ; Answer our public number
same =&gt; n,Dial(PJSIP/user1,30) ; Ring user1 for 30

[from-internal]
; How to handle calls from our own internal extensions.

; If the extension 100 is dialed, play hello-world then hang up.
exten =&gt; 100,1,Answer()
same  =  n,Wait(1)
same  =  n,Playback(hello-world)
same  =  n,Hangup()

; If the extensions 200 or 201 are dialed call each PJSIP extension respectively.
exten =&gt; 200,1,Dial(PJSIP/user1)
exten =&gt; 201,1,Dial(PJSIP/user2)

; If a 04 or +61 number is being dialed, go to our outbound plan to prepare and send the call to the outside.
; This can be modified to carefully only allow calls local to your region too. Or a _X. extension can be defined to catch all remaining calls.

;include =&gt; outbound ; You can do this instead of using Goto. Doesn't scale as well.
exten =&gt; _04XXXXXXXX,1,NoOp()        ; For all real numbers starting with 04XXXXXXXX
same  =  n,Goto(outbound,${EXTEN},1) ; Go-to outbound
exten =&gt; _614XXXXXXXX,1,NoOp()       ; For all real numbers starting with 04XXXXXXXX
same  =  n,Goto(outbound,${EXTEN},1) ; Go-to outbound, too.

; Go to default (weasels) if we don't know where else to send the call.
;include =&gt; default ; You can do this instead of using Goto. Doesn't scale as well.
exten =&gt; _X.,1,NoOp()               ; Catch-all for all other numbers
same  =  n,Goto(default,${EXTEN},1) ; Go-to default (Play weasels)
</code></pre>
<h3>Finishing up</h3>
<p>Thanks for reading. This setup is good enough for my single internal extension on my desktop and serves as a good example to expand upon for larger configurations.</p>]]></description></item><item><title>Direct kernel booting with QEMU with a new ext4 virtual disk</title><link>https://blog.jjgaming.net/blog/qemu-kernel-boot-direct</link><guid>https://blog.jjgaming.net/blog/qemu-kernel-boot-direct</guid><pubDate>Fri, 17 May 2024 00:00:00 +1000</pubDate><description><![CDATA[<h1>Direct Kernel booting with QEMU and installing a new rootfs to boot with.</h1>
<h2>About</h2>
<p>Direct kernel booting is something I've found myself doing over the years. At first to see if I could pull it off and over time it became a fast way for to test initramfs hooks without having to reboot a physical machine or maintain and rebuilt initramfs images for a virual environment another layer down.</p>
<p>Booting a kernel image directly with QEMU lets me work on initramfs hooks at record pace while rebuilding images right on the host before booting them seconds later.</p>
<h2>Creating a virtual disk for the kernel to land in</h2>
<p>Direct kernel booting is great and can be pointed to an NFS server or iSCSI target to proceed. For this demonstration I'll be using a virtual disk for the guest to boot into without its own EFI partition.</p>
<p>In my case I have ZFS available on just about every machine I manage so for me creating a new virtual block device for a guest is as simple as <code>sudo zfs create zpool/ext4_directboot_test -s -V50G</code>.</p>
<p>Otherwise a flatfile image can be created by either concatenating say, zeros into a .img file - or using <code>qemu-img create -f raw /tmp/ext4_directboot_test.img 50G</code> for the same environment.</p>
<p>While its possible to use other formats with <code>-f</code> such as <code>vmdk</code> or <code>qcow2</code> - for simplicities sake I would like to partition and mkfs on the image file directly.</p>
<h2>Preparing the virtual disk</h2>
<h3>Working with raw disk dumps (Or in this case a raw virtual disk)</h3>
<p>I stuck with raw disks for this exercise to lessen the complication of having to work with other virtual disk formats which don't store data 1:1 in their flatfiles but if we're using a raw flatfile image we still have to expose the partition to the host.</p>
<p>In the case of a the raw disk image we can use <code>losetup</code> to map this raw image to a loop device. This is useful for many reasons especially ISOs but in our case it can also expose the partition table of the flat image file to the host as part of a device file mapping for ease of access.</p>
<p>This can be done with <code>losetup -Pf /tmp/ext4_directboot_test.img</code></p>
<p><code>-f</code> finds the first available loop device number index and uses that (Typically /dev/loop0) and <code>-P</code> does a partition scan so existing or new partitions we make will be picked up immediately.</p>
<p>You can then run <code>losetup</code> to see what loop device it was assigned to. If you aren't using any loops it will typically be <code>/dev/loop0</code>.</p>
<h3>Partitioning</h3>
<p>At this point we can run <code>gdisk</code> against the <code>/dev/loop0</code> path or that of a avirtual block device in the case of ZFS/LVM. <code>gdisk</code> will open up with a new empty GUID partition table to start with given its a blank 'disk'.</p>
<p>Press <code>n</code> to create a new partition and press <Enter> through the prompts to create a first partition consuming the entire virtual disk file's space with a default header with type code 8300 (Linux filesystem).</p>
<p>Then <code>q</code> to write it out and exit.</p>
<h3>Filesystem creation</h3>
<p>At this point we can run <code>mkfs.ext4</code> on the virtual disk flatfile/path (/dev/loop0p1) and it will set one of those up too.</p>
<h2>Mounting and bootstrapping an installation into it</h2>
<p>The varying distros out there have their own way of bootstrapping a system. In my case I'm using Archlinux and will bootstrap it with pacstrap.</p>
<p>At this point we can mount the new filesystem we created. For a zvol or similar such as LVM logical volumes we can mount the new partition via its partition path under /dev/zvol or its name in /dev/mapper:</p>
<p><code>mount /dev/zvol/zpool/ext4_directboot_test-part1 /mnt</code></p>
<p>For loops it will be /dev/loop0p1</p>
<p>We can use pacstrap to install the bare minimum for the smallest installation possible with some key packages:</p>
<p><code>pacstrap /mnt base vim dhclient</code></p>
<h2>Inside provisioning</h2>
<p>Now we must chroot into it and at the very least set a password:</p>
<p><code>arch-chroot /mnt</code></p>
<p><code>passwd # Set one here</code></p>
<h2>Making it boot</h2>
<p>We're at a point where we can just point QEMU at our /boot/vmlinux-linux image (Or similar) and give it arguments such as <code>root=/dev/sda1</code> and  <code>console=ttyS0</code> if there's no virtual display attached (Serial) but there are sometimes additional gotchas in this approach.</p>
<h3>Gotchas</h3>
<p>In my case the /boot/vmlinuz-linux put together by the <code>linux</code> pacman package doesn't seem to include ext4 in its list of bdev filesystems. That is to say its incapable of booting into an <code>ext4</code> disk all on its own.</p>
<p>The ahci driver and virtio driver seem to be built-in so accessing the disk should be fine.</p>
<p>In my case its just easier to compile a new kernel and check the ext4 box with <code>make xconfig</code>.</p>
<h4>Compiling the kernel</h4>
<p>Visit <a href="https://www.kernel.org/">https://www.kernel.org/</a> and grab the latest release (Typically a big gold button)</p>
<p><code>tar xvf</code> it and <code>cd</code> into the directory.</p>
<p>Run <code>make xconfig</code> for a nice configuration UI or use <code>make menuconfig</code> to use the CLI version.</p>
<p>In these flag-setting screens you can browse to File Systems and ensure that <code>The Extended 4 (ext4) filesystem</code> is checked. This will ensure the ext4 module is included as a built-in.</p>
<p>The kernel is nice and modular and with that comes many modules. Its ideal to build a kernel with only what it needs to function or boot to minimize its size in a boot partition or even more strictly into flash memory for say, your typical consumer router/switch/ap combos. A ton of typical home networking devices run Linux though often with a severely cut-down and compiled kernel image to fit into their limited boot flash and available memory. Plenty of enterprise gear too.</p>
<p>Once ext4 has been enabled exit while saving your changes and run <code>make -j$(nproc)</code>.</p>
<p><code>nproc</code> is a nice utility from the GUI coreutils suite and returns the total number of CPU threads on the system so we can build all the bits and pieces of the kernel as quickly as possible rather than only one at a time before finally putting them all together. While its possible to read, list and parse paths such as <code>/proc/cpuinfo</code> and say, <code>/sys/devices/system/cpu</code> nproc just takes the over-engineering away.</p>
<p>Once the kernel is compiled with ext4 support built-in (No initramfs, yay!) it should spit out the image to the relative subdirectory <code>arch/x86/boot/bzImage</code> which we can reference in the booting step.</p>
<h3>Booting</h3>
<p>Leave the chroot and umount the path which the new ext4 partition was mounted to. If losetup was used run 'losetup -d /dev/loopX' where X is the index it was given.</p>
<p>At this point we can refer to QEMU for getting things running.</p>
<h4>QEMU</h4>
<p>I personally use <a href="https://github.com/ipaqmaster/vfio">My VFIO script</a> for all things QEMU but it also spits out the arguments for qemu-system-x86_64 for use elsewhere.</p>
<p>If a UEFI guest is created in <code>virt-manager</code> you can go to the Boot Options tab and go to the <code>Direct kernel boot</code> section to paste in or browse to the bzImage you compiled and add some kernel args like <code>root=/dev/sda1</code> and if there's no virtual display, <code>console=ttyS0</code> so it can show you the boot process and any errors you could run into.</p>
<p>With my script I invoke this with my paths: <code>DISPLAY= ~/vfio/main -run -kernel /tmp/linux-6.9/arch/x86/boot/bzImage -cmdline 'root=/dev/vda1 rw console=ttyS0' -image /dev/zvol/zpool/ext4_directboot_test</code> to get it fired up. This boots me to a login screen for the guest in under a second!</p>
<p>For using QEMU directly arguments like the below should do the trick:</p>
<pre><code>qemu-system_x86_64 \
  -machine q35,accel=kvm,kernel_irqchip=on \
  -enable-kvm \
  -m 2048 \
  -cpu host,kvm=on,topoext=on \
  -smp sockets=1,cores=2,threads=2 \
  -drive if=pflash,format=raw,unit=0,readonly=on,file=/usr/share/ovmf/x64/OVMF_CODE.fd \
  -serial mon:stdio \
  -nodefaults \
  -drive file=/dev/zvol/zpool/ext4_directboot_test,if=none,discard=on,id=drive1 \
  -device virtio-blk-pci,drive=drive1,id=disk1, \
  -nographic \
  -vga none \
  -netdev user,id=vmnet \
  -device virtio-net,netdev=vmnet,mac=52:54:00:11:22:33 \
  -kernel /zfstmp/linux-6.9/arch/x86/boot/bzImage \
  -append root=/dev/vda1\ rw\ console=ttyS0 </code></pre>
<p>This example starts up a somewhat basic UEFI qemu machine (Your OVMF_CODE.fd path may vary) which can also quickly boot into the new virtual disk rootfs in under a second without a bootloader or boot partition.</p>
<h2>Motivations, wrapping up</h2>
<p>There aren't many valid use cases to do this. I can imagine some enterprise cases are starting up guests using QEMU which are expected to boot a common generalized kernel and some iSCSI or NFS target. At that point maintaining installations each with their own boot partition wouldn't scale well and this makes sense.</p>
<p>In my case direct kernel booting lets me test specialized boot hooks without having to either reboot a real machine nor maintaining and rebuilding the initramfs in a cumbersome virtualized guest environment every time. Direct Kernel booting with QEMU makes this a four second turnaround per test rather than a few minutes per test. When you're building hundreds of new initramfs images to try new hook revisions this becomes critical.</p>]]></description></item><item><title>Installing Archlinux on the Raspberry pi 4 with pacstrap</title><link>https://blog.jjgaming.net/blog/archlinux-pi4</link><guid>https://blog.jjgaming.net/blog/archlinux-pi4</guid><pubDate>Fri, 26 Apr 2024 00:00:00 +1000</pubDate><description><![CDATA[<h1>Archlinux for aarch64 and installing it on a Pi</h1>
<h2>About</h2>
<p>Archlinux has builds for aarch64 including a pre-packed <code>ArchLinuxARM-aarch64-latest.tar.gz</code> which can be directly extracted into freshly MBR-partitioned storage device with vvat+ext4 boot/root partitions mounted for getting started immediately. But why rely on a tarball at all?</p>
<h2>pacstrapping a new AArch64 installation for the Pi 4</h2>
<p>Given my machines already run Archlinux (x86_64) I figured it would only be fitting to <code>pacstrap</code> the Raspberry Pi 4's USB stick manually. Here are some instructions to get started:</p>
<h3>Partitioning</h3>
<p>Use fdisk to create a new MBR table with part1 being (200M) and part2 with the remainder of the space.</p>
<p>Or something small like &gt;4GB for working with virtual storage such as a ZVOL for easily fitting this prepared image onto storage of various sizes.</p>
<p>Make sure to set the first partition's type to <code>c</code> which is <code>W95 FAT32 (LBA)</code> so the Pi's bootloader firmware knows where to look.</p>
<h3>Formatting</h3>
<p>Format them as <code>vfat</code> and <code>ext4</code>:
<code>sudo mkfs.vfat /dev/disk/by-id/usb-TheDrive-0\:0-part1</code>
<code>sudo mkfs.ext4 /dev/disk/by-id/usb-TheDrive-0\:0-part2</code></p>
<h3>Mounting</h3>
<p>Mount both in order with something like
<code>sudo mount /dev/disk/by-id/usb-TheDrive-0\:0-part2 /mnt</code>
<code>sudo mkdir -p /mnt/boot</code>
<code>sudo mount /dev/disk/by-id/usb-TheDrive-0\:0-part1 /mnt/boot/</code></p>
<h3>Pacstrapping</h3>
<p>The host needs to trust the Archlinux ARM build servers for installing packages. This can be done by downloading it from the repository into pacman's keydir and then importing then signing the key. The installation itself comes with this file already.</p>
<p><code>sudo wget https://raw.githubusercontent.com/archlinuxarm/archlinuxarm-keyring/master/archlinuxarm.gpg</code>
<code>sudo pacman-key --populate archlinuxarm</code>
<code>sudo pacman-key --lsign-key archlinuxarm</code></p>
<p>We must also create a replica of the expected Archlinux AArch64 pacman.conf for our host's instance to use. It can be short and cut-down as below with a single mirror for the various repos:</p>
<pre><code>echo '[options]
HoldPkg     = pacman glibc
Architecture = aarch64

CheckSpace
SigLevel    = Required DatabaseOptional
LocalFileSigLevel = Optional

[core]
Server = http://mirror.archlinuxarm.org/$arch/$repo
[extra]
Server = http://mirror.archlinuxarm.org/$arch/$repo
[community]
Server = http://mirror.archlinuxarm.org/$arch/$repo
[alarm]
Server = http://mirror.archlinuxarm.org/$arch/$repo
[aur]
Server = http://mirror.archlinuxarm.org/$arch/$repo
' &gt; ~/pacman.aarch64.conf</code></pre>
<p>Now we can <code>pacstrap</code>into the Pi's new rootfs referencing this config and some good starter packages:</p>
<p><code>sudo pacstrap -M -K -C ~/pacman.aarch64.conf /mnt archlinuxarm-keyring base chrony openssh raspberrypi-bootloader vim networkmanager linux-rpi</code></p>
<p>We use these flags to avoid using anything from the host in this process:</p>
<ul>
<li><code>-M</code> (Avoid copying the host’s mirrorlist to the target.)</li>
<li><code>-K</code> (Initialize an empty pacman keyring in the target (implies -G).)</li>
<li><code>-C</code> (Use an alternate config file for pacman.)</li>
</ul>
<h3>Chrooting into different architectures</h3>
<p>Its difficult to <code>chroot</code>/<code>arch-chroot</code> into a rootfs of a differing architecture as none of its binaries will run. Luckily QEMU provides a solution for user-mode emulation which leverages <code>binfmt</code> to invoke <code>qemu-aarch64-static</code> for running those binaries instead of trying to execute it natively ourselves.</p>
<p>Fetch the relevant packages with:</p>
<p><code>pacman -S qemu-user-static-binfmt qemu-user-static</code></p>
<p>Activate it by copying its configuration into binfmt.d's working area:</p>
<ol start="7">
<li><code>sudo cp /usr/lib/binfmt.d/qemu-aarch64-static.conf /etc/binfmt.d/</code></li>
</ol>
<p>You can verify it's active with: <code>ls -lah /proc/sys/fs/binfmt_misc/qemu-aarch64</code></p>
<p>Now we need to copy it into the raspberry-pi's rootfs for chroot to find:</p>
<p><code>sudo cp $(which qemu-aarch64-static) /mnt/usr/bin</code></p>
<p>We can now chroot into the rootfs for this Pi.</p>
<h3>Configuring the inside</h3>
<p>Chroot into the Pi's rootfs with <code>sudo arch-chroot /mnt qemu-aarch64-static /bin/bash</code> and configure the last bits and pieces:</p>
<pre><code>systemctl enable NetworkManager sshd chronyd # Enable some critical networking and remote-access services.
pacman-key --init ; pacman-key --populate archlinuxarm # Initialize pacman's keyring and add archlinuxarm's keys.
passwd               # Set a root password.
vim /etc/hostname     # to set a hostname.
vim /etc/fstab        # Or use `genfstab -U /mnt | sudo tee /mnt/etc/fstab` on the host and verify.
vim /boot/cmdline.txt # Don't forget to change root= to either a root=UUID=YourRootfsUUID or /dev/sda</code></pre>
<p>Its advisable to use UUID=xxx for /mnt/etc/fstab to avoid device path changes from potentially borking the boot. <code>genfstab -U /mnt | sudo tee /mnt/etc/fstab</code> will create the entries for you but you will still have to uncomment the UUID lines and use them at the start of the other lines instead of the disk dev path.</p>
<p>You can place some ssh keys in <code>/root/.ssh/authorized_keys</code> or add new users.</p>
<p>Password authentication for <code>root</code> is only possible after setting <code>PermitRootLogin yes</code> in <code>/etc/ssh/sshd_config</code> either in the chroot session or outside.</p>
<p>You can then set it up like any other Archlinux box. At a minimum you should probably uncomment your desired locale in <code>/etc/locale.gen</code>, run <code>locale-gen</code> and set it in <code>/etc/locale.conf</code> as <code>LANG=xxx_yyy-zzz</code></p>
<h3>Finishing up</h3>
<p>Finally leave the chroot. You can expand the partition if needed and <code>sync</code> the USB before pulling it out (I can never trust USB device writes), remove the SD card or USB stick and boot the Pi.</p>
<p>Keep in mind Pi's (For some reason of choice) cannot USB-boot without being told to do so earlier in their life. This creates a bootstrap paradox requiring the Pi to be booted into some distro and enabling USB-booting before trying to use a USB stick.</p>
<h1>Extra goodies</h1>
<h2>Serial over USB-C</h2>
<p>Thanks to DWC2, the various available Pi drivers and the USB ports of the Pi being wired directly to the CPU - The USB-C port used for powering the device it can be configured to present as various Human Input Devices (HIDs) over a data cable while receiving power. It can even present as multiple of them at once and most usefully in this context an ethernet or serial interface.</p>
<p>To enable serial create <code>/etc/modules-load.d/usb-c_serial.conf</code> with the below lines (drivers):</p>
<pre><code>dwc2
g_serial</code></pre>
<p>You also need to append <code>dtoverlay=dwc2</code> to your <code>/boot/config.txt</code> for this feature.</p>
<p>Finally, enable agetty to present a serial login console on the serial interface to-be: <code>systemctl enable getty@ttyGS0.service</code>.</p>
<p>You can now reboot to load those two modules and start shelling in over USB-C. Well, when there's enough power delivery which most desktop and laptop USB-C ports won't do. Single cable for the win.</p>]]></description></item><item><title>Booting Archiso over a network</title><link>https://blog.jjgaming.net/blog/archiso-pxe</link><guid>https://blog.jjgaming.net/blog/archiso-pxe</guid><pubDate>Wed, 02 Oct 2024 00:00:00 +1000</pubDate><description><![CDATA[<h1>PXE and booting the Archlinux ISO over a network</h1>
<h2>About</h2>
<p>This is a small article covering how to boot the archiso over the network achieved with isc-dhcp-server, tftp-hpa and Nginx.</p>
<h2>ISOs</h2>
<h3>Booting ISOs</h3>
<p>ISOs can be booted in any number of ways; You can keep it traditional and burn them to a CD or block-level-copy them to a USB drive and boot that. Or most commonly, select the file for a virtual cd rom drive for a virtual machine to boot from as if it were real.</p>
<p>Upon booting any of these methods the system will notice the bootable ISO content like a normal installed operating system and give the option to boot from it. In the case of Archlinux this is a live environment.</p>
<p>How the maintainers of any given major distro decide to handle booting their installation medium is up to them and often unique. In the case of this Archlinux ISO I have here: <code>archlinux-2024.09.01-x86_64.iso</code> booting it enters systemd-boot and the default menu option has these arguments: <code>archisobasedir=arch archisosearchuuid=2024-09-01-12-40-01-00</code> and the ISO's partition UUID happens to be that. RedHat has its own special way and flags for doing so and I cover some of these in my <a href="https://github.com/ipaqmaster/autoProvisioner">autoProvisioner</a> project which has a few .ipxe examples for Archlinux, CentOS 7 and Rocky 9.2 showing each their own unique boot argument implementations and differences in process.</p>
<p>Referencing the installation media by a partition UUID has the benefit of not needing to care <em>how</em> the ISO was presented to the system, be it a cdrom, usb stick or even NVMe for some reason - the partition UUID will always appear in the same place regardless after the kernel reads information about the partition.</p>
<p>This can be seen any time by running <code>sudo losetup -Pf archlinux-2024.09.01-x86_64.iso</code> which sets the ISO up on a loop device causing the system to enumerate it like an attached drive. This causes <code>/dev/disk/by-uuid/2024-09-01-12-40-01-00</code> to appear (This loop can be removed again with <code>losetup -d loop0</code>).</p>
<p>In the case of this bootable ISO we're hitting the systemd-boot bootloader which has been preconfigured to boot the rest of the environment.</p>
<h3>Booting Archiso</h3>
<p>Most Linux ISOs including the Archiso support traditional BIOS booting as a hybrid ISO however we'll stick to the context of UEFI.</p>
<p>Mounting the archiso to a directory reveals close to 110 files. Many related to Syslinux (A common bootloader for ISOs), a memtest.efi (which is one of the boot options) and a top level directory named <code>arch/</code> which contains yet another bootloader and initramfs inside plus <code>arch/x86_64/airootfs.sfs</code> which is the in-memory rootfs we end up loading to with a command prompt once the Archiso finishes booting.</p>
<p>The ISO has <code>EFI/BOOT/BOOTx64.EFI</code> which is the default path a UEFI system searches for on EFI partitions and these are automatically booted from in device order when no other boot options are configured. It's common to place a copy of the intended bootloader at this path for this case. Windows systems create BOOTx64.EFI as a copy of <code>bootmgfw.efi</code>, the intended target.</p>
<p>This chain of events starts at the top with the UEFI system seeing the ISO's bootable partition and starting up <code>EFI/BOOT/BOOTx64.EFI</code> which is systemd-boot (Previously gummiboot). These three boot options can be seen in <code>loader/entries/</code> on the mounted ISO.</p>
<p><code>loader/entries/01-archiso-x86_64-linux.conf</code> shows that it intends to boot <code>arch/boot/x86_64/vmlinuz-linux</code> using initrd <code>arch/boot/x86_64/initramfs-linux.img</code> with options <code>archisobasedir=arch archisosearchuuid=2024-09-01-12-40-01-00</code>. This won't be able to help us as we will not have any form of the ISO available for it to reference the content from.</p>
<p><a href="https://wiki.archlinux.org/title/Preboot_Execution_Environment">Thanks to the wiki</a> network-booting this distribution is well documented and officially supported. We still need <code>archisobasedir=arch</code> which is the subdirectory of the ISO with all of the content but we can specify <code>archiso_http_srv</code> with a http address to fetch the bootable content from.</p>
<p>There is also <code>cms_verify</code> which seems to be repsonsible for validating airootfs.sfs which comes with a sha512 and a .sig signature file the live environment can validate against.</p>
<h3>PXE booting and limitations</h3>
<p>PXE booting has been around for decades and any typical UEFI system you run into will support it. When a computer tries to PXE boot your DHCP server can respond with hints of an address and file path for the computer to request over TFTP and boot.</p>
<p>The ROM of PCIe Network cards is often flashed with some PXE-capable program which appears as another boot option. If the provided ROM isn't up to scratch it's common to flash network cards with iPXE. The gold standard.</p>
<p>We could jump right in and modify our dhcp server to load up the above <code>vmlinuz-linux</code> file from some TFTP server but there's no way to specify an initrd file or kernel arguments for the kernel we're about to boot. Because of this limitation we must seek the help of iPXE who can act as our network bootloader for this case - allowing us to boot the kernel with an initramfs image and arguments we want included.</p>
<h3>Getting started</h3>
<p>For this we will need these packages from pacman: <code>dhcp git nginx tftp-hpa</code></p>
<h4>DHCPD</h4>
<p>Start with the DHCP server. Create or modify an existing <code>/etc/dhcpd.conf</code> and add <code>filename "ipxe.efi";</code> as one of the options and restart the service.</p>
<p>Without specifying additional configuration such as <code>option tftp-server-address</code> it will be assumed that the TFTP server is running on the router.</p>
<p>A very lightweight example dhcpd.conf might look like this:</p>
<pre><code>authoritative;
default-lease-time 1800;
max-lease-time     3600;

subnet 10.99.0.0 netmask 255.255.255.224 {
  option domain-name-servers      1.1.1.1;
  option routers                  10.99.0.1;
  option tftp-server-address      10.99.0.1;
  range dynamic-bootp             10.99.0.10 10.99.0.100;
  filename                       "/ipxe.efi";
}</code></pre>
<p>Keep in mind dhcpd will not issue IPs for a subnet until that network is visible on an active interface. So the above small configuration example is harmless until an IP in that range exists on an interface.</p>
<h4>TFTP</h4>
<p>Start and enable tftp-hpa with: systemctl enable --now tftpd`. This TFTP server operates in /srv/tftp by default.</p>
<p>Create a directory to mount your ISO like<code>mkdir -p /srv/tftp/archlinux</code> and mount the latest ISO to that directory: <code>mount ~/archlinux-2024.09.01-x86_64.iso /srv/tftp/archlinux/</code>. This exposes the ISO contents for the client to reference.</p>
<h4>NGINX</h4>
<p>Create an example vhost which points to /srv/tftp with autoindexing enabled:</p>
<pre><code># vi /etc/nginx/conf.d/tftp.conf

server {
  listen       80;
  http2        on;
  server_name  tftpServerHostnameHere;
  autoindex    on;
  root         /srv/tftp;

  access_log /var/log/nginx/access.theHostname.log;
  error_log  /var/log/nginx/error.theHostname.log;
}</code></pre>
<p>Making sure to set server_name to the network hostname or IP address of your TFTP server.</p>
<p>Start and enable Nginx with: systemctl enable --now nginx` and try browsing to the server_name to confirm the directory contents are loading and displaying correctly.</p>
<h4>iPXE</h4>
<p>Everything is set and we need one final piece of the puzzle. An iPXE image which knows how to boot the Archiso.</p>
<p><code>git clone https://github.com/ipxe/ipxe</code> and <code>cd ipxe/src</code></p>
<p>In here create a file like <code>script.ipxe</code> with the below content:</p>
<pre><code>#!ipxe
set baseUrl http://tftpServerHostnameHere
dhcp || shell
initrd ${baseUrl}/archlinux/arch/boot/x86_64/initramfs-linux.img || shell
kernel ${baseUrl}/archlinux/arch/boot/x86_64/vmlinuz-linux ip=dhcp archiso_http_srv=${baseUrl}/archlinux/ archisobasedir=arch cms_verify=y script=${baseUrl}/script.cfg || shell
boot || shell</code></pre>
<p>These iPXE commands prepare iPXE to boot the Archiso environment just like the bootloader in the ISO would do. The <code>|| shell</code> case on the end of each line specifies that if any of these commands should fail, fall back to the iPXE shell for debugging. <code>|| reboot</code> is also valid but in this testing case, it would reboot too quickly on error for us to read an error message.</p>
<p>This ipxe script includes special arguments for reaching out to a http server for the required content instead of the cdrom partition uuid. It also includes <code>script=</code>, which can be used to reference a shell script after the live environment starts up. You can create <code>/srv/tftp/script.cfg</code> with shell commands inside to initiate say, an automated unattended installation with archinstall or any number of custom commands you may wish to prep the live environment with. This could include some form of .sig pgp validation of the script itself or reaching out to a secure trusted source for a PXE client to follow.</p>
<p>Finally build the iPXE efi binary with our script embedded inside. Then move it to the tftp/webroot directory: <code>make bin-x86_64-efi/ipxe.efi -j$(nproc) EMBED=script.ipxe</code> <code>cp bin-x86_64-efi/ipxe.efi /srv/tftp/</code></p>
<h3>PXE Booting</h3>
<p>At this point with our TFTP and Webserver running we can boot the PXE client. If everything has been followed correctly it should grab our custom iPXE efi binary from the TFTP server.</p>
<p>iPXE will then follow our embedded script, fetching the kernel image, initramfs and setting our desired kernel arguments. The initramfs environment has been designed for the Archlinux ISO and will find, verify and unpack airootfs.sfs before taking you to a terminal prompt.</p>
<p>If you create <code>/srv/tftp/script.cfg</code> and leave the <code>script=</code> argument in from the above iPXE script example you can have it automatically append your ssh key to <code>/root/.ssh/authorized_keys</code> and start the sshd service for remote access to the live environment for installing by hand, or troubleshooting problems with the host using SSH.</p>]]></description></item></channel></rss>
Unique hits for this page: 558