So, I had left my little Ubuntu server alone and neglected, giving it the occasional glance and the occasional login to run an apt-get update && apt-get autoremove…
Well, with my recent shenanigans surrounding a power cut (self-caused, mind you), I was also prompted to upgrade to Ubuntu LTS 22.04 “.1”… Well! That should be more stable (than the .0 released back in April)! Ooo-kay! Time to give it a whack!
Turns out, things went south pretty fast and I needed half an evening to right everything…
S P A C E
Specifically, the lack of it… Despite having a 64GB vHDD (with a 40GB partition for the main filesystem) on a system with (relatively) little installed, it threw warnings mid-way through the install, and at one point it finally threw some apt-get errors…
So, firstly, I had to clear some files… It finally dawned on me that I hadn’t cleared the journal since the dawn of existence (or at least the build’s existence)… journalctl --vacuum-time=5d cleared a good 3.9GB (slightly below 10%)! Clearing out old snaps also bought me another ~1GB of partition space…
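For anyone wanting to replicate the clean-up, it boils down to something like the following (a sketch: the 5-day vacuum window is just what I used, and the snap loop is one common recipe for purging old, disabled revisions rather than necessarily the exact commands I ran):

# trim the systemd journal down to the last 5 days of entries
sudo journalctl --vacuum-time=5d

# list old, disabled snap revisions and remove them one by one
snap list --all | awk '/disabled/{print $1, $3}' | \
  while read name rev; do sudo snap remove "$name" --revision="$rev"; done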
I didn’t think it was enough, so I decided to expand the 64GB vHDD to 88GB using the “Edit Virtual Hard Disk Wizard”… So, since the partition layout was:
- /dev/sda1 – boot, EFI
- /dev/sda2 – Ubuntu OS
- /dev/sda3 – swap
- /dev/sda4 – zlog
I booted with the latest GParted ISO, moved the zlog partition to the end of the (now expanded) disk, deleted the swap partition, extended the main partition, then created a new swap partition in the freed space (quicker than moving the existing swap partition as well)…
Spotted my mistake yet?
Like Daniel, I had forgotten to “wax on”, “wax off” – or more specifically (and technically correct), “swapoff” (with the need to eventually “swapon” later)…
The system threw errors on reboot but came up anyway… A quick fix soon had things back in order:
- swapoff -a (threw errors anyway)
- mkswap /dev/sda3 (of course, ensure that “/dev/sda3” is replaced with the proper device path of the existing swap partition)
- partprobe (you know, to “stuff something long and thin”, i.e. probe the OS to force-reload the partitions it knows about)
- blkid | grep "swap" to grab the swap partition’s UUID
- edit /etc/fstab to ensure the swap partition’s UUID (if used) is replaced with the current swap partition’s UUID (an example entry is sketched below)
- swapon -a (and pray…)
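For reference, the corresponding swap entry in /etc/fstab looks something like this (the UUID below is a placeholder for illustration; use the value reported by blkid):

# /etc/fstab – swap keyed by UUID, so it survives device renumbering
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx none swap sw 0 0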
So, the upgrade went on…
apt-get, grub and the BOOT/EFI
NOT!
Whilst the upgrade appeared to have gone smoothly (barring some silly sr0 errors which just delay boot but don’t really seem to hurt anything), apt-get was repeatedly aborting, reporting:
dpkg: error processing package grub-efi-amd64-signed (--configure):
 installed grub-efi-amd64-signed package post-installation script subprocess returned error exit status 32
dpkg: dependency problems prevent processing triggers for shim-signed:
 shim-signed depends on grub-efi-amd64-signed | grub-efi-arm64-signed; however:
  Package grub-efi-amd64-signed is not configured yet.
  Package grub-efi-arm64-signed is not installed.
dpkg: error processing package shim-signed (--configure):
 dependency problems - leaving triggers unprocessed
Errors were encountered while processing:
 grub-efi-amd64-signed
 shim-signed
I found a similar situation that triggered the thought that I had, likewise, done a P2V migration of my Ubuntu…
So, whereas the UUIDs were intact in /etc/fstab, the actual /dev/sda1 partition was flagged as a simple “FAT32” type partition…
Using fdisk and parted fixed that right up…
- fdisk /dev/sda (replace /dev/sda with the correct device path)
- t (changing partition type)
- 1 (i.e. selecting /dev/sda1)
- EF (using the “EFI” alias)
- w (commit changes and quit)
- parted /dev/sda (replace /dev/sda with the correct device path)
- set 1 boot on (set the boot flag on partition #1 of /dev/sda)
- set 1 esp on (set the esp flag on partition #1 of /dev/sda)
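As a quick sanity check afterwards (purely optional, and not part of the original steps), the new type and flags can be confirmed with:

# confirm partition 1 now carries the boot/esp flags and is still a FAT32/vfat filesystem
parted /dev/sda print
blkid /dev/sda1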
Double-checking /etc/fstab and the UUID against the “/boot/efi” entry showed things were working. So, as per the answer providing the hints, I did:
- an apt-get --fix-broken install, and that fixed grub-efi-amd64-signed, and
- a subsequent apt-get upgrade grub-efi-amd64-signed && apt-get upgrade shim-signed sorted everything else out…
SSH and Keys
OK… So not everything… While upgrading, I was warned of a conflict between my /etc/ssh/sshd_config and the (upgraded) vendor version, so I noted down what was different and decided to use the “new” version.
Chief amongst the changes was doing for sshd what I had already done on pfSense: forcing public-private key authentication before password entry (to prevent password brute-force attacks, as part of 2FA). Namely, that meant editing the configuration file (the resulting fragment is sketched after this list) and:
- adding the line “AuthenticationMethods publickey,password”, and
- enabling (i.e. removing the “#” comment prefix) “PubkeyAuthentication yes” and “AuthorizedKeysFile .ssh/authorized_keys .ssh/authorized_keys2”, and
- restarting the service, with either “service sshd restart” or “systemctl restart sshd”
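Put together, the relevant /etc/ssh/sshd_config fragment looked something like this (a sketch of just the directives above; everything else in the file stays as shipped):

# require a valid public key first, then the account password, on every login
AuthenticationMethods publickey,password
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys .ssh/authorized_keys2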
But that didn’t work… Attempting to SSH into the box still reported a “Server refused our key” error… Finding a similar error, and specifically checking /var/log/auth.log, finally pointed to the fix: the answer was to add PubkeyAcceptedKeyTypes=+ssh-rsa to /etc/ssh/sshd_config, with a view to (eventually) upgrading our keys…
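So the final addition to /etc/ssh/sshd_config was just that one line, followed by another service restart; something like:

# allow existing ssh-rsa keys to keep authenticating until they are rotated to newer key types
PubkeyAcceptedKeyTypes=+ssh-rsa

# then reload the new config
systemctl restart sshd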
Let’s see if I hit anything in the future…