ZTS: Use QEMU for tests on Linux and FreeBSD
This commit adds functional tests for these systems:
- AlmaLinux 8, AlmaLinux 9, ArchLinux
- CentOS Stream 9, Fedora 39, Fedora 40
- Debian 11, Debian 12
- FreeBSD 13, FreeBSD 14, FreeBSD 15
- Ubuntu 20.04, Ubuntu 22.04, Ubuntu 24.04

Enabled by default:
- AlmaLinux 8, AlmaLinux 9
- Debian 11, Debian 12
- Fedora 39, Fedora 40
- FreeBSD 13, FreeBSD 14

Workflow for each operating system:
- install QEMU on the GitHub runner
- download the current cloud image of the operating system
- start and initialize that image via cloud-init
- install dependencies and power off the system
- start the system, build OpenZFS, then power off again
- clone the build system and start 2 instances of it
- run the functional tests, completing in around 3h
- when the tests are done, prepare the logfiles
- show detailed results for each system
- finally, generate the job summary

Real-world benefits from this PR:

1. The GitHub runner scripts live in the zfs repo itself. That means you can open
   a PR against zfs, like "Add Fedora 41 tester", and see the results directly in
   the PR. ZFS admins no longer need to manually log in to the buildbot server to
   update the buildbot config with a new version of Fedora/AlmaLinux.

2. GitHub runners let you run the entire test suite against your private branch
   before submitting a formal PR to OpenZFS. Just open a PR against your private
   zfs repo, and the exact same Fedora/Alma/FreeBSD runners will fire up and run
   ZTS. This is useful if you want to iterate on a ZTS change before submitting a
   formal PR.

3. Buildbot is incredibly cumbersome. Our buildbot config files alone are ~1500
   lines (not including any build/setup scripts)! It's a huge pain to set up.

4. We're running the ancient buildbot 0.8.12. It's so old it requires python2; we
   actually have to build python2 from source for AlmaLinux 9 just to get it to
   run. Upgrading to a more modern buildbot is a huge undertaking, and the UI on
   the newer versions is worse.

5. Buildbot uses EC2 instances. EC2 is a pain because:
   * It costs money.
   * It throttles IOPS and CPU usage, leading to mysterious, hard-to-diagnose
     failures and timeouts in ZTS.
   * It is high maintenance. We have to set up security groups, SSH keys,
     networking, users, etc. in AWS, and we periodically have to go in and kill
     zombie EC2 instances that buildbot is unable to kill off.

6. Buildbot doesn't always handle failures well. In the past the FreeBSD builders
   would often die, and each builder death would take up a "slot" in buildbot, so
   we had to periodically restart buildbot via a cron job to get the slots back.

7. This PR divides the ZTS test list into two parts, launches two VMs, and runs
   half the test suite on each VM. The test results are then merged and shown on
   the summary page. We're basically parallelizing ZTS on the same GitHub runner,
   which leads to lower overall ZTS runtimes (2.5-3 hours vs 4+ hours on buildbot)
   and one unified set of results per runner.

8. Since the tests run in a VM, we have much more control over what happens. We
   can capture the serial console output even if the test completely brings down
   the VM. In the future, we could also restart the test on the VM where it left
   off, so that if a single test panics the VM we can restart it and run the
   remaining ZTS tests (this functionality is not yet implemented, just an idea).

9. Using the runners, users can manually kill or restart a test run via the
   GitHub UI. That really isn't possible with buildbot unless you're an admin.

10. Anecdotally, the tests seem to be more stable and consistent under the QEMU
    runners.

Reviewed by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Tino Reichardt <milky-zfs@mcmilk.de>
Signed-off-by: Tony Hutter <hutter2@llnl.gov>
Closes #16537
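Note on item 7: the two-VM split relies on the new fractional -T syntax added to scripts/zfs-tests.sh in this commit. As a minimal sketch of what each test VM ends up running (the exact invocation lives in qemu-6-tests.sh below; the pool size and path here simply mirror it):

    # vm1 runs the first half of the test tags, vm2 the second half
    /usr/share/zfs/zfs-tests.sh -vK -s 3GB -T 1/2    # on vm1
    /usr/share/zfs/zfs-tests.sh -vK -s 3GB -T 2/2    # on vm2

split_tags() expands "1/2" into an interleaved, comma-separated tag list before the run starts, so neither VM gets stuck with all the alphabetically-adjacent zpool_* tests.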
This commit is contained in:
parent
c4d1a19b33
commit
bca9b64e7b
14
.github/workflows/scripts/README.md
vendored
Normal file
@@ -0,0 +1,14 @@

Workflow for each operating system:
- install qemu on the github runner
- download current cloud image of operating system
- start and init that image via cloud-init
- install dependencies and poweroff system
- start system and build openzfs and then poweroff again
- clone build system and start 2 instances of it
- run functional testings and complete in around 3h
- when tests are done, do some logfile preparing
- show detailed results for each system
- in the end, generate the job summary

/TR 14.09.2024
109
.github/workflows/scripts/merge_summary.awk
vendored
Executable file
@@ -0,0 +1,109 @@
#!/bin/awk -f
#
# Merge multiple ZTS tests results summaries into a single summary. This is
# needed when you're running different parts of ZTS on different tests
# runners or VMs.
#
# Usage:
#
# ./merge_summary.awk summary1.txt [summary2.txt] [summary3.txt] ...
#
# or:
#
# cat summary*.txt | ./merge_summary.awk
#
BEGIN {
i=-1
pass=0
fail=0
skip=0
state=""
cl=0
el=0
upl=0
ul=0

# Total seconds of tests runtime
total=0;
}

# Skip empty lines
/^\s*$/{next}

# Skip Configuration and Test lines
/^Test:/{state=""; next}
/Configuration/{state="";next}

# When we see "test-runner.py" stop saving config lines, and
# save test runner lines
/test-runner.py/{state="testrunner"; runner=runner$0"\n"; next}

# We need to differentiate the PASS counts from test result lines that start
# with PASS, like:
#
# PASS mv_files/setup
#
# Use state="pass_count" to differentiate
#
/Results Summary/{state="pass_count"; next}
/PASS/{ if (state=="pass_count") {pass += $2}}
/FAIL/{ if (state=="pass_count") {fail += $2}}
/SKIP/{ if (state=="pass_count") {skip += $2}}
/Running Time/{
state="";
running[i]=$3;
split($3, arr, ":")
total += arr[1] * 60 * 60;
total += arr[2] * 60;
total += arr[3]
next;
}

/Tests with results other than PASS that are expected/{state="expected_lines"; next}
/Tests with result of PASS that are unexpected/{state="unexpected_pass_lines"; next}
/Tests with results other than PASS that are unexpected/{state="unexpected_lines"; next}
{
if (state == "expected_lines") {
expected_lines[el] = $0
el++
}

if (state == "unexpected_pass_lines") {
unexpected_pass_lines[upl] = $0
upl++
}
if (state == "unexpected_lines") {
unexpected_lines[ul] = $0
ul++
}
}

# Reproduce summary
END {
print runner;
print "\nResults Summary"
print "PASS\t"pass
print "FAIL\t"fail
print "SKIP\t"skip
print ""
print "Running Time:\t"strftime("%T", total, 1)
if (pass+fail+skip > 0) {
percent_passed=(pass/(pass+fail+skip) * 100)
}
printf "Percent passed:\t%3.2f%", percent_passed

print "\n\nTests with results other than PASS that are expected:"
asort(expected_lines, sorted)
for (j in sorted)
print sorted[j]

print "\n\nTests with result of PASS that are unexpected:"
asort(unexpected_pass_lines, sorted)
for (j in sorted)
print sorted[j]

print "\n\nTests with results other than PASS that are unexpected:"
asort(unexpected_lines, sorted)
for (j in sorted)
print sorted[j]
}
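For context, this merge step is driven by qemu-7-prepare.sh further down, which concatenates the per-VM summaries into a file called "summary" and pipes it through this script. Run by hand it would look roughly like this (file names are illustrative):

    # merge the summaries of two test VMs into one report
    cat vm1log.txt vm2log.txt | ./merge_summary.awk > summary.txt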
91
.github/workflows/scripts/qemu-1-setup.sh
vendored
Executable file
@@ -0,0 +1,91 @@
#!/usr/bin/env bash

######################################################################
# 1) setup qemu instance on action runner
######################################################################

set -eu

# install needed packages
export DEBIAN_FRONTEND="noninteractive"
sudo apt-get -y update
sudo apt-get install -y axel cloud-image-utils daemonize guestfs-tools \
ksmtuned virt-manager linux-modules-extra-$(uname -r) zfsutils-linux

# generate ssh keys
rm -f ~/.ssh/id_ed25519
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -q -N ""

# we expect RAM shortage
cat << EOF | sudo tee /etc/ksmtuned.conf > /dev/null
# https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_tuning_and_optimization_guide/chap-ksm
KSM_MONITOR_INTERVAL=60

# Millisecond sleep between ksm scans for 16Gb server.
# Smaller servers sleep more, bigger sleep less.
KSM_SLEEP_MSEC=10
KSM_NPAGES_BOOST=300
KSM_NPAGES_DECAY=-50
KSM_NPAGES_MIN=64
KSM_NPAGES_MAX=2048

KSM_THRES_COEF=25
KSM_THRES_CONST=2048

LOGFILE=/var/log/ksmtuned.log
DEBUG=1
EOF
sudo systemctl restart ksm
sudo systemctl restart ksmtuned

# not needed
sudo systemctl stop docker.socket
sudo systemctl stop multipathd.socket

# remove default swapfile and /mnt
sudo swapoff -a
sudo umount -l /mnt
DISK="/dev/disk/cloud/azure_resource-part1"
sudo sed -e "s|^$DISK.*||g" -i /etc/fstab
sudo wipefs -aq $DISK
sudo systemctl daemon-reload

sudo modprobe loop
sudo modprobe zfs

# partition the disk as needed
DISK="/dev/disk/cloud/azure_resource"
sudo sgdisk --zap-all $DISK
sudo sgdisk -p \
-n 1:0:+16G -c 1:"swap" \
-n 2:0:0 -c 2:"tests" \
$DISK
sync
sleep 1

# swap with same size as RAM
sudo mkswap $DISK-part1
sudo swapon $DISK-part1

# 60GB data disk
SSD1="$DISK-part2"

# 10GB data disk on ext4
sudo fallocate -l 10G /test.ssd1
SSD2=$(sudo losetup -b 4096 -f /test.ssd1 --show)

# adjust zfs module parameter and create pool
exec 1>/dev/null
ARC_MIN=$((1024*1024*256))
ARC_MAX=$((1024*1024*512))
echo $ARC_MIN | sudo tee /sys/module/zfs/parameters/zfs_arc_min
echo $ARC_MAX | sudo tee /sys/module/zfs/parameters/zfs_arc_max
echo 1 | sudo tee /sys/module/zfs/parameters/zvol_use_blk_mq
sudo zpool create -f -o ashift=12 zpool $SSD1 $SSD2 \
-O relatime=off -O atime=off -O xattr=sa -O compression=lz4 \
-O mountpoint=/mnt/tests

# no need for some scheduler
for i in /sys/block/s*/queue/scheduler; do
echo "none" | sudo tee $i > /dev/null
done
213
.github/workflows/scripts/qemu-2-start.sh
vendored
Executable file
@@ -0,0 +1,213 @@
#!/usr/bin/env bash

######################################################################
# 2) start qemu with some operating system, init via cloud-init
######################################################################

set -eu

# short name used in zfs-qemu.yml
OS="$1"

# OS variant (virt-install --os-variant list)
OSv=$OS

# compressed with .zst extension
REPO="https://github.com/mcmilk/openzfs-freebsd-images"
FREEBSD="$REPO/releases/download/v2024-09-16"
URLzs=""

# Ubuntu mirrors
#UBMIRROR="https://cloud-images.ubuntu.com"
#UBMIRROR="https://mirrors.cloud.tencent.com/ubuntu-cloud-images"
UBMIRROR="https://mirror.citrahost.com/ubuntu-cloud-images"

# default nic model for vm's
NIC="virtio"

case "$OS" in
almalinux8)
OSNAME="AlmaLinux 8"
URL="https://repo.almalinux.org/almalinux/8/cloud/x86_64/images/AlmaLinux-8-GenericCloud-latest.x86_64.qcow2"
;;
almalinux9)
OSNAME="AlmaLinux 9"
URL="https://repo.almalinux.org/almalinux/9/cloud/x86_64/images/AlmaLinux-9-GenericCloud-latest.x86_64.qcow2"
;;
archlinux)
OSNAME="Archlinux"
URL="https://geo.mirror.pkgbuild.com/images/latest/Arch-Linux-x86_64-cloudimg.qcow2"
# dns sometimes fails with that url :/
echo "89.187.191.12 geo.mirror.pkgbuild.com" | sudo tee /etc/hosts > /dev/null
;;
centos-stream9)
OSNAME="CentOS Stream 9"
URL="https://cloud.centos.org/centos/9-stream/x86_64/images/CentOS-Stream-GenericCloud-9-latest.x86_64.qcow2"
;;
debian11)
OSNAME="Debian 11"
URL="https://cloud.debian.org/images/cloud/bullseye/latest/debian-11-generic-amd64.qcow2"
;;
debian12)
OSNAME="Debian 12"
URL="https://cloud.debian.org/images/cloud/bookworm/latest/debian-12-generic-amd64.qcow2"
;;
fedora39)
OSNAME="Fedora 39"
OSv="fedora39"
URL="https://download.fedoraproject.org/pub/fedora/linux/releases/39/Cloud/x86_64/images/Fedora-Cloud-Base-39-1.5.x86_64.qcow2"
;;
fedora40)
OSNAME="Fedora 40"
OSv="fedora39"
URL="https://download.fedoraproject.org/pub/fedora/linux/releases/40/Cloud/x86_64/images/Fedora-Cloud-Base-Generic.x86_64-40-1.14.qcow2"
;;
freebsd13r)
OSNAME="FreeBSD 13.4-RELEASE"
OSv="freebsd13.0"
URLzs="$FREEBSD/amd64-freebsd-13.4-RELEASE.qcow2.zst"
BASH="/usr/local/bin/bash"
NIC="rtl8139"
;;
freebsd13)
OSNAME="FreeBSD 13.4-STABLE"
OSv="freebsd13.0"
URLzs="$FREEBSD/amd64-freebsd-13.4-STABLE.qcow2.zst"
BASH="/usr/local/bin/bash"
NIC="rtl8139"
;;
freebsd14r)
OSNAME="FreeBSD 14.1-RELEASE"
OSv="freebsd14.0"
URLzs="$FREEBSD/amd64-freebsd-14.1-RELEASE.qcow2.zst"
BASH="/usr/local/bin/bash"
;;
freebsd14)
OSNAME="FreeBSD 14.1-STABLE"
OSv="freebsd14.0"
URLzs="$FREEBSD/amd64-freebsd-14.1-STABLE.qcow2.zst"
BASH="/usr/local/bin/bash"
;;
freebsd15)
OSNAME="FreeBSD 15.0-CURRENT"
OSv="freebsd14.0"
URLzs="$FREEBSD/amd64-freebsd-15.0-CURRENT.qcow2.zst"
BASH="/usr/local/bin/bash"
;;
tumbleweed)
OSNAME="openSUSE Tumbleweed"
OSv="opensusetumbleweed"
MIRROR="http://opensuse-mirror-gce-us.susecloud.net"
URL="$MIRROR/tumbleweed/appliances/openSUSE-MicroOS.x86_64-OpenStack-Cloud.qcow2"
;;
ubuntu20)
OSNAME="Ubuntu 20.04"
OSv="ubuntu20.04"
URL="$UBMIRROR/focal/current/focal-server-cloudimg-amd64.img"
;;
ubuntu22)
OSNAME="Ubuntu 22.04"
OSv="ubuntu22.04"
URL="$UBMIRROR/jammy/current/jammy-server-cloudimg-amd64.img"
;;
ubuntu24)
OSNAME="Ubuntu 24.04"
OSv="ubuntu24.04"
URL="$UBMIRROR/noble/current/noble-server-cloudimg-amd64.img"
;;
*)
echo "Wrong value for OS variable!"
exit 111
;;
esac

# environment file
ENV="/var/tmp/env.txt"
echo "ENV=$ENV" >> $ENV

# result path
echo 'RESPATH="/var/tmp/test_results"' >> $ENV

# FreeBSD 13 has problems with: e1000+virtio
echo "NIC=$NIC" >> $ENV

# freebsd15 -> used in zfs-qemu.yml
echo "OS=$OS" >> $ENV

# freebsd14.0 -> used for virt-install
echo "OSv=\"$OSv\"" >> $ENV

# FreeBSD 15 (Current) -> used for summary
echo "OSNAME=\"$OSNAME\"" >> $ENV

sudo mkdir -p "/mnt/tests"
sudo chown -R $(whoami) /mnt/tests

# we are downloading via axel, curl and wget are mostly slower and
# require more return value checking
IMG="/mnt/tests/cloudimg.qcow2"
if [ ! -z "$URLzs" ]; then
echo "Loading image $URLzs ..."
time axel -q -o "$IMG.zst" "$URLzs"
zstd -q -d --rm "$IMG.zst"
else
echo "Loading image $URL ..."
time axel -q -o "$IMG" "$URL"
fi

DISK="/dev/zvol/zpool/openzfs"
FORMAT="raw"
sudo zfs create -ps -b 64k -V 80g zpool/openzfs
while true; do test -b $DISK && break; sleep 1; done
echo "Importing VM image to zvol..."
sudo qemu-img dd -f qcow2 -O raw if=$IMG of=$DISK bs=4M
rm -f $IMG

PUBKEY=$(cat ~/.ssh/id_ed25519.pub)
cat <<EOF > /tmp/user-data
#cloud-config

fqdn: $OS

users:
- name: root
  shell: $BASH
- name: zfs
  sudo: ALL=(ALL) NOPASSWD:ALL
  shell: $BASH
  ssh_authorized_keys:
    - $PUBKEY

growpart:
  mode: auto
  devices: ['/']
  ignore_growroot_disabled: false
EOF

sudo virsh net-update default add ip-dhcp-host \
"<host mac='52:54:00:83:79:00' ip='192.168.122.10'/>" --live --config

sudo virt-install \
--os-variant $OSv \
--name "openzfs" \
--cpu host-passthrough \
--virt-type=kvm --hvm \
--vcpus=4,sockets=1 \
--memory $((1024*12)) \
--memballoon model=virtio \
--graphics none \
--network bridge=virbr0,model=$NIC,mac='52:54:00:83:79:00' \
--cloud-init user-data=/tmp/user-data \
--disk $DISK,bus=virtio,cache=none,format=$FORMAT,driver.discard=unmap \
--import --noautoconsole >/dev/null

# in case the directory isn't there already
mkdir -p $HOME/.ssh

cat <<EOF >> $HOME/.ssh/config
# no questions please
StrictHostKeyChecking no

# small timeout, used in while loops later
ConnectTimeout 1
EOF
218
.github/workflows/scripts/qemu-3-deps.sh
vendored
Executable file
@@ -0,0 +1,218 @@
#!/usr/bin/env bash

######################################################################
# 3) install dependencies for compiling and loading
######################################################################

set -eu

function archlinux() {
echo "##[group]Running pacman -Syu"
sudo btrfs filesystem resize max /
sudo pacman -Syu --noconfirm
echo "##[endgroup]"

echo "##[group]Install Development Tools"
sudo pacman -Sy --noconfirm base-devel bc cpio dhclient dkms fakeroot \
fio gdb inetutils jq less linux linux-headers lsscsi nfs-utils parted \
pax perf python-packaging python-setuptools qemu-guest-agent ksh samba \
sysstat rng-tools rsync wget
echo "##[endgroup]"
}

function debian() {
export DEBIAN_FRONTEND="noninteractive"

echo "##[group]Running apt-get update+upgrade"
sudo apt-get update -y
sudo apt-get upgrade -y
echo "##[endgroup]"

echo "##[group]Install Development Tools"
sudo apt-get install -y \
acl alien attr autoconf bc cpio curl dbench dh-python dkms fakeroot \
fio gdb gdebi git ksh lcov isc-dhcp-client jq libacl1-dev libaio-dev \
libattr1-dev libblkid-dev libcurl4-openssl-dev libdevmapper-dev \
libelf-dev libffi-dev libmount-dev libpam0g-dev libselinux-dev \
libssl-dev libtool libtool-bin libudev-dev linux-headers-$(uname -r) \
lsscsi nfs-kernel-server pamtester parted python3 python3-all-dev \
python3-cffi python3-dev python3-distlib python3-packaging \
python3-setuptools python3-sphinx qemu-guest-agent rng-tools rpm2cpio \
rsync samba sysstat uuid-dev watchdog wget xfslibs-dev zlib1g-dev
echo "##[endgroup]"
}

function freebsd() {
export ASSUME_ALWAYS_YES="YES"

echo "##[group]Install Development Tools"
sudo pkg install -y autoconf automake autotools base64 checkbashisms fio \
gdb gettext gettext-runtime git gmake gsed jq ksh93 lcov libtool lscpu \
pkgconf python python3 pamtester pamtester qemu-guest-agent rsync \
sysutils/coreutils
sudo pkg install -xy \
'^samba4[[:digit:]]+$' \
'^py3[[:digit:]]+-cffi$' \
'^py3[[:digit:]]+-sysctl$' \
'^py3[[:digit:]]+-packaging$'
echo "##[endgroup]"
}

# common packages for: almalinux, centos, redhat
function rhel() {
echo "##[group]Running dnf update"
echo "max_parallel_downloads=10" | sudo -E tee -a /etc/dnf/dnf.conf
sudo dnf clean all
sudo dnf update -y --setopt=fastestmirror=1 --refresh
echo "##[endgroup]"

echo "##[group]Install Development Tools"
sudo dnf group install -y "Development Tools"
sudo dnf install -y \
acl attr bc bzip2 curl dbench dkms elfutils-libelf-devel fio gdb git \
jq kernel-rpm-macros ksh libacl-devel libaio-devel libargon2-devel \
libattr-devel libblkid-devel libcurl-devel libffi-devel ncompress \
libselinux-devel libtirpc-devel libtool libudev-devel libuuid-devel \
lsscsi mdadm nfs-utils openssl-devel pam-devel pamtester parted perf \
python3 python3-cffi python3-devel python3-packaging kernel-devel \
python3-setuptools qemu-guest-agent rng-tools rpcgen rpm-build rsync \
samba sysstat systemd watchdog wget xfsprogs-devel zlib-devel
echo "##[endgroup]"
}

function tumbleweed() {
echo "##[group]Running zypper is TODO!"
sleep 23456
echo "##[endgroup]"
}

# Install dependencies
case "$1" in
almalinux8)
echo "##[group]Enable epel and powertools repositories"
sudo dnf config-manager -y --set-enabled powertools
sudo dnf install -y epel-release
echo "##[endgroup]"
rhel
echo "##[group]Install kernel-abi-whitelists"
sudo dnf install -y kernel-abi-whitelists
echo "##[endgroup]"
;;
almalinux9|centos-stream9)
echo "##[group]Enable epel and crb repositories"
sudo dnf config-manager -y --set-enabled crb
sudo dnf install -y epel-release
echo "##[endgroup]"
rhel
echo "##[group]Install kernel-abi-stablelists"
sudo dnf install -y kernel-abi-stablelists
echo "##[endgroup]"
;;
archlinux)
archlinux
;;
debian*)
debian
echo "##[group]Install Debian specific"
sudo apt-get install -yq linux-perf dh-sequence-dkms
echo "##[endgroup]"
;;
fedora*)
rhel
;;
freebsd*)
freebsd
;;
tumbleweed)
tumbleweed
;;
ubuntu*)
debian
echo "##[group]Install Ubuntu specific"
sudo apt-get install -yq linux-tools-common libtirpc-dev \
linux-modules-extra-$(uname -r)
if [ "$1" != "ubuntu20" ]; then
sudo apt-get install -yq dh-sequence-dkms
fi
echo "##[endgroup]"
echo "##[group]Delete Ubuntu OpenZFS modules"
for i in $(find /lib/modules -name zfs -type d); do sudo rm -rvf $i; done
echo "##[endgroup]"
;;
esac

# Start services
echo "##[group]Enable services"
case "$1" in
freebsd*)
# add virtio things
echo 'virtio_load="YES"' | sudo -E tee -a /boot/loader.conf
for i in balloon blk console random scsi; do
echo "virtio_${i}_load=\"YES\"" | sudo -E tee -a /boot/loader.conf
done
echo "fdescfs /dev/fd fdescfs rw 0 0" | sudo -E tee -a /etc/fstab
sudo -E mount /dev/fd
sudo -E touch /etc/zfs/exports
sudo -E sysrc mountd_flags="/etc/zfs/exports"
echo '[global]' | sudo -E tee /usr/local/etc/smb4.conf >/dev/null
sudo -E service nfsd enable
sudo -E service qemu-guest-agent enable
sudo -E service samba_server enable
;;
debian*|ubuntu*)
sudo -E systemctl enable nfs-kernel-server
sudo -E systemctl enable qemu-guest-agent
sudo -E systemctl enable smbd
;;
*)
# All other linux distros
sudo -E systemctl enable nfs-server
sudo -E systemctl enable qemu-guest-agent
sudo -E systemctl enable smb
;;
esac
echo "##[endgroup]"

# Setup Kernel cmdline
CMDLINE="console=tty0 console=ttyS0,115200n8"
CMDLINE="$CMDLINE selinux=0"
CMDLINE="$CMDLINE random.trust_cpu=on"
CMDLINE="$CMDLINE no_timer_check"
case "$1" in
almalinux*|centos*|fedora*)
GRUB_CFG="/boot/grub2/grub.cfg"
GRUB_MKCONFIG="grub2-mkconfig"
CMDLINE="$CMDLINE biosdevname=0 net.ifnames=0"
echo 'GRUB_SERIAL_COMMAND="serial --speed=115200"' \
| sudo tee -a /etc/default/grub >/dev/null
;;
ubuntu24)
GRUB_CFG="/boot/grub/grub.cfg"
GRUB_MKCONFIG="grub-mkconfig"
echo 'GRUB_DISABLE_OS_PROBER="false"' \
| sudo tee -a /etc/default/grub >/dev/null
;;
*)
GRUB_CFG="/boot/grub/grub.cfg"
GRUB_MKCONFIG="grub-mkconfig"
;;
esac

case "$1" in
archlinux|freebsd*)
true
;;
*)
echo "##[group]Edit kernel cmdline"
sudo sed -i -e '/^GRUB_CMDLINE_LINUX/d' /etc/default/grub || true
echo "GRUB_CMDLINE_LINUX=\"$CMDLINE\"" \
| sudo tee -a /etc/default/grub >/dev/null
sudo $GRUB_MKCONFIG -o $GRUB_CFG
echo "##[endgroup]"
;;
esac

# reset cloud-init configuration and poweroff
sudo cloud-init clean --logs
sleep 2 && sudo poweroff &
exit 0
153
.github/workflows/scripts/qemu-4-build.sh
vendored
Executable file
@@ -0,0 +1,153 @@
#!/usr/bin/env bash

######################################################################
# 4) configure and build openzfs modules
######################################################################

set -eu

function run() {
LOG="/var/tmp/build-stderr.txt"
echo "****************************************************"
echo "$(date) ($*)"
echo "****************************************************"
($@ || echo $? > /tmp/rv) 3>&1 1>&2 2>&3 | stdbuf -eL -oL tee -a $LOG
if [ -f /tmp/rv ]; then
RV=$(cat /tmp/rv)
echo "****************************************************"
echo "exit with value=$RV ($*)"
echo "****************************************************"
echo 1 > /var/tmp/build-exitcode.txt
exit $RV
fi
}

function freebsd() {
export MAKE="gmake"
echo "##[group]Autogen.sh"
run ./autogen.sh
echo "##[endgroup]"

echo "##[group]Configure"
run ./configure \
--prefix=/usr/local \
--with-libintl-prefix=/usr/local \
--enable-pyzfs \
--enable-debug \
--enable-debuginfo
echo "##[endgroup]"

echo "##[group]Build"
run gmake -j$(sysctl -n hw.ncpu)
echo "##[endgroup]"

echo "##[group]Install"
run sudo gmake install
echo "##[endgroup]"
}

function linux() {
echo "##[group]Autogen.sh"
run ./autogen.sh
echo "##[endgroup]"

echo "##[group]Configure"
run ./configure \
--prefix=/usr \
--enable-pyzfs \
--enable-debug \
--enable-debuginfo
echo "##[endgroup]"

echo "##[group]Build"
run make -j$(nproc)
echo "##[endgroup]"

echo "##[group]Install"
run sudo make install
echo "##[endgroup]"
}

function rpm_build_and_install() {
EXTRA_CONFIG="${1:-}"
echo "##[group]Autogen.sh"
run ./autogen.sh
echo "##[endgroup]"

echo "##[group]Configure"
run ./configure --enable-debug --enable-debuginfo $EXTRA_CONFIG
echo "##[endgroup]"

echo "##[group]Build"
run make pkg-kmod pkg-utils
echo "##[endgroup]"

echo "##[group]Install"
run sudo dnf -y --skip-broken localinstall $(ls *.rpm | grep -v src.rpm)
echo "##[endgroup]"

}

function deb_build_and_install() {
echo "##[group]Autogen.sh"
run ./autogen.sh
echo "##[endgroup]"

echo "##[group]Configure"
run ./configure \
--prefix=/usr \
--enable-pyzfs \
--enable-debug \
--enable-debuginfo
echo "##[endgroup]"

echo "##[group]Build"
run make native-deb-kmod native-deb-utils
echo "##[endgroup]"

echo "##[group]Install"
# Do kmod install. Note that when you build the native debs, the
# packages themselves are placed in parent directory '../' rather than
# in the source directory like the rpms are.
run sudo apt-get -y install $(find ../ | grep -E '\.deb$' \
| grep -Ev 'dkms|dracut')
echo "##[endgroup]"
}

# Debug: show kernel cmdline
if [ -f /proc/cmdline ] ; then
cat /proc/cmdline || true
fi

# save some sysinfo
uname -a > /var/tmp/uname.txt

cd $HOME/zfs
export PATH="$PATH:/sbin:/usr/sbin:/usr/local/sbin"

# build
case "$1" in
freebsd*)
freebsd
;;
alma*|centos*)
rpm_build_and_install "--with-spec=redhat"
;;
fedora*)
rpm_build_and_install
;;
debian*|ubuntu*)
deb_build_and_install
;;
*)
linux
;;
esac

# building the zfs module was ok
echo 0 > /var/tmp/build-exitcode.txt

# reset cloud-init configuration and poweroff
sudo cloud-init clean --logs
sync && sleep 2 && sudo poweroff &
exit 0
121
.github/workflows/scripts/qemu-5-setup.sh
vendored
Executable file
@@ -0,0 +1,121 @@
#!/usr/bin/env bash

######################################################################
# 5) start test machines and load openzfs module
######################################################################

set -eu

# read our defined variables
source /var/tmp/env.txt

# wait for poweroff to succeed
PID=$(pidof /usr/bin/qemu-system-x86_64)
tail --pid=$PID -f /dev/null
sudo virsh undefine openzfs

# definitions of per operating system
case "$OS" in
freebsd*)
VMs=2
CPU=3
RAM=6
;;
*)
VMs=2
CPU=3
RAM=7
;;
esac

# this can be different for each distro
echo "VMs=$VMs" >> $ENV

# create snapshot we can clone later
sudo zfs snapshot zpool/openzfs@now

# setup the testing vm's
PUBKEY=$(cat ~/.ssh/id_ed25519.pub)
for i in $(seq 1 $VMs); do

echo "Creating disk for vm$i..."
DISK="/dev/zvol/zpool/vm$i"
FORMAT="raw"
sudo zfs clone zpool/openzfs@now zpool/vm$i
sudo zfs create -ps -b 64k -V 80g zpool/vm$i-2

cat <<EOF > /tmp/user-data
#cloud-config

fqdn: vm$i

users:
- name: root
  shell: $BASH
- name: zfs
  sudo: ALL=(ALL) NOPASSWD:ALL
  shell: $BASH
  ssh_authorized_keys:
    - $PUBKEY

growpart:
  mode: auto
  devices: ['/']
  ignore_growroot_disabled: false
EOF

sudo virsh net-update default add ip-dhcp-host \
"<host mac='52:54:00:83:79:0$i' ip='192.168.122.1$i'/>" --live --config

sudo virt-install \
--os-variant $OSv \
--name "vm$i" \
--cpu host-passthrough \
--virt-type=kvm --hvm \
--vcpus=$CPU,sockets=1 \
--memory $((1024*RAM)) \
--memballoon model=virtio \
--graphics none \
--cloud-init user-data=/tmp/user-data \
--network bridge=virbr0,model=$NIC,mac="52:54:00:83:79:0$i" \
--disk $DISK,bus=virtio,cache=none,format=$FORMAT,driver.discard=unmap \
--disk $DISK-2,bus=virtio,cache=none,format=$FORMAT,driver.discard=unmap \
--import --noautoconsole >/dev/null
done

# check the memory state from time to time
cat <<EOF > cronjob.sh
# $OS
exec 1>>/var/tmp/stats.txt
exec 2>&1
echo "*******************************************************"
date
uptime
free -m
df -h /mnt/tests
zfs list
EOF
sudo chmod +x cronjob.sh
sudo mv -f cronjob.sh /root/cronjob.sh
echo '*/5 * * * * /root/cronjob.sh' > crontab.txt
sudo crontab crontab.txt
rm crontab.txt

# check if the machines are okay
echo "Waiting for vm's to come up... (${VMs}x CPU=$CPU RAM=$RAM)"
for i in $(seq 1 $VMs); do
while true; do
ssh 2>/dev/null zfs@192.168.122.1$i "uname -a" && break
done
done
echo "All $VMs VMs are up now."

# Save the VM's serial output (ttyS0) to /var/tmp/console.txt
# - ttyS0 on the VM corresponds to a local /dev/pty/N entry
# - use 'virsh ttyconsole' to lookup the /dev/pty/N entry
for i in $(seq 1 $VMs); do
mkdir -p $RESPATH/vm$i
read "pty" <<< $(sudo virsh ttyconsole vm$i)
sudo nohup bash -c "cat $pty > $RESPATH/vm$i/console.txt" &
done
echo "Console logging for ${VMs}x $OS started."
102
.github/workflows/scripts/qemu-6-tests.sh
vendored
Executable file
@@ -0,0 +1,102 @@
#!/usr/bin/env bash

######################################################################
# 6) load openzfs module and run the tests
#
# called on runner: qemu-6-tests.sh
# called on qemu-vm: qemu-6-tests.sh $OS $2/$3
######################################################################

set -eu

function prefix() {
ID="$1"
LINE="$2"
CURRENT=$(date +%s)
TSSTART=$(cat /tmp/tsstart)
DIFF=$((CURRENT-TSSTART))
H=$((DIFF/3600))
DIFF=$((DIFF-(H*3600)))
M=$((DIFF/60))
S=$((DIFF-(M*60)))

CTR=$(cat /tmp/ctr)
echo $LINE| grep -q "^Test[: ]" && CTR=$((CTR+1)) && echo $CTR > /tmp/ctr

BASE="$HOME/work/zfs/zfs"
COLOR="$BASE/scripts/zfs-tests-color.sh"
CLINE=$(echo $LINE| grep "^Test[ :]" | sed -e 's|/usr/local|/usr|g' \
| sed -e 's| /usr/share/zfs/zfs-tests/tests/| |g' | $COLOR)
if [ -z "$CLINE" ]; then
printf "vm${ID}: %s\n" "$LINE"
else
# [vm2: 00:15:54 256] Test: functional/checksum/setup (run as root) [00:00] [PASS]
printf "[vm${ID}: %02d:%02d:%02d %4d] %s\n" \
"$H" "$M" "$S" "$CTR" "$CLINE"
fi
}

# called directly on the runner
if [ -z ${1:-} ]; then
cd "/var/tmp"
source env.txt
SSH=$(which ssh)
TESTS='$HOME/zfs/.github/workflows/scripts/qemu-6-tests.sh'
echo 0 > /tmp/ctr
date "+%s" > /tmp/tsstart

for i in $(seq 1 $VMs); do
IP="192.168.122.1$i"
daemonize -c /var/tmp -p vm${i}.pid -o vm${i}log.txt -- \
$SSH zfs@$IP $TESTS $OS $i $VMs
# handly line by line and add info prefix
stdbuf -oL tail -fq vm${i}log.txt \
| while read -r line; do prefix "$i" "$line"; done &
echo $! > vm${i}log.pid
# don't mix up the initial --- Configuration --- part
sleep 0.13
done

# wait for all vm's to finish
for i in $(seq 1 $VMs); do
tail --pid=$(cat vm${i}.pid) -f /dev/null
pid=$(cat vm${i}log.pid)
rm -f vm${i}log.pid
kill $pid
done

exit 0
fi

# this part runs inside qemu vm
export PATH="$PATH:/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/sbin:/usr/local/bin"
case "$1" in
freebsd*)
sudo kldstat -n zfs 2>/dev/null && sudo kldunload zfs
sudo -E ./zfs/scripts/zfs.sh
TDIR="/usr/local/share/zfs"
;;
*)
# use xfs @ /var/tmp for all distros
sudo mv -f /var/tmp/*.txt /tmp
sudo mkfs.xfs -fq /dev/vdb
sudo mount -o noatime /dev/vdb /var/tmp
sudo chmod 1777 /var/tmp
sudo mv -f /tmp/*.txt /var/tmp
sudo -E modprobe zfs
TDIR="/usr/share/zfs"
;;
esac

# run functional testings and save exitcode
cd /var/tmp
TAGS=$2/$3
sudo dmesg -c > dmesg-prerun.txt
mount > mount.txt
df -h > df-prerun.txt
$TDIR/zfs-tests.sh -vK -s 3GB -T $TAGS
RV=$?
df -h > df-postrun.txt
echo $RV > tests-exitcode.txt
sync
exit 0
119
.github/workflows/scripts/qemu-7-prepare.sh
vendored
Executable file
@@ -0,0 +1,119 @@
#!/usr/bin/env bash

######################################################################
# 7) prepare output of the results
# - this script pre-creates all needed logfiles for later summary
######################################################################

set -eu

# read our defined variables
cd /var/tmp
source env.txt

mkdir -p $RESPATH

# check if building the module has failed
if [ -z ${VMs:-} ]; then
cd $RESPATH
echo ":exclamation: ZFS module didn't build successfully :exclamation:" \
| tee summary.txt | tee /tmp/summary.txt
tar cf /tmp/qemu-$OS.tar -C $RESPATH -h . || true
exit 0
fi

# build was okay
BASE="$HOME/work/zfs/zfs"
MERGE="$BASE/.github/workflows/scripts/merge_summary.awk"

# catch result files of testings (vm's should be there)
for i in $(seq 1 $VMs); do
rsync -arL zfs@192.168.122.1$i:$RESPATH/current $RESPATH/vm$i || true
scp zfs@192.168.122.1$i:"/var/tmp/*.txt" $RESPATH/vm$i || true
done
cp -f /var/tmp/*.txt $RESPATH || true
cd $RESPATH

# prepare result files for summary
for i in $(seq 1 $VMs); do
file="vm$i/build-stderr.txt"
test -s $file && mv -f $file build-stderr.txt

file="vm$i/build-exitcode.txt"
test -s $file && mv -f $file build-exitcode.txt

file="vm$i/uname.txt"
test -s $file && mv -f $file uname.txt

file="vm$i/tests-exitcode.txt"
test -s $file || echo 1 > $file
rv=$(cat vm$i/tests-exitcode.txt)
test $rv != 0 && touch /tmp/have_failed_tests

file="vm$i/current/log"
if [ -s $file ]; then
cat $file >> log
awk '/\[FAIL\]|\[KILLED\]/{ show=1; print; next; }; \
/\[SKIP\]|\[PASS\]/{ show=0; } show' \
$file > /tmp/vm${i}dbg.txt
fi

file="vm${i}log.txt"
fileC="/tmp/vm${i}log.txt"
if [ -s $file ]; then
cat $file >> summary
cat $file | $BASE/scripts/zfs-tests-color.sh > $fileC
fi
done

# create summary of tests
if [ -s summary ]; then
$MERGE summary | grep -v '^/' > summary.txt
$MERGE summary | $BASE/scripts/zfs-tests-color.sh > /tmp/summary.txt
rm -f summary
else
touch summary.txt /tmp/summary.txt
fi

# create file for debugging
if [ -s log ]; then
awk '/\[FAIL\]|\[KILLED\]/{ show=1; print; next; }; \
/\[SKIP\]|\[PASS\]/{ show=0; } show' \
log > summary-failure-logs.txt
rm -f log
else
touch summary-failure-logs.txt
fi

# create debug overview for failed tests
cat summary.txt \
| awk '/\(expected PASS\)/{ if ($1!="SKIP") print $2; next; } show' \
| while read t; do
echo "check: $t"
cat summary-failure-logs.txt \
| awk '$0~/Test[: ]/{ show=0; } $0~v{ show=1; } show' v="$t" \
> /tmp/fail.txt
SIZE=$(stat --printf="%s" /tmp/fail.txt)
SIZE=$((SIZE/1024))
# Test Summary:
echo "##[group]$t ($SIZE KiB)" >> /tmp/failed.txt
cat /tmp/fail.txt | $BASE/scripts/zfs-tests-color.sh >> /tmp/failed.txt
echo "##[endgroup]" >> /tmp/failed.txt
# Job Summary:
echo -e "\n<details>\n<summary>$t ($SIZE KiB)</summary><pre>" >> failed.txt
cat /tmp/fail.txt >> failed.txt
echo "</pre></details>" >> failed.txt
done

if [ -e /tmp/have_failed_tests ]; then
echo ":warning: Some tests failed!" >> failed.txt
else
echo ":thumbsup: All tests passed." >> failed.txt
fi

if [ ! -s uname.txt ]; then
echo ":interrobang: Panic - where is my uname.txt?" > uname.txt
fi

# artifact ready now
tar cf /tmp/qemu-$OS.tar -C $RESPATH -h . || true
73
.github/workflows/scripts/qemu-8-summary.sh
vendored
Executable file
@@ -0,0 +1,73 @@
#!/usr/bin/env bash

######################################################################
# 8) show colored output of results
######################################################################

set -eu

# read our defined variables
source /var/tmp/env.txt
cd $RESPATH

# helper function for showing some content with headline
function showfile() {
content=$(dd if=$1 bs=1024 count=400k 2>/dev/null)
if [ -z "$2" ]; then
group1=""
group2=""
else
SIZE=$(stat --printf="%s" "$file")
SIZE=$((SIZE/1024))
group1="##[group]$2 ($SIZE KiB)"
group2="##[endgroup]"
fi
cat <<EOF > tmp$$
$group1
$content
$group2
EOF
cat tmp$$
rm -f tmp$$
}

# overview
cat /tmp/summary.txt
echo ""

if [ -e /tmp/have_failed_tests ]; then
RV=1
echo "Debuginfo of failed tests:"
cat /tmp/failed.txt
echo ""
cat /tmp/summary.txt | grep -v '^/'
echo ""
else
RV=0
fi

echo -e "\nFull logs for download:\n $1\n"

for i in $(seq 1 $VMs); do
rv=$(cat vm$i/tests-exitcode.txt)

if [ $rv = 0 ]; then
vm="[92mvm$i[0m"
else
vm="[1;91mvm$i[0m"
fi

file="vm$i/dmesg-prerun.txt"
test -s "$file" && showfile "$file" "$vm: dmesg kernel"

file="/tmp/vm${i}log.txt"
test -s "$file" && showfile "$file" "$vm: test results"

file="vm$i/console.txt"
test -s "$file" && showfile "$file" "$vm: serial console"

file="/tmp/vm${i}dbg.txt"
test -s "$file" && showfile "$file" "$vm: failure logfile"
done

exit $RV
54
.github/workflows/scripts/qemu-9-summary-page.sh
vendored
Executable file
@@ -0,0 +1,54 @@
#!/usr/bin/env bash

######################################################################
# 9) generate github summary page of all the testings
######################################################################

set -eu

function output() {
echo -e $* >> "out-$logfile.md"
}

function outfile() {
cat "$1" >> "out-$logfile.md"
}

function outfile_plain() {
output "<pre>"
cat "$1" >> "out-$logfile.md"
output "</pre>"
}

function send2github() {
test -f "$1" || exit 0
dd if="$1" bs=1023k count=1 >> $GITHUB_STEP_SUMMARY
}

# https://docs.github.com/en/enterprise-server@3.6/actions/using-workflows/workflow-commands-for-github-actions#step-isolation-and-limits
# Job summaries are isolated between steps and each step is restricted to a maximum size of 1MiB.
# [ ] can not show all error findings here
# [x] split files into smaller ones and create additional steps

# first call, generate all summaries
if [ ! -f out-1.md ]; then
logfile="1"
for tarfile in Logs-functional-*/qemu-*.tar; do
if [ ! -s "$tarfile" ]; then
output "\n## Functional Tests: unknown\n"
output ":exclamation: Tarfile $tarfile is empty :exclamation:"
continue
fi
rm -rf vm* *.txt
tar xf "$tarfile"
source env.txt
output "\n## Functional Tests: $OSNAME\n"
outfile_plain uname.txt
outfile_plain summary.txt
outfile failed.txt
logfile=$((logfile+1))
done
send2github out-1.md
else
send2github out-$1.md
fi
141
.github/workflows/zfs-qemu.yml
vendored
Normal file
@@ -0,0 +1,141 @@
name: zfs-qemu

on:
  push:
  pull_request:

jobs:

  qemu-vm:
    name: qemu-x86
    strategy:
      fail-fast: false
      matrix:
        # all:
        # os: [almalinux8, almalinux9, archlinux, centos-stream9, fedora39, fedora40, debian11, debian12, freebsd13, freebsd13r, freebsd14, freebsd14r, freebsd15, ubuntu20, ubuntu22, ubuntu24]
        # openzfs:
        os: [almalinux8, almalinux9, centos-stream9, debian11, debian12, fedora39, fedora40, freebsd13, freebsd13r, freebsd14, freebsd14r]
        # freebsd:
        # os: [freebsd13, freebsd14]
        # small test:
        # os: [archlinux]
    runs-on: ubuntu-24.04
    steps:
    - uses: actions/checkout@v4
      with:
        ref: ${{ github.event.pull_request.head.sha }}

    - name: Setup QEMU
      timeout-minutes: 10
      run: .github/workflows/scripts/qemu-1-setup.sh

    - name: Start build machine
      timeout-minutes: 10
      run: .github/workflows/scripts/qemu-2-start.sh ${{ matrix.os }}

    - name: Install dependencies
      timeout-minutes: 20
      run: |
        echo "Install dependencies in QEMU machine"
        IP=192.168.122.10
        while pidof /usr/bin/qemu-system-x86_64 >/dev/null; do
          ssh 2>/dev/null zfs@$IP "uname -a" && break
        done
        scp .github/workflows/scripts/qemu-3-deps.sh zfs@$IP:qemu-3-deps.sh
        PID=`pidof /usr/bin/qemu-system-x86_64`
        ssh zfs@$IP '$HOME/qemu-3-deps.sh' ${{ matrix.os }}
        # wait for poweroff to succeed
        tail --pid=$PID -f /dev/null
        sleep 5 # avoid this: "error: Domain is already active"
        rm -f $HOME/.ssh/known_hosts

    - name: Build modules
      timeout-minutes: 30
      run: |
        echo "Build modules in QEMU machine"
        sudo virsh start openzfs
        IP=192.168.122.10
        while pidof /usr/bin/qemu-system-x86_64 >/dev/null; do
          ssh 2>/dev/null zfs@$IP "uname -a" && break
        done
        rsync -ar $HOME/work/zfs/zfs zfs@$IP:./
        ssh zfs@$IP '$HOME/zfs/.github/workflows/scripts/qemu-4-build.sh' ${{ matrix.os }}

    - name: Setup testing machines
      timeout-minutes: 5
      run: .github/workflows/scripts/qemu-5-setup.sh

    - name: Run tests
      timeout-minutes: 270
      run: .github/workflows/scripts/qemu-6-tests.sh

    - name: Prepare artifacts
      if: always()
      timeout-minutes: 10
      run: .github/workflows/scripts/qemu-7-prepare.sh

    - uses: actions/upload-artifact@v4
      id: artifact-upload
      if: always()
      with:
        name: Logs-functional-${{ matrix.os }}
        path: /tmp/qemu-${{ matrix.os }}.tar
        if-no-files-found: ignore

    - name: Test Summary
      if: always()
      run: .github/workflows/scripts/qemu-8-summary.sh '${{ steps.artifact-upload.outputs.artifact-url }}'

  cleanup:
    if: always()
    name: Cleanup
    runs-on: ubuntu-latest
    needs: [ qemu-vm ]

    steps:
    - uses: actions/checkout@v4
      with:
        ref: ${{ github.event.pull_request.head.sha }}
    - uses: actions/download-artifact@v4
    - name: Generating summary
      run: .github/workflows/scripts/qemu-9-summary-page.sh
    - name: Generating summary...
      run: .github/workflows/scripts/qemu-9-summary-page.sh 2
    - name: Generating summary...
      run: .github/workflows/scripts/qemu-9-summary-page.sh 3
    - name: Generating summary...
      run: .github/workflows/scripts/qemu-9-summary-page.sh 4
    - name: Generating summary...
      run: .github/workflows/scripts/qemu-9-summary-page.sh 5
    - name: Generating summary...
      run: .github/workflows/scripts/qemu-9-summary-page.sh 6
    - name: Generating summary...
      run: .github/workflows/scripts/qemu-9-summary-page.sh 7
    - name: Generating summary...
      run: .github/workflows/scripts/qemu-9-summary-page.sh 8
    - name: Generating summary...
      run: .github/workflows/scripts/qemu-9-summary-page.sh 9
    - name: Generating summary...
      run: .github/workflows/scripts/qemu-9-summary-page.sh 10
    - name: Generating summary...
      run: .github/workflows/scripts/qemu-9-summary-page.sh 11
    - name: Generating summary...
      run: .github/workflows/scripts/qemu-9-summary-page.sh 12
    - name: Generating summary...
      run: .github/workflows/scripts/qemu-9-summary-page.sh 13
    - name: Generating summary...
      run: .github/workflows/scripts/qemu-9-summary-page.sh 14
    - name: Generating summary...
      run: .github/workflows/scripts/qemu-9-summary-page.sh 15
    - name: Generating summary...
      run: .github/workflows/scripts/qemu-9-summary-page.sh 16
    - name: Generating summary...
      run: .github/workflows/scripts/qemu-9-summary-page.sh 17
    - name: Generating summary...
      run: .github/workflows/scripts/qemu-9-summary-page.sh 18
    - name: Generating summary...
      run: .github/workflows/scripts/qemu-9-summary-page.sh 19
    - uses: actions/upload-artifact@v4
      with:
        name: Summary Files
        path: out-*
@@ -1,5 +1,6 @@
#!/bin/sh
#!/usr/bin/env bash
# shellcheck disable=SC2154
# shellcheck disable=SC2292
#
# CDDL HEADER START
#
@@ -208,6 +209,49 @@ find_runfile() {
fi
}

# Given a TAGS with a format like "1/3" or "2/3" then divide up the test list
# into portions and print that portion. So "1/3" for "the first third of the
# test tags".
#
#
split_tags() {
# Get numerator and denominator
NUM=$(echo "$TAGS" | cut -d/ -f1)
DEN=$(echo "$TAGS" | cut -d/ -f2)
# At the point this is called, RUNFILES will contain a comma separated
# list of full paths to the runfiles, like:
#
# "/home/hutter/qemu/tests/runfiles/common.run,/home/hutter/qemu/tests/runfiles/linux.run"
#
# So to get tags for our selected tests we do:
#
# 1. Remove unneeded chars: [],\
# 2. Print out the last field of each tag line. This will be the tag
#    for the test (like 'zpool_add').
# 3. Remove duplicates between the runfiles. If the same tag is defined
#    in multiple runfiles, then when you do '-T <tag>' ZTS is smart
#    enough to know to run the tag in each runfile. So '-T zpool_add'
#    will run the zpool_add from common.run and linux.run.
# 4. Ignore the 'functional' tag since we only want individual tests
# 5. Print out the tests in our faction of all tests. This uses modulus
#    so "1/3" will run tests 1,3,6,9 etc. That way the tests are
#    interleaved so, say, "3/4" isn't running all the zpool_* tests that
#    appear alphabetically at the end.
# 6. Remove trailing comma from list
#
# TAGS will then look like:
#
# "append,atime,bootfs,cachefile,checksum,cp_files,deadman,dos_attributes, ..."

# Change the comma to a space for easy processing
_RUNFILES=${RUNFILES//","/" "}
# shellcheck disable=SC2002,SC2086
cat $_RUNFILES | tr -d "[],\'" | awk '/tags = /{print $NF}' | sort | \
uniq | grep -v functional | \
awk -v num="$NUM" -v den="$DEN" '{ if(NR % den == (num - 1)) {printf "%s,",$0}}' | \
sed -E 's/,$//'
}

#
# Symlink file if it appears under any of the given paths.
#
@@ -331,10 +375,14 @@ OPTIONS:
-t PATH|NAME Run single test at PATH relative to test suite,
or search for test by NAME
-T TAGS Comma separated list of tags (default: 'functional')
Alternately, specify a fraction like "1/3" or "2/3" to
run the first third of tests or 2nd third of the tests. This
is useful for splitting up the test amongst different
runners.
-u USER Run single test as USER (default: root)

EXAMPLES:
# Run the default ($(echo "${DEFAULT_RUNFILES}" | sed 's/\.run//')) suite of tests and output the configuration used.
# Run the default ${DEFAULT_RUNFILES//\.run/} suite of tests and output the configuration used.
$0 -v

# Run a smaller suite of tests designed to run more quickly.
@@ -347,7 +395,7 @@ $0 -t tests/functional/cli_root/zfs_bookmark/zfs_bookmark_cliargs.ksh
$0 -t zfs_bookmark_cliargs

# Cleanup a previous run of the test suite prior to testing, run the
# default ($(echo "${DEFAULT_RUNFILES}" | sed 's/\.run//')) suite of tests and perform no cleanup on exit.
# default ${DEFAULT_RUNFILES//\.run//} suite of tests and perform no cleanup on exit.
$0 -x

EOF
@@ -489,6 +537,8 @@ fi
#
TAGS=${TAGS:='functional'}



#
# Attempt to locate the runfiles describing the test workload.
#
@@ -509,6 +559,23 @@ done
unset IFS
RUNFILES=${R#,}

# The tag can be a fraction to indicate which portion of ZTS to run, Like
#
# "1/3": Run first one third of all tests in runfiles
# "2/3": Run second one third of all test in runfiles
# "6/10": Run 6th tenth of all tests in runfiles
#
# This is useful for splitting up the test across multiple runners.
#
# After this code block, TAGS will be transformed from something like
# "1/3" to a comma separate taglist, like:
#
# "append,atime,bootfs,cachefile,checksum,cp_files,deadman,dos_attributes, ..."
#
if echo "$TAGS" | grep -Eq '^[0-9]+/[0-9]+$' ; then
TAGS=$(split_tags)
fi

#
# This script should not be run as root. Instead the test user, which may
# be a normal user account, needs to be configured such that it can
@@ -295,6 +295,18 @@ User: %s
            except Exception:
                pass

        """
        Log each test we run to /dev/ttyu0 (on FreeBSD), so if there's a kernel
        warning we'll be able to match it up to a particular test.
        """
        if options.kmsg is True and exists("/dev/ttyu0"):
            try:
                kp = Popen([SUDO, "sh", "-c",
                            f"echo ZTS run {self.pathname} > /dev/ttyu0"])
                kp.wait()
            except Exception:
                pass

        self.result.starttime = monotonic_time()

        if kmemleak: