Linux Dev. Environment#
This section describes the manual steps for setting up a physical machine for xNVMe test and development. This includes using devices attached to the machine, as well as using the physical machine to set up virtual machines with advanced NVMe-device emulation.
Advanced NVMe device emulation covers command-set enhancements such as Flexible Data-Placement (FDP), Zoned Namespaces (ZNS), and Key-Value SSDs (KV).
The documentation is provided as a means to set up a development environment with minimal fuss and maximum foss :)
- xNVMe
  - Built from source
  - Testing package
- CIJOE
  - Scripts and utilities for development, testing, and maintenance
- Custom Qemu
  - Built from source
  - Latest NVMe emulation features
- Custom Linux kernel
  - Built from source
  - Installed via .deb
Physical Machine#
It is assumed that you have a physical machine with NVMe storage available, and that you can utilize it destructively. That is, any data on the NVMe devices can and will be lost when they are used for testing.
Qemu is utilized extensively during testing to verify NVMe functionality such as NVM, ZNS, KV, FDP, etc.; that is, NVMe features not readily available in hardware are emulated via qemu. Thus, a physical machine is attractive to reduce build and runtime.
It is of course possible to do nested virtualization, however, it is often impractical as I/O gets orders of magnitude slower, which is less than ideal during builds.
Most machines will do for common development and testing; throughout, a machine with the following traits is used:
- An x86 64-bit CPU
- At least 8GB of memory
- A SATA SSD, 100GB will suffice, used for the
  - Operating system
  - User data ($HOME), repositories, workdirs, etc.
- One or more NVMe SSDs
  - Utilized for testing
This physical machine will be referred to as box01.
Tip
Since the devices utilized for testing are primarily NVMe devices, it is
convenient to have the operating system and user-data installed on a
separate non-NVMe device, quite simply since SATA pops up as /dev/sd*
and NVMe as /dev/nvme* / /dev/ng*. This avoids accidentally wiping
the OS device, since the devices are entirely separate.
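For instance, before installing, a quick generic check to confirm which device is which:

# List block devices; SATA shows up as sd*, NVMe as nvme*
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT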
Operating System#
Here, the latest stable Debian Linux, currently bookworm, is used throughout.
- Install it
  - Un-select desktop environment
  - Select OpenSSH
  - Add user odus with password as you see fit
  - Partition using the entire device
  - Set up networking
And make sure it is accessible from your “main” development machine:
ssh-copy-id odus@box01
Log into the machine:
ssh odus@box01
And then install a couple of things:
# Switch to root
su -
apt-get -qy update && apt-get -qy install \
git \
htop \
screen \
pipx \
sudo \
vim
# Make sure that cli-tools installed with pipx are available
pipx ensurepath
# Add odus to sudoers (required to do various things as non-root)
usermod -aG sudo odus
# Add odus to kvm (required to run qemu as non-root)
usermod -aG kvm odus
# switch back out of root
exit
# log out of odus
exit
Tip
Log out and back in again, to refresh credentials
Additionally, in order to prepare the system for user-space NVMe drivers, vfio/iommu should be enabled along with a couple of user-limit tweaks.
Have a look at the System Configuration section for the details on this.
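For orientation, the gist of it is a kernel command-line change plus a memlock-limit tweak. The snippet below is a minimal sketch, assuming an Intel CPU and a Debian/GRUB setup; treat the System Configuration section as the authoritative reference:

# Sketch: enable IOMMU on the kernel command line (use amd_iommu=on on AMD)
sudo sed -i 's/^GRUB_CMDLINE_LINUX="/&intel_iommu=on iommu=pt /' /etc/default/grub
sudo update-grub
# Sketch: raise the memlock limit for user-space NVMe drivers
echo 'odus soft memlock unlimited' | sudo tee -a /etc/security/limits.conf
echo 'odus hard memlock unlimited' | sudo tee -a /etc/security/limits.conf
# Reboot for the changes to take effect
sudo reboot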
Homedir#
Regardless of whether you are using the box directly as root, or using the
odus user, set up the $HOME directory like so:
mkdir $HOME/{artifacts,git,workdirs,guests,images}
The directories are used for the following:
- git
A place to store source repositories; usually these are git repositories for projects like xnvme, fio, spdk, linux, and qemu.
- workdirs
A place for auxiliary files when executing cijoe workflows, or doing misc. experiments and exploration.
- artifacts
A place to store intermediate artifacts during development, such as adhoc Linux kernel .deb packages, source-archives, etc.
- guests
A place where boot-images, pid-files, cloud-seeds and other files related to qemu guests live.
- images
A place to store VM “boot-images”, such as cloud-init enabled images.
Screen + http.server#
Regardless of whether your devbox is physical, virtual, local, remote, or some combination thereof, having access to misc. files, and specifically to things like cijoe output and reports, is very convenient.
With minimal fuss, this is achievable with a combination of screen and
Python:
cd ~/workdirs
screen -d -m python3 -m http.server
The above starts a webserver, serving the content of the cwd where
python3 is executed, over tcp/http on port 8000.
The screen -d -m part creates a screen-session and detaches from it. Thus,
it continues executing even if you disconnect.
You can see the running screen-sessions with:
screen -list
And attach to them using their <name>:
screen -r <name>
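For instance, assuming you want a recognizable session name and a non-default port:

# Start a named session serving the cwd on port 8080
screen -S webserver -d -m python3 -m http.server 8080
# Re-attach to it by name
screen -r webserver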
xNVMe#
Clone the xNVMe repository and check out the next branch:
cd ~/git
git clone https://github.com/OpenMPDK/xNVMe.git xnvme
cd xnvme
git checkout next
Install prerequisites:
sudo ./toolbox/pkgs/debian-bookworm.sh
Build and install xNVMe:
cd ~/git/xnvme
make
sudo make install
Check that it is functional:
sudo xnvme enum
This should yield output similar to:
xnvme_cli_enumeration:
- {uri: '/dev/nvme0n1', dtype: 0x2, nsid: 0x1, csi: 0x0, subnqn: ''}
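You can also inspect a specific device; for example, assuming the device URI reported by the enumeration above:

sudo xnvme info /dev/nvme0n1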
cijoe#
Setup by running the following in the root of the xNVMe repository:
cd ~/git/xnvme
make guest-env
This will:
- Install cijoe along with dependencies for Linux, qemu, and fio
- Check that QEMU and KVM are available
- Present an interactive menu to select a guest (e.g. debian-trixie, freebsd-14)
The selected guest is stored in cijoe/current.guest and used by all
subsequent make guest-* targets.
Then log out and back in to reload the environment, picking up the addition of pipx and
cijoe to $PATH.
Note
Make sure that pytest is not installed system-wide by running which
pytest. In case it says /usr/bin/pytest, then you are not using the
pytest provided with the CIJOE module. Thus, uninstall it using
apt-get remove python3-pytest.
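A quick check, for illustration:

which pytest
# Good: something like ~/.local/bin/pytest (installed via pipx)
# Bad:  /usr/bin/pytest (system-wide; remove with: sudo apt-get remove python3-pytest)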
Artifacts#
Produce a set of artifacts:
cd ~/git/xnvme
make clobber gen-artifacts
# Keep them handy if need be
cp -r /tmp/artifacts ~/artifacts/xnvme
Warning
The make clobber removes any unstaged changes and removes subprojects.
This is done to ensure an entirely “clean” repository. Thus, make sure that
you have committed your changes.
The make clobber is required for make gen-artifacts, as the latter would
otherwise include side-effects from previous builds.
Note
The artifacts produced by make gen-artifacts are output to
/tmp/artifacts. There are cijoe workflows expecting them to be available
at that location, specifically the provision workflow.
Linux Kernel#
Install prerequisites:
# Essentials for building the kernel
sudo apt-get -qy install \
bc \
bison \
build-essential \
debhelper \
flex \
git \
libelf-dev \
libssl-dev \
pahole \
rsync
# A couple of extra libraries and tools
sudo apt-get -qy install \
libncurses-dev \
linux-cpupower \
python3-cpuinfo
Note
libncurses-dev is needed for make menuconfig. linux-cpupower
provides a cli-tool, cpupower, that lets you control the Linux CPU
governor, useful for performance evaluation.
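For example, to pin the governor while running performance evaluations:

# Set the CPU frequency governor to 'performance'
sudo cpupower frequency-set --governor performance
# Inspect the current governor and frequency limits
cpupower frequency-info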
Then run the cijoe workflow, compiling a custom kernel as a .deb
package:
# Create a workdir for the workflow
mkdir -p ~/workdirs/linux
cd ~/workdirs/linux
# Grab the cijoe-example for linux
cijoe --example linux
# Run it with logging (-l)
cijoe -l
In case the first run fails, adjust the configuration as indicated by the log and re-run the command above. Once it succeeds, you can collect the artifacts of interest:
cp -r cijoe-output/artifacts/linux ~/artifacts/
You can install them by running:
sudo dpkg -i ~/artifacts/linux/*.deb
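After rebooting into the new kernel, a quick sanity check:

# Verify that the running kernel version matches the custom build
uname -r
# And that the packages got installed
dpkg -l 'linux-image-*' | grep ^ii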
Qemu#
Install prerequisites:
# Packages for building qemu
sudo apt-get -qy install \
meson \
libattr1-dev \
libcap-ng-dev \
libglib2.0-dev \
libpixman-1-dev \
libslirp-dev \
pkg-config
# Packages for cloud-init
sudo apt-get -qy install \
cloud-image-utils
Checkout qemu:
cd ~/git
git clone https://github.com/SamsungDS/qemu.git --recursive
cd qemu
git checkout for-xnvme
git submodule update --init --recursive
Create a work-directory:
mkdir -p ~/workdirs/qemu
cd ~/workdirs/qemu
Run the cijoe qemu workflow:
# Grab the config and workflow example for qemu
cijoe --example qemu
# Run it with log-level debug (-l)
cijoe -l
With the packages installed, the workflow should run through. Have a look at the report; it describes what the workflow does, that is: build and install qemu, spin up a VM using a cloud-init-enabled Debian image, and ssh into it.
Tip
In case you get errors such as:
Could not access KVM kernel module: No such file or directory
qemu-system-x86_64: failed to initialize kvm: No such file or directory
Then this is usually a symptom of virtualization being
disabled in the BIOS of the physical machine. Have a look
at dmesg; it might provide messages supporting this.
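A few generic checks that can help confirm the diagnosis:

# Is the kvm module loaded, and does /dev/kvm exist?
lsmod | grep kvm
ls -l /dev/kvm
# The kernel log may contain hints, e.g. 'kvm: disabled by bios'
sudo dmesg | grep -i kvm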
Setup qemu-guest / virtual machine for testing#
Now that you have qemu built and installed, you can use it to emulate NVMe
devices in the guest for testing. The xNVMe Makefile has a bunch of
helper-targets to do this, that is: spinning up the guest, synchronizing your
xNVMe git repository changes into the qemu-guest, building, installing, and
running tests.
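For reference, the NVMe emulation underneath boils down to qemu device options along the lines of the sketch below. Paths, ids, and sizes are made up for illustration; the make targets and cijoe workflows generate the actual invocation:

# Sketch: a qemu guest with a conventional and a zoned NVMe namespace
qemu-system-x86_64 \
  -machine q35,accel=kvm -cpu host -smp 4 -m 4G \
  -drive id=boot,file=boot.qcow2,format=qcow2,if=virtio \
  -drive id=nvm0,file=nvm0.img,format=raw,if=none \
  -drive id=zns0,file=zns0.img,format=raw,if=none \
  -device nvme,id=nvme0,serial=deadbeef \
  -device nvme-ns,drive=nvm0,bus=nvme0,nsid=1 \
  -device nvme-ns,drive=zns0,bus=nvme0,nsid=2,zoned=true,zoned.zone_size=32M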
The make guest-env target will present an interactive menu to select the
guest OS (e.g. debian-trixie, freebsd-14). The same workflow applies
regardless of which guest you choose:
# One-time setup: install CIJOE and select a guest
make guest-env
# Build source artifacts and run the full test cycle
make gen-artifacts ALLOW_DIRTY=1 && make verify-guest
# Or run stages individually:
make guest-start # start the guest
make guest-provision # sync source, build, install
make guest-test # run the test suite
To switch to a different guest, run make guest-select.
Generate documentation#
You can reuse the qemu-guest to generate the documentation. See the
Reproduce GitHub Actions locally section for using
make docgen-guest inside the Docker container.
Reproduce GitHub Actions locally#
The CI test suite can be reproduced locally using Docker. The
make docker target provides a privileged container with a custom QEMU
build that includes NVMe device emulation.
Build the source artifacts on your host:
make gen-artifacts ALLOW_DIRTY=1
Drop into the Docker container:
make docker
The container bind-mounts the repository at /tmp/xnvme and
/tmp/artifacts, so the artifacts generated in the first step are available inside.
Inside the container, set up CIJOE, select a guest, and run tests:
# Install CIJOE and select guest
echo debian-trixie | make guest-env
# Test
make verify-guest
# Or generate documentation
make docgen-guest
In case you are setting up the test-target using other tools, or just want to run pytest directly, the following two sections describe how to do that.
Running pytest from the repository#
Invoke pytest providing a configuration file and an output directory for artifacts and captured output:
pytest \
--config configs/debian-trixie.toml \
--output /tmp/somewhere \
tests
The --config is needed to inform pytest about the environment you are
running in, such as which devices it can use for testing. The information is
utilized by pytest to, among other things, do parametrization of xNVMe backend
configurations etc.
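For example, pytest's standard -k filter can be combined with this to run a subset of the suite (the filter expression here is hypothetical):

pytest \
    --config configs/debian-trixie.toml \
    --output /tmp/somewhere \
    tests -k "enum"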
Provision a qemu-guest#
Set up a virtual machine with xNVMe installed, and a bunch of NVMe devices configured:
cijoe -c configs/debian-trixie.toml provision.yaml
Tip
It will likely fail with the error:
/bin/sh: 1: /opt/qemu/bin/qemu-system-x86_64: not found
This is because the default configuration is for running on GitHub. Thus,
adjust the file configs/debian-trixie.toml such that the qemu binary path
points to your installation under $HOME.
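For illustration, assuming your qemu build landed under $HOME/opt/qemu, the adjustment could be done with a one-liner like the following; the path is an assumption, so match it to where your qemu-system-x86_64 actually lives:

# Point the config at a qemu binary under $HOME (path is an example)
sed -i "s|/opt/qemu|$HOME/opt/qemu|g" configs/debian-trixie.toml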
Create boot-images#
The debian-trixie-amd64.qcow2 is created by:
cijoe -c configs/debian-trixie.toml workflows/bootimg-debian-trixie-amd64.yaml
The freebsd-13.1-ksrc-amd64.qcow2 is created by:
cijoe -c configs/freebsd-13.toml workflows/bootimg-freebsd-13-amd64.yaml
Remote dev#
Assume that your primary device for development is something like a Chromebook or Macbook: something light-weight and great for reading mail… but now you want to fire up your editor and do some development.
Or, your primary system is simply separate from the dev-box for a myriad of
reasons. Then, have a look at the existing cijoe configuration files in
cijoe/configs/*.toml and copy one that matches your intended system, e.g.
use the configuration file for the qemu-guest as a starting point for a
configuration file for another physical machine:
cp configs/debian-trixie.toml ~/.config/cijoe/cijoe-config.toml
Open up ~/.config/cijoe/cijoe-config.toml and adjust it to your physical
machine. That is, change the ssh-login information, the list of devices,
paths to binaries, etc. Once you have done that, go ahead and run:
# Sync source, build, and install xNVMe on the remote end
make guest-provision
# Run the test suite
make guest-test
You can run any of the make guest-* targets in the same manner as the above.