Getting Started

Previously, a quick start guide was available. However, it usually left one asking questions and thus did not, as the name suggests, get you started quickly. A fuller story on getting started with xNVMe is therefore provided here. Needless to say, this section has subsections that you can skip and revisit only in case you find xNVMe, or the system you are running on, to be misbehaving.

If you have read through and still have questions, then please raise an issue, start an asynchronous discussion, or go to Discord for synchronous interaction.

Getting started with xNVMe takes you through Building xNVMe, with a companion section on Toolchain prerequisites, followed by a section describing runtime requirements in Backends and System Config, and ends with an example of Building an xNVMe Program.

Building xNVMe

xNVMe builds and runs on Linux, FreeBSD, and Windows. The latter is not publicly supported; however, if you are interested in Windows support, then have a look at the Windows section for more information.

Retrieve the xNVMe repository from GitHub:

# Clone the xNVMe repos
git clone https://github.com/OpenMPDK/xNVMe.git --recursive xnvme
cd xnvme

# make: auto-configure xNVMe, build third-party libraries, and xNVMe itself
make

# make install-deb: install via the apt package manager
sudo make install-deb

# make install: install in the "usual" manner
sudo make install

Note

The repository uses git-submodules, so make sure you clone with --recursive. If you overlooked that, then invoke: git submodule update --init --recursive

If you want to change the build-configuration, then have a look at Custom Configuration. If you are seeing build errors, then jump to the Toolchain section, which describes the packages to install on different Linux distributions and on FreeBSD.

There you will also find notes on customizing the toolchain and cross-compilation.

Toolchain

The toolchain (compiler, archiver, and linker) used for building xNVMe must support C11 and pthreads, and the following libraries and tools must be available on the system:

  • CMake (>= 3.9, for xNVMe)

  • glibc (>= 2.28, for io_uring/liburing)

  • libaio-dev (>= 0.3, for xNVMe and SPDK)

  • libnuma-dev (>= 2, for SPDK)

  • libssl-dev (>= 1.1, for SPDK)

  • make (gmake)

  • meson (>= 0.48, for SPDK)

  • ninja (for SPDK)

  • uuid-dev (>= 2.3, for SPDK)

The preferred toolchain is gcc, and the following sections describe how to install it and the required libraries on FreeBSD and a set of popular Linux distributions.

If you wish to use a different toolchain, then see Non-GCC Toolchain for how to instrument the build-system to use a compiler other than gcc.

Arch Linux 20200306

Install the following packages via pacman:

base-devel
cmake
cunit
gcc
git
libaio
libutil-linux
make
meson
ncurses
ninja
numactl
python3

For example, from the root of the xNVMe source repository, do:

# Install packages via pacman
pacman -Syyu --noconfirm
pacman -S --noconfirm $(cat "scripts/pkgs/archlinux:20200306.txt")

CentOS 7

Install the following packages via yum:

CUnit-devel
git
glibc-static
libaio-devel
libuuid-devel
make
nasm
ncurses-devel
numactl-devel
openssl-devel
patch
python3
python3-pip
wget

For example, from the root of the xNVMe source repository, do:

# Install Developer Tools
yum install -y centos-release-scl
yum-config-manager --enable rhel-server-rhscl-7-rpms
yum install -y devtoolset-8

# Install packages via yum
yum install -y $(cat "scripts/pkgs/centos:centos7.txt")

# Install CMake using installer from GitHub
wget https://github.com/Kitware/CMake/releases/download/v3.16.5/cmake-3.16.5-Linux-x86_64.sh -O cmake.sh
chmod +x cmake.sh
./cmake.sh --skip-license --prefix=/usr/

# Install packages via PyPI
pip3 install meson ninja

# Source them in for usage before building
source /opt/rh/devtoolset-8/enable

gcc --version
g++ --version

# Add to bash-profile if it makes sense to you
#echo "#!/bin/bash\nsource scl_source enable devtoolset-8" >> /etc/profile.d/devtoolset.sh
#echo "#!/bin/bash\nsource scl_source enable devtoolset-8" >> ~/.bashrc
#echo "#!/bin/bash\nsource scl_source enable devtoolset-8" >> ~/.bash_profile

Debian 11 (Bullseye)

Install the following packages via apt-get:

astyle
build-essential
cmake
git
libaio-dev
libcunit1-dev
libncurses5-dev
libnuma-dev
libssl-dev
nasm
python3
python3-pip
uuid-dev

For example, from the root of the xNVMe source repository, do:

# Install packages via aptitude -- seems to handle dependencies better
aptitude -q -y -f install $(cat "scripts/pkgs/debian:bullseye.txt")

# Install packages via PyPI
pip3 install meson ninja

Debian 10 (Buster)

Install the following packages via apt-get:

build-essential
cmake
git
libaio-dev
libcunit1-dev
libncurses5-dev
libnuma-dev
libssl-dev
nasm
python3
python3-pip
uuid-dev

For example, from the root of the xNVMe source repository, do:

apt-get -qy autoclean

# Install packages via apt-get
apt-get install -qy $(cat "scripts/pkgs/debian:buster.txt")

apt-get -t buster-backports install -qy meson ninja-build

Debian 9 (Stretch)

Install the following packages via apt-get:

build-essential
git
libaio-dev
libcunit1-dev
libncurses5-dev
libnuma-dev
libssl-dev
nasm
python3
python3-pip
uuid-dev
wget

For example, from the root of the xNVMe source repository, do:

# Install packages via apt-get
apt-get install -qy $(cat "scripts/pkgs/debian:stretch.txt")

# Install CMake using installer from GitHub
wget https://github.com/Kitware/CMake/releases/download/v3.16.5/cmake-3.16.5-Linux-x86_64.sh -O cmake.sh
chmod +x cmake.sh
./cmake.sh --skip-license --prefix=/usr/

# Install packages via PyPI
pip3 install meson ninja

FreeBSD 12

Ensure that you have kernel source in /usr/src, then install the following packages via pkg:

autoconf
automake
bash
cmake
cunit
e2fsprogs-libuuid
git
gmake
meson
nasm
ncurses
ninja
openssl
python
python3
python3-pip

For example, from the root of the xNVMe source repository, do:

# Install packages via pkg
pkg install -qy $(cat "scripts/pkgs/freebsd-12.txt")

# Install packages via PyPI
pip3 install meson ninja

Ubuntu 20.04 (Focal)

Install the following packages via apt-get:

build-essential
cmake
g++
gcc
git
libaio-dev
libcunit1-dev
libncurses5-dev
libnuma-dev
libssl-dev
nasm
python3
python3-pip
uuid-dev

For example, from the root of the xNVMe source repository, do:

# Install packages via apt-get
apt-get install -qy $(cat "scripts/pkgs/ubuntu:focal.txt")

# Install packages via PyPI
pip3 install meson ninja

Ubuntu 18.04 (Bionic)

Install the following packages via apt-get:

build-essential
cmake
g++
gcc
git
libaio-dev
libcunit1-dev
libncurses5-dev
libnuma-dev
libssl-dev
nasm
python3
python3-pip
uuid-dev

For example, from the root of the xNVMe source repository, do:

# Install packages via apt-get
apt-get install -qy $(cat "scripts/pkgs/ubuntu:bionic.txt")

# Install packages via PyPI
pip3 install meson ninja

Ubuntu 16.04 (Xenial)

Install the following packages via apt-get:

build-essential
g++
gcc
git
libaio-dev
libcunit1-dev
libncurses5-dev
libnuma-dev
libssl-dev
nasm
python3
python3-pip
uuid-dev
wget

For example, from the root of the xNVMe source repository, do:

# Install packages via apt-get
apt-get install -qy $(cat "scripts/pkgs/ubuntu:xenial.txt")

# Install CMake using installer from GitHub
wget https://github.com/Kitware/CMake/releases/download/v3.16.5/cmake-3.16.5-Linux-x86_64.sh -O cmake.sh
chmod +x cmake.sh
./cmake.sh --skip-license --prefix=/usr/

# Install packages via PyPI
pip3 install meson ninja

Alpine Linux 3.11.3

Install the following packages via apk:

bash
bsd-compat-headers
build-base
cmake
coreutils
gawk
git
libexecinfo-dev
linux-headers
meson
ninja
make
musl-dev
ncurses
numactl-dev
python3
python3-pip
util-linux-dev

For example, from the root of the xNVMe source repository, do:

# Install packages via apk
apk add $(cat "scripts/pkgs/alpine:3.12.0.txt")

# Install packages via PyPI
pip3 install meson ninja

Backends and System Config

xNVMe relies on certain operating system kernel features and infrastructure that must be available and correctly configured. This subsection goes through what xNVMe uses on Linux and how to check whether it is available.

Backends

The purpose of the xNVMe backends is to provide a runtime supporting the xNVMe API in a single library with batteries included.

That is, it comes with the essential third-party libraries bundled into the xNVMe library. Thus, you get a single C API to program against and a single library to link with. And similarly for the command-line tools: a single binary communicating with devices via the I/O stacks available on the system.

To inspect the libraries which xNVMe is built against and the supported/enabled backends, invoke:

xnvme library-info

It should produce output similar to:

# xNVMe Library Information
ver: {major: 0, minor: 0, patch: 22}
xnvme_3p:
  - 'fio;git-describe:fio-3.25'
  - 'libnvme;git-rev:master/a458217'
  - 'liburing;git-describe:liburing-0.7'
  - 'spdk;git-describe:v20.10;+patches'
  - 'linux;LINUX_VERSION_CODE-UAPI/329995-5.9.11'
xnvme_be_attr_list:
  count: 3
  capacity: 3
  items:
  - {name: 'spdk', enabled: 1, schemes: [pci, pcie, fab]}
  - {name: 'linux', enabled: 1, schemes: [file]}
  - {name: 'fbsd', enabled: 0, schemes: [file, fbsd]}

The xnvme_3p part of the output informs you about the third-party projects which xNVMe was built against and, in the case of libraries, the versions it has bundled.

Although a single API and a single library are provided by xNVMe, runtime and system configuration dependencies remain. The following subsections describe how to instrument xNVMe to utilize the different kernel interfaces and user space drivers.

Kernel

Linux Kernel version 5.9 or newer is currently preferred as it has all the features which xNVMe utilizes. This section also gives you a brief overview of the different I/O paths and APIs which the xNVMe API unifies access to.

NVMe Driver and IOCTLs

The default for xNVMe is to communicate with devices via the operating system NVMe driver IOCTLs; specifically, on Linux the following are used:

  • NVME_IOCTL_ID

  • NVME_IOCTL_IO_CMD

  • NVME_IOCTL_ADMIN_CMD

  • NVME_IOCTL_IO64_CMD

  • NVME_IOCTL_ADMIN64_CMD

In case the *64_CMD IOCTLs are not available, xNVMe falls back to using the non-64bit equivalents. The 64-bit vs. 32-bit completion result mostly affects commands such as Zone Append. You can check that this interface is behaving as expected by running:

xnvme info /dev/nvme0n1

Which should yield output equivalent to:

xnvme_dev:
  xnvme_ident:
    trgt: '/dev/nvme0n1'
    schm: 'file'
    opts: ''
    uri: 'file:/dev/nvme0n1'
  xnvme_be:
    async: {id: 'thr', enabled: 1}
    sync: {id: 'nvme_ioctl', enabled: 1}
    attr: {name: 'linux', enabled: 1}

This tells you that xNVMe can communicate with the given device identifier, that it utilizes nvme_ioctl for synchronous command execution, and that it uses thr for asynchronous command execution. Since IOCTLs are inherently synchronous, xNVMe mimics asynchronous behavior over IOCTLs to support the asynchronous primitives provided by the xNVMe API.
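
If you want to request this emulation explicitly, then, assuming the thr engine can be selected via the same ?async=<name> device-identifier option used for libaio and io_uring in the following subsections, a sketch would look like:

# Explicitly request the 'thr' async emulation (option name assumed, see above)
xnvme info /dev/nvme0n1?async=thr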

Block Layer

In case your device is not an NVMe device, the NVMe IOCTLs won’t be available. xNVMe will then try to utilize the Linux Block Layer and treat a given block device as an NVMe device via a shim-layer for NVMe admin commands such as identify and get-features.

A brief example of checking this:

# Create a NULL Block instance
modprobe null_blk nr_devices=1
# Open and query the NULL Block instance with xNVMe
xnvme info /dev/nullb0
# Remove the NULL Block instance
modprobe -r null_blk

Yielding:

xnvme_dev:
  xnvme_ident:
    trgt: '/dev/nullb0'
    schm: 'file'
    opts: ''
    uri: 'file:/dev/nullb0'
  xnvme_be:
    async: {id: 'thr', enabled: 1}
    sync: {id: 'block_ioctl', enabled: 1}
    attr: {name: 'linux', enabled: 1}
  xnvme_cmd_opts:
    mask: '00000000000000000000000000000001'
    iomd: 'SYNC'
    payload_data: 'DRV'
    payload_meta: 'DRV'
    csi: 0x0
    nsid: 0x1
    ssw: 9
  xnvme_geo:
    type: XNVME_GEO_CONVENTIONAL
    npugrp: 1
    npunit: 1
    nzone: 1
    nsect: 524288000
    nbytes: 512
    nbytes_oob: 0
    tbytes: 268435456000
    mdts_nbytes: 65024
    lba_nbytes: 512
    lba_extended: 0

Block Zoned IOCTLs

Building on the Linux Block model, the Zoned Block Device model is also utilized, specifically the following IOCTLs:

  • BLK_ZONE_REP_CAPACITY

  • BLKCLOSEZONE

  • BLKFINISHZONE

  • BLKOPENZONE

  • BLKRESETZONE

  • BLKGETNRZONES

  • BLKREPORTZONE

When available, xNVMe can make use of the above IOCTLs. This is mostly useful when developing/testing using Linux Null Block devices. Similarly, for a Zoned NULL Block instance:

# Create a Zoned NULL Block instance
modprobe null_blk nr_devices=1 zoned=1
# Open and query the Zoned NULL Block instance with xNVMe
xnvme info /dev/nullb0
# Remove the Zoned NULL Block instance
modprobe -r null_blk

Yielding:

xnvme_dev:
  xnvme_ident:
    trgt: '/dev/nullb0'
    schm: 'file'
    opts: ''
    uri: 'file:/dev/nullb0'
  xnvme_be:
    async: {id: 'thr', enabled: 1}
    sync: {id: 'block_ioctl', enabled: 1}
    attr: {name: 'linux', enabled: 1}
  xnvme_cmd_opts:
    mask: '00000000000000000000000000000001'
    iomd: 'SYNC'
    payload_data: 'DRV'
    payload_meta: 'DRV'
    csi: 0x2
    nsid: 0x1
    ssw: 9
  xnvme_geo:
    type: XNVME_GEO_ZONED
    npugrp: 1
    npunit: 1
    nzone: 1000
    nsect: 524288
    nbytes: 512
    nbytes_oob: 0
    tbytes: 268435456000
    mdts_nbytes: 65024
    lba_nbytes: 512
    lba_extended: 0

Async I/O via libaio

When AIO is available, the NVMe NVM commands for read and write are sent over the Linux AIO interface. Doing so improves command throughput at higher queue depths when compared to sending the commands over the driver IOCTLs.

One can explicitly tell xNVMe to utilize libaio for async I/O by encoding it in the device identifier, like so:

# Use libaio for asynchronous read and write
xnvme info /dev/nvme0n1?async=libaio

Yielding the output:

xnvme_dev:
  xnvme_ident:
    trgt: '/dev/nvme0n1'
    schm: 'file'
    opts: '?async=libaio'
    uri: 'file:/dev/nvme0n1?async=libaio'
  xnvme_be:
    async: {id: 'libaio', enabled: 1}
    sync: {id: 'nvme_ioctl', enabled: 1}
    attr: {name: 'linux', enabled: 1}

Async I/O via io_uring

xNVMe utilizes the Linux io_uring interface and its support for feature-probing to determine the availability of the following io_uring opcodes:

  • IORING_OP_READ

  • IORING_OP_WRITE

When available, xNVMe can send the NVMe NVM commands for read and write via the Linux io_uring interface. Doing so improves command throughput at all I/O depths when compared to sending the commands via NVMe driver IOCTLs or libaio. xNVMe also leverages the io_uring interface to enable I/O polling and kernel-side submission polling.

One can explicitly tell xNVMe to utilize io_uring for async I/O by encoding it in the device identifier, like so:

# Use io_uring for asynchronous read and write
xnvme info /dev/nvme0n1?async=io_uring

Yielding the output:

xnvme_dev:
  xnvme_ident:
    trgt: '/dev/nvme0n1'
    schm: 'file'
    opts: '?async=io_uring'
    uri: 'file:/dev/nvme0n1?async=io_uring'
  xnvme_be:
    async: {id: 'io_uring', enabled: 1}
    sync: {id: 'nvme_ioctl', enabled: 1}
    attr: {name: 'linux', enabled: 1}

User Space

Linux provides the Userspace I/O (uio) and Virtual Function I/O (vfio) frameworks for writing user space I/O drivers. Both interfaces work by binding a given device to an in-kernel stub-driver. The stub-driver in turn exposes device-memory and device-interrupts to user space, thus enabling the implementation of device drivers entirely in user space.

Although Linux provides a capable NVMe driver with flexible IOCTLs, a user space NVMe driver serves those who seek the lowest possible per-command processing overhead or want full control over NVMe command construction, including command-payloads.

Fortunately, you do not need to go and write a user space NVMe driver, since a highly efficient, mature, and well-maintained driver already exists. Namely, the NVMe driver provided by the Storage Performance Development Kit (SPDK).

Another great fortune is that xNVMe bundles the SPDK NVMe Driver with the xNVMe library. So, if you have built and installed xNVMe then the SPDK NVMe Driver is readily available to xNVMe.

The following subsections go through a configuration checklist, then show how to bind and unbind drivers, and lastly how to utilize non-devfs device identifiers by enumerating the system and inspecting a device.

Config

What remains is checking your system configuration, enabling IOMMU for use by the vfio-pci driver, and possibly falling back to the uio_pci_generic driver in case vfio-pci does not work out. vfio is preferred, as hardware support for IOMMU allows for isolation between devices.

  1. Verify that your CPU supports virtualization / VT-d and that it is enabled in your board BIOS.

  2. If you have an Intel CPU, then provide the kernel option intel_iommu=on. If you have a non-Intel CPU, then consult documentation on enabling VT-d / IOMMU for your CPU.

  3. Increase limits: open /etc/security/limits.conf and add the lines below; a way to verify the new limit is shown just after them:

*    soft memlock unlimited
*    hard memlock unlimited
root soft memlock unlimited
root hard memlock unlimited
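
After re-logging in (or rebooting), you can verify that the increased memlock limit is in effect for your user; with the unlimited settings above, the bash builtin below should report unlimited:

# Check the max locked-memory limit for the current user
ulimit -l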

Once you have gone through these steps and rebooted, this command:

dmesg | grep "DMAR: IOMMU"

Should output:

[    0.119519] DMAR: IOMMU enabled

And this command:

find /sys/kernel/iommu_groups/ -type l

Should have output similar to:

/sys/kernel/iommu_groups/7/devices/0000:01:00.0
/sys/kernel/iommu_groups/5/devices/0000:00:05.0
/sys/kernel/iommu_groups/3/devices/0000:00:03.0
/sys/kernel/iommu_groups/1/devices/0000:00:01.0
/sys/kernel/iommu_groups/8/devices/0000:03:00.0
/sys/kernel/iommu_groups/8/devices/0000:02:00.0
/sys/kernel/iommu_groups/6/devices/0000:00:1f.2
/sys/kernel/iommu_groups/6/devices/0000:00:1f.0
/sys/kernel/iommu_groups/6/devices/0000:00:1f.3
/sys/kernel/iommu_groups/4/devices/0000:00:04.0
/sys/kernel/iommu_groups/2/devices/0000:00:02.0
/sys/kernel/iommu_groups/0/devices/0000:00:00.0

Unbinding and binding

With the system configured, you can use the xnvme-driver script to bind and unbind devices. The xnvme-driver script is a merge of the SPDK setup.sh script and its dependencies.

By running the command below, 8GB of hugepages will be configured, the kernel NVMe driver unbound, and vfio-pci bound to the device:

HUGEMEM=8192 xnvme-driver

The command above should produce output similar to:

0000:03:00.0 (1b36 0010): nvme -> vfio-pci
0000:00:02.0 (1af4 1001): Active mountpoints on /dev/vda, so not binding
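
You can verify that the hugepages were actually reserved by inspecting /proc/meminfo; the exact counts depend on your hugepage size and the HUGEMEM value used above:

# Inspect hugepage accounting
grep Huge /proc/meminfo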

To unbind from vfio-pci and rebind to the kernel NVMe driver, run:

xnvme-driver reset

This should produce output similar to:

0000:03:00.0 (1b36 0010): vfio-pci -> nvme
0000:00:02.0 (1af4 1001): Already using the virtio-pci driver

Device Identifiers

Since the kernel NVMe driver is unbound from the device, the kernel no longer knows that the PCIe device is an NVMe device. Thus, it no longer lives in the Linux devfs, that is, it is no longer available in /dev as e.g. /dev/nvme0n1.

Instead of a filepath in devfs, you use PCI ids and xNVMe options.

As always, use the xnvme cli tool to enumerate devices:

xnvme enum
# xnvme_enumerate()
xnvme_enumeration:
  capacity: 98
  nentries: 2
  entries:
  - {trgt: '0000:03:00.0', schm: 'pci', opts: '?nsid=1', uri: 'pci:0000:03:00.0?nsid=1'}
  - {trgt: '0000:03:00.0', schm: 'pci', opts: '?nsid=2', uri: 'pci:0000:03:00.0?nsid=2'}

Notice that multiple URIs use the same PCI id but with different xNVMe ?opts=<val>. This is provided as a means to tell xNVMe that you want to use the NVMe controller at 0000:03:00.0 and, for example, the namespace identified by nsid=1.

Similarly, when using the API, you would use these URIs instead of filepaths:

...
struct xnvme_dev *dev = xnvme_dev_open("pci:0000:01:00.0?nsid=1");
...
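
The command-line tools accept the same identifiers; for example, inspecting the namespace from the enumeration above (quoting the URI so the shell does not interpret the ?):

# Inspect a device via its PCI-based identifier
xnvme info 'pci:0000:03:00.0?nsid=1'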

Building an xNVMe Program

At this point you should have xNVMe built and installed on your system, have the system correctly configured, and be familiar with how to instrument xNVMe to utilize different backends and backend options.

With all that in place, go ahead and compile your own xNVMe program.

Example code

This “hello-world” example prints out device information of the NVMe device at /dev/nvme0n1.

To use xNVMe, include the libxnvme.h header in your C/C++ source:

#include <stdio.h>
#include <libxnvme.h>
#include <libxnvme_pp.h>

int main(int argc, char **argv)
{
	struct xnvme_dev *dev;

	dev = xnvme_dev_open("/dev/nvme0n1");
	if (!dev) {
		perror("xnvme_dev_open");
		return 1;
	}

	xnvme_dev_pr(dev, XNVME_PR_DEF);
	xnvme_dev_close(dev);

	return 0;
}
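
Before running the example, it must be compiled and linked against xNVMe. The exact link line depends on how xNVMe was built and installed; the sketch below assumes that pkg-config metadata named xnvme was installed, and the manual fallback flags are likewise assumptions that vary with the enabled backends:

# Compile, assuming pkg-config metadata named 'xnvme' is installed
gcc hello.c -o hello $(pkg-config --cflags --libs xnvme)

# Without pkg-config metadata, a manual link line along these lines may be
# needed (the library list varies with the enabled backends):
# gcc hello.c -o hello -lxnvme -laio -luuid -lnuma -pthread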

Run!

chmod +x hello
./hello
xnvme_dev:
  xnvme_ident:
    trgt: '/dev/nvme0n1'
    schm: 'file'
    opts: ''
    uri: 'file:/dev/nvme0n1'
  xnvme_be:
    async: {id: 'thr', enabled: 1}
    sync: {id: 'nvme_ioctl', enabled: 1}
    attr: {name: 'linux', enabled: 1}
  xnvme_cmd_opts:
    mask: '00000000000000000000000000000001'
    iomd: 'SYNC'
    payload_data: 'DRV'
    payload_meta: 'DRV'
    csi: 0x0
    nsid: 0x1
    ssw: 9
  xnvme_geo:
    type: XNVME_GEO_CONVENTIONAL
    npugrp: 1
    npunit: 1
    nzone: 1
    nsect: 16777216
    nbytes: 512
    nbytes_oob: 0
    tbytes: 8589934592
    mdts_nbytes: 65024
    lba_nbytes: 512
    lba_extended: 0

This concludes the getting started guide for xNVMe; go ahead and explore the Tools, C API, and C API: Examples.

Should xNVMe or your system still be misbehaving, then take a look at the Troubleshooting section, or reach out by raising an issue, starting an asynchronous discussion, or going to Discord for synchronous interaction.

Troubleshooting

User space

In case you are having issues running with the SPDK backend, then make sure you follow the Config section; if issues persist, a solution might be found in the following subsections.

No devices found

When running xnvme enum, the output-listing is empty; there are no devices. When running with vfio-pci, this can occur when your devices share an iommu-group with other devices which are still bound to in-kernel drivers. These could be NICs, GPUs, or other kinds of peripherals.

The division of devices into groups is not something that can easily be changed, but you can try to manually unbind the other devices in the iommu-group from their kernel drivers.

If that is not an option, then you can try to re-organize the physical connectivity of the devices, e.g. move devices to other slots.

Lastly, you can try using uio_pci_generic instead. This can most easily be done by disabling the IOMMU, that is, by adding the kernel option iommu=off to the kernel command-line and rebooting.
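
To see which driver a given device is currently bound to, and which other devices share its iommu-group, commands like the following can help; the PCI address 0000:03:00.0 is an example, substitute the address of your own device:

# Show the kernel driver currently bound to the device
lspci -k -s 0000:03:00.0

# List the devices sharing the iommu-group of the device
ls /sys/bus/pci/devices/0000:03:00.0/iommu_group/devices/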

Memory Issues

If you see a message similar to the one below while unbinding devices:

Current user memlock limit: 16 MB

This is the maximum amount of memory you will be
able to use with DPDK and VFIO if run as current user.
To change this, please adjust limits.conf memlock limit for current user.

## WARNING: memlock limit is less than 64MB
## DPDK with VFIO may not be able to initialize if run as current user.

Then you should do as suggested, that is, adjust limits.conf; see the Config section for an example of doing so.

Build Errors

If you are getting errors while attempting to configure and build xNVMe, then it is likely due to one of the following:

git submodules

  • You did not clone with --recursive or are for other reasons missing the submodules. Either clone again with the --recursive flag, or update submodules: git submodule update --init --recursive.

  • You used the “Download Source” link on GitHub; this does not work, as it does not provide the third-party repositories, only the xNVMe source-tree.

In case you cannot use git submodules, a source-archive is provided with each xNVMe release; you can download it from the GitHub release page. It contains the xNVMe source code along with all the third-party dependencies, namely: SPDK, liburing, libnvme, and fio.

missing dependencies / toolchain

  • You are missing dependencies; see the Toolchain section for how to install these on FreeBSD and a handful of different Linux distributions.

The Toolchain section describes the preferred ways of installing libraries and tools. For example, on Ubuntu 18.04 it is preferred to install meson via pip, since the version in the package registry is too old for SPDK; if it is installed via the package manager, then you will experience build errors when the xNVMe build system starts building SPDK.

Once you have the full source of xNVMe and the third-party library dependencies, and have set up the toolchain, then run the following to ensure that the xNVMe repository is clean of any artifacts left behind by previous build failures:

make clobber

And then go back to Building xNVMe and follow the steps there.

Known Build Issues

If the above did not sort out your build-issues, then you might be facing one of the following known build-issues. If these do not apply to you, then please post an issue on GitHub describing your build environment and the output from the failed build.

When building xNVMe on Alpine Linux you might encounter some issues due to the musl standard library not being entirely compatible with GLIBC / BSD.

The SPDK backend does not build on Alpine due to the re-definition of STAILQ_* macros. As a work-around, disable the SPDK backend:

./configure --disable-be-spdk

The Linux backend support for io_uring fails on Alpine Linux due to a missing definition in musl, leading to this error message:

include/liburing.h:195:17: error: unknown type name 'loff_t'; did you mean
'off_t'?

As a work-around, disable io_uring support:

./configure --disable-be-linux-iou

See more details on changing the default build-configuration of xNVMe in the section Custom Configuration.

Customizing the Build

Non-GCC Toolchain

To use a compiler other than gcc:

  1. Set the CC and CXX environment variables for the ./configure script

  2. Pass CC and CXX as arguments to make

For example, compiling xNVMe on a system where the default compiler is not gcc:

CC=gcc CXX=g++ ./configure <YOUR_OPTIONS_HERE>
make CC=gcc CXX=g++
make install CC=gcc CXX=g++
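
As a sketch of the same approach with a genuinely non-GCC toolchain, the clang invocation below is an assumption and, as noted next, may require a bit of fiddling:

# Build with clang instead of gcc (untested sketch)
CC=clang CXX=clang++ ./configure <YOUR_OPTIONS_HERE>
make CC=clang CXX=clang++
make install CC=clang CXX=clang++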

Recent versions of icc, clang, and pgi should be able to satisfy the C11 and pthreads requirements. However, it will most likely require a bit of fiddling.

Note

icc works well once you have bought a license and installed it correctly. There is a free option with Intel System Suite 2019.

Note

The pgi compiler seems to have some issues linking with SPDK/DPDK, due to an unstable ABI for RTE.

The path of least fiddling around is to just install the toolchain and libraries as described in the Toolchain section.

Cross-compiling for ARM on x86

In case you do not have the build-tools available on your ARM target, then you can cross-compile by passing the CC parameter to configure and make, e.g.:

CC=aarch64-linux-gnu-gcc-9 ./configure <config_options>
make CC=aarch64-linux-gnu-gcc-9

Then transfer and unpack xnvme0.tar.gz from the build directory to your ARM machine.

Note

This is currently not supported with the SPDK backend

Custom Configuration

The xNVMe build system configures itself, so you do not have to run the configure script yourself. Manually invoking the xNVMe build-configuration should not be needed; for the common task of enabling debugging, just do:

make clean
make config-debug
make

However, if for some reason you want to manually run the xNVMe build-configuration, then this section describes how to accomplish that.

A configure script is provided to configure the build of xNVMe. You can inspect configuration options by invoking:

./configure --help

  • Boolean options are enabled by prefixing --enable-<OPTION>

  • Boolean options are disabled by prefixing --disable-<OPTION>

  • A couple of examples:

    • Enable debug with: --enable-debug

    • Disable the SPDK backend with: --disable-be-spdk

The configure script will enable the backends relevant to the system platform, as determined by the environment variable OSTYPE.
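
If backends you expect are not being enabled, it can help to check what the build-configuration sees; in a bash shell on Linux, the variable typically reads linux-gnu:

# Inspect the OSTYPE variable used for platform detection
echo "$OSTYPE"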

On Linux, the configure script enables the SPDK backend be:spdk and the Linux native backend be:linux, with all features enabled, that is, support for libaio, io_uring, nil, and thr.

On FreeBSD, the configure script enables the SPDK backend be:spdk and the FreeBSD native backend be:fbsd. For more information about these so-called library backends, see the Backends section.

xNVMe provides third-party libraries via submodules; it builds these and embeds them in the xNVMe static libraries and executables. If you want to link with your own versions of these libraries, then you can override the respective include and library paths. See ./configure --help for details.

Windows

Windows is not publicly supported. However, if you want to roll your own support into xNVMe, then you could follow the pointers below.

C11 support is quite poor with most compilers on Windows except for Intel ICC and the GCC port TDM-GCC.

A backend implementation for Windows could utilize an I/O path sending read/write I/O to the block device and wrap all other NVMe commands around an NVMe driver IOCTL interface, such as the one provided by the Open-Source NVMe Driver for Windows, or the IOCTL interface of the built-in NVMe driver.

A bit of care has been taken in xNVMe, e.g. in buffer-allocation, to support running on Windows. So, you “only” have to worry about implementing the command-transport and the async interface.