Getting Started
Previously, a quick-start guide was available. However, it usually left one asking questions and thus did not, as the name suggests, get you started quickly. Therefore, a fuller story on getting started using xNVMe is provided here. Needless to say, this section has subsections that you can skip and revisit only in case you find xNVMe, or the system you are running on, to be misbehaving.
If you have read through, and still have questions, then please raise an issue, start an asynchronous discussion, or go to Discord for synchronous interaction.
Getting started with xNVMe takes you through Building xNVMe, with a companion section on Toolchain prerequisites, followed by a section describing runtime requirements in Backends and System Config, and ends with an example concerning the Windows Kernel.
Building xNVMe
xNVMe builds and runs on Linux, FreeBSD, and Windows. First, retrieve the xNVMe repository from GitHub:
# Clone the xNVMe repos into the folder 'xnvme'
git clone https://github.com/OpenMPDK/xNVMe.git xnvme
Note
The xNVMe build-system uses meson/ninja and its subproject-feature with wraps; dependencies such as fio, libvfn, and SPDK are fetched by the build-system (meson), not as previously done via git-submodules.
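To see which dependencies a given checkout will fetch, you can list the wrap files. This is a sketch; it assumes meson's conventional subprojects/ layout and a clone named xnvme, as above:

```shell
#!/bin/sh
# Sketch: list the meson wrap files (fetched dependencies) in a checkout;
# assumes the conventional subprojects/ layout and a clone named 'xnvme'
DIR="xnvme/subprojects"
if [ -d "$DIR" ]; then
    ls "$DIR"/*.wrap
else
    echo "no $DIR directory here; clone xNVMe first"
fi
```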
Before you invoke the compilation, ensure that you have the compiler, tools, and libraries needed. The Toolchain section describes what to install, and how, on a rich selection of Linux distributions, FreeBSD, and Windows. For example, on Debian do:
sudo ./xnvme/toolbox/pkgs/debian-bullseye.sh
With the source available and the toolchain up and running, go ahead:
cd xnvme
# configure xNVMe and build dependencies (fio, libvfn, and SPDK/NVMe)
meson setup builddir
cd builddir
# build xNVMe
meson compile
# install xNVMe
meson install
# uninstall xNVMe
# meson --internal uninstall
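With the library installed, a quick sanity-check can confirm that it is discoverable. The sketch below assumes the pkg-config name xnvme and the xnvme CLI tool; both are what meson install conventionally provides, but verify the names against your system:

```shell
#!/bin/sh
# Sanity-check an installed xNVMe; the pkg-config name 'xnvme' and the
# 'xnvme' CLI tool are assumptions to verify against your installation
PKGNAME="xnvme"

if command -v pkg-config >/dev/null 2>&1 && pkg-config --exists "$PKGNAME" 2>/dev/null; then
    # Report the installed library version and its compile/link flags
    pkg-config --modversion "$PKGNAME"
    pkg-config --cflags --libs "$PKGNAME"
else
    echo "$PKGNAME not found via pkg-config (is PKG_CONFIG_PATH set?)"
fi

# The CLI tool is installed alongside the library; 'enum' lists the
# NVMe devices reachable via the default backend
if command -v xnvme >/dev/null 2>&1; then
    xnvme enum
fi
```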
Note
Details on build-errors can be seen by inspecting builddir/meson-logs/meson-log.txt.
Note
In case you ran the meson-commands before installing the toolchain, then you probably need to remove your builddir before re-running the build commands.
In case you want to customize the build, e.g. install into a different location, this is all handled by meson built-in options. In addition to those, you can inspect meson_options.txt, which contains build-options specific to xNVMe. For examples of customizing the build, have a look at the following Custom Configuration.
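As a sketch of such a customization, the following combines the meson built-in --prefix with an xNVMe project option; the prefix location is hypothetical, and any -D option name should be checked against meson_options.txt in your checkout:

```shell
#!/bin/sh
# Sketch: customize the xNVMe build; run from the xnvme source root.
PREFIX="$HOME/opt/xnvme"    # hypothetical install location

# --prefix is a meson built-in; -Dwith-spdk is an xNVMe project option
SETUP_ARGS="--prefix=$PREFIX -Dwith-spdk=false"
echo "meson setup builddir $SETUP_ARGS"

# Only invoke meson when it is available and we are in a source tree
if command -v meson >/dev/null 2>&1 && [ -f meson.build ]; then
    meson setup builddir $SETUP_ARGS
    meson configure builddir   # lists built-in and project options
fi
```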
Otherwise, with xNVMe successfully built and installed, jump to Backends and System Config and Windows Kernel.
Toolchain
The toolchain (compiler, archiver, and linker) used for building xNVMe must support C11 and pthreads, and the following tools must be available on the system:
Python (>=3.7)
meson (>=0.58) and matching version of ninja
make (gmake)
gcc/mingw/clang
Along with libraries:
glibc (>= 2.28, for io_uring/liburing)
libaio-dev (>=0.3, for xNVMe and SPDK)
libnuma-dev (>=2, for SPDK)
libssl-dev (>=1.1, for SPDK)
liburing (>=2.2, for xNVMe)
uuid-dev (>=2.3, for SPDK)
xNVMe makes use of libraries and interfaces when available and will “gracefully degrade” when a given library is not available. For example, if liburing is not available on your system and you do not want to install it, then xNVMe will simply build without io_uring-support.
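You can inspect which interfaces a given build ended up with. The sketch below assumes the xnvme CLI is installed and that its library-info subcommand dumps the build configuration; check xnvme --help on your system:

```shell
#!/bin/sh
# Sketch: inspect which backends/interfaces an xNVMe build has enabled;
# the 'library-info' subcommand is an assumption to verify via --help
if command -v xnvme >/dev/null 2>&1; then
    # Assumed to print the compiled-in configuration, e.g. whether
    # io_uring support was built in
    INFO=$(xnvme library-info)
else
    INFO="xnvme CLI not installed; build and install xNVMe first"
fi
echo "$INFO"
```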
The preferred toolchain is gcc, and the following sections describe how to install it and the required libraries on a set of popular Linux distributions, FreeBSD, MacOS, and Windows. If you wish to use a different toolchain, then see Non-GCC Toolchain on how to instrument the build-system using a compiler other than gcc.
In the following sections, the system package-manager is used whenever possible to install the toolchain and libraries. However, some Linux distributions do not provide recent-enough versions. To circumvent that, the packages are installed via the Python package-manager. In some cases, even a recent-enough version of Python is not available; to bootstrap it, Python is built and installed from source.
Note
When installing packages via the Python package-manager (python3 -m pip install), the packages should be installed system-wide. This ensures that the installed packages behave as though they were installed using the system package-manager.
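For example, installing meson and ninja system-wide could look like the sketch below. Note that newer distributions may refuse system-wide pip installs unless the --break-system-packages flag (PEP 668) is passed, so adapt before running:

```shell
#!/bin/sh
# Sketch: construct a system-wide pip install command (adapt as needed)
PKGS="meson ninja"
CMD="python3 -m pip install --upgrade $PKGS"

# On PEP 668 "externally managed" distros, pip needs an extra flag
if python3 -m pip install --help 2>/dev/null | grep -q break-system-packages; then
    CMD="$CMD --break-system-packages"
fi
echo "Run with sufficient privileges: $CMD"
```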
Alpine Linux (latest)
Install the required toolchain and libraries, with sufficient system privileges (e.g. as root or with sudo), by executing the commands below. You can run this from the root of the xNVMe repository by invoking:
sudo ./xnvme/toolbox/pkgs/alpine-latest.sh
Or, run the commands contained within the script manually:
# Install packages via the system package-manager (apk)
apk add \
bash \
bsd-compat-headers \
build-base \
clang15-extra-tools \
coreutils \
cunit \
findutils \
gawk \
git \
libaio-dev \
libarchive-dev \
liburing-dev \
libuuid \
linux-headers \
make \
meson \
musl-dev \
nasm \
ncurses \
numactl-dev \
openssl-dev \
patch \
perl \
pkgconf \
py3-pip \
python3 \
python3-dev \
util-linux-dev
#
# Clone, build and install libvfn
#
# Assumptions:
#
# - Dependencies for building libvfn are met (system packages etc.)
# - Commands are executed with sufficient privileges (sudo/root)
#
git clone https://github.com/OpenMPDK/libvfn.git toolbox/third-party/libvfn/repository
pushd toolbox/third-party/libvfn/repository
git checkout v3.0.1
meson setup builddir -Dlibnvme="disabled" -Ddocs="disabled" --buildtype=release --prefix=/usr
meson compile -C builddir
meson install -C builddir
popd
# Install packages via the Python package-manager (pip)
python3 -m pip install --upgrade pip
python3 -m pip install \
pipx
Note
A Docker-image is provided via ghcr.io, specifically ghcr.io/xnvme/xnvme-deps-alpine-latest:next. This Docker-image contains all the software described above.
Then go ahead and configure, build and install using meson:
# configure xNVMe with SPDK and libvfn disabled
meson setup builddir -Dwith-spdk=false
# build xNVMe
meson compile -C builddir
# install xNVMe
meson install -C builddir
# uninstall xNVMe
# cd builddir && meson --internal uninstall
Note
There are issues with SPDK/DPDK due to incompatibilities with the standard library provided by musl libc. Additionally, the libexecinfo-dev package, on which libvfn also relies, is no longer available on Alpine Linux. Thus, both SPDK and libvfn are disabled on Alpine. Pull-requests fixing these issues are most welcome; until then, keep libvfn and SPDK disabled on Alpine.
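Disabling both could look like the sketch below. The with-spdk option appears earlier in this guide; with-libvfn is an assumption, so verify both names against meson_options.txt in your checkout:

```shell
#!/bin/sh
# Sketch: configure xNVMe on Alpine with SPDK and libvfn both disabled;
# '-Dwith-libvfn' is an assumed option name -- check meson_options.txt
OPTS="-Dwith-spdk=false -Dwith-libvfn=false"
echo "meson setup builddir $OPTS"

# Only invoke meson when it is available and we are in a source tree
if command -v meson >/dev/null 2>&1 && [ -f meson.build ]; then
    meson setup builddir $OPTS
fi
```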
Arch Linux (latest)
Install the required toolchain and libraries, with sufficient system privileges (e.g. as root or with sudo), by executing the commands below. You can run this from the root of the xNVMe repository by invoking:
sudo ./xnvme/toolbox/pkgs/archlinux-latest.sh
Or, run the commands contained within the script manually:
# Install packages via the system package-manager (pacman)
pacman -Syyu --noconfirm
pacman -S --noconfirm \
base-devel \
bash \
clang \
cunit \
findutils \
git \
libaio \
libarchive \
liburing \
libutil-linux \
make \
meson \
nasm \
ncurses \
numactl \
openssl \
patch \
pkg-config \
python-pip \
python-pipx \
python-pyelftools \
python-setuptools \
python3 \
util-linux-libs
#
# Clone, build and install libvfn
#
# Assumptions:
#
# - Dependencies for building libvfn are met (system packages etc.)
# - Commands are executed with sufficient privileges (sudo/root)
#
git clone https://github.com/OpenMPDK/libvfn.git toolbox/third-party/libvfn/repository
pushd toolbox/third-party/libvfn/repository
git checkout v3.0.1
meson setup builddir -Dlibnvme="disabled" -Ddocs="disabled" --buildtype=release --prefix=/usr
meson compile -C builddir
meson install -C builddir
popd
#
# Clone, build and install libisal
#
# Assumptions:
#
# - Commands are executed with sufficient privileges (sudo/root)
#
git clone https://github.com/intel/isa-l.git toolbox/third-party/libisal/repository
pushd toolbox/third-party/libisal/repository
git checkout v2.30.0
./autogen.sh
./configure --prefix=/usr --libdir=/usr/lib
make
make install
popd
Note
A Docker-image is provided via ghcr.io, specifically ghcr.io/xnvme/xnvme-deps-archlinux-latest:next. This Docker-image contains all the software described above.
Then go ahead and configure, build and install using meson:
# configure xNVMe and build meson subprojects(SPDK)
meson setup builddir --prefix=/usr
# build xNVMe
meson compile -C builddir
# install xNVMe
meson install -C builddir
# uninstall xNVMe
# cd builddir && meson --internal uninstall
Note
The build is configured to install with --prefix=/usr. This is intentional, such that the pkg-config files end up in the default search path on the system. If you do not want this, then remove --prefix=/usr and adjust your $PKG_CONFIG_PATH accordingly.
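As a sketch, if you instead installed into a hypothetical prefix such as $HOME/opt/xnvme, the adjustment could look like the following; note that the library subdirectory may be lib64 rather than lib on some distributions:

```shell
#!/bin/sh
# Sketch: point pkg-config at an xNVMe installed under a custom prefix;
# '$HOME/opt/xnvme' is hypothetical, and 'lib' may be 'lib64' on some distros
PREFIX="$HOME/opt/xnvme"
export PKG_CONFIG_PATH="$PREFIX/lib/pkgconfig:$PKG_CONFIG_PATH"

# With the path set, consumers can resolve compile/link flags
if command -v pkg-config >/dev/null 2>&1; then
    pkg-config --cflags --libs xnvme 2>/dev/null || echo "xnvme.pc not found under $PREFIX"
fi
```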
Oracle Linux 9 (9)
Install the required toolchain and libraries, with sufficient system privileges (e.g. as root or with sudo), by executing the commands below. You can run this from the root of the xNVMe repository by invoking:
sudo ./xnvme/toolbox/pkgs/oraclelinux-9.sh
Or, run the commands contained within the script manually:
# This repo has CUnit-devel + meson
dnf install -y 'dnf-command(config-manager)'
dnf config-manager --set-enabled ol9_codeready_builder
# Install packages via the system package-manager (dnf)
dnf install -y \
CUnit-devel \
autoconf \
bash \
clang-tools-extra \
diffutils \
findutils \
gcc \
gcc-c++ \
git \
libaio-devel \
libarchive-devel \
libtool \
libuuid-devel \
make \
meson \
nasm \
ncurses \
numactl-devel \
openssl-devel \
patch \
pkgconfig \
procps \
python3-devel \
python3-pip \
python3-pyelftools \
unzip \
wget \
zlib-devel
# Install packages via the Python package-manager (pip)
python3 -m pip install --upgrade pip
python3 -m pip install \
pipx
#
# Clone, build and install libisal
#
# Assumptions:
#
# - Commands are executed with sufficient privileges (sudo/root)
#
git clone https://github.com/intel/isa-l.git toolbox/third-party/libisal/repository
pushd toolbox/third-party/libisal/repository
git checkout v2.30.0
./autogen.sh
./configure --prefix=/usr --libdir=/usr/lib64
make
make install
popd
# Clone, build and install liburing v2.2
#
# Assumptions:
#
# - Dependencies for building liburing are met (system packages etc.)
# - Commands are executed with sufficient privileges (sudo/root)
#
git clone https://github.com/axboe/liburing.git toolbox/third-party/liburing/repository
pushd toolbox/third-party/liburing/repository
git checkout liburing-2.2
./configure --libdir=/usr/lib64 --libdevdir=/usr/lib64
make
make install
popd
#
# Clone, build and install libvfn
#
# Assumptions:
#
# - Dependencies for building libvfn are met (system packages etc.)
# - Commands are executed with sufficient privileges (sudo/root)
#
git clone https://github.com/OpenMPDK/libvfn.git toolbox/third-party/libvfn/repository
pushd toolbox/third-party/libvfn/repository
git checkout v3.0.1
meson setup builddir -Dlibnvme="disabled" -Ddocs="disabled" --buildtype=release --prefix=/usr
meson compile -C builddir
meson install -C builddir
popd
Note
A Docker-image is provided via ghcr.io, specifically ghcr.io/xnvme/xnvme-deps-oraclelinux-9:next. This Docker-image contains all the software described above.
Then go ahead and configure, build and install using meson:
# configure xNVMe and build meson subprojects(SPDK)
meson setup builddir --prefix=/usr
# build xNVMe
meson compile -C builddir
# install xNVMe
meson install -C builddir
# uninstall xNVMe
# cd builddir && meson --internal uninstall
Rocky Linux 9.2 (9.2)
Install the required toolchain and libraries, with sufficient system privileges (e.g. as root or with sudo), by executing the commands below. You can run this from the root of the xNVMe repository by invoking:
sudo ./xnvme/toolbox/pkgs/rockylinux-9.2.sh
Or, run the commands contained within the script manually:
# This repo has CUnit-devel + meson
dnf install -y 'dnf-command(config-manager)'
dnf config-manager --set-enabled crb
# Install packages via the system package-manager (dnf)
dnf install -y \
CUnit-devel \
autoconf \
bash \
clang-tools-extra \
diffutils \
findutils \
gcc \
gcc-c++ \
git \
libaio-devel \
libarchive-devel \
libtool \
libuuid-devel \
make \
meson \
nasm \
ncurses \
numactl-devel \
openssl-devel \
patch \
pkgconfig \
procps \
python3-devel \
python3-pip \
python3-pyelftools \
unzip \
wget \
zlib-devel
# Install packages via the Python package-manager (pip)
python3 -m pip install --upgrade pip
python3 -m pip install \
pipx
#
# Clone, build and install libisal
#
# Assumptions:
#
# - Commands are executed with sufficient privileges (sudo/root)
#
git clone https://github.com/intel/isa-l.git toolbox/third-party/libisal/repository
pushd toolbox/third-party/libisal/repository
git checkout v2.30.0
./autogen.sh
./configure --prefix=/usr --libdir=/usr/lib64
make
make install
popd
# Clone, build and install liburing v2.2
#
# Assumptions:
#
# - Dependencies for building liburing are met (system packages etc.)
# - Commands are executed with sufficient privileges (sudo/root)
#
git clone https://github.com/axboe/liburing.git toolbox/third-party/liburing/repository
pushd toolbox/third-party/liburing/repository
git checkout liburing-2.2
./configure --libdir=/usr/lib64 --libdevdir=/usr/lib64
make
make install
popd
#
# Clone, build and install libvfn
#
# Assumptions:
#
# - Dependencies for building libvfn are met (system packages etc.)
# - Commands are executed with sufficient privileges (sudo/root)
#
git clone https://github.com/OpenMPDK/libvfn.git toolbox/third-party/libvfn/repository
pushd toolbox/third-party/libvfn/repository
git checkout v3.0.1
meson setup builddir -Dlibnvme="disabled" -Ddocs="disabled" --buildtype=release --prefix=/usr
meson compile -C builddir
meson install -C builddir
popd
Note
A Docker-image is provided via ghcr.io, specifically ghcr.io/xnvme/xnvme-deps-rockylinux-9.2:next. This Docker-image contains all the software described above.
Then go ahead and configure, build and install using meson:
# configure xNVMe and build meson subprojects(SPDK)
meson setup builddir --prefix=/usr
# build xNVMe
meson compile -C builddir
# install xNVMe
meson install -C builddir
# uninstall xNVMe
# cd builddir && meson --internal uninstall
CentOS Stream 9 (stream9)
Install the required toolchain and libraries, with sufficient system privileges (e.g. as root or with sudo), by executing the commands below. You can run this from the root of the xNVMe repository by invoking:
sudo ./xnvme/toolbox/pkgs/centos-stream9.sh
Or, run the commands contained within the script manually:
# This repo has CUnit-devel
dnf install -y 'dnf-command(config-manager)'
dnf config-manager --set-enabled crb
# Install packages via the system package-manager (dnf)
dnf install -y \
CUnit-devel \
autoconf \
bash \
diffutils \
findutils \
gcc \
gcc-c++ \
git \
libaio-devel \
libarchive-devel \
libtool \
libuuid-devel \
make \
nasm \
ncurses \
numactl-devel \
openssl-devel \
patch \
pkgconfig \
procps \
python3-devel \
python3-pip \
unzip \
wget \
zlib-devel
# Install packages via the Python package-manager (pip)
python3 -m pip install --upgrade pip # Otherwise too old to understand new Manylinux formats
python3 -m pip install \
meson \
ninja \
pipx \
pyelftools
#
# Clone, build and install libisal
#
# Assumptions:
#
# - Commands are executed with sufficient privileges (sudo/root)
#
git clone https://github.com/intel/isa-l.git toolbox/third-party/libisal/repository
pushd toolbox/third-party/libisal/repository
git checkout v2.30.0
./autogen.sh
./configure --prefix=/usr --libdir=/usr/lib64
make
make install
popd
# Clone, build and install liburing v2.2
#
# Assumptions:
#
# - Dependencies for building liburing are met (system packages etc.)
# - Commands are executed with sufficient privileges (sudo/root)
#
git clone https://github.com/axboe/liburing.git toolbox/third-party/liburing/repository
pushd toolbox/third-party/liburing/repository
git checkout liburing-2.2
./configure --libdir=/usr/lib64 --libdevdir=/usr/lib64
make
make install
popd
#
# Clone, build and install libvfn
#
# Assumptions:
#
# - Dependencies for building libvfn are met (system packages etc.)
# - Commands are executed with sufficient privileges (sudo/root)
#
git clone https://github.com/OpenMPDK/libvfn.git toolbox/third-party/libvfn/repository
pushd toolbox/third-party/libvfn/repository
git checkout v3.0.1
meson setup builddir -Dlibnvme="disabled" -Ddocs="disabled" --buildtype=release --prefix=/usr
meson compile -C builddir
meson install -C builddir
popd
Note
A Docker-image is provided via ghcr.io, specifically ghcr.io/xnvme/xnvme-deps-centos-stream9:next. This Docker-image contains all the software described above.
Then go ahead and configure, build and install using meson:
# configure xNVMe and build meson subprojects(SPDK)
meson setup builddir --prefix=/usr
# build xNVMe
meson compile -C builddir
# install xNVMe
meson install -C builddir
# uninstall xNVMe
# cd builddir && meson --internal uninstall
Debian Testing (trixie)
Install the required toolchain and libraries, with sufficient system privileges (e.g. as root or with sudo), by executing the commands below. You can run this from the root of the xNVMe repository by invoking:
sudo ./xnvme/toolbox/pkgs/debian-trixie.sh
Or, run the commands contained within the script manually:
# Unattended update, upgrade, and install
export DEBIAN_FRONTEND=noninteractive
export DEBIAN_PRIORITY=critical
apt-get -qy update
apt-get -qy \
-o "Dpkg::Options::=--force-confdef" \
-o "Dpkg::Options::=--force-confold" upgrade
apt-get -qy --no-install-recommends install apt-utils
apt-get -qy autoclean
apt-get -qy install \
autoconf \
bash \
build-essential \
clang-format \
findutils \
git \
libaio-dev \
libarchive-dev \
libcunit1-dev \
libisal-dev \
libncurses5-dev \
libnuma-dev \
libssl-dev \
libtool \
liburing-dev \
make \
meson \
nasm \
openssl \
patch \
pipx \
pkg-config \
python3 \
python3-pip \
python3-pyelftools \
python3-venv \
uuid-dev
#
# Clone, build and install libvfn
#
# Assumptions:
#
# - Dependencies for building libvfn are met (system packages etc.)
# - Commands are executed with sufficient privileges (sudo/root)
#
git clone https://github.com/OpenMPDK/libvfn.git toolbox/third-party/libvfn/repository
pushd toolbox/third-party/libvfn/repository
git checkout v3.0.1
meson setup builddir -Dlibnvme="disabled" -Ddocs="disabled" --buildtype=release --prefix=/usr
meson compile -C builddir
meson install -C builddir
popd
Note
A Docker-image is provided via ghcr.io, specifically ghcr.io/xnvme/xnvme-deps-debian-trixie:next. This Docker-image contains all the software described above.
Then go ahead and configure, build and install using meson:
# configure xNVMe and build meson subprojects(SPDK)
meson setup builddir
# build xNVMe
meson compile -C builddir
# install xNVMe
meson install -C builddir
# uninstall xNVMe
# cd builddir && meson --internal uninstall
Debian Stable (bookworm)
Install the required toolchain and libraries, with sufficient system privileges (e.g. as root or with sudo), by executing the commands below. You can run this from the root of the xNVMe repository by invoking:
sudo ./xnvme/toolbox/pkgs/debian-bookworm.sh
Or, run the commands contained within the script manually:
# Unattended update, upgrade, and install
export DEBIAN_FRONTEND=noninteractive
export DEBIAN_PRIORITY=critical
apt-get -qy update
apt-get -qy \
-o "Dpkg::Options::=--force-confdef" \
-o "Dpkg::Options::=--force-confold" upgrade
apt-get -qy --no-install-recommends install apt-utils
apt-get -qy autoclean
apt-get -qy install \
autoconf \
bash \
build-essential \
clang-format \
findutils \
git \
libaio-dev \
libarchive-dev \
libcunit1-dev \
libisal-dev \
libncurses5-dev \
libnuma-dev \
libssl-dev \
libtool \
liburing-dev \
make \
meson \
nasm \
openssl \
patch \
pipx \
pkg-config \
python3 \
python3-pip \
python3-pyelftools \
python3-venv \
uuid-dev
#
# Clone, build and install libvfn
#
# Assumptions:
#
# - Dependencies for building libvfn are met (system packages etc.)
# - Commands are executed with sufficient privileges (sudo/root)
#
git clone https://github.com/OpenMPDK/libvfn.git toolbox/third-party/libvfn/repository
pushd toolbox/third-party/libvfn/repository
git checkout v3.0.1
meson setup builddir -Dlibnvme="disabled" -Ddocs="disabled" --buildtype=release --prefix=/usr
meson compile -C builddir
meson install -C builddir
popd
Note
A Docker-image is provided via ghcr.io, specifically ghcr.io/xnvme/xnvme-deps-debian-bookworm:next. This Docker-image contains all the software described above.
Then go ahead and configure, build and install using meson:
# configure xNVMe and build meson subprojects(SPDK)
meson setup builddir
# build xNVMe
meson compile -C builddir
# install xNVMe
meson install -C builddir
# uninstall xNVMe
# cd builddir && meson --internal uninstall
Debian Oldstable (bullseye)
Install the required toolchain and libraries, with sufficient system privileges (e.g. as root or with sudo), by executing the commands below. You can run this from the root of the xNVMe repository by invoking:
sudo ./xnvme/toolbox/pkgs/debian-bullseye.sh
Or, run the commands contained within the script manually:
# Unattended update, upgrade, and install
export DEBIAN_FRONTEND=noninteractive
export DEBIAN_PRIORITY=critical
apt-get -qy update
apt-get -qy \
-o "Dpkg::Options::=--force-confdef" \
-o "Dpkg::Options::=--force-confold" upgrade
apt-get -qy --no-install-recommends install apt-utils
apt-get -qy autoclean
apt-get -qy install \
autoconf \
bash \
build-essential \
clang-format \
findutils \
git \
libaio-dev \
libarchive-dev \
libcunit1-dev \
libisal-dev \
libncurses5-dev \
libnuma-dev \
libssl-dev \
libtool \
make \
nasm \
openssl \
patch \
pkg-config \
python3 \
python3-pip \
python3-pyelftools \
python3-venv \
uuid-dev
# Install packages via the Python package-manager (pip)
python3 -m pip install --upgrade pip
python3 -m pip install \
meson \
ninja \
pipx
# Clone, build and install liburing v2.2
#
# Assumptions:
#
# - Dependencies for building liburing are met (system packages etc.)
# - Commands are executed with sufficient privileges (sudo/root)
#
git clone https://github.com/axboe/liburing.git toolbox/third-party/liburing/repository
pushd toolbox/third-party/liburing/repository
git checkout liburing-2.2
./configure
make
make install
popd
#
# Clone, build and install libvfn
#
# Assumptions:
#
# - Dependencies for building libvfn are met (system packages etc.)
# - Commands are executed with sufficient privileges (sudo/root)
#
git clone https://github.com/OpenMPDK/libvfn.git toolbox/third-party/libvfn/repository
pushd toolbox/third-party/libvfn/repository
git checkout v3.0.1
meson setup builddir -Dlibnvme="disabled" -Ddocs="disabled" --buildtype=release --prefix=/usr
meson compile -C builddir
meson install -C builddir
popd
Note
A Docker-image is provided via ghcr.io, specifically ghcr.io/xnvme/xnvme-deps-debian-bullseye:next. This Docker-image contains all the software described above.
Then go ahead and configure, build and install using meson:
# configure xNVMe and build meson subprojects(SPDK)
meson setup builddir
# build xNVMe
meson compile -C builddir
# install xNVMe
meson install -C builddir
# uninstall xNVMe
# cd builddir && meson --internal uninstall
Fedora (38)
Install the required toolchain and libraries, with sufficient system privileges (e.g. as root or with sudo), by executing the commands below. You can run this from the root of the xNVMe repository by invoking:
sudo ./xnvme/toolbox/pkgs/fedora-38.sh
Or, run the commands contained within the script manually:
# Install packages via the system package-manager (dnf)
dnf install -y \
CUnit-devel \
autoconf \
bash \
clang-tools-extra \
diffutils \
findutils \
g++ \
gcc \
git \
libaio-devel \
libarchive-devel \
libtool \
liburing \
liburing-devel \
libuuid-devel \
make \
meson \
nasm \
ncurses \
numactl-devel \
openssl-devel \
patch \
pipx \
pkgconfig \
procps \
python3-devel \
python3-pip \
python3-pyelftools \
zlib-devel
#
# Clone, build and install libvfn
#
# Assumptions:
#
# - Dependencies for building libvfn are met (system packages etc.)
# - Commands are executed with sufficient privileges (sudo/root)
#
git clone https://github.com/OpenMPDK/libvfn.git toolbox/third-party/libvfn/repository
pushd toolbox/third-party/libvfn/repository
git checkout v3.0.1
meson setup builddir -Dlibnvme="disabled" -Ddocs="disabled" --buildtype=release --prefix=/usr
meson compile -C builddir
meson install -C builddir
popd
#
# Clone, build and install libisal
#
# Assumptions:
#
# - Commands are executed with sufficient privileges (sudo/root)
#
git clone https://github.com/intel/isa-l.git toolbox/third-party/libisal/repository
pushd toolbox/third-party/libisal/repository
git checkout v2.30.0
./autogen.sh
./configure --prefix=/usr --libdir=/usr/lib
make
make install
popd
Note
A Docker-image is provided via ghcr.io, specifically ghcr.io/xnvme/xnvme-deps-fedora-38:next. This Docker-image contains all the software described above.
Then go ahead and configure, build and install using meson:
# configure xNVMe and build meson subprojects(SPDK)
meson setup builddir --prefix=/usr
# build xNVMe
meson compile -C builddir
# install xNVMe
meson install -C builddir
# uninstall xNVMe
# cd builddir && meson --internal uninstall
Fedora (37)
Install the required toolchain and libraries, with sufficient system privileges (e.g. as root or with sudo), by executing the commands below. You can run this from the root of the xNVMe repository by invoking:
sudo ./xnvme/toolbox/pkgs/fedora-37.sh
Or, run the commands contained within the script manually:
# Install packages via the system package-manager (dnf)
dnf install -y \
CUnit-devel \
autoconf \
bash \
clang-tools-extra \
diffutils \
findutils \
g++ \
gcc \
git \
libaio-devel \
libarchive-devel \
libtool \
liburing \
liburing-devel \
libuuid-devel \
make \
meson \
nasm \
ncurses \
numactl-devel \
openssl-devel \
patch \
pipx \
pkgconfig \
procps \
python3-devel \
python3-pip \
python3-pyelftools \
zlib-devel
#
# Clone, build and install libvfn
#
# Assumptions:
#
# - Dependencies for building libvfn are met (system packages etc.)
# - Commands are executed with sufficient privileges (sudo/root)
#
git clone https://github.com/OpenMPDK/libvfn.git toolbox/third-party/libvfn/repository
pushd toolbox/third-party/libvfn/repository
git checkout v3.0.1
meson setup builddir -Dlibnvme="disabled" -Ddocs="disabled" --buildtype=release --prefix=/usr
meson compile -C builddir
meson install -C builddir
popd
#
# Clone, build and install libisal
#
# Assumptions:
#
# - Commands are executed with sufficient privileges (sudo/root)
#
git clone https://github.com/intel/isa-l.git toolbox/third-party/libisal/repository
pushd toolbox/third-party/libisal/repository
git checkout v2.30.0
./autogen.sh
./configure --prefix=/usr --libdir=/usr/lib
make
make install
popd
Note
A Docker-image is provided via ghcr.io, specifically ghcr.io/xnvme/xnvme-deps-fedora-37:next. This Docker-image contains all the software described above.
Then go ahead and configure, build and install using meson:
# configure xNVMe and build meson subprojects(SPDK)
meson setup builddir --prefix=/usr
# build xNVMe
meson compile -C builddir
# install xNVMe
meson install -C builddir
# uninstall xNVMe
# cd builddir && meson --internal uninstall
Fedora (36)
Install the required toolchain and libraries, with sufficient system privileges (e.g. as root or with sudo), by executing the commands below. You can run this from the root of the xNVMe repository by invoking:
sudo ./xnvme/toolbox/pkgs/fedora-36.sh
Or, run the commands contained within the script manually:
# Install packages via the system package-manager (dnf)
dnf install -y \
CUnit-devel \
autoconf \
bash \
clang-tools-extra \
diffutils \
findutils \
g++ \
gcc \
git \
libaio-devel \
libarchive-devel \
libtool \
libuuid-devel \
make \
meson \
nasm \
ncurses \
numactl-devel \
openssl-devel \
patch \
pipx \
pkgconfig \
procps \
python3-devel \
python3-pip \
python3-pyelftools \
zlib-devel
#
# Clone, build and install libvfn
#
# Assumptions:
#
# - Dependencies for building libvfn are met (system packages etc.)
# - Commands are executed with sufficient privileges (sudo/root)
#
git clone https://github.com/OpenMPDK/libvfn.git toolbox/third-party/libvfn/repository
pushd toolbox/third-party/libvfn/repository
git checkout v3.0.1
meson setup builddir -Dlibnvme="disabled" -Ddocs="disabled" --buildtype=release --prefix=/usr
meson compile -C builddir
meson install -C builddir
popd
#
# Clone, build and install libisal
#
# Assumptions:
#
# - Commands are executed with sufficient privileges (sudo/root)
#
git clone https://github.com/intel/isa-l.git toolbox/third-party/libisal/repository
pushd toolbox/third-party/libisal/repository
git checkout v2.30.0
./autogen.sh
./configure --prefix=/usr --libdir=/usr/lib64
make
make install
popd
# Clone, build and install liburing v2.2
#
# Assumptions:
#
# - Dependencies for building liburing are met (system packages etc.)
# - Commands are executed with sufficient privileges (sudo/root)
#
git clone https://github.com/axboe/liburing.git toolbox/third-party/liburing/repository
pushd toolbox/third-party/liburing/repository
git checkout liburing-2.2
./configure --libdir=/usr/lib64 --libdevdir=/usr/lib64
make
make install
popd
Note
A Docker-image is provided via ghcr.io, specifically ghcr.io/xnvme/xnvme-deps-fedora-36:next. This Docker-image contains all the software described above.
Then go ahead and configure, build and install using meson:
# configure xNVMe and build meson subprojects(SPDK)
meson setup builddir --prefix=/usr
# build xNVMe
meson compile -C builddir
# install xNVMe
meson install -C builddir
# uninstall xNVMe
# cd builddir && meson --internal uninstall
Ubuntu Latest (lunar)
Install the required toolchain and libraries, with sufficient system privileges (e.g. as root or with sudo), by executing the commands below. You can run this from the root of the xNVMe repository by invoking:
sudo ./xnvme/toolbox/pkgs/ubuntu-lunar.sh
Or, run the commands contained within the script manually:
# Unattended update, upgrade, and install
export DEBIAN_FRONTEND=noninteractive
export DEBIAN_PRIORITY=critical
apt-get -qy update
apt-get -qy \
-o "Dpkg::Options::=--force-confdef" \
-o "Dpkg::Options::=--force-confold" upgrade
apt-get -qy --no-install-recommends install apt-utils
apt-get -qy autoclean
apt-get -qy install \
autoconf \
bash \
build-essential \
clang-format \
findutils \
git \
libaio-dev \
libarchive-dev \
libcunit1-dev \
libisal-dev \
libncurses5-dev \
libnuma-dev \
libssl-dev \
libtool \
liburing-dev \
make \
meson \
nasm \
openssl \
patch \
pipx \
pkg-config \
python3 \
python3-pip \
python3-pyelftools \
python3-venv \
uuid-dev
#
# Clone, build and install libvfn
#
# Assumptions:
#
# - Dependencies for building libvfn are met (system packages etc.)
# - Commands are executed with sufficient privileges (sudo/root)
#
git clone https://github.com/OpenMPDK/libvfn.git toolbox/third-party/libvfn/repository
pushd toolbox/third-party/libvfn/repository
git checkout v3.0.1
meson setup builddir -Dlibnvme="disabled" -Ddocs="disabled" --buildtype=release --prefix=/usr
meson compile -C builddir
meson install -C builddir
popd
Note
A Docker-image is provided via ghcr.io, specifically ghcr.io/xnvme/xnvme-deps-ubuntu-lunar:next. This Docker-image contains all the software described above.
Then go ahead and configure, build and install using meson:
# configure xNVMe and build meson subprojects(SPDK)
meson setup builddir
# build xNVMe
meson compile -C builddir
# install xNVMe
meson install -C builddir
# uninstall xNVMe
# cd builddir && meson --internal uninstall
Note
All tools and libraries are available via the system package-manager.
Ubuntu LTS (jammy)
Install the required toolchain and libraries, with sufficient system privileges (e.g. as root or with sudo), by executing the commands below. You can run this from the root of the xNVMe repository by invoking:
sudo ./xnvme/toolbox/pkgs/ubuntu-jammy.sh
Or, run the commands contained within the script manually:
# Unattended update, upgrade, and install
export DEBIAN_FRONTEND=noninteractive
export DEBIAN_PRIORITY=critical
apt-get -qy update
apt-get -qy \
-o "Dpkg::Options::=--force-confdef" \
-o "Dpkg::Options::=--force-confold" upgrade
apt-get -qy --no-install-recommends install apt-utils
apt-get -qy autoclean
apt-get -qy install \
autoconf \
bash \
build-essential \
clang-format \
findutils \
git \
libaio-dev \
libarchive-dev \
libcunit1-dev \
libisal-dev \
libncurses5-dev \
libnuma-dev \
libssl-dev \
libtool \
make \
nasm \
openssl \
patch \
pipx \
pkg-config \
python3 \
python3-pip \
python3-pyelftools \
python3-venv \
uuid-dev
# Install packages via the Python package-manager (pip)
python3 -m pip install --upgrade pip
python3 -m pip install \
meson \
ninja
# Clone, build and install liburing v2.2
#
# Assumptions:
#
# - Dependencies for building liburing are met (system packages etc.)
# - Commands are executed with sufficient privileges (sudo/root)
#
git clone https://github.com/axboe/liburing.git toolbox/third-party/liburing/repository
pushd toolbox/third-party/liburing/repository
git checkout liburing-2.2
./configure
make
make install
popd
#
# Clone, build and install libvfn
#
# Assumptions:
#
# - Dependencies for building libvfn are met (system packages etc.)
# - Commands are executed with sufficient privileges (sudo/root)
#
git clone https://github.com/OpenMPDK/libvfn.git toolbox/third-party/libvfn/repository
pushd toolbox/third-party/libvfn/repository
git checkout v3.0.1
meson setup builddir -Dlibnvme="disabled" -Ddocs="disabled" --buildtype=release --prefix=/usr
meson compile -C builddir
meson install -C builddir
popd
Note
A Docker-image is provided via ghcr.io, specifically ghcr.io/xnvme/xnvme-deps-ubuntu-jammy:next. This Docker-image contains all the software described above.
Then go ahead and configure, build and install using meson:
# configure xNVMe and build meson subprojects(SPDK)
meson setup builddir
# build xNVMe
meson compile -C builddir
# install xNVMe
meson install -C builddir
# uninstall xNVMe
# cd builddir && meson --internal uninstall
Note
On this distribution, liburing is installed from source, and meson + ninja via pip.
Ubuntu LTS (focal)
Install the required toolchain and libraries, with sufficient system privileges (e.g. as root or with sudo), by executing the commands below. You can run it from the directory containing the xnvme repository by invoking:
sudo ./xnvme/toolbox/pkgs/ubuntu-focal.sh
Or, run the commands contained within the script manually:
# Unattended update, upgrade, and install
export DEBIAN_FRONTEND=noninteractive
export DEBIAN_PRIORITY=critical
apt-get -qy update
apt-get -qy \
-o "Dpkg::Options::=--force-confdef" \
-o "Dpkg::Options::=--force-confold" upgrade
apt-get -qy --no-install-recommends install apt-utils
apt-get -qy autoclean
apt-get -qy install \
autoconf \
bash \
build-essential \
clang-format \
findutils \
git \
libaio-dev \
libarchive-dev \
libcunit1-dev \
libisal-dev \
libncurses5-dev \
libnuma-dev \
libssl-dev \
libtool \
make \
nasm \
openssl \
patch \
pipx \
pkg-config \
python3 \
python3-pip \
python3-pyelftools \
python3-venv \
uuid-dev
# Install packages via the Python package-manager (pip)
python3 -m pip install --upgrade pip
python3 -m pip install \
meson \
ninja
# Clone, build and install liburing v2.2
#
# Assumptions:
#
# - Dependencies for building liburing are met (system packages etc.)
# - Commands are executed with sufficient privileges (sudo/root)
#
git clone https://github.com/axboe/liburing.git toolbox/third-party/liburing/repository
pushd toolbox/third-party/liburing/repository
git checkout liburing-2.2
./configure
make
make install
popd
#
# Clone, build and install libvfn
#
# Assumptions:
#
# - Dependencies for building libvfn are met (system packages etc.)
# - Commands are executed with sufficient privileges (sudo/root)
#
git clone https://github.com/OpenMPDK/libvfn.git toolbox/third-party/libvfn/repository
pushd toolbox/third-party/libvfn/repository
git checkout v3.0.1
meson setup builddir -Dlibnvme="disabled" -Ddocs="disabled" --buildtype=release --prefix=/usr
meson compile -C builddir
meson install -C builddir
popd
Note
A Docker-image is provided via ghcr.io, specifically ghcr.io/xnvme/xnvme-deps-ubuntu-focal:next. This Docker-image contains all the software described above.
Then go ahead and configure, build and install using meson:
# configure xNVMe and build meson subprojects(SPDK)
meson setup builddir
# build xNVMe
meson compile -C builddir
# install xNVMe
meson install -C builddir
# uninstall xNVMe
# cd builddir && meson --internal uninstall
Note
On this distribution, liburing is installed from source, and meson + ninja via pip.
Gentoo (latest)
Install the required toolchain and libraries, with sufficient system privileges (e.g. as root or with sudo), by executing the commands below. You can run it from the directory containing the xnvme repository by invoking:
sudo ./xnvme/toolbox/pkgs/gentoo-latest.sh
Or, run the commands contained within the script manually:
echo ""
echo "build requires linking against ncurses AND tinfo, run the following before compilation:"
echo "export LDFLAGS=\"-ltinfo -lncurses\""
emerge-webrsync
emerge \
app-arch/libarchive \
bash \
dev-lang/nasm \
dev-libs/isa-l \
dev-libs/libaio \
dev-libs/openssl \
dev-python/pip \
dev-python/pyelftools \
dev-util/cunit \
dev-util/meson \
dev-util/pkgconf \
dev-vcs/git \
findutils \
make \
patch \
sys-libs/liburing \
sys-libs/ncurses \
sys-process/numactl
#
# Clone, build and install libvfn
#
# Assumptions:
#
# - Dependencies for building libvfn are met (system packages etc.)
# - Commands are executed with sufficient privileges (sudo/root)
#
git clone https://github.com/OpenMPDK/libvfn.git toolbox/third-party/libvfn/repository
pushd toolbox/third-party/libvfn/repository
git checkout v3.0.1
meson setup builddir -Dlibnvme="disabled" -Ddocs="disabled" --buildtype=release --prefix=/usr
meson compile -C builddir
meson install -C builddir
popd
# Install packages via the Python package-manager (pip)
python3 -m pip install --break-system-packages --upgrade pip
python3 -m pip install --break-system-packages \
pipx
Note
A Docker-image is provided via ghcr.io, specifically ghcr.io/xnvme/xnvme-deps-gentoo-latest:next. This Docker-image contains all the software described above.
Then go ahead and configure, build and install using meson:
export LDFLAGS="-ltinfo -lncurses"
# configure xNVMe and build meson subprojects(SPDK)
meson setup builddir --prefix=/usr
# build xNVMe
meson compile -C builddir
# install xNVMe
meson install -C builddir
# uninstall xNVMe
# cd builddir && meson --internal uninstall
Note
In case you get "error adding symbols: DSO missing from command line" during compilation, then add -ltinfo -lncurses to LDFLAGS as is done in the commands above.
The build is configured to install with --prefix=/usr; this is intentional such that the pkg-config files end up in the default search path on the system. If you do not want this, then remove --prefix=/usr and adjust your $PKG_CONFIG_PATH accordingly.
openSUSE (tumbleweed-latest)
Install the required toolchain and libraries, with sufficient system privileges (e.g. as root or with sudo), by executing the commands below. You can run it from the directory containing the xnvme repository by invoking:
sudo ./xnvme/toolbox/pkgs/opensuse-tumbleweed-latest.sh
Or, run the commands contained within the script manually:
zypper --non-interactive refresh
# Install packages via the system package-manager (zypper)
zypper --non-interactive install -y --allow-downgrade \
autoconf \
awk \
bash \
clang-tools \
cunit-devel \
findutils \
gcc \
gcc-c++ \
git \
gzip \
libaio-devel \
libarchive-devel \
libnuma-devel \
libopenssl-devel \
libtool \
liburing-devel \
libuuid-devel \
make \
meson \
nasm \
ncurses \
patch \
pkg-config \
python3 \
python3-devel \
python3-pip \
python3-pipx \
python3-pyelftools \
tar
#
# Clone, build and install libvfn
#
# Assumptions:
#
# - Dependencies for building libvfn are met (system packages etc.)
# - Commands are executed with sufficient privileges (sudo/root)
#
git clone https://github.com/OpenMPDK/libvfn.git toolbox/third-party/libvfn/repository
pushd toolbox/third-party/libvfn/repository
git checkout v3.0.1
meson setup builddir -Dlibnvme="disabled" -Ddocs="disabled" --buildtype=release --prefix=/usr
meson compile -C builddir
meson install -C builddir
popd
#
# Clone, build and install libisal
#
# Assumptions:
#
# - Commands are executed with sufficient privileges (sudo/root)
#
git clone https://github.com/intel/isa-l.git toolbox/third-party/libisal/repository
pushd toolbox/third-party/libisal/repository
git checkout v2.30.0
./autogen.sh
./configure --prefix=/usr --libdir=/usr/lib
make
make install
popd
Note
A Docker-image is provided via ghcr.io, specifically ghcr.io/xnvme/xnvme-deps-opensuse-tumbleweed-latest:next. This Docker-image contains all the software described above.
Then go ahead and configure, build and install using meson:
# configure xNVMe and build meson subprojects(SPDK)
meson setup builddir
# build xNVMe
meson compile -C builddir
# install xNVMe
meson install -C builddir
# uninstall xNVMe
# cd builddir && meson --internal uninstall
Note
All tools and libraries are available via system package-manager.
FreeBSD (13)
Install the required toolchain and libraries, with sufficient system privileges (e.g. as root or with sudo), by executing the commands below. You can run it from the directory containing the xnvme repository by invoking:
sudo ./xnvme/toolbox/pkgs/freebsd-13.sh
Or, run the commands contained within the script manually:
# Install packages via the system package-manager (pkg)
pkg install -qy \
autoconf \
automake \
bash \
cunit \
devel/py-pyelftools \
e2fsprogs-libuuid \
findutils \
git \
gmake \
isa-l \
libtool \
meson \
nasm \
ncurses \
openssl \
patch \
pkgconf \
py39-pipx \
python3 \
wget
# Upgrade pip
python3 -m ensurepip --upgrade
# Installing meson via pip, as the version currently available via 'pkg install'
# (0.62.2) breaks with a "File name too long" error; 0.60 seems ok.
python3 -m pip install meson==0.60
Note
A Docker-image is provided via ghcr.io, specifically ghcr.io/xnvme/xnvme-deps-freebsd-13:next. This Docker-image contains all the software described above.
Then go ahead and configure, build and install using meson:
# configure xNVMe and build meson subprojects(SPDK)
meson setup builddir
# build xNVMe
meson compile -C builddir
# install xNVMe
meson install -C builddir
# uninstall xNVMe
# cd builddir && meson --internal uninstall
Note
The interfaces libaio, liburing, and libvfn are not supported on FreeBSD.
macOS (13)
Install the required toolchain and libraries, with sufficient system privileges (e.g. as root or with sudo), by executing the commands below. You can run it from the directory containing the xnvme repository by invoking:
sudo ./xnvme/toolbox/pkgs/macos-13.sh
Or, run the commands contained within the script manually:
# This is done to avoid appleframework deprecation warnings
export MACOSX_DEPLOYMENT_TARGET=11.0
export HOMEBREW_NO_AUTO_UPDATE=1
# Install packages via brew, assuming that brew is: installed, updated, and upgraded
clang-format --version && echo "Installed" || brew install clang-format --overwrite || echo "Failed installing"
git --version && echo "Installed" || brew install git --overwrite || echo "Failed installing"
isa-l --version && echo "Installed" || brew install isa-l --overwrite || echo "Failed installing"
if make --version | grep i386-apple; then
brew install make
fi
meson --version && echo "Installed" || brew install meson --overwrite || echo "Failed installing"
pkg-config --version && echo "Installed" || brew install pkg-config --overwrite || echo "Failed installing"
python3 --version && echo "Installed" || brew install python3 --overwrite || echo "Failed installing"
# Install packages via the Python package-manager (pip)
python3 -m pip install --upgrade pip
python3 -m pip install \
pipx
Note
A Docker-image is provided via ghcr.io, specifically ghcr.io/xnvme/xnvme-deps-macos-13:next. This Docker-image contains all the software described above.
Then go ahead and configure, build and install using meson:
# configure xNVMe and build meson subprojects(SPDK)
meson setup builddir
# build xNVMe
meson compile -C builddir
# install xNVMe
meson install -C builddir
# uninstall xNVMe
# cd builddir && meson --internal uninstall
Note
The interfaces libaio, liburing, libvfn, and SPDK are not supported on macOS.
macOS (12)
Install the required toolchain and libraries, with sufficient system privileges (e.g. as root or with sudo), by executing the commands below. You can run it from the directory containing the xnvme repository by invoking:
sudo ./xnvme/toolbox/pkgs/macos-12.sh
Or, run the commands contained within the script manually:
# This is done to avoid appleframework deprecation warnings
export MACOSX_DEPLOYMENT_TARGET=11.0
export HOMEBREW_NO_AUTO_UPDATE=1
# Install packages via brew, assuming that brew is: installed, updated, and upgraded
clang-format --version && echo "Installed" || brew install clang-format --overwrite || echo "Failed installing"
git --version && echo "Installed" || brew install git --overwrite || echo "Failed installing"
isa-l --version && echo "Installed" || brew install isa-l --overwrite || echo "Failed installing"
if make --version | grep i386-apple; then
brew install make
fi
meson --version && echo "Installed" || brew install meson --overwrite || echo "Failed installing"
pkg-config --version && echo "Installed" || brew install pkg-config --overwrite || echo "Failed installing"
python3 --version && echo "Installed" || brew install python3 --overwrite || echo "Failed installing"
# Install packages via the Python package-manager (pip)
python3 -m pip install --upgrade pip
python3 -m pip install \
pipx
Note
A Docker-image is provided via ghcr.io, specifically ghcr.io/xnvme/xnvme-deps-macos-12:next. This Docker-image contains all the software described above.
Then go ahead and configure, build and install using meson:
# This is done to avoid appleframework deprecation warnings
export MACOSX_DEPLOYMENT_TARGET=11.0
# configure xNVMe
meson setup builddir
# build xNVMe
meson compile -C builddir
# install xNVMe
sudo meson install -C builddir
# uninstall xNVMe
# cd builddir && meson --internal uninstall
Note
The interfaces libaio, liburing, libvfn, and SPDK are not supported on macOS.
macOS (11)
Install the required toolchain and libraries, with sufficient system privileges (e.g. as root or with sudo), by executing the commands below. You can run it from the directory containing the xnvme repository by invoking:
sudo ./xnvme/toolbox/pkgs/macos-11.sh
Or, run the commands contained within the script manually:
# This is done to avoid appleframework deprecation warnings
export MACOSX_DEPLOYMENT_TARGET=11.0
export HOMEBREW_NO_AUTO_UPDATE=1
# Install packages via brew, assuming that brew is: installed, updated, and upgraded
clang-format --version && echo "Installed" || brew install clang-format --overwrite || echo "Failed installing"
git --version && echo "Installed" || brew install git --overwrite || echo "Failed installing"
isa-l --version && echo "Installed" || brew install isa-l --overwrite || echo "Failed installing"
if make --version | grep i386-apple; then
brew install make
fi
meson --version && echo "Installed" || brew install meson --overwrite || echo "Failed installing"
pkg-config --version && echo "Installed" || brew install pkg-config --overwrite || echo "Failed installing"
python3 --version && echo "Installed" || brew install python3 --overwrite || echo "Failed installing"
# Install packages via the Python package-manager (pip)
python3 -m pip install --upgrade pip
python3 -m pip install \
pipx
Note
A Docker-image is provided via ghcr.io, specifically ghcr.io/xnvme/xnvme-deps-macos-11:next. This Docker-image contains all the software described above.
Then go ahead and configure, build and install using meson:
# This is done to avoid appleframework deprecation warnings
export MACOSX_DEPLOYMENT_TARGET=11.0
# configure xNVMe
meson setup builddir
# build xNVMe
meson compile -C builddir
# install xNVMe
sudo meson install -C builddir
# uninstall xNVMe
# cd builddir && meson --internal uninstall
Note
The interfaces libaio, liburing, libvfn, and SPDK are not supported on macOS.
Windows (2022)
Install the required toolchain and libraries, with sufficient system privileges (e.g. an elevated command prompt), by executing the commands below. You can run it from an administrator command prompt in the directory containing the xnvme repository by invoking:
call xnvme\toolbox\pkgs\windows-2022.bat
Or, run the commands contained within the script manually:
@echo off
@setlocal enableextensions enabledelayedexpansion
net session >nul: 2>&1
if errorlevel 1 (
echo %0 must be run with Administrator privileges
goto :eof
)
:: Use PowerShell to install Chocolatey
set "PATH=%ALLUSERSPROFILE%\chocolatey\bin;%PATH%"
echo [1/6] Install: Chocolatey Manager
powershell -NoProfile -InputFormat None -ExecutionPolicy Bypass -Command "[System.Net.ServicePointManager]::SecurityProtocol = 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))"
where /q choco
if errorlevel 1 (
echo [1/6] Install: Chocolatey = FAIL
goto :eof
)
echo [1/6] Install: Chocolatey = PASS
:: Use Chocolatey to install msys2
echo [2/6] Install: MSYS2
set "PATH=%SystemDrive%\msys64;%PATH%"
choco install msys2 -y -r --params "/NoUpdate /InstallDir:C:\msys64"
where /q msys2
if errorlevel 1 (
echo [2/6] Install: MSYS2 = FAIL
goto :eof
)
echo [2/6] Install: MSYS2 = PASS
echo [3/6] Install: Git
choco install git -y -r
echo [3/6] Install: Git = PASS
echo [4/6] Install: dos2unix
choco install dos2unix
echo [4/6] Install: dos2unix = PASS
:: Use MSYS2/pacman to install gcc and clang toolchain
set MSYS2=call msys2_shell -no-start -here -use-full-path -defterm
echo [5/6] Install: MinGW Toolchain via msys2/pacman
set "PATH=%SystemDrive%\msys64\mingw64\bin;%PATH%"
%MSYS2% -c "pacman --noconfirm -Syy --needed base-devel mingw-w64-x86_64-toolchain"
where /q gcc
if errorlevel 1 (
echo [5/6] Install: MinGW Toolchain via msys2/pacman = FAIL
goto :eof
)
echo [5/6] Install: MinGW Toolchain via msys2/pacman = PASS
echo [6/6] Install: MinGW/meson via msys2/pacman
%MSYS2% -c "pacman --noconfirm -Syy mingw-w64-x86_64-meson"
echo [6/6] Install: MinGW/meson via msys2/pacman = OK
Then go ahead and configure, build and install using the helper batch script build.bat:
# build: auto-configure xNVMe, build third-party libraries, and xNVMe itself
build.bat
# config: only configure xNVMe
build.bat config
# config: debug only configure xNVMe
build.bat config-debug
# install xNVMe
build.bat install
# uninstall xNVMe
# build.bat uninstall
Note
In case you see .dll loader-errors, then check that the environment variable PATH contains the various library locations of the toolchain.
The interfaces libaio, liburing, libvfn, and SPDK are not supported on Windows.
Backends and System Config
xNVMe relies on certain operating system kernel features and infrastructure that must be available and correctly configured. This subsection goes through what is used on Linux and how to check whether it is available.
Backends
The purpose of xNVMe backends is to provide an instrumental runtime supporting the xNVMe API in a single library with batteries included.
That is, it comes with the essential third-party libraries bundled into the xNVMe library. Thus, you get a single C API to program against and a single library to link with. Similarly for the command-line tools: a single binary communicating with devices via the I/O stacks available on the system.
To inspect the libraries which xNVMe is built against, and the supported/enabled backends, invoke:
xnvme library-info
It should produce output similar to:
# xNVMe Library Information
ver: {major: 0, minor: 7, patch: 4}
xnvme_libconf:
- '3p: spdk;git-describe:v22.09;+patches'
- 'conf: XNVME_BE_CBI_ADMIN_SHIM_ENABLED'
- 'conf: XNVME_BE_CBI_ASYNC_EMU_ENABLED'
- 'conf: XNVME_BE_CBI_ASYNC_NIL_ENABLED'
- 'conf: XNVME_BE_CBI_ASYNC_POSIX_ENABLED'
- 'conf: XNVME_BE_CBI_ASYNC_THRPOOL_ENABLED'
- 'conf: XNVME_BE_CBI_MEM_POSIX_ENABLED'
- 'conf: XNVME_BE_CBI_SYNC_PSYNC_ENABLED'
- 'conf: XNVME_BE_RAMDISK_ENABLED'
- 'conf: XNVME_BE_LINUX_ENABLED'
- 'conf: XNVME_BE_LINUX_BLOCK_ENABLED'
- 'conf: XNVME_BE_LINUX_BLOCK_ZONED_ENABLED'
- 'conf: XNVME_BE_LINUX_LIBAIO_ENABLED'
- 'conf: XNVME_BE_LINUX_LIBURING_ENABLED'
- 'conf: XNVME_BE_LINUX_VFIO_ENABLED'
- 'conf: XNVME_BE_SPDK_ENABLED'
- 'conf: XNVME_BE_SPDK_TRANSPORT_PCIE_ENABLED'
- 'conf: XNVME_BE_SPDK_TRANSPORT_TCP_ENABLED'
- 'conf: XNVME_BE_SPDK_TRANSPORT_RDMA_ENABLED'
- 'conf: XNVME_BE_SPDK_TRANSPORT_FC_ENABLED'
- 'conf: XNVME_BE_ASYNC_ENABLED'
- '3p: linux;LINUX_VERSION_CODE-UAPI/393728-6.2.0'
- '3p: NVME_IOCTL_IO64_CMD'
- '3p: NVME_IOCTL_IO64_CMD_VEC'
- '3p: NVME_IOCTL_ADMIN64_CMD'
xnvme_be_attr_list:
count: 7
capacity: 7
items:
- name: 'spdk'
enabled: 1
- name: 'linux'
enabled: 1
- name: 'fbsd'
enabled: 0
- name: 'macos'
enabled: 0
- name: 'windows'
enabled: 0
- name: 'ramdisk'
enabled: 1
- name: 'vfio'
enabled: 1
The 3p entries in the output inform about the third-party projects which xNVMe was built against and, in the case of libraries, the version it has bundled.
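These entries are plain strings and straightforward to check from a script; below is a small illustrative sketch (the enabled_features helper is hypothetical, not part of xNVMe) extracting the feature flags from a subset of the output above:

```python
# Parse the 'conf:' entries of `xnvme library-info` output to see which
# backends and interfaces a given xNVMe build has enabled. The sample
# entries are taken from the output shown above.
libconf = [
    "3p: spdk;git-describe:v22.09;+patches",
    "conf: XNVME_BE_LINUX_LIBURING_ENABLED",
    "conf: XNVME_BE_LINUX_LIBAIO_ENABLED",
    "conf: XNVME_BE_SPDK_ENABLED",
]

def enabled_features(entries):
    """Return the set of XNVME_* feature flags from libconf entries."""
    return {
        e.split(":", 1)[1].strip()
        for e in entries
        if e.startswith("conf:")
    }

features = enabled_features(libconf)
print("XNVME_BE_SPDK_ENABLED" in features)   # → True
```

A check like this is handy in test scripts that should skip when, e.g., the SPDK backend was not compiled in.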
Although a single API and a single library are provided by xNVMe, runtime and system configuration dependencies remain. The following subsections describe how to instrument xNVMe to utilize the different kernel interfaces and user space drivers.
Kernel
Linux Kernel version 5.9 or newer is currently preferred as it has all the features which xNVMe utilizes. This section also gives you a brief overview of the different I/O paths and APIs which the xNVMe API unifies access to.
NVMe Driver and IOCTLs
The default for xNVMe is to communicate with devices via the operating system NVMe driver IOCTLs, specifically on Linux the following are used:
NVME_IOCTL_ID
NVME_IOCTL_IO_CMD
NVME_IOCTL_ADMIN_CMD
NVME_IOCTL_IO64_CMD
NVME_IOCTL_ADMIN64_CMD
In case the *64_CMD IOCTLs are not available, then xNVMe falls back to using the non-64bit equivalents. The 64 vs 32 bit completion result mostly affects commands such as Zone Append. You can check that this interface is behaving as expected by running:
xnvme info /dev/nvme0n1
Which should yield output equivalent to:
xnvme_dev:
xnvme_ident:
uri: '/dev/nvme0n1'
dtype: 0x2
nsid: 0x1
csi: 0x0
subnqn: 'nqn.2019-08.org.qemu:deadbeef'
xnvme_be:
admin: {id: 'nvme'}
sync: {id: 'nvme'}
This tells you that xNVMe can communicate with the given device identifier, and it informs you that it utilizes the NVMe driver IOCTLs for admin and synchronous command execution. Since IOCTLs are inherently synchronous, xNVMe mimics asynchronous behavior over IOCTLs to support the asynchronous primitives provided by the xNVMe API.
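The mimicking can be pictured as a small queue that defers otherwise blocking calls until the queue is poked. The sketch below is a toy model of that idea (EmuQueue and its methods are illustrative names, not the actual xNVMe API):

```python
from collections import deque

class EmuQueue:
    """Toy model of async-over-sync emulation: commands are queued on
    submit and executed synchronously when the queue is poked."""
    def __init__(self):
        self.pending = deque()
        self.completed = 0

    def submit(self, sync_fn, *args):
        # Defer the synchronous call instead of executing it immediately
        self.pending.append((sync_fn, args))

    def poke(self, max_cmds=0):
        # Execute up to max_cmds deferred calls (0 means: all of them)
        budget = max_cmds or len(self.pending)
        results = []
        while self.pending and budget:
            fn, args = self.pending.popleft()
            results.append(fn(*args))   # a blocking IOCTL would go here
            self.completed += 1
            budget -= 1
        return results

q = EmuQueue()
q.submit(lambda lba: f"read@{lba}", 0)
q.submit(lambda lba: f"read@{lba}", 8)
print(q.poke())   # → ['read@0', 'read@8']
```

The caller gets the submit/poke shape of an asynchronous interface, even though every command still executes synchronously under the hood.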
Block Layer
In case your device is not an NVMe device, then the NVMe IOCTLs won’t be available. xNVMe will then try to utilize the Linux Block Layer and treat a given block device as an NVMe device via a shim layer for NVMe admin commands such as identify and get-features.
A brief example of checking this:
# Create a NULL Block instance
modprobe null_blk nr_devices=1
# Open and query the NULL Block instance with xNVMe
xnvme info /dev/nullb0
# Remove the NULL Block instance
modprobe -r null_blk
Yielding:
xnvme_dev:
xnvme_ident:
uri: '/dev/nullb0'
dtype: 0x3
nsid: 0x1
csi: 0x1f
subnqn: ''
xnvme_be:
admin: {id: 'block'}
sync: {id: 'block'}
async: {id: 'emu'}
attr: {name: 'linux'}
xnvme_opts:
be: 'linux'
mem: 'posix'
dev: 'FIX-ID-VS-MIXIN-NAME'
admin: 'block'
sync: 'block'
async: 'emu'
xnvme_geo:
type: XNVME_GEO_CONVENTIONAL
npugrp: 1
npunit: 1
nzone: 1
nsect: 524288000
nbytes: 512
nbytes_oob: 0
tbytes: 268435456000
mdts_nbytes: 262144
lba_nbytes: 512
lba_extended: 0
ssw: 9
pi_type: 0
pi_loc: 0
pi_format: 0
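The reported geometry is internally consistent and can be cross-checked: total capacity is the product of zone-count, sectors, and sector-size, and ssw appears to be the shift-width of the LBA size (log2 of lba_nbytes). A quick check against the values above:

```python
import math

geo = {  # values from the `xnvme info /dev/nullb0` output above
    "nzone": 1, "nsect": 524288000, "nbytes": 512,
    "tbytes": 268435456000, "lba_nbytes": 512, "ssw": 9,
}

# tbytes = nzone * nsect * nbytes
assert geo["nzone"] * geo["nsect"] * geo["nbytes"] == geo["tbytes"]

# ssw is the shift-width of the LBA size: 1 << ssw == lba_nbytes
assert 1 << geo["ssw"] == geo["lba_nbytes"]
assert geo["ssw"] == int(math.log2(geo["lba_nbytes"]))
print("geometry is consistent")
```

Sanity-checks like these are useful when a device reports unexpected sizes.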
Block Zoned IOCTLs
Building on the Linux Block model, the Zoned Block Device model is also utilized, specifically via the following IOCTLs:
BLK_ZONE_REP_CAPACITY
BLKCLOSEZONE
BLKFINISHZONE
BLKOPENZONE
BLKRESETZONE
BLKGETNRZONES
BLKREPORTZONE
When available, xNVMe can make use of the above IOCTLs. This is mostly useful when developing/testing using Linux Null Block devices. Similarly, for a Zoned NULL Block instance:
# Create a Zoned NULL Block instance
modprobe null_blk nr_devices=1 zoned=1
# Open and query the Zoned NULL Block instance with xNVMe
xnvme info /dev/nullb0
# Remove the Zoned NULL Block instance
modprobe -r null_blk
Yielding:
xnvme_dev:
xnvme_ident:
uri: '/dev/nullb0'
dtype: 0x3
nsid: 0x1
csi: 0x2
subnqn: ''
xnvme_be:
admin: {id: 'block'}
sync: {id: 'block'}
async: {id: 'emu'}
attr: {name: 'linux'}
xnvme_opts:
be: 'linux'
mem: 'posix'
dev: 'FIX-ID-VS-MIXIN-NAME'
admin: 'block'
sync: 'block'
async: 'emu'
xnvme_geo:
type: XNVME_GEO_ZONED
npugrp: 1
npunit: 1
nzone: 1000
nsect: 524288
nbytes: 512
nbytes_oob: 0
tbytes: 268435456000
mdts_nbytes: 262144
lba_nbytes: 512
lba_extended: 0
ssw: 9
pi_type: 0
pi_loc: 0
pi_format: 0
Async I/O via libaio
When AIO is available, the NVMe NVM Commands for read and write are sent over the Linux AIO interface. Doing so improves command-throughput at higher queue-depths when compared to sending the commands via the NVMe driver ioctl().
One can explicitly tell xNVMe to utilize libaio
for async I/O by
encoding it in the device identifier, like so:
xnvme_io_async read /dev/nvme0n1 --slba 0x0 --qdepth 1 --async libaio
Yielding the output:
# Allocating and filling buf of nbytes: 4096
# Initializing queue and setting default callback function and arguments
# Read uri: '/dev/nvme0n1', qd: 1
xnvme_lba_range:
slba: 0x0000000000000000
elba: 0x0000000000000000
naddrs: 1
nbytes: 4096
attr: { is_zones: 0, is_valid: 1}
wall-clock: {elapsed: 0.0004, mib: 0.00, mib_sec: 10.75}
# cb_args: {submitted: 1, completed: 1, ecount: 0}
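The trailing cb_args line is the simplest health-check of such a run: everything submitted should have completed and the error-count (ecount) should be zero. A small illustrative parser (not part of the xNVMe tooling):

```python
import re

line = "# cb_args: {submitted: 1, completed: 1, ecount: 0}"

def parse_cb_args(s):
    """Extract the integer counters from a cb_args status line."""
    return {k: int(v) for k, v in re.findall(r"(\w+): (\d+)", s)}

stats = parse_cb_args(line)
assert stats["submitted"] == stats["completed"]   # nothing left in flight
assert stats["ecount"] == 0                       # no command errors
print(stats)   # → {'submitted': 1, 'completed': 1, 'ecount': 0}
```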
Async. I/O via io_uring
xNVMe utilizes the Linux io_uring interface, its support for feature-probing, and the following io_uring opcodes:
IORING_OP_READ
IORING_OP_WRITE
When available, xNVMe can send the NVMe NVM Commands for read and write via the Linux io_uring interface. Doing so improves command-throughput at all io-depths when compared to sending the commands via NVMe driver IOCTLs or libaio. It also leverages the io_uring interface to enable I/O polling and kernel-side submission polling.
One can explicitly tell xNVMe to utilize io_uring
for async I/O by
encoding it in the device identifier, like so:
xnvme_io_async read /dev/nvme0n1 --slba 0x0 --qdepth 1 --async io_uring
Yielding the output:
# Allocating and filling buf of nbytes: 4096
# Initializing queue and setting default callback function and arguments
# Read uri: '/dev/nvme0n1', qd: 1
xnvme_lba_range:
slba: 0x0000000000000000
elba: 0x0000000000000000
naddrs: 1
nbytes: 4096
attr: { is_zones: 0, is_valid: 1}
wall-clock: {elapsed: 0.0003, mib: 0.00, mib_sec: 11.18}
# cb_args: {submitted: 1, completed: 1, ecount: 0}
User Space
Linux provides the Userspace I/O (uio) and Virtual Function I/O (vfio) frameworks for writing user space I/O drivers. Both interfaces work by binding a given device to an in-kernel stub-driver. The stub-driver in turn exposes device-memory and device-interrupts to user space, thus enabling the implementation of device drivers entirely in user space.
Although Linux provides a capable NVMe driver with flexible IOCTLs, a user space NVMe driver serves those who seek the lowest possible per-command processing overhead or want full control over NVMe command construction, including command-payloads.
Fortunately, you do not need to go and write a user space NVMe driver, since a highly efficient, mature and well-maintained driver already exists. Namely, the NVMe driver provided by the Storage Platform Development Kit (SPDK).
Another great fortune is that xNVMe bundles the SPDK NVMe Driver with the xNVMe library. So, if you have built and installed xNVMe then the SPDK NVMe Driver is readily available to xNVMe.
The following subsections go through a configuration checklist, then show how to bind and unbind drivers, and lastly how to utilize non-devfs device identifiers by enumerating the system and inspecting a device.
Config
What remains is checking your system configuration, enabling IOMMU for use by
the vfio-pci
driver, and possibly falling back to the uio_pci_generic
driver in case vfio-pci
is not working out. vfio
is preferred as
hardware support for IOMMU allows for isolation between devices.
1. Verify that your CPU supports virtualization / VT-d and that it is enabled in your board BIOS.
2. Enable IOMMU support in your kernel; for an Intel CPU, provide the kernel option intel_iommu=on. If you have a non-Intel CPU, then consult documentation on enabling VT-d / IOMMU for your CPU.
3. Increase memory-lock limits: open /etc/security/limits.conf and add:
* soft memlock unlimited
* hard memlock unlimited
root soft memlock unlimited
root hard memlock unlimited
Once you have gone through these steps, and rebooted, then this command:
dmesg | grep "DMAR: IOMMU"
Should output:
[ 0.112423] DMAR: IOMMU enabled
And this command:
find /sys/kernel/iommu_groups/ -type l
Should have output similar to:
/sys/kernel/iommu_groups/7/devices/0000:03:00.0
/sys/kernel/iommu_groups/7/devices/0000:02:00.0
/sys/kernel/iommu_groups/5/devices/0000:00:1f.2
/sys/kernel/iommu_groups/5/devices/0000:00:1f.0
/sys/kernel/iommu_groups/5/devices/0000:00:1f.3
/sys/kernel/iommu_groups/3/devices/0000:00:03.0
/sys/kernel/iommu_groups/11/devices/0000:07:00.0
/sys/kernel/iommu_groups/11/devices/0000:02:04.0
/sys/kernel/iommu_groups/1/devices/0000:00:01.0
/sys/kernel/iommu_groups/8/devices/0000:04:00.0
/sys/kernel/iommu_groups/8/devices/0000:02:01.0
/sys/kernel/iommu_groups/6/devices/0000:01:00.0
/sys/kernel/iommu_groups/4/devices/0000:00:04.0
/sys/kernel/iommu_groups/2/devices/0000:00:02.0
/sys/kernel/iommu_groups/10/devices/0000:02:03.0
/sys/kernel/iommu_groups/10/devices/0000:06:00.0
/sys/kernel/iommu_groups/0/devices/0000:00:00.0
/sys/kernel/iommu_groups/9/devices/0000:02:02.0
/sys/kernel/iommu_groups/9/devices/0000:05:00.0
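Each symlink path encodes both the IOMMU group number and the PCI address of a device, which makes the listing easy to post-process. A small sketch (by_group is a hypothetical helper) over a few of the paths above:

```python
from collections import defaultdict

# Sample paths from the `find /sys/kernel/iommu_groups/ -type l` output above
paths = [
    "/sys/kernel/iommu_groups/7/devices/0000:03:00.0",
    "/sys/kernel/iommu_groups/7/devices/0000:02:00.0",
    "/sys/kernel/iommu_groups/5/devices/0000:00:1f.2",
]

def by_group(links):
    """Map IOMMU group number -> list of PCI addresses."""
    groups = defaultdict(list)
    for p in links:
        parts = p.split("/")
        # .../iommu_groups/<group>/devices/<pci-addr>
        groups[int(parts[4])].append(parts[6])
    return dict(groups)

print(by_group(paths))   # → {7: ['0000:03:00.0', '0000:02:00.0'], 5: ['0000:00:1f.2']}
```

Devices in the same IOMMU group must be assigned together, so a mapping like this shows which devices a vfio-pci binding will affect.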
Unbinding and binding
With the system configured, you can use the xnvme-driver script to bind and unbind devices. The xnvme-driver script is a merge of the SPDK setup.sh script and its dependencies.
By running the command below, 4GB of hugepages will be configured, the kernel NVMe driver unbound, and vfio-pci bound to the device:
HUGEMEM=4096 xnvme-driver
The command above should produce output similar to:
0000:03:00.0 (1b36 0010): nvme -> vfio-pci
0000:04:00.0 (1b36 0010): nvme -> vfio-pci
0000:05:00.0 (1b36 0010): nvme -> vfio-pci
0000:06:00.0 (1b36 0010): nvme -> vfio-pci
0000:07:00.0 (1b36 0010): nvme -> vfio-pci
0000:00:02.0 (1af4 1001): Active mountpoints on /dev/vda, so not binding
Current user memlock limit: 743 MB
This is the maximum amount of memory you will be
able to use with DPDK and VFIO if run as current user.
To change this, please adjust limits.conf memlock limit for current user.
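The HUGEMEM variable is given in megabytes; assuming the default 2 MiB hugepage size, the setup script translates it into a number of hugepages roughly as sketched below (illustrative, not the actual script logic):

```python
def nr_hugepages(hugemem_mb, hugepage_size_mb=2):
    """Number of hugepages needed to cover HUGEMEM megabytes,
    assuming a 2 MiB default hugepage size."""
    return hugemem_mb // hugepage_size_mb

print(nr_hugepages(4096))   # → 2048 pages of 2 MiB, i.e. 4 GiB
```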
To unbind from vfio-pci and rebind the kernel NVMe driver, run:
xnvme-driver reset
Which should output similar to:
0000:03:00.0 (1b36 0010): vfio-pci -> nvme
0000:04:00.0 (1b36 0010): vfio-pci -> nvme
0000:05:00.0 (1b36 0010): vfio-pci -> nvme
0000:06:00.0 (1b36 0010): vfio-pci -> nvme
0000:07:00.0 (1b36 0010): vfio-pci -> nvme
0000:00:02.0 (1af4 1001): Already using the virtio-pci driver
Device Identifiers
Since the kernel NVMe driver is unbound from the device, the kernel no longer knows that the PCIe device is an NVMe device. Thus, it no longer lives in the Linux devfs, that is, it is no longer available in /dev as e.g. /dev/nvme0n1.
Instead of the filepath in devfs, you use PCI ids and xNVMe options.
As always, use the xnvme cli tool to enumerate devices:
xnvme enum
xnvme_cli_enumeration:
- {uri: '0000:04:00.0', dtype: 0x2, nsid: 0x1, csi: 0x0, subnqn: 'nqn.2019-08.org.qemu:adcdbeef'}
- {uri: '0000:05:00.0', dtype: 0x2, nsid: 0x1, csi: 0x0, subnqn: 'nqn.2019-08.org.qemu:beefcace'}
- {uri: '0000:06:00.0', dtype: 0x2, nsid: 0x1, csi: 0x0, subnqn: 'nqn.2019-08.org.qemu:subsys0'}
- {uri: '0000:03:00.0', dtype: 0x2, nsid: 0x1, csi: 0x0, subnqn: 'nqn.2019-08.org.qemu:deadbeef'}
- {uri: '0000:03:00.0', dtype: 0x2, nsid: 0x2, csi: 0x2, subnqn: 'nqn.2019-08.org.qemu:deadbeef'}
- {uri: '0000:03:00.0', dtype: 0x2, nsid: 0x3, csi: 0x1, subnqn: 'nqn.2019-08.org.qemu:deadbeef'}
- {uri: '0000:07:00.0', dtype: 0x2, nsid: 0x1, csi: 0x0, subnqn: 'nqn.2019-08.org.qemu:feebdaed'}
- {uri: '0000:07:00.0', dtype: 0x2, nsid: 0x2, csi: 0x0, subnqn: 'nqn.2019-08.org.qemu:feebdaed'}
- {uri: '0000:07:00.0', dtype: 0x2, nsid: 0x3, csi: 0x0, subnqn: 'nqn.2019-08.org.qemu:feebdaed'}
- {uri: '0000:07:00.0', dtype: 0x2, nsid: 0x4, csi: 0x0, subnqn: 'nqn.2019-08.org.qemu:feebdaed'}
- {uri: '0000:07:00.0', dtype: 0x2, nsid: 0x5, csi: 0x0, subnqn: 'nqn.2019-08.org.qemu:feebdaed'}
Notice that multiple URIs use the same PCI id but differ in their namespace
identifier (nsid). Options such as --dev-nsid
are provided as a means to tell xNVMe that you want to
use the NVMe controller at 0000:03:00.0
and the namespace identified by
nsid=1
:
xnvme-driver
xnvme info 0000:03:00.0 --dev-nsid=1
0000:03:00.0 (1b36 0010): Already using the vfio-pci driver
0000:04:00.0 (1b36 0010): Already using the vfio-pci driver
0000:05:00.0 (1b36 0010): Already using the vfio-pci driver
0000:06:00.0 (1b36 0010): Already using the vfio-pci driver
0000:07:00.0 (1b36 0010): Already using the vfio-pci driver
0000:00:02.0 (1af4 1001): Active mountpoints on /dev/vda, so not binding
Current user memlock limit: 743 MB
This is the maximum amount of memory you will be
able to use with DPDK and VFIO if run as current user.
To change this, please adjust limits.conf memlock limit for current user.
xnvme_dev:
xnvme_ident:
uri: '0000:03:00.0'
dtype: 0x2
nsid: 0x1
csi: 0x0
subnqn: 'nqn.2019-08.org.qemu:deadbeef'
xnvme_be:
admin: {id: 'nvme'}
sync: {id: 'nvme'}
async: {id: 'nvme'}
attr: {name: 'spdk'}
xnvme_opts:
be: 'spdk'
mem: 'spdk'
dev: 'FIX-ID-VS-MIXIN-NAME'
admin: 'nvme'
sync: 'nvme'
async: 'nvme'
xnvme_geo:
type: XNVME_GEO_CONVENTIONAL
npugrp: 1
npunit: 1
nzone: 1
nsect: 2097152
nbytes: 4096
nbytes_oob: 0
tbytes: 8589934592
mdts_nbytes: 524288
lba_nbytes: 4096
lba_extended: 0
ssw: 12
pi_type: 0
pi_loc: 0
pi_format: 0
Similarly, when using the API, you would use these PCI ids instead of filepaths, passing the namespace identifier via the option-struct:
...
struct xnvme_opts opts = xnvme_opts_default();
opts.nsid = 1;
struct xnvme_dev *dev = xnvme_dev_open("0000:03:00.0", &opts);
...
Windows Kernel
Windows 10 or later is currently preferred, as it has all the features that xNVMe utilizes. This section also gives you a brief overview of the different I/O paths and APIs which the xNVMe API unifies access to.
NVMe Driver and IOCTLs
The default for xNVMe is to communicate with devices via the operating system NVMe driver IOCTLs; specifically, on Windows the following are used:
IOCTL_STORAGE_QUERY_PROPERTY
IOCTL_STORAGE_SET_PROPERTY
IOCTL_STORAGE_REINITIALIZE_MEDIA
IOCTL_SCSI_PASS_THROUGH_DIRECT
You can check that this interface is behaving as expected by running:
xnvme.exe info \\.\PhysicalDrive0
Which should yield output equivalent to:
xnvme_dev:
xnvme_ident:
uri: '\\.\PhysicalDrive0'
dtype: 0x2
nsid: 0x1
csi: 0x0
subnqn: 'nqn.1994-11.com.samsung:nvme:980M.2:S649NL0T973010L '
xnvme_be:
admin: {id: 'nvme'}
sync: {id: 'nvme'}
async: {id: 'iocp'}
attr: {name: 'windows'}
This tells you that xNVMe can communicate with the given device identifier, and it informs you that it utilizes the NVMe driver IOCTLs for synchronous command execution and iocp for asynchronous command execution. This method can be used for raw devices via the \\.\PhysicalDrive&lt;disk number&gt; device path.
The following commands are currently supported by xNVMe via the IOCTL path:
Admin Commands
Get Log Page
Identify
Get Feature
Format NVM
I/O Commands
Read
Write
NVMe Driver and Regular File
xNVMe can communicate with devices mounted by a file system via generic operating system APIs such as ReadFile and WriteFile. This method can be used to operate on regular files.
You can check that this interface is behaving as expected by running:
xnvme.exe info C:\README.md
Which should yield output equivalent to:
xnvme_dev:
xnvme_ident:
uri: 'C:\README.md'
dtype: 0x4
nsid: 0x1
csi: 0x1f
subnqn: ''
xnvme_be:
admin: {id: 'file'}
sync: {id: 'file'}
async: {id: 'iocp'}
attr: {name: 'windows'}
This tells you that xNVMe can communicate with the given regular file, and it informs you that it utilizes the file backend for synchronous command execution and iocp for asynchronous command execution. This method can be used for file operations via a &lt;drive name&gt;:\&lt;file path&gt; path.
Async I/O via iocp
When AIO is available, the NVMe NVM Commands for read and write are sent over the Windows IOCP interface. Doing so improves command-throughput at higher queue-depths when compared to sending the commands via the NVMe driver ioctl().
One can explicitly tell xNVMe to utilize iocp
for async I/O by passing it via the --async option, like so:
xnvme_io_async read \\.\PhysicalDrive0 --slba 0x0 --qdepth 1 --async iocp
Yielding the output:
# Allocating and filling buf of nbytes: 512
# Initializing queue and setting default callback function and arguments
# Read uri: '\\.\PhysicalDrive0', qd: 1
xnvme_lba_range:
slba: 0x0000000000000000
elba: 0x0000000000000000
naddrs: 1
nbytes: 512
attr: { is_zones: 0, is_valid: 1}
wall-clock: {elapsed: 0.0002, mib: 0.00, mib_sec: 2.08}
# cb_args: {submitted: 1, completed: 1, ecount: 0}
Async I/O via iocp_th
Similar to the iocp
interface; the only difference is that a separate poller thread is used to
fetch the completed I/Os.
One can explicitly tell xNVMe to utilize iocp_th
for async I/O by passing it via the --async option, like so:
xnvme_io_async read \\.\PhysicalDrive0 --slba 0x0 --qdepth 1 --async iocp_th
Yielding the output:
# Allocating and filling buf of nbytes: 512
# Initializing queue and setting default callback function and arguments
# Read uri: '\\.\PhysicalDrive0', qd: 1
xnvme_lba_range:
slba: 0x0000000000000000
elba: 0x0000000000000000
naddrs: 1
nbytes: 512
attr: { is_zones: 0, is_valid: 1}
wall-clock: {elapsed: 0.0002, mib: 0.00, mib_sec: 2.14}
# cb_args: {submitted: 1, completed: 1, ecount: 0}
Async I/O via io_ring
xNVMe utilizes the Windows io_ring interface, feature-probing the io_ring interface and the io_ring opcodes.
When available, xNVMe sends the io_ring-specific requests using the IORING_HANDLE_REF and IORING_BUFFER_REF structures for read and write via the Windows io_ring interface. Doing so improves command-throughput at all io-depths when compared to sending the commands via NVMe Driver IOCTLs.
One can explicitly tell xNVMe to utilize io_ring
for async I/O by passing it via the --async option, like so:
xnvme_io_async read \\.\PhysicalDrive0 --slba 0x0 --qdepth 1 --async io_ring
Yielding the output:
# Allocating and filling buf of nbytes: 512
# Initializing queue and setting default callback function and arguments
# Read uri: '\\.\PhysicalDrive0', qd: 1
xnvme_lba_range:
slba: 0x0000000000000000
elba: 0x0000000000000000
naddrs: 1
nbytes: 512
attr: { is_zones: 0, is_valid: 1}
wall-clock: {elapsed: 0.0003, mib: 0.00, mib_sec: 1.92}
# cb_args: {submitted: 1, completed: 1, ecount: 0}
Building SPDK backend on Windows
SPDK can be used as a backend for xNVMe on Windows. To leverage this interface, you first need to compile SPDK as a subproject of xNVMe; on Windows this depends on the Windows Platform Development Kit (WPDK), which enables applications based on the Storage Performance Development Kit (SPDK) to build and run as native Windows executables.
Prerequisites for compiling SPDK
Install the dependency packages as follows:
- MinGW cross compiler and libraries
These can be installed by running the script below:
@echo off
@setlocal enableextensions enabledelayedexpansion

net session >nul: 2>&1
if errorlevel 1 (
  echo %0 must be run with Administrator privileges
  goto :eof
)

:: Use PowerShell to install Chocolatey
set "PATH=%ALLUSERSPROFILE%\chocolatey\bin;%PATH%"
echo [1/6] Install: Chocolatey Manager
powershell -NoProfile -InputFormat None -ExecutionPolicy Bypass -Command "[System.Net.ServicePointManager]::SecurityProtocol = 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))"
where /q choco
if errorlevel 1 (
  echo [1/6] Install: Chocolatey = FAIL
  goto :eof
)
echo [1/6] Install: Chocolatey = PASS

:: Use Chocolatey to install msys2
echo [2/6] Install: MSYS2
set "PATH=%SystemDrive%\msys64;%PATH%"
choco install msys2 -y -r --params "/NoUpdate /InstallDir:C:\msys64"
where /q msys2
if errorlevel 1 (
  echo [2/6] Install: MSYS2 = FAIL
  goto :eof
)
echo [2/6] Install: MSYS2 = PASS

echo [3/6] Install: Git
choco install git -y -r
echo [3/6] Install: Git = PASS

echo [4/6] Install: dos2unix
choco install dos2unix
echo [4/6] Install: dos2unix = PASS

:: Use MSYS2/pacman to install gcc and clang toolchain
set MSYS2=call msys2_shell -no-start -here -use-full-path -defterm
echo [5/6] Install: MinGW Toolchain via msys2/pacman
set "PATH=%SystemDrive%\msys64\mingw64\bin;%PATH%"
%MSYS2% -c "pacman --noconfirm -Syy --needed base-devel mingw-w64-x86_64-toolchain"
where /q gcc
if errorlevel 1 (
  echo [5/6] Install: MinGW Toolchain via msys2/pacman = FAIL
  goto :eof
)
echo [5/6] Install: MinGW Toolchain via msys2/pacman = PASS

echo [6/6] Install: MinGW/meson via msys2/pacman
%MSYS2% -c "pacman --noconfirm -Syy mingw-w64-x86_64-meson"
echo [6/6] Install: MinGW/meson via msys2/pacman = OK
Compilation and Installation
Configure, build and install using helper batch script
build.bat
:
# build: auto-configure xNVMe, build third-party libraries, and xNVMe itself
build.bat
# config: only configure xNVMe
build.bat config
# config-debug: configure xNVMe in debug mode
build.bat config-debug
# install xNVMe
build.bat install
# uninstall xNVMe
# build.bat uninstall
Runtime Prerequisites
Refer to the DPDK links to resolve runtime dependencies.
Building an xNVMe Program
At this point you should have xNVMe built and installed, and your system correctly configured. You should by now also be familiar with how to instrument xNVMe to utilize different backends and backend options.
With all that in place, go ahead and compile your own xNVMe program.
Example code
This “hello-world” example prints out device information of the NVMe
device at /dev/nvme0n1
.
To use xNVMe include the libxnvme.h
header in your C/C++ source:
#include <stdio.h>
#include <libxnvme.h>
int
main(int argc, char **argv)
{
struct xnvme_opts opts = xnvme_opts_default();
struct xnvme_dev *dev;
dev = xnvme_dev_open("/dev/nvme0n1", &opts);
if (!dev) {
perror("xnvme_dev_open()");
return 1;
}
xnvme_dev_pr(dev, XNVME_PR_DEF);
xnvme_dev_close(dev);
return 0;
}
Compile and link
A pkg-config file is provided with xNVMe; you can use pkg-config
to get the
required linker flags:
pkg-config --libs xnvme
This will output something like the below; the exact flags will vary depending on the features enabled/disabled.
-L/usr/local/lib/x86_64-linux-gnu -lxnvme
You can pass the arguments above to your compiler, or use pkg-config inline, like so:
gcc ../getting_started/hello.c $(pkg-config --libs xnvme) -o hello
Note
You do not need to link with SPDK/DPDK, as these are bundled
with xNVMe. However, do take note of the linker flags surrounding
-lxnvme
, these are required as SPDK makes use of
__attribute__((constructor))
. Without the linker flags, none of the SPDK
transports will work, as ctors will be “linked-out”, and xNVMe will
give you errors such as device not found.
Also, xNVMe provides two different libraries: a static and a shared library. Here is what the different libraries are intended for:
libxnvme.a
, the static library version of xNVMe; it does not come with batteries included, so you have to manually link with SPDK, liburing, etc.
libxnvme.so
, the shared library version of xNVMe; it comes with batteries included, that is, all the third-party libraries are bundled within the shared library. Thus you only need to link with xNVMe, as described above, and need not worry about linking with SPDK, liburing, etc.
Using libxnvme.so
is the preferred way to consume xNVMe, as it comes with
the correct versions of the various third-party libraries and provides
a simpler link-target.
Run!
chmod +x hello
./hello
xnvme_dev:
xnvme_ident:
uri: '/dev/nvme0n1'
dtype: 0x2
nsid: 0x1
csi: 0x0
subnqn: ''
xnvme_be:
admin: {id: 'nvme'}
sync: {id: 'nvme'}
async: {id: 'emu'}
attr: {name: 'linux'}
xnvme_opts:
be: 'linux'
mem: 'posix'
dev: 'FIX-ID-VS-MIXIN-NAME'
admin: 'nvme'
sync: 'nvme'
async: 'emu'
xnvme_geo:
type: XNVME_GEO_UNKNOWN
npugrp: 0
npunit: 0
nzone: 0
nsect: 0
nbytes: 0
nbytes_oob: 0
tbytes: 0
mdts_nbytes: 0
lba_nbytes: 0
lba_extended: 0
ssw: 0
pi_type: 0
pi_loc: 0
pi_format: 0
This concludes the getting-started guide of xNVMe; go ahead and explore the Tools, C API, and C API: Examples.
Should xNVMe or your system still be misbehaving, then take a look at the Troubleshooting section, or reach out by raising an issue, starting an asynchronous discussion, or going to Discord for synchronous interaction.
Troubleshooting
User space
In case you are having issues using the SPDK backend, make sure you are following the Config section; if issues persist, a solution might be found in the following subsections.
No devices found
When running xnvme enum
and the output-listing is empty, then no devices were found. When running with vfio-pci
, this can occur when your devices share an
iommu-group with other devices which are still bound to in-kernel
drivers. These could be NICs, GPUs, or other kinds of peripherals.
The division of devices into groups is not something that can be easily switched, but you can try to manually unbind the other devices in the iommu group from their kernel drivers.
If that is not an option, then you can try to re-organize the physical connectivity of your devices, e.g. move devices around.
Lastly, you can try using uio_pci_generic
instead; this is most easily
done by disabling the IOMMU, that is, adding the kernel option iommu=off
to the
kernel command-line and rebooting.
Memory Issues
If you see a message similar to the below while unbinding devices:
Current user memlock limit: 16 MB
This is the maximum amount of memory you will be
able to use with DPDK and VFIO if run as current user.
To change this, please adjust limits.conf memlock limit for current user.
## WARNING: memlock limit is less than 64MB
## DPDK with VFIO may not be able to initialize if run as current user.
Then you should do as suggested, that is, adjust limits.conf
; for an
example, see Config.
Build Errors
If you are getting errors while attempting to configure and build xNVMe then it is likely due to one of the following:
You are building in an offline environment and only have a shallow source-archive or a git-repository without subprojects.
The full source-archive is made available with each release and is downloadable from the GitHub release page. It contains the xNVMe source code along with all the third-party dependencies, namely: SPDK, liburing, libnvme, and fio.
missing dependencies / toolchain
You are missing dependencies; see the Toolchain section for installing these on FreeBSD and a handful of different Linux distributions.
The Toolchain section describes the preferred ways of
installing libraries and tools. For example, on Ubuntu 18.04 it is preferred to
install meson
via pip3
, since the version in the package registry is too
old for SPDK; thus, if meson is installed via the package manager, you will
experience build errors as the xNVMe build system starts building SPDK.
Once you have the full xNVMe source, the third-party library dependencies, and the toolchain set up, run the following to ensure that the xNVMe repository is clean of any artifacts left behind by previous build failures:
make clobber
And then go back to the Building xNVMe and follow the steps there.
Note
When running make clobber
, everything not committed is “lost”. Thus,
if you are developing/modifying xNVMe, make sure you commit or stash your
changes before running it.
Known Build Issues
If the above did not sort out your build-issues, then you might be facing one of the following known build-issues. If these do not apply to you, then please post an issue on GitHub describing your build environment and the output from the failed build.
When building xNVMe on Alpine Linux you might encounter some issues due to the musl standard library not being entirely compatible with GLIBC / BSD.
The SPDK backend does not build due to re-definition of the STAILQ_*
macros. As a work-around, disable the SPDK backend:
meson setup builddir -Dwith-spdk=false
The Linux backend support for io_uring
fails on Alpine Linux due to a
missing definition in musl leading to this error message:
include/liburing.h:195:17: error: unknown type name 'loff_t'; did you mean
'off_t'?
As a work-around, disable io_uring
support:
meson setup builddir -Dwith-liburing=false
See more details on changing the default build-configuration of xNVMe in the section Custom Configuration.
Customizing the Build
Non-GCC Toolchain
To use a compiler other than gcc:
Set the CC
and CXX
environment variables for the ./configure
script
Pass CC
and CXX
as arguments to make
For example, compiling xNVMe on a system where the default compiler is not gcc:
CC=gcc CXX=g++ ./configure <YOUR_OPTIONS_HERE>
make CC=gcc CXX=g++
make install CC=gcc CXX=g++
Recent versions of icc, clang, and pgi should be able to satisfy the C11 and pthreads requirements. However, it will most likely require a bit of fiddling.
Note
icc works well once you have bought a license and installed it correctly. There is a free option with Intel System Studio 2019.
Note
The pgi compiler has some issues linking with SPDK/DPDK, seemingly due to an unstable ABI for RTE.
The path of least resistance is to just install the toolchain and libraries as described in the Toolchain section.
Custom Configuration
See the list of options in meson_options.txt
; this file defines the
different non-generic options that you can toggle. Traditional
build-configuration options, such as --prefix
, are managed like all other
meson-based builds:
meson setup builddir -Dprefix=/foo/bar
See: https://mesonbuild.com/Builtin-options.html
for details.
Cross-compiling for ARM on x86
This is managed like any other meson-based build; see:
https://mesonbuild.com/Cross-compilation.html
for details.